EEOC Chair Calls AI ‘New Civil Rights Frontier’

WASHINGTON — Companies looking to AI to make hiring more efficient were put on notice earlier this year when the U.S. Equal Employment Opportunity Commission settled its first-ever AI discrimination-in-hiring lawsuit, reaching an agreement with a company that may have used recruitment software that automatically rejected older applicants.
“This is really a new civil rights frontier,” EEOC Chair Charlotte Burrows told Brookings, a D.C.-based think tank, in a continuing discussion on the use of AI in hiring.
There have been many warnings about the potential for bias in AI algorithms, particularly bias stemming from the data on which they are trained, yet the EEOC lawsuit was the first of its kind to target AI-driven discrimination in hiring.
“The EEOC … is the little agency brought to you by the March on Washington for Jobs and Freedom. We just celebrated that 60th anniversary this year,” Burrows said. “And while we came from that in the 60s … we also understand we’ve got to be nimble enough to do our work now.”
“So much of employment is now becoming automated and included in that automation is artificial intelligence.”
Burrows noted that employers are using AI to make crucial decisions, including in pre-screening, which can inadvertently result in employment discrimination and denials of opportunity. The challenge, she said, lies in the opacity of these processes, since understanding the inner workings of AI systems can be complex.
“[This is] not for any nefarious reason, but just because you need a certain level of expertise to understand what’s happening [with an AI algorithm],” she said.
These AI-powered tools, which a February 2022 Society for Human Resource Management survey found are used by up to 79% of employers for recruitment and hiring, can lead to violations of federal non-discrimination laws.
With the prevalence of AI, the EEOC has been actively addressing bias in the use of artificial intelligence in various aspects of employment, including recruitment, hiring, retention, promotion, performance tracking and dismissal.
Burrows said the agency aims to identify and address disparate impacts on various groups based on factors like race, religion, disability and age. Decision-making algorithms that produce biased outcomes, even inadvertently, must be corrected, she said.
“The big deal is more and more employers are using artificial intelligence and some other forms of automation in hiring — and in fact before hiring — in recruitment,” Burrows said.
Employers now rely on AI-driven tools to streamline candidate selection and efficiently sift through large pools of applicants.
“The world of those who develop new technologies is not a very diverse world. As we think about what’s being designed, we have to … make sure those who are developing this … understand what their civil rights obligations are,” Burrows said.
In addition to reminding employers that AI tools must comply with existing laws such as the Americans with Disabilities Act and Title VII, the EEOC recently issued guidance offering a roadmap for using AI while complying with prohibitions on discrimination based on race, ethnicity, sex and disability.
The guidance answers questions employers and tech developers may have about how the laws apply to automated systems in employment decisions, helps employers evaluate whether those systems may have an adverse or disparate impact on a basis prohibited by law, and even offers suggestions for providing reasonable accommodations to applicants.
Said Burrows, “Just because the Civil Rights Act or the Americans with Disabilities Act did not talk about AI — because it didn’t exist yet — does not mean that when you step into employment, because you have a hiring procedure, you don’t have to worry about your civil rights obligations.”