The recent settlement serves as a cautionary tale for employers making use of artificial intelligence: such tools may cause an employer to run afoul of nondiscrimination law under a disparate impact theory.
Joshua C. Hausman, Esq., Campbell Durrant, P.C. | October 2023
Artificial intelligence (“AI”) has been a hot topic since the release of “ChatGPT” late last year. While use cases of this technology are often flippant and silly—a recent AI-produced example of Frank Sinatra singing a song by the band “Green Day” comes to mind—there is also little doubt that AI has the potential to transform the way in which much of the world works. The most obvious potential of this technology, at least at its current stage of development, lies in automating tasks that may previously have been viewed as largely mechanical or formulaic. One such example that might arise in the employment context is the use of AI tools to screen job applicants. For several months, the Equal Employment Opportunity Commission (“EEOC”) has cautioned that biases built into these tools—whether intentionally or inadvertently, including through a bias inherent in the particular data set on which the AI was developed—could cause employers to run afoul of nondiscrimination law.
On September 11, 2023, the EEOC announced that it had reached a settlement in an age discrimination case brought against an employer that had used software to automatically screen job applicants. According to the EEOC, the employer—iTutorGroup, which provides online tutoring services—had programmed its application software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. Allegedly, more than two hundred (200) otherwise qualified applicants were rejected because of their age. The practice was apparently discovered when a suspicious applicant lowered their age and made it through the automated screen as a result. As part of the settlement, iTutorGroup will pay $365,000 to applicants rejected due to their age and will also be required to provide training and policy updates to address discriminatory practices.
The Age Discrimination in Employment Act (“ADEA”) makes it unlawful for employers to discriminate on the basis of age against persons forty (40) years of age or older. While it should be obvious that intentionally excluding older job applicants based on their age would violate the law, the settlement is nevertheless regarded as significant because it represents one of the EEOC’s first successful enforcement actions against a company using automated hiring tools since the agency announced its Artificial Intelligence and Algorithmic Fairness Initiative in 2021. Since that time, the EEOC has repeatedly referred to AI as a “new civil rights frontier.” When the EEOC announced the lawsuit against iTutorGroup last year, EEOC Chair Charlotte A. Burrows emphasized: “Even when technology automates the discrimination, the employer is still responsible.” The recent settlement confirms that the EEOC will take enforcement action against employers whose automation tools run afoul of nondiscrimination law.
Of primary concern to employers considering the use of AI tools, however, is the fact that discrimination need not be intentional to be unlawful. “Disparate impact” discrimination occurs when facially neutral employment decisions have a disproportionate adverse impact on members of a protected class. As part of the EEOC’s broader AI Initiative, the agency in May of this year published a technical assistance document on “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”
In that guidance, the EEOC cautioned that just as employers should monitor their “traditional decision-making procedures” to determine whether a disparate impact may exist, so too should they monitor non-traditional tools such as AI that are used to make employment decisions. This is because an employer’s “selection procedures”—defined as the procedures by which an employer makes an employment decision, including but not limited to hiring, promotion, or firing—may violate Title VII on a disparate impact basis. The result does not change when an employer’s selection procedures are assisted by, or even wholly reliant upon, AI tools. The EEOC reminded employers that they may be held responsible for discriminatory employment practices engaged in by an outside vendor or agent that the employer has authorized to act on its behalf.
“Artificial intelligence” is something of a misnomer, at least in the forms in which it is presently available to the public. Large language models such as ChatGPT do not reason independently; they are “trained” on datasets and reproduce patterns found in that data, and the data itself may carry inherent biases that can manifest in disparate outputs from the AI tool if not properly monitored and controlled. In light of this fact, the EEOC suggests that employers making use of such tools first ask the vendors what processes have been put in place to evaluate whether use of the tool results in a substantially lower selection rate for individuals in a protected class. However, merely obtaining this reassurance will not be enough to protect an employer if a disparate impact on a protected class nevertheless occurs.
Therefore, the EEOC recommends that employers also conduct self-analyses on an ongoing basis, including of the outcomes associated with their AI tools, to determine whether their employment practices have a disproportionately large negative impact on a protected class, and either correct those practices if so, or confirm that the tool is nevertheless job related and consistent with business necessity. Where use of the tool fails the four-fifths rule (i.e., results in a selection rate for a protected group that is less than 80% of the selection rate for the group with the highest rate), and the practice does not meet the job-relatedness and business necessity standard, the employer should revise the practice to eliminate the adverse impact.
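The arithmetic behind the four-fifths rule is straightforward. The following is a minimal, purely illustrative sketch in Python; the applicant counts and group labels are hypothetical and are not drawn from the iTutorGroup case or from any EEOC guidance, and a real adverse-impact analysis would involve far more context than a single ratio.

```python
# Illustrative sketch of the four-fifths (80%) rule described above.
# The counts below are hypothetical and exist only to show the arithmetic.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical screening outcomes by age group.
groups = {
    "under_40":    {"applicants": 200, "selected": 60},  # 30% selection rate
    "40_and_over": {"applicants": 150, "selected": 30},  # 20% selection rate
}

rates = {name: selection_rate(g["selected"], g["applicants"])
         for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest  # compare each group to the highest-selected group
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths threshold"
    print(f"{name}: selection rate {rate:.0%}, {ratio:.0%} of highest rate -> {flag}")
```

In this hypothetical, the 40-and-over group’s 20% selection rate is only about 67% of the 30% rate for the highest-selected group, which falls below the 80% threshold and would warrant a closer look at the screening tool.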