EEOC Warns Employers About AI Discrimination Risk

Author: Emily Scace, XpertHR Legal Editor

May 18, 2023

Employers that use AI and algorithmic decision-making tools must be careful that the technology does not systematically disadvantage people based on their race, color, religion, sex or national origin, according to a new guidance document from the Equal Employment Opportunity Commission (EEOC).

In the document, the EEOC explains how AI and other algorithmic systems can run afoul of Title VII and recommends strategies for evaluating these technologies for potential discriminatory effects. While the guidance document does not create new legal obligations for employers, it nevertheless provides important insight into the EEOC's thinking about an emerging issue.

According to the EEOC, if an algorithmic tool adversely impacts individuals of a particular race, color, religion, sex (which includes pregnancy, sexual orientation and gender identity), or national origin, or a combination of these characteristics, an employer's use of the tool will violate Title VII unless the employer can show that the use is job-related and consistent with business necessity. This is generally true even if the tool was designed or is administered by a third party, such as a software vendor, the agency notes.

The guidance focuses on Title VII and therefore does not address age or disability discrimination, which are protected under the Age Discrimination in Employment Act (ADEA) and the Americans with Disabilities Act (ADA), respectively. However, the EEOC has previously warned of disability discrimination risks associated with the use of AI and similar technologies.

Examples of algorithmic decision-making software include:

  • Resume scanners that prioritize applications using certain keywords;
  • Employee-monitoring software that rates employees based on keystrokes or other factors;
  • Virtual assistants or chatbots that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • Video interviewing software that evaluates candidates based on facial expressions and speech patterns; and
  • Testing software that provides job fit scores based on an applicant's or employee's personality, aptitude, cognitive skills or perceived cultural fit.

The EEOC recommends that employers considering these technologies ask software vendors whether they have evaluated the tool for substantially lower selection rates based on characteristics protected under Title VII. If the vendor indicates that such effects exist, the employer should consider whether the tool is truly a business necessity and whether alternatives are available that do not produce these adverse effects.
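As a rough illustration of the kind of selection-rate comparison the EEOC describes, the sketch below computes per-group selection rates and flags groups whose rate falls below four-fifths (80%) of the highest group's rate. The 80% threshold is the familiar "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, a rule of thumb rather than a legal bright line; the group labels and numbers here are hypothetical.

```python
# Illustrative sketch only; not legal advice. Group names and counts are
# hypothetical. The four-fifths threshold is a rule of thumb, and the EEOC
# cautions that smaller differences can still be unlawful in some cases.

def selection_rates(outcomes):
    """outcomes maps each group to a (selected, total_applicants) pair."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (True = substantially lower rate)."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate / highest < threshold
            for group, rate in rates.items()}

# Hypothetical screening results from an algorithmic resume scanner:
results = {
    "group_a": (48, 80),  # 48 of 80 applicants advanced (60%)
    "group_b": (12, 30),  # 12 of 30 applicants advanced (40%)
}
print(selection_rates(results))    # {'group_a': 0.6, 'group_b': 0.4}
print(four_fifths_flags(results))  # {'group_a': False, 'group_b': True}
```

Here group_b's 40% rate is only two-thirds of group_a's 60% rate, below the 80% benchmark, which is the kind of result that should prompt the business-necessity and alternatives analysis described above.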

Furthermore, the EEOC emphasizes that if a vendor's own assessment of a tool is incorrect, the employer may nevertheless be liable for any discriminatory results the tool produces. Thus, employers should not simply rely on vendor assurances but should conduct their own evaluations and adapt their practices if any tools or algorithms disproportionately harm certain demographic groups.