Prevent Your AI From Causing Unintentional Disability Discrimination

Author: Robert S. Teachout, XpertHR Legal Editor

August 31, 2022

Artificial intelligence (AI) is transforming HR and the workplace by improving organizations' ability to make data-driven human resource decisions. These new technologies can play a key role in positively shaping an organization's work, workplace and workforce. They can streamline talent acquisition with smart forms, shape curated and personalized onboarding experiences, facilitate collaboration and even make the workplace more democratic and accessible.

When implemented with foresight and care, AI also has the potential to improve compliance and risk management by mitigating discrimination in hiring and promotions. But without proper preparation and precautions, an employer runs the risk of its AI unintentionally screening out qualified candidates for discriminatory reasons, including applicants with disabilities.

The problems in an AI system may arise from inadequate testing, programming bias or the AI tool learning bias from the data about past employer decisions provided for analysis. To help employers recognize the risk of bias and take steps to mitigate it, the Equal Employment Opportunity Commission (EEOC) released new guidance on the Americans with Disabilities Act (ADA) and the use of AI.

Algorithmic tools are often designed to predict whether an applicant can do a job under typical working conditions, or to look for a profile that fits current, successful employees, explained Sharon Rennert, a senior attorney advisor in the EEOC's ADA/GINA Division, during an XpertHR webinar.

"Typical working conditions do not include the use of reasonable accommodation," Rennert reminded attendees, "and a typical profile does not reflect the differences between a successful non-disabled employees and a person with a disability who can be successful if provided with a reasonable accommodation."

An unlawful "screen-out" may occur when an AI tool prevents a job applicant or employee from meeting (or lowers their performance on) a selection criterion, resulting in that individual losing a job opportunity. Take, for instance, a situation in which a chatbot program automatically screens out anyone who has a significant employment gap. Rennert asked, "What if the applicant had a six-month gap due to a mental health condition 20 years earlier, but has since had over 19-and-a-half years of uninterrupted service? In reality, this employee with a disability is very qualified." Although the criterion is neutral on its face, in practice the AI has caused an unlawful discriminatory hiring decision.

There also is the risk of AI learning bias from patterns in the data provided to train the system, patterns that can unintentionally disadvantage individuals on the basis of many characteristics. For example, Amazon developed an automated talent search program to review resumes and vet applicants. Screening was based on patterns the AI learned from the resumes of successful candidates submitted to the company over a 10-year period. Amazon had to quickly halt the program after it became apparent that the system's hiring recommendations were biased against women. It turned out that the resumes used to train the system came mostly from men, reflecting the male dominance of the tech industry.
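The mechanism behind this kind of learned bias can be shown with a deliberately tiny toy model. The resumes and frequency-based scoring below are invented for illustration and are not Amazon's actual system; the point is only that when past hires skew male, terms appearing mostly on women's resumes pick up negative weights that have nothing to do with ability:

```python
from collections import Counter

# Invented toy training data: past candidates' resume terms and hiring outcomes.
past_candidates = [
    ("software engineer chess club", True),
    ("software engineer robotics", True),
    ("software engineer hiking", True),
    ("software engineer women's chess club", False),
    ("software engineer women's coding group", False),
]

hired_terms, rejected_terms = Counter(), Counter()
for resume, hired in past_candidates:
    (hired_terms if hired else rejected_terms).update(resume.split())

def term_score(term):
    # Naive frequency-based weight: positive if the term appeared more among hires.
    return hired_terms[term] - rejected_terms[term]

# "women's" never appears on a past hire's resume, so it scores negatively,
# and new resumes containing it are penalized for reasons unrelated to skill.
print(term_score("women's"))   # -2
print(term_score("robotics"))  #  1
```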

To minimize disadvantaging persons with disabilities when using an AI decision-making tool, Rennert recommended:

  • Using tools that have been pre-tested on individuals with a wide range of disabilities (not just one or two), including those with mental health disabilities;
  • Ensuring that the decision-making tool only measures abilities and skills that are truly necessary for the specific position;
  • Measuring the necessary abilities and skills directly, rather than indirectly through characteristics that are merely correlated with successful performance (see the sketch after this list).
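To illustrate the last recommendation, this hypothetical sketch for a data-entry role (the function names and numbers are invented) contrasts measuring a necessary skill directly with inferring it from a merely correlated characteristic:

```python
# Hypothetical assessment for a data-entry role where typing speed is truly necessary.

def score_directly(words_typed: int, minutes: float) -> float:
    """Direct measurement: administer a timed typing test and score the result."""
    return words_typed / minutes  # words per minute, the skill the job requires

def score_by_proxy(years_continuous_employment: float) -> float:
    """Indirect measurement: infer ability from a correlated characteristic.
    This penalizes a fast typist whose career includes a disability-related
    gap, even though the gap says nothing about typing speed."""
    return years_continuous_employment * 2.0  # invented correlation factor

# A candidate with a career gap but excellent typing ability:
print(score_directly(words_typed=450, minutes=5))     # 90.0 wpm -> clearly qualified
print(score_by_proxy(years_continuous_employment=3))  # 6.0 -> low despite the skill
```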

Employers also should minimize the risk of unintentional disability discrimination by taking proactive steps to provide reasonable accommodations. An employer should clearly and prominently inform all individuals being rated that reasonable accommodations are available for individuals with disabilities, and provide clear and accessible instructions on how to request one. Consider including such a notice in the job profiles posted to internal and external job boards.

In addition, Rennert said, an employer should notify applicants early in the recruiting process that an AI tool will be used to evaluate their applications. Rennert advocates placing the notice near the beginning of the job application form, in large or bold letters, not buried in small print near the end.

The information should describe in plain language (and accessible formats):

  • Traits the algorithm is designed to assess;
  • The assessment method;
  • Variables or factors that may affect the rating; and
  • If known, disabilities that might potentially cause a lower rating.

"Providing this information may help individuals with disabilities determine whether they need to request a reasonable accommodation," Rennert said.

Finally, Rennert explained, it is important for employers to remember that they are accountable for the hiring decisions they make using AI tools, even if they use a third-party provider. When purchasing and implementing a new AI system, HR professionals are responsible for questioning vendors about the algorithms - how they actually work and what they measure - and for continually monitoring and evaluating the results.
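One concrete way to monitor results is to compare the tool's selection rates across groups, in the spirit of the four-fifths rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures. The sketch below and all of its numbers are hypothetical, and the ADA guidance does not prescribe this exact test, but a widening gap in selection rates is a useful red flag to raise with a vendor:

```python
# Hypothetical audit of an AI tool's outcomes; all counts are invented.
outcomes = {
    # group label: (applicants advanced by the tool, total applicants)
    "requested_accommodation":    (12, 80),
    "no_accommodation_requested": (150, 500),
}

rates = {group: advanced / total for group, (advanced, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    # Four-fifths rule of thumb: a selection rate below 80% of the highest
    # group's rate is commonly treated as a sign of possible adverse impact.
    ratio = rate / highest
    flag = "INVESTIGATE" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio to highest={ratio:.2f} -> {flag}")
```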

AI excels at recognizing patterns in data that can help an organization gain a better understanding of what job candidates have to offer. By preparing diligently and taking the necessary precautions in its use of AI tools, an organization can make less biased and more effective recruiting and hiring decisions.