On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) released a new technical assistance document to address concerns over the use of algorithmic software and AI in the hiring process. In this document, the EEOC warned employers using this type of software that they may be violating the Americans with Disabilities Act (“ADA”) and provided some examples of best practices. The following summary will help you navigate this new and evolving field of employment software.

What are AI and algorithmic software? How do I know if my company is using them?

The EEOC defines “software” as information technology programs or procedures that provide instructions to a computer on how to perform a given task or function. An “algorithm” is a set of instructions that a computer follows to accomplish a task. Algorithmic software, then, is a program or procedure that uses set parameters to accomplish a task. Algorithmic software comes in many shapes and sizes; two common examples are hiring software that follows set instructions for screening applicants and software that gathers various employee data for performance evaluations.
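To make that definition concrete, here is a minimal sketch of what rule-based (non-AI) screening software might look like. The criteria, field names, and threshold below are hypothetical, chosen only for illustration; real products are far more elaborate.

```python
# Hypothetical rule-based applicant screener: the parameters are fixed
# and change only when a human edits them.
MIN_YEARS_EXPERIENCE = 3               # fixed parameter, set by a human
REQUIRED_SKILLS = {"python", "sql"}    # fixed parameter, set by a human

def passes_screen(resume: dict) -> bool:
    """Return True only if the resume meets every hard-coded criterion."""
    enough_experience = resume.get("years_experience", 0) >= MIN_YEARS_EXPERIENCE
    skills = {s.lower() for s in resume.get("skills", [])}
    return enough_experience and REQUIRED_SKILLS.issubset(skills)

# This applicant is rejected purely by the fixed rules above.
applicant = {"years_experience": 2, "skills": ["Python", "SQL"]}
print(passes_screen(applicant))  # False
```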

“AI” stands for artificial intelligence, and it operates similarly to algorithmic software. The difference is that while algorithmic software uses a set group of parameters or instructions that humans must change, AI can change those parameters by itself. Through complex processes, AI software can modify the algorithm it operates on to accomplish its task more efficiently. This makes it a powerful and flexible tool that can help companies predict trends and adjust strategies in real time. Examples of this software include automatic or real-time workflow analysis, performance data gathering, and predictive employee efficiency modeling.
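By contrast, a machine-learning tool tunes its own parameters from data rather than waiting for a human to edit them. The sketch below, a single-feature logistic model trained by gradient descent on made-up numbers, illustrates only that self-adjustment; it is not a depiction of any particular vendor’s product.

```python
import math

# Hypothetical historical data: years of experience -> hired (1) or not (0).
data = [(1.0, 0), (2.0, 0), (4.0, 1), (5.0, 1)]

weight, bias = 0.0, 0.0   # parameters start arbitrary...
learning_rate = 0.1

def predict(x: float) -> float:
    """Logistic model: estimated probability that an applicant is hired."""
    return 1.0 / (1.0 + math.exp(-(weight * x + bias)))

# ...and the software adjusts them itself, pass after pass over the data.
for _ in range(1000):
    for x, label in data:
        error = predict(x) - label
        weight -= learning_rate * error * x
        bias -= learning_rate * error

print(round(predict(3.0), 2))  # learned estimate for a new applicant
```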

If your company uses software for automatic resume screening, recruiting, video interviewing, performance data gathering, or workflow analytics, for example, you may be using AI or algorithmic software.

I am using AI or algorithmic software. How do I know if I am violating the ADA?

The EEOC explains that the three most common ways that AI or algorithmic software can violate the ADA are:

Intentionally or unintentionally screening out an individual for employment, even though that individual could do the job with a reasonable accommodation. A “screen out” occurs when a disability prevents a job applicant or employee from meeting a selection criterion, or lowers their performance on it, and the applicant or employee loses a job opportunity as a result. A disability could have this effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether. (A brief code sketch after these three examples illustrates how a screen out can happen mechanically.)

Not providing a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. For example, an applicant who has limited dexterity because of a disability may report that they would have difficulty taking a knowledge test that requires the use of a keyboard, trackpad, or other manual input device. In that case, the test would not accurately measure the applicant’s knowledge, and the employer must provide an accessible version of the test (perhaps by administering it orally) as a reasonable accommodation, unless doing so would cause undue hardship.

Violating the ADA’s restrictions on disability-related inquiries and medical examinations. An algorithmic decision-making tool that could be used to identify an applicant’s medical conditions would violate these restrictions if it were administered before a conditional offer of employment. Not all algorithmic decision-making tools that ask for health-related information make “disability-related inquiries” or constitute “medical examinations,” however. For example, a personality test does not pose “disability-related inquiries” merely because it asks whether the individual is “described by friends as being ‘generally optimistic,’” even if being described that way might somehow be related to certain mental health diagnoses. Note, however, that even if a request for health-related information does not violate the ADA’s restrictions on disability-related inquiries and medical examinations, it still might violate other parts of the ADA. For example, if a personality test asks questions about optimism, and an applicant with Major Depressive Disorder (“MDD”) answers those questions negatively and loses an employment opportunity as a result, the test may “screen out” that applicant because of MDD.
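The “screen out” in the first example is the easiest to see in code. In this minimal hypothetical sketch, a knowledge test blends accuracy with typing speed under a fixed cutoff, so an applicant with limited dexterity fails despite answering every question correctly; the weights, cutoff, and scoring formula are invented for illustration.

```python
PASSING_SCORE = 70  # fixed cutoff (hypothetical)

def score_assessment(correct: int, total: int, words_per_minute: float) -> float:
    """Blend knowledge with input speed; speed is capped at 60 WPM."""
    accuracy = correct / total                   # knowledge component, 0.0-1.0
    speed = min(words_per_minute, 60.0) / 60.0   # speed component, 0.0-1.0
    return 100 * (0.5 * accuracy + 0.5 * speed)  # speed weighted like knowledge

# Perfect knowledge, but a disability limits typing to 15 words per minute.
score = score_assessment(correct=20, total=20, words_per_minute=15.0)
print(score, score >= PASSING_SCORE)  # 62.5 False -> screened out
```

An accommodated version of the test, untimed or administered orally, would measure the knowledge component alone and let the same applicant pass.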

What can I do to prevent violating the ADA with my algorithmic software or AI?

The EEOC has provided best practices for companies using algorithmic software or AI to help ensure they are not violating the ADA:

  • Consistently and regularly evaluating the software used to make sure it is not inappropriately screening out applicants;
  • Advertising specific examples of reasonable accommodations and their availability;
  • Guaranteeing that all computer-based tools are accessible to individuals with disabilities;
  • Providing substantive training and information to applicants and employees regarding the data gathered about them and the metrics that the software will use in its evaluation; and
  • Supplying applicants and employees with clear and concrete instructions on how to request reasonable accommodations.

Every company is different, and best practices vary. However, employers should be clear and direct about their policies and accommodations, make sure that individuals with disabilities have room to advocate for themselves, and maintain a diligent and informed understanding of their employment software.

The Labor and Employment team wishes to gratefully acknowledge the significant contribution of Zechariah McGugan, a summer associate.