Employment Law Q&A: What is AI and why does it matter for employers?
What is AI?
AI stands for “artificial intelligence.” Generally, AI is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the development of systems endowed with the intellectual processing characteristics of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.
The Equal Employment Opportunity Commission (EEOC) recently released guidance addressing how the Americans with Disabilities Act (ADA) applies to the use of AI in employment. The guidance defines three key terms:
- Software: The EEOC defines software as information technology programs that tell computers how to perform a given task or function. In the employment realm, and for purposes of the guidance, software includes tools used for resume screening, hiring, workflow management, and employee retention.
- Algorithms: Algorithms encompass sets of instructions that a computer follows to accomplish an identified task. For example, algorithms may be used to rank, evaluate, or make decisions about candidates or current employees.
- Artificial intelligence: Similar to the definition above, the EEOC defines artificial intelligence as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The guidance adds that, in the employment context, AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making employment decisions, and that AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.
How could an employer use AI?
The guidance from the EEOC provides several examples of how AI may come into play in employment decisions:
- resume scanners that prioritize applications using certain keywords;
- employee monitoring software that rates employees on the basis of their keystrokes or other factors;
- ‘virtual assistants’ or ‘chatbots’ that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
- video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
- testing software that provides ‘job fit’ scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived ‘cultural fit’ based on their performance on a game or on a more traditional test.
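To make the first example concrete, the following is a minimal sketch of how a keyword-based resume scanner might rank applications. The keywords, applicants, and scoring rule are entirely hypothetical; this is illustrative only and does not reflect the EEOC’s guidance or any vendor’s actual product.

```python
# Minimal, hypothetical sketch of a keyword-based resume scanner.
# The keywords, applicants, and scoring rule are invented for illustration.

REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def score_resume(resume_text: str) -> int:
    """Count how many predefined keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)

def prioritize(resumes: dict[str, str]) -> list[str]:
    """Rank applicants from most to fewest keyword matches."""
    return sorted(resumes, key=lambda name: score_resume(resumes[name]),
                  reverse=True)

applicants = {
    "Applicant A": "Led project management for a SQL reporting team.",
    "Applicant B": "Ten years of equivalent experience, phrased differently.",
}
print(prioritize(applicants))  # ['Applicant A', 'Applicant B']
```

Note that Applicant B scores zero simply because the resume uses different wording. This kind of mechanical scoring is precisely what creates the screen-out risks discussed in the next section.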
How might the use of AI violate the ADA?
The guidance from the EEOC discusses three areas where an employer’s use of AI could violate the ADA:
- Failure to provide a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.
For example, the EEOC explains that when an employer uses a knowledge test that requires the use of a keyboard or trackpad, the employer may need to provide an accessible version of the test to an employee with limited manual dexterity (or, where an accessible version is not possible, an alternative testing format).
Additionally, where an employer uses a third party, such as a software vendor, to administer and score pre-employment tests, the failure of the vendor to provide a reasonable accommodation required by the ADA would likely result in employer liability, even if the employer was unaware that the applicant reported the need for an accommodation to the vendor.
- The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation.
The EEOC provides the following examples regarding screening out individuals with disabilities:
- Use of video interviewing software intended to analyze an applicant’s problem-solving ability through speech patterns may screen out qualified applicants with a speech impediment.
- Use of a personality test that probes an applicant’s “optimism” may screen out an applicant with a mental illness, such as Major Depressive Disorder, who answers such questions negatively.
- Use of a “chatbot” programmed to reject applicants who report a significant gap in their employment history may screen out a qualified applicant whose gap is the result of a disability (a failure mode sketched in the code following this list).
- The employer adopts and uses, with its job applicants or employees, an algorithmic decision-making tool that violates the ADA’s restrictions on disability-related inquiries and medical examinations. An assessment tool cannot explicitly request medical information from applicants, and it cannot be used to identify an applicant’s medical condition. That said, assessments that identify broad personality traits (such as personality tests) will usually not violate the ADA if they are not designed to reveal a specific diagnosis or condition.
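To see how the “chatbot” example plays out mechanically, here is a minimal, hypothetical sketch; the cutoff and routing logic are invented for illustration and come from neither the EEOC guidance nor any actual product. It contrasts a rigid automatic rejection with a variant that routes long gaps to human review, preserving a path for an applicant to explain a disability-related gap:

```python
# Hypothetical chatbot pre-screen rule; illustrates the ADA "screen out"
# risk described in the EEOC guidance, not any real product's logic.
from dataclasses import dataclass

MAX_GAP_MONTHS = 6  # assumed cutoff chosen by the tool's developer

@dataclass
class Applicant:
    name: str
    employment_gap_months: int
    meets_job_qualifications: bool

def auto_screen(applicant: Applicant) -> str:
    """Rigid rule: any long gap is an automatic rejection."""
    if applicant.employment_gap_months > MAX_GAP_MONTHS:
        return "reject"  # screens out even a qualified applicant
    return "advance"

def screen_with_review(applicant: Applicant) -> str:
    """Safer variant: long gaps go to a human instead of auto-rejection."""
    if applicant.employment_gap_months > MAX_GAP_MONTHS:
        return "human review"  # a disability-related gap can be explained
    return "advance"

candidate = Applicant("Applicant C", employment_gap_months=12,
                      meets_job_qualifications=True)
print(auto_screen(candidate))         # reject (qualified, but screened out)
print(screen_with_review(candidate))  # human review
```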
What should an employer do?
The EEOC's guidance identifies several “promising practices” that employers can adopt to prevent violations of the ADA when using AI:
- Training staff to: (i) recognize and process requests for reasonable accommodation as quickly as possible, including requests to retake a test in an alternative format, or to be assessed in an alternative way, after the individual has already received poor results; and (ii) develop or obtain alternative means of rating job applicants and employees when the current evaluation process is inaccessible or otherwise unfairly disadvantages someone who has requested a reasonable accommodation because of a disability;
- Instructing third-party entities, such as software vendors, to forward all reasonable accommodation requests to the employer, or entering into agreements requiring such third parties to provide reasonable accommodations on the employer’s behalf;
- Using algorithmic decision-making tools that have been designed to be accessible to individuals with disabilities;
- Informing all job applicants and employees who are being assessed that reasonable accommodations are available for individuals with disabilities, and providing clear and accessible instructions for requesting such accommodations;
- Using algorithmic decision-making tools that only measure abilities or qualifications that are truly necessary for the job;
- Describing, in plain language and in accessible formats, the traits that the algorithm is designed to assess, the method by which those traits are assessed, and the variables or factors that may affect the rating; and
- Measuring necessary abilities or qualifications directly, rather than measuring characteristics or scores that merely correlate with those abilities or qualifications (a contrast sketched in the code below).
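As a purely hypothetical illustration of that last practice, the sketch below scores a data-entry work sample directly against its expected output (the ability the job actually requires), rather than relying on a proxy, such as a personality score, that merely correlates with performance:

```python
# Hypothetical contrast between direct measurement and a proxy score.
# The work sample measures the actual job task; the proxy does not.

def score_work_sample(typed: str, expected: str) -> float:
    """Direct measurement: accuracy on the actual data-entry task."""
    if not expected:
        return 0.0
    matches = sum(1 for a, b in zip(typed, expected) if a == b)
    return matches / len(expected)

def score_personality_proxy(optimism_rating: int) -> float:
    """Proxy: a trait score that merely correlates with performance and
    may screen out applicants with, e.g., Major Depressive Disorder."""
    return optimism_rating / 10  # not a measure of the job's actual tasks

print(score_work_sample("1047 Elm St", "1047 Elm St"))  # 1.0 -- job-related
print(score_personality_proxy(3))                       # 0.3 -- a proxy only
```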
Employers looking to use AI or other job-screening and performance-measuring software should ensure that applicants and employees are notified that they can request accommodations. Employers should also work with their vendors to confirm that the software has been tested so that it neither screens out individuals with disabilities nor solicits information about medical conditions. Beyond the ADA, these tools could also be used, inadvertently or otherwise, to discriminate on the basis of other protected characteristics, such as race, national origin, or gender. Employers contracting with a third party to help develop AI tools should have these issues reviewed and analyzed before putting any AI into practice. Accordingly, it is strongly recommended that you enlist experienced employment counsel to navigate this potential minefield of liability before committing to any such tool.