OFFICIAL PUBLICATION OF THE INDIANA BANKERS ASSOCIATION

Vol. 108 2024 Issue 3 (May/June)

HR Topics: Artificial Intelligence

Legal Risks With Use of AI for Employment Purposes


The use of artificial intelligence and algorithmic decision-making tools by employers in connection with employment matters, such as recruiting, hiring, evaluating productivity and performance, and monitoring employees, is becoming more prevalent.

What are some of the ways the technology is being used? An employer may, for example, scan resumes or applications for certain keywords and exclude candidates whose materials do not contain the specified criteria, or use a chatbot or other AI tool to conduct an initial screening before deciding whom to interview. Employers are also using technology to track productivity and to rank applicants or employees based on identified criteria. Much like traditional personality assessments, various technologies are also designed to select applicants or employees for preferred personality traits, aptitudes, and physical or cognitive abilities.
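
To make the first example concrete, a keyword screen can be as little as a few lines of code. The sketch below is purely illustrative; the keyword list, function name and sample resumes are invented here and do not reflect any particular vendor's product.

    # Hypothetical sketch of an automated keyword screen (illustrative only).
    # The keyword list, function and sample resumes are invented for this
    # example and are not drawn from any particular vendor's product.
    REQUIRED_KEYWORDS = {"project management", "sql", "budgeting"}

    def passes_keyword_screen(resume_text: str) -> bool:
        """Return True only if the resume mentions every required keyword."""
        text = resume_text.lower()
        return all(keyword in text for keyword in REQUIRED_KEYWORDS)

    applicants = {
        "A-101": "Led budgeting and project management; advanced SQL reporting.",
        "A-102": "Ten years of team leadership and financial planning.",
    }

    # Applicants who never used the exact keywords are silently excluded --
    # which is where unintended disparate impact can creep in.
    shortlist = [app_id for app_id, resume in applicants.items()
                 if passes_keyword_screen(resume)]
    print(shortlist)  # ['A-101']

Even this trivial screen illustrates the legal concern discussed below: who is excluded turns entirely on the employer's chosen keywords, and the employer, not the software, answers for the result.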

In May 2023, the Equal Employment Opportunity Commission (EEOC) issued a technical assistance bulletin, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” to provide guidance to employers on how anti-discrimination laws apply to the use of technology. Discrimination based on protected characteristics, such as race, age, gender, disability, national origin and gender identity, is prohibited by federal and state law. The EEOC has made it clear that it is not trying to prevent the use of AI and other computer-assisted technologies, but it warns employers about the legal risks and the potential for liability. Employers will be held liable if the AI or any other tools they use have a disparate impact on the basis of those protected characteristics.

Disparate impact is a way of proving discrimination absent evidence of intentional discrimination. It is often referred to as “unintentional discrimination.” The criteria being used by the employer are not discriminatory on their face but have a “disproportionately large negative effect” on the basis of a protected characteristic. In other words, the criteria exclude people with a certain protected characteristic at a disproportionately higher rate. Unless an employer can show that the criteria that have a disparate impact are job-related and consistent with business necessity, and that no similarly effective but less discriminatory alternatives are available, the practice is unlawful.

The practice is also unlawful if the criteria are designed to reveal a protected characteristic (e.g., a mental or physical impairment), screen for personal traits connected with some other protected characteristic, or make decisions based on those traits.
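
The EEOC's technical assistance bulletin discusses the long-standing “four-fifths rule” as one rule of thumb for spotting a disproportionately large negative effect: if the selection rate for one group is less than four-fifths (80%) of the rate for the most-favored group, the practice may indicate adverse impact. The sketch below, with invented counts, shows the arithmetic; it is a screening heuristic, not a legal safe harbor.

    # Minimal sketch of the four-fifths rule of thumb, using invented counts.
    # A selection rate below 4/5 (80%) of the most-favored group's rate is
    # one indicator of possible adverse impact. It is a rule of thumb, not a
    # safe harbor: smaller differences can still be unlawful in context.
    def selection_rate(selected: int, applicants: int) -> float:
        return selected / applicants

    # Hypothetical outcomes of an algorithmic screen, grouped by a protected
    # characteristic (all numbers are illustrative only).
    outcomes = {
        "group_a": selection_rate(48, 80),  # 60% selected
        "group_b": selection_rate(12, 40),  # 30% selected
    }

    highest = max(outcomes.values())
    for group, rate in outcomes.items():
        impact_ratio = rate / highest
        flag = "REVIEW" if impact_ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.0%}, ratio={impact_ratio:.2f} -> {flag}")
    # group_b's ratio is 0.30 / 0.60 = 0.50, well under 0.8, so this screen
    # would warrant closer scrutiny for disparate impact.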

These same issues have been raised when employers use personality tests or assessments, and employers have been sued under the Americans with Disabilities Act, Title VII and the Age Discrimination in Employment Act. Those lawsuits involved allegations over:

  • questions that implicated protected characteristics;
  • bias in administration and scoring;
  • impermissible medical questions;
  • whether the test suggested a disability or perceived disability based on the results; and
  • whether the test was administered in a discriminatory manner based on age.

As with personality assessments or tests, AI and algorithmic decision-making tools must not be used to weed out persons based on protected characteristics, and the instructions or criteria the technology applies must not result in excluding applicants or employees on that basis.

In one seminal case, Karraker v. Rent-A-Center, Inc., the Seventh Circuit (the federal court of appeals that covers Indiana) found that an employer’s use of the Minnesota Multiphasic Personality Inventory (MMPI) had the likely effect of excluding applicants and employees with mental disorders, because the test was designed to reveal mental illnesses. The test therefore amounted to an impermissible medical examination under the ADA and could not be used in the pre-offer stage of the hiring process. The court also found that the employer’s practice of requiring employees seeking management positions to take the test violated the ADA, because the test was designed, at least in part, to reveal mental disorders: elevated scores could be used in diagnoses of certain mental disorders. The court distinguished between tests that are designed to identify a mental disorder or impairment (medical exams) and tests that measure personality traits such as honesty, preferences and habits (not medical exams).

Employers using AI and algorithmic decision-making tools must remember that they have an ongoing obligation to ensure that their processes, whether human or computerized, do not intentionally or unintentionally discriminate against applicants or employees on the basis of any protected characteristic. Outsourcing the processes to a software vendor is not a defense to a claim of discrimination. Therefore, it is important to properly vet vendors and understand what steps the vendors have taken to assess or validate whether their software/tools have a disparate impact on protected groups.

Employers should understand the technology they are using and why. If it is being used to screen applicants for certain qualities or criteria, the employer should be prepared to identify and support the qualities required for the position (e.g., drive to succeed, dependability, teamwork, attention to detail, people focus) and to explain how the technology serves the employer’s legitimate business needs.

Employers should also actively monitor or self-analyze the technology they are using for potential disparate impacts (e.g., whether an employee is disadvantaged if they have a disability).
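
Continuing the hypothetical four-fifths sketch above, one way to operationalize that self-analysis is a periodic audit that recomputes selection rates for each characteristic the employer tracks. All field names and records below are invented for illustration.

    from collections import defaultdict

    # Hypothetical periodic audit: recompute selection rates by group for
    # each tracked characteristic and flag four-fifths ratios below 0.8.
    # The records and field names are invented for illustration.
    records = [
        {"disability": "yes", "selected": False},
        {"disability": "yes", "selected": True},
        {"disability": "no",  "selected": True},
        {"disability": "no",  "selected": True},
    ]

    def audit(records, characteristic):
        selected = defaultdict(int)
        total = defaultdict(int)
        for record in records:
            group = record[characteristic]
            total[group] += 1
            selected[group] += int(record["selected"])
        rates = {g: selected[g] / total[g] for g in total}
        highest = max(rates.values())
        return {g: (rate, rate / highest) for g, rate in rates.items()}

    for group, (rate, ratio) in audit(records, "disability").items():
        status = "REVIEW" if ratio < 0.8 else "ok"
        print(f"disability={group}: rate={rate:.0%}, ratio={ratio:.2f} -> {status}")

Running such an audit at regular intervals, and keeping the results, helps an employer show it took its monitoring obligation seriously rather than relying solely on a vendor's assurances.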

Employers must also remember that the anti-discrimination laws, including an employer’s obligation to provide reasonable accommodations for disabilities, apply equally to applicants and employees. Applicants with disabilities have the right to reasonable accommodations in the hiring process. Employers using AI or other technology (e.g., chatbots, video interviewing) may have an obligation to make reasonable accommodations for applicants and employees with visual or hearing impairments.

The information in this article is provided for general information purposes only and does not constitute legal advice or an opinion of any kind. You should consult with legal counsel for advice on your institution’s specific legal issues.

Debbie grew up watching her father practice law, and seeing him help people resolve their problems inspired her to become a lawyer. With a focus on employment litigation and counseling, Debbie defends employers against discrimination claims, wage and hour violations, retaliation claims, unfair competition claims and FLSA collective actions. She also handles a wide range of business litigation matters.

Email Debra at DMastrian@AmundsenDavisLaw.com.


Amundsen Davis LLC is a Diamond Associate Member of the Indiana Bankers Association.
