
EEOC and Lawmakers Address Discrimination Risks with Artificial Intelligence

With recent developments in science and technology, artificial intelligence (AI) has entered the workplace. AI may be a useful tool for employers across a range of workplace tasks, whether hiring, evaluating employees, or monitoring their performance. AI may be particularly helpful in recruiting and hiring, as AI and algorithmic decision-making tools can save time, money, and resources and make the selection process more efficient. This is especially true in today's digital hiring environment, when employers may be inundated with applicants and AI permits them to evaluate and analyze data and information more quickly. The Equal Employment Opportunity Commission (EEOC) cites the following as examples of AI selection tools:

  • Resume scanners that prioritize applications using certain key words.

  • Virtual assistants or chatbots that ask applicants about their preliminary qualifications and reject those who do not meet predefined requirements.

  • Video interviewing software that evaluates applicants based on facial expressions.

  • Testing software that provides a job fit score based on personality, aptitude, cognitive skills, or perceived cultural fit.

However, there are distinct drawbacks: the use of AI selection and decision-making tools may result in unintended discrimination and promote bias-based hiring.

EEOC AI Guidance

As a result of the above concerns, on May 18, 2023, the EEOC responded by issuing new guidance on artificial intelligence and Title VII of the Civil Rights Act of 1964 ("Title VII"), entitled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964" (the "EEOC AI Guidance"). The Guidance specifically instructs employers that if an algorithmic tool adversely impacts individuals based on a protected characteristic, an employer's use of such a selection tool may violate Title VII unless the employer can show that it is job related and consistent with business necessity.

Employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in that group that is "substantially less" than the selection rate for individuals in a non-protected group. Further, the Guidance advises that employers may face Title VII liability even if the test was developed by an outside software vendor acting as the employer's agent. In light of the EEOC AI Guidance, employers deciding whether to rely on a software vendor to develop or administer an AI decision-making tool should, at a minimum, ask the vendor whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals in a Title VII protected group. If the vendor states that the tool may result in a substantially lower selection rate for individuals in a certain group, the employer should consider whether use of the tool is job related and consistent with business necessity and whether there are alternatives that can meet the employer's needs with less of a disparate impact.
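
For "substantially less," the Guidance points to the longstanding "four-fifths" rule of thumb: a selection rate for one group that is less than 80% of the rate for the most-selected group is generally regarded as substantially different, although the EEOC cautions that this is a rule of thumb and not a definitive test. The Python sketch below illustrates the underlying arithmetic using entirely hypothetical applicant counts; it is an illustration only, not a compliance tool or legal analysis.

```python
# Illustrative only: a minimal sketch of the "four-fifths" rule of thumb for
# flagging a substantially lower selection rate. All counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical outcomes from an AI screening tool.
groups = {
    "Group A": {"applicants": 200, "selected": 60},   # rate 0.30
    "Group B": {"applicants": 150, "selected": 30},   # rate 0.20
}

rates = {name: selection_rate(g["selected"], g["applicants"])
         for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest
    # Under the four-fifths rule of thumb, a ratio below 0.80 is generally
    # treated as evidence of a substantially different selection rate.
    flag = "possible adverse impact" if ratio < 0.80 else "within four-fifths threshold"
    print(f"{name}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {flag}")
```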

If an employer is in the process of developing a selection tool or is presently using a selection tool that results in an adverse impact on individuals in a protected class, the employer should take steps to reduce the impact or select a different tool. Further, the EEOC instructs that employers should conduct a self-analysis on an ongoing basis to determine whether selection practices have a negative effect on a protected group.

This Guidance builds on previous guidance issued by the EEOC on May 12, 2022, "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees" (the "EEOC ADA Guidance"), in which the EEOC warned that an employer's use of algorithmic decision-making tools may violate the ADA if, for example:

  • The employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.

  • The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation.

  • The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.

A number of states and local jurisdictions have proposed or passed laws addressing these tools, including, but not limited to, those discussed below.

New Jersey Proposed AI Bill

To further address these concerns, a number of state and local lawmakers are proposing and passing legislation that would regulate the use of automated employment decision tools during the hiring process. To minimize the employment discrimination that may result from these tools, the legislation primarily requires that such tools be audited for bias and that employers notify applicants and employees when the tools are used.

On December 5, 2022, New Jersey lawmakers proposed Assembly Bill 4909. Under the bill, an “automated employment decision tool” means any system the function of which is governed by statistical theory, or systems the parameters of which are defined by inferential methodologies, linear regression, neural networks, decision trees, random forests, and other learning algorithms, which automatically filter candidates or prospective candidates for hire or for any term, condition or privilege of employment in a way that establishes a preferred candidate or candidates.

The bill prohibits the sale of automated employment decision tools in the State unless:

  • The tool is the subject of a bias audit conducted within the year prior to selling the tool or offering the tool for sale;

  • The sale of the tool includes, at no additional cost, an annual bias audit service that provides the results of the audit to the purchaser; and

  • The tool is sold or offered for sale with a notice stating that the tool is subject to the provisions of the bill.

In addition, the bill provides that any person who uses an automated employment decision tool to screen a candidate for an employment decision is required to notify each candidate of the following within 30 days of the use of the tool:

  • That an automated employment decision tool, which is subject to an audit for bias, was used in connection with the candidate’s application for employment; and

  • That the tool assessed the job qualifications or characteristics of the candidate.

The bill provides for civil penalties of not more than $500 for a first violation and between $500 and $1,500 for each subsequent violation.

If enacted, the bill would mirror legislation already passed in New York City, which takes effect July 5, 2023.

NYC AI Law Effective July 5, 2023

Enforcement of the New York City law, Local Law Int. 144, begins on July 5, 2023. The law restricts employers from using automated employment decision tools (AEDTs), including AI, in hiring and promotion decisions unless the tool has been the subject of a bias audit by an "independent auditor" no more than one year prior to use. The law also imposes certain posting and notice requirements with respect to applicants and employees. Essentially, it makes it unlawful for an employer or an employment agency to use an AEDT to screen a candidate for employment or an employee for promotion unless:

  • The AEDT has been the subject of a bias audit conducted no more than one year prior to the use of such tool; and

  • The employer or employment agency publishes on its website a summary of the results of the most recent bias audit as well as the distribution date of the AEDT to which such audit applies.

Although the law was enacted in December 2021 and initially scheduled to take effect in January 2023, enforcement was delayed; final regulations were adopted on April 5, 2023, and enforcement begins on July 5, 2023.
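
The final regulations frame these bias audits around selection rates and impact ratios calculated by demographic category; for AEDTs that produce a score rather than a simple selection, the rules generally look to a "scoring rate," i.e., the share of a category receiving a score above the sample's median. The Python sketch below illustrates that calculation with invented scores and generic category labels; it is a simplified illustration only, not a substitute for the independent bias audit the law requires.

```python
# Hypothetical sketch of a scoring-rate impact ratio of the kind NYC Local
# Law 144 bias audits generally report for AEDTs that produce scores: the
# share of each category scoring above the overall sample median, divided
# by the highest category's share. All data below are invented.
from statistics import median

# Invented AEDT scores, grouped by demographic category.
scores = {
    "Category 1": [72, 85, 90, 66, 78, 88],
    "Category 2": [61, 70, 83, 58, 74, 69],
}

all_scores = [s for group in scores.values() for s in group]
cutoff = median(all_scores)  # sample median across every applicant

scoring_rates = {
    category: sum(s > cutoff for s in group) / len(group)
    for category, group in scores.items()
}
top = max(scoring_rates.values())

for category, rate in scoring_rates.items():
    print(f"{category}: scoring rate {rate:.2f}, impact ratio {rate / top:.2f}")
```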

New York Proposed Law

On January 9, 2023, a proposed law, A00567, was submitted in New York State that would add §203-f to the Labor Law, establishing criteria for the use of automated employment decision tools and providing for enforcement of violations of such criteria. Under the proposed law, for an employer to implement or use an automated employment decision tool:

  • No less than annually, a disparate impact analysis must be conducted to assess the actual impact of any automated employment decision tool used by any employer to select candidates for jobs within the state. This disparate impact analysis must be provided to the employer but need not be publicly filed and shall be subject to all applicable privileges.

  • A summary of the most recent disparate impact analysis of such tool as well as the distribution date of the tool to which the analysis applies must be made publicly available on the employer’s or employment agency’s website prior to the implementation of such tool.

  • Any employer using an automated employment decision tool must provide to the department such summary of the most recent disparate impact analysis provided to the employer on that tool.

Takeaways

The use of artificial intelligence is on the rise in the workplace.

Employers should stay on top of these rapidly changing developments to ensure compliance and consider:

  • Ensuring that, if they are using such AI tools, the tools comply with legal requirements and do not result in a disparate impact on a protected group.

  • Reviewing/updating recruiting and hiring policies, practices, and procedures.

  • Training managers and supervisors and those with hiring authority on the parameters of these new laws.

This summary is for informational purposes only and is not intended to constitute legal advice. This information should not be reused without permission.