On January 9, 2025, New Jersey Attorney General Matthew J. Platkin and the Division on Civil Rights (“DCR”) issued “Guidance on Algorithmic Discrimination” (“Guidance”). This Guidance stems from the DCR’s newly launched “Civil Rights and Technology Initiative,” which aims to address the risks of bias-based harassment and discrimination arising from the use of artificial intelligence and similar advanced technologies (collectively, “AI”). Governor Murphy’s “Artificial Intelligence Task Force,” created in October 2023, informs the DCR’s initiative.
The DCR’s goal is to help the New Jersey public and covered entities understand how the New Jersey Law Against Discrimination (“NJLAD”) applies to the use of AI. The NJLAD is implicated when AI decision-making results in unlawful discrimination; an AI decision-making tool, in this context, is any tool that automates part or all of the human decision-making process. Notably, the Guidance imposes no new requirements but confirms that the NJLAD protects members of the New Jersey public from algorithmic discrimination arising from the use of AI. The NJLAD prohibits algorithmic discrimination in, among other settings, employment, and it applies to algorithmic discrimination the same way it has applied to other forms of discrimination. As such, the impact of an employer’s decision is the central issue when determining discrimination under the NJLAD.
Specifically, discriminatory outcomes can arise at any stage of an AI decision-making tool’s life cycle: design, training, and deployment. Algorithmic discrimination occurs when AI algorithms make unfair decisions that disadvantage particular groups of people, typically because the algorithms rely on biased data or on historical data that reflects existing inequities. For example, a 2024 Bloomberg study, “OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias,” found algorithmic discrimination when employers used an automated decision-making tool to screen resumes and make hiring decisions, resulting in race- and gender-based discrimination. The study showed that the AI tool inferred applicants’ gender and race from their names and then reproduced correlations between specific job types and protected classes.
According to the Guidance, using AI as an automated decision-making tool may result in discrimination if the tool is not carefully designed, reviewed, and tested. The NJLAD prohibits algorithmic discrimination based on actual or perceived membership in a protected class, and claims may take the form of disparate treatment, disparate impact, or failure to provide reasonable accommodations:
- Disparate treatment occurs when an employer designs or uses AI decision-making tools with the intent to treat members of a protected class differently, or when the AI tool is discriminatory on its face.
- Disparate impact discrimination occurs when AI decision-making tools make, or contribute to making, decisions that disproportionately affect members of a protected class.
- Reasonable accommodation claims may arise when an AI decision-making tool precludes or impedes the provision of a reasonable accommodation, or creates inaccessibility. For example, an AI tool that measures a job candidate’s typing speed may be inaccessible if it cannot account for an applicant with a disability who types on a non-traditional keyboard.
The Guidance states that covered entities may be liable for algorithmic discrimination that results from using an AI decision-making tool, even if a third party developed the tool. Moreover, an employer can be liable under the NJLAD even absent any intent to discriminate. An employer or other covered entity should therefore carefully review its use of any AI decision-making tool and remedy any potential bias to avoid discriminatory outcomes.
Recommended best practices for employers include:
- Ensuring the employer understands how its AI tools work.
- Training employees on AI tools.
- Reviewing and auditing AI tools: although the NJLAD imposes no audit requirement, regular audits help ensure that existing and future uses of AI tools comply with the NJLAD.
- Providing notice to applicants and employees of the employer’s use of AI tools in the decision-making process.
- Monitoring the legal field for updates on the use of AI tools in the decision-making process.
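To illustrate what a basic bias audit of a screening tool might measure, the sketch below applies the “four-fifths rule,” a rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures for flagging possible disparate impact. Note that neither the NJLAD nor the Guidance mandates this particular test, and the group names and counts below are purely hypothetical illustration data.

```python
# Minimal sketch of a disparate-impact screen using the EEOC's
# "four-fifths rule" (a rule of thumb, not an NJLAD requirement).
# All group names and counts are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total

def four_fifths_check(group_stats):
    """Compare each group's selection rate to the highest group's rate.

    Returns (ratios, flagged), where flagged lists groups whose rate
    falls below 80% of the highest-rated group's rate.
    """
    rates = {g: selection_rate(s, t) for g, (s, t) in group_stats.items()}
    top = max(rates.values())
    ratios = {g: r / top for g, r in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
    return ratios, flagged

# Hypothetical screening outcomes: group -> (selected, applicants)
stats = {"group_a": (48, 100), "group_b": (30, 100)}
ratios, flagged = four_fifths_check(stats)
# group_b's rate (0.30) is 62.5% of group_a's (0.48), below the 0.8
# threshold, so group_b would be flagged for closer review.
```

A flag under this rule is only a starting point for review, not a legal conclusion; under the NJLAD, the impact of the decision itself remains the central question.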
Notably, in 2024, the New Jersey Legislature introduced bills (A. 3854 and A. 3911) seeking to regulate employers’ use of AI technology in hiring decisions. A. 3854 would require sellers of AI decision-making tools to undertake an annual bias audit, notify job candidates that such tools are being used, and provide a summary of the most recent bias audit. A. 3911 would require an employer to obtain a candidate’s consent before using AI-enabled video interviews. New Jersey employers and organizations should keep a close eye on these developments.
This summary is for informational purposes only and is not intended to constitute legal advice. This information should not be reused without permission.