NYC’s Law Governing Automated Employment Decision Tools Takes Effect July 5

Starting Wednesday, July 5, employers in New York City must comply with Local Law 144 and the Department of Consumer and Worker Protection (DCWP) rules regulating the use of Automated Employment Decision Tools (AEDTs) found in software used during the application or promotion process. The law and regulations govern any AEDT, defined as any process that is “derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making.”

To comply with these new requirements, employers first need to determine whether any of the software their HR professionals use during the hiring or promotion process relies on an AEDT to “substantially assist or replace discretionary decision making” by humans to score, classify, or recommend NYC candidates or employees. The NYC law is broader than the laws Illinois and Maryland enacted several years ago governing the use of artificial intelligence and facial-recognition technology in the hiring process, and covers software that HR departments commonly use during the hiring and promotion processes.

If an AEDT is used, employers must: (1) confirm that a bias audit has been conducted; (2) provide at least 10 business days’ notice to the applicant or employee that software utilizing an AEDT is being or will be used; (3) explain the qualifications the AEDT will use during the assessment; (4) disclose the data source and type of AEDT being used, along with the employer’s data retention policy (if not disclosed elsewhere); and (5) inform the applicant or employee that they may request an alternative means of assessment (or a “reasonable accommodation” under other laws).

The initial bias audit is just the beginning of the employer’s obligations: employers must ensure a bias audit of the AEDT is conducted annually. In addition, the employer must publish a summary of the audit results before using the AEDT, generally on the employer’s website. The new law also requires these bias audits to be conducted by “Independent Auditors” who exercise “objective and impartial judgment on all issues within the scope of a bias audit of an AEDT.” The auditor cannot have a “direct financial interest or material indirect financial interest” in an employer or entity that uses the AEDT or in the vendor that develops or distributes the software.
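Although the statute leaves audit methodology to the DCWP rules, the core of a bias audit is a straightforward calculation: for each demographic category, the auditor computes a selection rate (or, for tools that score candidates, a scoring rate) and an impact ratio comparing that rate to the rate of the most favored category. The Python sketch below illustrates that arithmetic; the category names and counts are hypothetical, and this simplified demonstration is no substitute for an audit performed by an independent auditor on actual historical data.

# Illustrative impact-ratio arithmetic for a Local Law 144 bias audit.
# All category names and counts here are hypothetical.

# Applicants and selections per demographic category.
results = {
    "Category A": {"applicants": 200, "selected": 60},
    "Category B": {"applicants": 150, "selected": 30},
    "Category C": {"applicants": 100, "selected": 25},
}

# Selection rate = number selected / number of applicants in the category.
rates = {c: d["selected"] / d["applicants"] for c, d in results.items()}
top_rate = max(rates.values())

# Impact ratio = category's selection rate / highest category's selection rate.
for category, rate in rates.items():
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / top_rate:.2f}")

Run against these hypothetical numbers, the output shows Category B selected at roughly two-thirds the rate of Category A, the kind of disparity a published audit summary is meant to surface.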

The new law also provides remedies for violations: civil penalties include a $375 fine for a first violation and at least $500 for each subsequent violation, not to exceed $1,500. Violations of the notice and audit requirements each carry separate penalties.

This new regulation comes on the heels of the EEOC’s May 2023 guidance to employers, which likewise addresses bias audits, notice, and opt-out provisions. It also follows the April 2023 Joint Statement of the EEOC, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the Department of Justice, which reiterated the federal agencies’ determination to enforce existing civil rights protections should AI-driven software in any “automated system” produce a biased outcome in any arena, not just decisions relating to employment.1

Other jurisdictions are debating similar bills limiting the use of AI in employment-related decision making or requiring bias audits. The District of Columbia is considering the Stop Discrimination by Algorithms Act of 2023, which would prohibit algorithmic discrimination, require D.C. employers to conduct annual third-party bias audits of the algorithms used in their AI programs, and require notice to employees and applicants about the use of AI in employment decisions. California is considering a law that would impose liability on employers if AI-driven software used in employment decisions results in a discriminatory impact. This year alone, more than 160 bills or regulations related to AI-driven software have been proposed in state legislatures.

And, for global employers, the EU’s Artificial Intelligence Act continues to move forward, with the European Parliament adopting its official negotiating position on June 14, 2023. The Act would regulate the use of AI in many areas, not just employment, and establishes four risk-based categories: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. AI used for employment purposes is expected to be classified as “high risk” and therefore subject to greater regulation.


1 Kelly Dobbs Bunting, Greenberg Traurig Labor & Employment Practice shareholder, and Jena M. Valdetero, shareholder and co-chair of Greenberg Traurig’s U.S. Data Privacy and Cybersecurity Practice, interviewed EEOC Commissioner Keith Sonderling on the firm’s Asked & Answered podcast. The two-part conversation broaches AI, data privacy, and employment law issues. Listen here.