NYC Starts Regulating Employer Use of Artificial Intelligence, Indicating a Potential Trend

Go-To Guide:
  • Employers should assess whether the automated tools they use qualify as “automated employment decision tools.”
  • Employers utilizing automated systems should assess whether the tools are biased in their predictions and selections.
  • Employers should monitor federal and state laws and regulations to ensure compliance with passed and impending legislation governing the use of artificial intelligence in the workplace.

Employer use of artificial intelligence (AI or automated systems) is becoming increasingly popular. Neither Congress nor the Executive Branch agencies have formally implemented laws or regulations governing employer use of AI in the workplace or its effects. However, several states and localities have enacted laws[1] or introduced legislation to address the use and/or effects of AI in the workplace. The latest and most sweeping is New York City’s recent enactment of Local Law 144 of 2021 (NYC Law or Law), which prohibits employers and employment agencies from using an automated employment decision tool (AEDT) unless the tool has been subject to a bias audit within one year of its use; information about the bias audit is publicly available; and certain notices have been provided to employees or job candidates. Employers and employment agencies must comply with the Law’s requirements before using an AEDT to substantially help them assess or screen candidates at any point in the hiring or promotion process. However, the bias audit and notice requirements do not apply when an AEDT is used to assess someone who is neither an employee being considered for promotion nor a candidate who has applied for a specific position of employment.

Specifically, the NYC Law sets forth that bias audits, at a minimum, must calculate selection rates and impact ratios for categories which include sex, race/ethnicity, and intersectional categories of sex, ethnicity, and race (e.g., Hispanic or Latino female candidates vs. non-Hispanic or Latino Black or African American male candidates). The Law further sets data requirements on historical and test data; requires that results from the bias audit be made publicly available on the employment section of the employers’ website; and requires notice of the same to candidates and employees. The Law does not provide guidance on what constitutes evidence of bias or discrimination in the use of AI.
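To illustrate the audit metrics the Law describes, the sketch below computes selection rates and impact ratios for hypothetical hiring data. The category labels and counts are invented for the example; an impact ratio here is a category’s selection rate divided by the highest category selection rate.

```python
# Illustrative only: selection rates and impact ratios for invented
# sex/race-ethnicity categories. None of these figures come from the Law.

# (category label, number selected, number of applicants) -- hypothetical data
audit_data = [
    ("Hispanic or Latino female", 18, 60),
    ("Hispanic or Latino male", 25, 70),
    ("Black or African American female", 30, 80),
    ("White male", 45, 90),
]

# Selection rate = number selected / number of applicants in the category.
selection_rates = {
    label: selected / applicants for label, selected, applicants in audit_data
}

# Impact ratio = category selection rate / highest category selection rate.
best_rate = max(selection_rates.values())
for label, rate in selection_rates.items():
    impact_ratio = rate / best_rate
    print(f"{label}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")
```

An actual bias audit under the Law involves further requirements (historical and test data rules, intersectional categories, public posting); this sketch shows only the core arithmetic.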

While there is no private cause of action against employers who do not comply with the NYC Law, employers found in violation of the Law are subject to an initial fine of $500 on the first offense and $1,500 on each offense thereafter. Given the uncertainty as to whether employers will have sufficient data in form and amount to conduct an audit, employers may wish to engage auditors to assist in the audit and reporting of the same. Additionally, employers should review and assess their policies in implementing the use of AI in the workplace.

Although the NYC Law covers fewer protected classes than Title VII, which protects employees and job applicants from employment discrimination based on race, color, religion, sex, and national origin, it takes steps that neither Congress nor federal agencies such as the Equal Employment Opportunity Commission (EEOC) have taken in regulating the effects of AI in the workplace. The NYC Law may be a preview of what employers can expect from the EEOC and other federal agencies that have increased efforts to regulate potential employment-related biases arising from AI usage, and employers may need to prepare for these regulations.

Earlier in 2023, the EEOC held a hearing featuring scholars and data scientists titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” centered on the discriminatory issues related to AI and hiring practices. The hearing previewed where the EEOC’s focus will lie regarding employer-utilized AI. Key takeaways from the hearing include:

– Increased guidance from the EEOC is needed with respect to an employer’s duty to explore less discriminatory alternative selection procedures;

– The EEOC should mandate employer audits of any automated hiring systems in use; and

– The EEOC should develop its own automated governance tools in the form of AI or automated systems that could then provide audit services to corporations deploying automated hiring systems.

In April 2023, the EEOC, Consumer Financial Protection Bureau, Department of Justice’s Civil Rights Division, and the Federal Trade Commission issued a joint statement explaining that existing federal laws and the agencies’ enforcement authorities apply to AI regardless of the technology’s purpose or use. The statement highlighted several sources of discrimination in automated systems: data and datasets relying on and incorporating historical bias; a lack of transparency that makes it difficult for even developers to know whether automated systems are fair; and systems designed on flawed assumptions about the system’s end users or the underlying practices or procedures it may replace.

In May 2023 the EEOC provided guidance on the use of AI in the employment space and the need for employers to ensure that AI used does not violate Title VII. This guidance builds upon May 2022 EEOC guidance issued specifically with respect to individuals with disabilities. “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” guidance provides that employers have an obligation to assess whether AI technology displays a bias against individuals with disabilities, and that employers must have a plan to provide reasonable accommodations to individuals (employees and applicants) about whom AI tools are used to make employment decisions. The guidance also prohibits employers from obtaining and utilizing medical information and making disability-related inquiries through AI tools with respect to employment decisions.

The latest guidance issued by the EEOC regarding the applicability of Title VII to AI in employment puts forth the four-fifths rule for assessing whether AI used in the workplace is biased against protected classes. The four-fifths rule compares the selection rate of a protected class group of applicants/employees with the selection rate of the non-protected group for a particular position assessed using AI. If the protected group’s selection rate is less than 80% of the non-protected group’s rate, there is a presumption that the AEDT produces biased results. While courts have not always found the four-fifths rule appropriate for assessing whether tests are biased, the EEOC has implied that this is the standard by which it will assess whether an AEDT produces biased results. Employers should note that the final regulations implementing the NYC Law use a similar analysis to assess whether an audit demonstrates biased results from the AI used.
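A minimal sketch of the four-fifths check described above, using invented selection rates; the function name and the example numbers are illustrative assumptions, not drawn from the EEOC guidance itself.

```python
# Illustrative four-fifths (80%) rule check on hypothetical selection rates.
def four_fifths_check(protected_rate: float, comparison_rate: float) -> bool:
    """Return True if the protected group's selection rate is at least
    80% of the comparison group's rate (i.e., the threshold is met)."""
    return protected_rate / comparison_rate >= 0.8

# Hypothetical example: 30% of protected-group applicants selected vs. 50%
# of the comparison group -> ratio 0.6, below four-fifths, so adverse
# impact would be presumed under this rule of thumb.
print(four_fifths_check(0.30, 0.50))  # ratio 0.6 -> False (presumed biased)
print(four_fifths_check(0.45, 0.50))  # ratio 0.9 -> True (threshold met)
```

Failing this check does not establish discrimination by itself; it is a screening heuristic that shifts the focus to further validation of the selection procedure.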

Again, while there has been no federal legislation or regulation governing the use of AI, several state and local jurisdictions have introduced or are exploring laws similar to the NYC Law, so employers and other sectors using AI should take heed. The EEOC and other federal agencies’ pledge to vigorously monitor the use of automated systems serves as a warning, and the EEOC has already filed a lawsuit against an online tutoring agency alleging that the company’s online recruiting service was biased against older individuals. Thus, employers should consider:

  • ensuring that AI tools measure job-related skills and attributes;
  • auditing their automated systems, paying close attention to protected class characteristics when building or using AI models;
  • increasing transparency by providing applicants with information sufficient to understand whether and how they are being assessed by an AI tool or to determine if they require an accommodation;
  • seeking legal guidance for online intermediaries such as online job listings, as they may be delivered in a biased way by virtue of the algorithm.

AI is not going away – it has the potential to increase efficiency, improve accuracy, save costs, and improve a myriad of other business functions. Employers should be cognizant of AI’s uses and implications not only because of the ethical and moral importance but also to remain compliant with local, state, and federal regulations.


[1] Maryland prohibits employers from using automated facial recognition in video job interviews without the express consent of the applicant, and Illinois’ biometric laws place certain prohibitions on employers’ use of employees and applicants’ facial and other biometric identifiers (GT Alert, September 2021, Illinois Appellate Court (First District) Concludes Separate Limitations Periods Apply to Different Violations of the Illinois Biometric Information Privacy Act).