AI Regulation

Navigating the NYC Bias Audit Law for HR Tech platforms

Learn how to ensure compliance, demonstrate fairness, and stay competitive under NYC Local Law 144

Ed Schikurski

Published on
July 25, 2024

Introduction

The use of AI in the workplace is becoming increasingly common. By some estimates, as many as 83 percent of employers and up to 99 percent of Fortune 500 companies now use some form of automated tool to screen candidates for hire. Among other benefits, these AI tools promise to reduce recruitment costs, overcome bias and deliver better candidate matches.

However, applying AI in recruitment is not without challenges. One key challenge is detecting and preventing bias. While AI has the potential to reduce the bias inherent in human-driven decision-making, it also comes with the risk of amplifying existing biases at scale. If left unaddressed, this bias can unfairly hinder individuals’ participation in the economy and increase legal risks for businesses.

The application of AI in recruitment is identified as high-risk in several jurisdictions, leading to rapid developments in laws and regulations in this area. A notable milestone in this regulatory landscape is New York City’s Local Law 144, commonly known as the AI Bias Audit law, which came into effect on July 5, 2023.

What is the NYC AI Bias Audit Law?

The NYC AI Bias Audit Law is a pioneering legislative effort to combat AI bias in recruitment and HR. The law mandates that

  • An independent and impartial bias audit be completed and published before an automated employment decision tool is adopted to support employment and promotion decisions.
  • Employers and employment agencies notify employees and job candidates who are residents of New York City about the use of such tools.

An automated employment decision tool is defined as a tool that uses machine learning, statistical modelling, data analytics, or artificial intelligence to assist or replace discretionary decision-making in employment.

Although it is a local law, its impact extends beyond the city to any company operating within its jurisdiction, including remote positions associated with an office in New York City.

What is a Bias Audit?

A bias audit is an impartial evaluation by an independent auditor to assess the tool’s disparate impact on individuals based on race/ethnicity and sex.

The NYC AI Bias Audit Law requires that the audit be performed by an independent entity not involved in using, developing, or distributing the automated employment decision tool, and having no direct financial interest in the employer or the vendor.

To ensure continuous compliance with the law, the bias audit must be repeated annually. Each violation of the law can result in penalties of up to $1,500.
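At the heart of a bias audit is the impact ratio: each category's selection rate divided by the selection rate of the most-selected category. The sketch below illustrates that calculation on hypothetical screening data (the function name and sample data are illustrative, not part of any official tooling):

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute selection rates and impact ratios per category.

    outcomes: list of (category, selected) pairs, where selected is a bool
    indicating the candidate was advanced by the tool.
    Returns {category: (selection_rate, impact_ratio)}.
    """
    totals = Counter(cat for cat, _ in outcomes)
    selected = Counter(cat for cat, sel in outcomes if sel)
    rates = {cat: selected[cat] / totals[cat] for cat in totals}
    top = max(rates.values())  # selection rate of the most-selected category
    return {cat: (rate, rate / top) for cat, rate in rates.items()}

# Hypothetical outcomes: category A selected 2 of 4, category B selected 1 of 4
data = [("A", True), ("A", True), ("A", False), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False)]
print(impact_ratios(data))
# {'A': (0.5, 1.0), 'B': (0.25, 0.5)}
```

An impact ratio well below 1.0 for a category (here, 0.5 for category B) is the kind of disparity an audit is designed to surface and report.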

What are the opportunities for AI Vendors?

While the NYC AI Bias Audit Law introduces significant challenges, it also creates substantial opportunities for AI vendors who can efficiently address their clients’ new compliance needs. Building a value proposition around helping clients stay compliant with current and upcoming regulations, such as the NYC AI Bias Audit Law, can serve as a significant competitive differentiator, helping to retain and attract clients who value ethical standards and legal compliance.

In summary, the growing use of AI in recruitment presents both opportunities and challenges. Laws like New York City’s AI Bias Audit law aim to ensure that the benefits of AI are realised without compromising fairness and equity. For AI vendors, this regulatory landscape offers a unique chance to lead in ethical AI deployment and compliance, setting them apart in a competitive market.

How can Warden AI help?

While AI vendors themselves may fall outside the direct scope of the AI Bias Audit Law, conducting an independent bias audit can still be very valuable: it helps build trust with vendors and employers and is becoming essential to staying competitive in an increasingly regulated market.

Warden AI's auditing platform addresses the NYC Bias Audit Law challenges, helping AI vendors ensure compliance, transparency, and ethical responsibility. Here’s how Warden AI can support your journey towards compliance and innovation:

  • Automated bias checks: Regularly check AI systems for bias, using various techniques to identify potential areas of non-compliance and recommend corrective actions that help vendors meet the ethical standards demanded by the NYC Bias Audit Law.
  • Proprietary dataset: Overcome insufficient historical data with Warden AI’s extensive, diverse test dataset for probing AI systems for bias issues.
  • Suite of reporting tools: Demonstrate trust and transparency with Warden AI’s live AI Assurance dashboards, comprehensive Audit Reports, and assurance badges.
  • Human oversight: Facilitate human review and intervention, ensuring effective monitoring and control of high-risk AI applications.

Schedule a demo to find out how Warden AI can help you comply with the NYC Bias Audit Law and stay ahead in an increasingly regulated and competitive market.

Start building AI trust today