
California’s FEHA Amendments: 5 Webinar Insights from Wilson Sonsini

AI Regulation / 01 Oct 2025


California’s Fair Employment and Housing Act (FEHA) amendments are now in effect as of October 1. 

In Warden AI’s recent webinar, joined by experts from Wilson Sonsini, we explored what the new FEHA amendments mean for vendors, employers, and the broader HR tech market.

Our speakers included Nedim, Maneesha, and Aren, whose insights are quoted throughout this recap.

While some companies are only just hearing about the regulation, the reality is already here. If you’re building or buying in HR tech, these are the takeaways that stood out the most from the webinar.

1. Liability is on both vendors and employers

The FEHA amendments expand who can be held liable, broadening accountability to ‘agents of the employer,’ a category that can include vendors.

Nedim explained:

You can’t offload responsibility under FEHA by just adopting ADS and blaming the system. Agents conducting traditional employer functions, like screening or ranking candidates, can be held liable.

But through case law, what we're seeing is that some courts are starting to adopt this idea of applying the federal anti-discrimination law against agents of the employer, including those that are using ADS.

In Mobley v. Workday, the court found Workday could be treated as an employer’s agent because its AI tool assessed and recommended candidates, something an HR practitioner would traditionally do.

In other words, traditional anti-discrimination laws are being extended to cover agents performing core employer functions. 

2. FEHA broadens the definition of an ADS

Under California’s FEHA, an automated decision system (ADS) is any technology that helps shape or make employment decisions, from resume screeners to video interview tools.

A key concept in how these systems are regulated is whether the ADS is considered a “substantial factor” in a hiring decision. 

This is where FEHA is notably broad. Colorado SB 205 and NYC LL 144 hinge on whether an ADS is a “substantial factor” in (or “substantially assists”) a hiring decision. FEHA applies no such threshold: the law can reach an ADS regardless of how central its role was.

As Maneesha succinctly said: 

While you might say in New York, ‘decisions are human-led, so the AI isn’t a substantial factor,’ you don’t have that argument in California.

This means employers and vendors can’t downplay the role of AI tools to a court. If bias occurs, intentional or not, liability will follow. 

While other jurisdictions regulate AI primarily when the system plays a significant role in a hiring outcome, California can hold employers liable for discrimination even when the ADS plays a minor role in that outcome.

‘Substantial factor’ overview across other regulations 

A quick recap on what this means across the different AI regulations we’ve seen recently: 

  • California’s FEHA: Prohibits employers from using an ADS that has a discriminatory outcome for a protected class. This regulation does not require a plaintiff to prove the ADS was a “substantial factor” in the final hiring decision. 

  • Colorado SB 205: Defines a high-risk AI system as one that is a “substantial factor” in making a decision, such as hiring. For these systems, bias impact assessments are required, as is notice to job applicants. 

  • NYC LL 144: Mandates third-party bias audits for any automated employment decision tool that “substantially assists” or replaces discretionary decision making in hiring or promotion. 

  • EU AI Act: Classifies AI systems used for recruitment as “high-risk,” with some exemptions, and imposes obligations on these systems, regardless of their role in the recruitment process.

Learn more about the differences across regulations in our article about California’s FEHA.

3. Bias audits are the best defense

Unlike NYC LL 144, California’s FEHA doesn’t mandate bias audits outright, but it’s clear that having conducted one is critical if a system is challenged.

Aren noted:

Relevant to any claim or defense will be the evidence, or lack of evidence, of anti-bias testing, including its quality, recency, and scope.

Maneesha added a caution on privilege:

If you’re going to run bias audits, do them under attorney-client privilege. Otherwise, a negative report could end up in a plaintiff’s hands.

Whether internal or external, bias audits are becoming the standard evidence used to defend automated decision systems. 

Regulators will most likely weigh how recently an ADS was audited relative to the hiring decisions it informed, as well as how thorough the auditing was. 

This applies to both vendors and employers deploying ADSs, essentially making it a critical step under the amendments.
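The webinar didn’t prescribe an audit methodology, but the core disparate-impact check behind most hiring bias audits, and the one NYC LL 144’s audits center on, is a selection-rate comparison. Here is a minimal sketch in Python; the group labels, counts, and function names are our own illustration, not Warden’s methodology:

```python
# Minimal sketch of the disparate-impact check at the core of most hiring
# bias audits. Group labels and counts are hypothetical, invented for
# illustration; a real audit covers every protected class in scope.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants the ADS selected (e.g. advanced to interview)."""
    return selected / applicants

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    A ratio below 0.8 is the classic 'four-fifths rule' red flag."""
    rates = {group: selection_rate(s, n) for group, (s, n) in outcomes.items()}
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical ADS screening outcomes: group -> (selected, total applicants)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(impact_ratios(outcomes))  # {'group_a': 1.0, 'group_b': 0.625} -> investigate group_b
```

A real audit segments by role and location, covers every protected class in scope, and, per Maneesha’s caution above, is often run under privilege.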

4. Recordkeeping matters to the court 

Vendors and employers must now retain four years of ADS data, from inputs to outputs (for example, scores and rankings), and even the datasets used to customize systems. 

The law covers what needs to be kept generally (e.g. applications, personnel records, membership records), but also expands to ADS data. 

ADS data can be defined as any data used in or resulting from the application of ADS, such as data provided by or about individual applicants or employees, or data reflecting employment decisions or outcomes.

As Nedim summarized:

It’s not just applications and resumes anymore. It’s the data flowing through the ADS itself that needs to be preserved.
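FEHA describes what must be kept, not how to store it. As a purely illustrative sketch of a record capturing “inputs to outputs” for one ADS-assisted decision, with field names we’ve invented for the example:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ADSDecisionRecord:
    """One ADS-assisted decision, retained for at least four years.
    Field names are our invention; FEHA describes the data, not a schema."""
    applicant_id: str   # data provided by or about the applicant
    ads_name: str       # which automated decision system ran
    model_version: str  # ties the outcome to a specific version of the system
    inputs: dict        # features the ADS actually consumed, e.g. parsed resume data
    score: float        # ADS output, e.g. a ranking or match score
    outcome: str        # the resulting employment decision or recommendation
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ADSDecisionRecord(
    applicant_id="cand-0412",
    ads_name="resume_screener",
    model_version="2025.09.1",
    inputs={"years_experience": 6, "skills_matched": 11},
    score=0.82,
    outcome="advanced_to_interview",
)
# Archive alongside the application itself, not in place of it.
print(json.dumps(asdict(record), indent=2))
```

The point is that application materials alone no longer suffice: the system version, inputs, score, and outcome need to travel together and survive for at least four years.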

Warden’s Audit Trail is positioned to provide legal-grade evidence to defend AI systems. 

The audit trail feature is a company's record of fairness. Every audit is backed by continuous monitoring, timestamped logs, versioned datasets, and transparent reporting. 
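The webinar doesn’t detail how Warden builds those logs, but for timestamped logs to hold up as evidence, they generally need to be tamper-evident. One common technique, shown here as a generic sketch of the general approach rather than Warden’s actual design, is hash-chaining entries:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list[dict], event: dict) -> None:
    """Append a timestamped event whose hash covers the previous entry's
    hash, so altering any earlier entry later breaks the whole chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "event": event,
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

log: list[dict] = []
append_entry(log, {"type": "bias_audit", "impact_ratio": 0.91})
append_entry(log, {"type": "dataset_version", "dataset_id": "v14"})
# A verifier recomputes each hash in order; any mismatch reveals tampering.
```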

5. Risk isn’t reduced by using AI; it shifts

Using AI in HR doesn’t remove hiring risk so much as reshape it, and the law is making that shift more visible.

Nedim explained:

With a human manager, you can more easily control actions. With AI, you don’t know what proxies or algorithms are driving outcomes. That lack of control is where the added risk comes in.

Controlling these systems is inherently difficult. That difficulty, combined with the need to test for bias and ensure the system isn’t relying on protected characteristics or proxies for them, is where the risk shifts.

The final takeaway: more guardrails, fewer roadblocks

The FEHA amendments don’t ban AI in recruitment. They make clear that AI is held to the same anti-discrimination rules as humans.

Maneesha concluded: 

This doesn’t slow adoption. There’s demand for these tools. But the message is clear: you must stay on top of how they behave. You can’t outsource compliance or close your eyes.

For vendors and employers, the path isn’t simple, but it is navigable: 

  • Audit systems regularly, and under privilege if possible

  • Clarify roles in contracts with vendors and agents

  • Preserve ADS data for at least four years

  • Frame AI tools honestly, not as a “black box,” but as systems subject to AI bias testing

Slide from Wilson Sonsini FEHA Amendments Webinar with Warden AI

Building fairer AI systems for HR isn’t automatic. But with robust testing and transparency, it’s achievable, and defensible.

Learn more about bias testing from Warden. 

Check out Wilson Sonsini’s deck on the new FEHA amendments. 

Watch the full webinar with Wilson Sonsini here. 
