California’s Fair Employment and Housing Act (FEHA) amendments took effect on October 1.
In Warden AI’s recent webinar, joined by experts from Wilson Sonsini, we explored what the new FEHA amendments mean for vendors, employers, and the broader HR tech market.
While some companies are only just hearing about the regulation, the reality is already here. If you’re building or buying in HR tech, these are the takeaways from the webinar that stood out most.
The FEHA amendments expand who can be held liable, broadening accountability to include ‘agents of the employer,’ which can include vendors.
Nedim explained:
You can’t offload responsibility under FEHA by just adopting ADS and blaming the system. Agents conducting traditional employer functions, like screening or ranking candidates, can be held liable.
But through case law, what we're seeing is that some courts are starting to adopt this idea of applying federal anti-discrimination law against agents of the employer, including those that are using ADS.
In Mobley v. Workday, the court found Workday could be treated as an employer’s agent because its AI tool assessed and recommended candidates, something an HR practitioner would traditionally do.
In other words, traditional anti-discrimination laws are being extended to cover agents performing core employer functions.
Under California’s FEHA, an automated decision system (ADS) is any technology that helps shape or make employment decisions, from resume screeners to video interview tools.
A key concept in how these systems are regulated is whether the ADS is considered a “substantial factor” in a hiring decision.
The “substantial factor” test is wider in scope under California’s FEHA. Colorado SB 205, NYC LL 144, and the EU AI Act each consider whether an ADS is a “substantial factor” in a hiring decision; that nuance offers far less shelter under FEHA.
As Maneesha succinctly said:
While you might say in New York, ‘decisions are human-led, so the AI isn’t a substantial factor,’ you don’t have that argument in California.
This means employers and vendors can’t downplay the role of AI tools to a court. If bias occurs, intentional or not, liability can follow.
While other jurisdictions regulate AI primarily when the system plays a significant role in a hiring outcome, California can hold employers liable for discrimination even when the ADS plays a lesser, or even minor, role in the outcome.
For a recap of how this plays out across the different AI regulations we’ve seen recently, learn more in our article about California’s FEHA.
Unlike NYC LL 144, California’s FEHA doesn’t mandate bias audits outright, but it’s clear that having conducted one is critical if a decision is challenged.
Aren noted:
Relevant to any claim or defense will be the evidence, or lack of evidence, of anti-bias testing, including its quality, recency, and scope.
Maneesha added a caution on privilege:
If you’re going to run bias audits, do them under attorney-client privilege. Otherwise, a negative report could end up in a plaintiff’s hands.
Whether internal or external, bias audits are becoming the standard evidence used to defend automated decision systems.
Regulators will most likely weigh how recently an ADS has been audited relative to the hiring decisions in question, as well as how thorough that auditing was. This applies to both vendors and employers deploying ADSs, making bias auditing an essential step under the amendments.
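To make that concrete, here is a minimal sketch of one widely used check, the four-fifths (adverse impact) rule. The record format, group labels, and 0.8 threshold below are illustrative assumptions, not anything FEHA prescribes.

```python
from collections import defaultdict

# Minimal sketch of a four-fifths (adverse impact) check.
# Record fields and the 0.8 threshold are illustrative assumptions,
# not requirements spelled out in the FEHA amendments.

def selection_rates(records):
    """records: iterable of (group, selected) pairs, selected is a bool."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are the traditional four-fifths red flag."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes by demographic group.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} {flag}")
```

A defensible audit would go well beyond this single ratio, covering more groups and intersections, statistical significance, and the quality, recency, and scope Aren highlighted.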
Vendors and employers must now retain four years of ADS data, from inputs to outputs (for example, scores and rankings), and even datasets used to customize systems.
The law covers what generally needs to be kept (e.g. applications, personnel records, membership records), but it also expands to ADS data.
ADS data can be defined as any data used in or resulting from the application of ADS, such as data provided by or about individual applicants or employees, or data reflecting employment decisions or outcomes.
As Nedim summarized:
It’s not just applications and resumes anymore. It’s the data flowing through the ADS itself that needs to be preserved.
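As a rough illustration of what preserving that data could look like in practice, here is a hypothetical retention record. Every field name below is an assumption made for the sketch; the regulation defines ADS data broadly (inputs, outputs, and datasets used to customize a system) rather than prescribing a schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape for a retained ADS record. Field names are
# illustrative assumptions; the regulation describes ADS data broadly
# rather than prescribing any particular schema.

@dataclass
class ADSRecord:
    applicant_id: str          # who the ADS assessed
    inputs: dict               # data provided by or about the applicant
    outputs: dict              # e.g. scores, rankings, recommendations
    model_version: str         # which version of the system produced this
    training_dataset_ref: str  # pointer to any dataset used to customize it
    decision_outcome: str      # the employment decision that resulted
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = ADSRecord(
    applicant_id="applicant-001",
    inputs={"resume_text": "...", "assessment_answers": "..."},
    outputs={"fit_score": 0.82, "rank": 4},
    model_version="screener-2.3.1",
    training_dataset_ref="datasets/custom-2025-06",
    decision_outcome="advanced_to_interview",
)
# Under the amendments, records like this need to be kept for four years.
```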
Warden’s Audit Trail is positioned to provide legal-grade evidence to defend AI systems.
The audit trail feature is a company's record of fairness. Every audit is backed by continuous monitoring, timestamped logs, versioned datasets, and transparent reporting.
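For readers curious what timestamped, tamper-evident logging can mean mechanically, here is a generic sketch of a hash-chained, append-only audit log. It illustrates the general technique only; it is not Warden’s actual implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

# Generic sketch of a tamper-evident audit log: each entry embeds the
# hash of the previous entry, so altering history breaks the chain.
# An illustration of the technique, not Warden's implementation.

def append_entry(log: list, event: dict) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": log[-1]["hash"] if log else "genesis",
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

log = []
append_entry(log, {"type": "bias_audit", "dataset_version": "2025-06",
                   "result": "impact_ratios_within_threshold"})
append_entry(log, {"type": "model_update", "model_version": "screener-2.3.1"})
```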
The risk profile of AI in HR is different from that of human decision-making, and the law is making it more visible.
Nedim explained:
With a human manager, you can more easily control actions. With AI, you don’t know what proxies or algorithms are driving outcomes. That lack of control is where the added risk comes in.
Controlling these systems is inherently difficult: an ADS may never see a protected characteristic directly yet still lean on a proxy for one, such as a zip code or a graduation year. That difficulty, combined with the need to test for bias and confirm the system isn’t relying on protected characteristics, is where the additional risk comes in.
The FEHA amendments don’t ban AI in recruitment. They make clear that AI is held to the same anti-discrimination rules as humans.
Maneesha concluded:
This doesn’t slow adoption. There’s demand for these tools. But the message is clear: you must stay on top of how they behave. You can’t outsource compliance or close your eyes.
For vendors and employers, the path isn’t simple, but it is navigable. Fairness in AI systems for HR isn’t automatic, but with robust testing and transparency it’s achievable, and defensible.
Learn more about bias testing from Warden.
Check out Wilson Sonsini’s deck on the new FEHA amendments.
Watch the full webinar with Wilson Sonsini here.