Jack & Jill Stress-Test Agentic AI for Bias Risk with Warden Assurance

See how Jack & Jill tackle AI bias, proxy discrimination, and emerging HR regulations with dual-bias testing and independent Warden Assurance.

Jack & Jill’s mission to support candidates and employers

The pressure on recruiters and candidates is real. High application volume, fragmented workflows, and inconsistent evaluation standards create friction on both sides of the hiring process.

Jack & Jill’s mission is to reduce that friction by introducing structured, AI-driven infrastructure into hiring. At the center of the platform are two AI agents, Jack & Jill, designed to support both candidates and employers throughout the process.

These agents guide structured interactions, evaluate responses against predefined, job-relevant criteria, and generate outputs intended to bring greater consistency to hiring decisions.

AI trust is paramount in agentic AI

As Matt puts it, trust is both the solution and the problem.

Many teams do not fully trust the AI systems they are using. They do not know whether recommendations are fair, whether they introduce bias, or whether they would stand up to scrutiny.

That scepticism is understandable. Most teams deploying AI in recruitment cannot fully answer the questions that matter most. Is this recommendation fair? Could it be introducing bias we are not aware of? If a candidate challenged this decision, would it hold up? The honest answer, in many cases, is that they do not know.

That uncertainty creates operational friction. Recruiters override recommendations. Teams duplicate review processes. Organizations carry both human and algorithmic bias risk without fully understanding either.

Fairness matters especially in agentic AI, because talent leaders need evidence that systems are behaving as intended.

The need for fairness across the hiring ecosystem

Jill works on the enterprise side, matching candidates to specific roles, so fairness must be built in from the start to address buyer concerns.

“How do you know your AI recruiter is not biased?” is the question talent leaders ask most often.

Additionally, talent leaders using AI in HR are seeing real regulatory frameworks come into force. The EU AI Act, California FEHA, FCRA, and NYC Local Law 144 all introduce new expectations around transparency, accountability, and monitoring.

Candidates must be informed, decisions must be explainable, and organizations need evidence that their systems are behaving as expected.

Keeping proxy bias at bay

Proxy bias is one of the more insidious challenges in AI-driven hiring, and one of the easiest to overlook. Proxy bias is when a system appears to ignore protected characteristics (like race, gender, age), but still discriminates because it uses other variables that stand in for them.

The problem is more widespread than many teams realize. An applicant's name can signal ethnicity or gender. A zip code can serve as a proxy for race or socioeconomic background. Even the phrasing used in a resume or cover letter can carry demographic signals that quietly shape how a system scores a candidate, without it being immediately visible in the outputs.

To combat this, the Jack & Jill platform redacts identifying candidate information to reduce the potential for proxy bias to seep in.
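To make the idea concrete, here is a minimal sketch of what proxy-signal redaction can look like. The patterns, placeholder tokens, and `redact` function are illustrative assumptions, not Jack & Jill's actual pipeline:

```python
import re

# Hypothetical illustration: strip common proxy signals (names, zip codes,
# gendered pronouns) from free-text candidate material before scoring.
PRONOUNS = re.compile(r"\b(hers|his|him|her|he|she)\b", re.IGNORECASE)
ZIP_CODE = re.compile(r"\b\d{5}(?:-\d{4})?\b")

def redact(text: str, candidate_name: str) -> str:
    # Replace the candidate's name, then mask location and gender signals.
    text = text.replace(candidate_name, "[CANDIDATE]")
    text = ZIP_CODE.sub("[ZIP]", text)
    text = PRONOUNS.sub("[PRONOUN]", text)
    return text

print(redact("Maria lives at 90210. She has 5 years of experience.", "Maria"))
# [CANDIDATE] lives at [ZIP]. [PRONOUN] has 5 years of experience.
```

A production system would go further (inferring signals from phrasing, schools, or employers), but the principle is the same: remove the proxy before the model ever sees it.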

Bias auditing is an effective way to stress-test whether proxy signals for protected characteristics are influencing outcomes.

Jack & Jill stress-test with AI bias auditing

As part of becoming a Warden Assured platform, Jack & Jill implements a dual-bias testing methodology. This means the system is evaluated using two complementary statistical techniques: disparate impact analysis and counterfactual testing.

Disparate impact testing evaluates whether the AI system adversely impacts one protected group relative to another.
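A common way to operationalize this is the four-fifths rule: compare selection rates between groups and flag ratios below 0.8. The sketch below is a simplified illustration with made-up data; Warden's actual methodology is not detailed in this article:

```python
# Minimal disparate impact check using the four-fifths rule.
# 1 = advanced to interview, 0 = rejected (illustrative data only).
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% selected
group_b = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]  # 50% selected

ratio = impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")  # 0.62 — below the 0.8 threshold
if ratio < 0.8:
    print("potential adverse impact: flag for review")
```

In practice this is run per protected group, per hiring stage, with statistical significance tests on top of the raw ratio.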

Counterfactual testing systematically stress-tests model outcomes by modifying or removing proxy information (for example, names, gendered terms, or age signals) to determine whether those proxies materially influence decisions.
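A toy version of that stress test: change only a proxy token (here, a name) and measure whether the score moves. `score_resume` below is a deliberately biased stand-in for a real model, since we don't have access to Jack & Jill's; `counterfactual_gap` is a hypothetical helper:

```python
# Toy counterfactual test: swap a proxy token and compare model scores.
def score_resume(text: str) -> float:
    # Stand-in scorer, deliberately biased to show what the test catches.
    score = 0.5
    if "engineer" in text:
        score += 0.3
    if "Emily" in text:  # proxy leakage: the name should not matter
        score -= 0.2
    return score

def counterfactual_gap(template: str, value_a: str, value_b: str) -> float:
    """Score difference when only the proxy token changes."""
    return abs(score_resume(template.format(name=value_a))
               - score_resume(template.format(name=value_b)))

gap = counterfactual_gap("{name} is a senior engineer.", "Emily", "Greg")
print(f"score gap: {gap:.2f}")  # 0.20 — a nonzero gap flags proxy influence
```

A gap of zero across many such swaps is evidence that the proxy is not influencing decisions; a consistent nonzero gap is exactly the failure mode counterfactual testing exists to surface.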

Jack & Jill implements both techniques, alongside redaction applied before candidates are matched, creating a stress-testing environment within the platform.

Why Jack & Jill chose ongoing AI assurance

Jack & Jill did not want fairness to rest on internal validation alone.

Internal reassurance is rarely sufficient. Buyers, legal teams, and regulators expect independent assurance, particularly when AI systems influence employment outcomes.

The company therefore sought an AI assurance partner with specific expertise in talent acquisition systems, rather than a generalist auditor.

The objective was not a one-time certification but continuous oversight of the company’s AI system.

As Matt explains:

“We consult with the experts. Having Warden on board is reassuring to our customers. We wanted to show we go beyond the minimum requirement and give confidence that our AI system is safe and fair.”

Independent, continuous validation signals maturity.

Jack & Jill's AI Assurance Dashboard

The road ahead for Jack & Jill

Jack & Jill are moving from being a “fair-by-design” platform to an enterprise-safe, defensible AI platform.

The emphasis on independent audits, ongoing monitoring, and regulation signals that their next phase is about continuing to earn and prove trust under scrutiny.

Read Jack & Jill’s report on bias in recruitment here.

Learn more about AI bias auditing in our AI Bias Auditing Lexicon.
