Age discrimination in hiring remains a pressing issue, often excluding older candidates from equal employment opportunities. A 2023 AARP study revealed that 1 in 5 US adults over 50 report having experienced age discrimination since turning 40. Job seekers over the age of 50 face twice the job search duration of younger workers, according to the OECD. In an AI-driven hiring process, this discrimination may emerge subtly, making it harder to detect. This is where AI bias audits play a crucial role—they help identify age biases embedded within automated systems, ultimately contributing to fairer hiring practices.
What is age bias in AI hiring?
In an AI-driven hiring process, age bias occurs when automated systems or algorithms used in recruitment unintentionally favour certain age groups over others. For instance, in job ad targeting, AI-driven ad platforms might show job ads mainly to younger audiences because the ads are set to target people aged 18-35, meaning older applicants may never even see the openings and so lose the chance to apply. AI systems may also screen resumes by prioritising those containing keywords such as “junior”, “tech-savvy”, or “recent graduate” and other signals associated with younger candidates, causing older candidates to be screened out automatically, even if they’re highly qualified. This leads to a narrower talent pool that doesn’t reflect the diverse age ranges that bring valuable perspectives and experience to a company.
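To make the screening problem above concrete, here is a hypothetical sketch (not any real vendor's system) of a naive keyword filter. Note that it never looks at age directly, yet the keywords it rewards correlate strongly with younger candidates:

```python
# Hypothetical resume screen: keywords chosen for illustration only.
# Although age is never an input, these terms act as proxies for it.
AGE_CODED_KEYWORDS = {"junior", "tech-savvy", "recent graduate"}

def passes_screen(resume_text):
    """Advance only resumes containing at least one target keyword."""
    text = resume_text.lower()
    return any(kw in text for kw in AGE_CODED_KEYWORDS)

print(passes_screen("Recent graduate with Python internship"))  # True
print(passes_screen("20 years leading engineering teams"))      # False
```

A highly experienced candidate is rejected here purely for lacking "younger" vocabulary, which is exactly the indirect discrimination a bias audit is designed to surface.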
Ensuring equitable hiring practices is crucial in today’s digital hiring landscape, where AI is increasingly used to evaluate candidates. A recent survey found that nearly 70% of employers plan to implement AI tools in the coming year to screen and eliminate applicants without human oversight, and some may even conduct entire interviews using AI. As a result, ensuring fairness is not only the right choice; it also helps companies meet compliance standards, expand their talent pool, and build a more diverse workforce.
The role of AI age bias assurance and auditing
AI assurance is the outcome of a bias audit, which is a structured review process designed to identify and evaluate biases in AI systems, especially in areas like recruitment. In the context of age bias, an AI audit evaluates how the AI system’s outputs compare across different age groups.
Technical details about AI bias auditing
The AI’s decisions are tested across different demographic groups to check for patterns of bias. Auditors use fairness metrics to measure whether the AI applies consistent criteria, and they compare outcomes across groups to see if any one group is unfairly penalised. For Warden AI's audits, we use a black-box testing approach to evaluate an AI system’s real-world impact. Black-box testing focuses on actual inputs and outputs rather than the internal mechanics of how the AI was developed. This approach is especially valuable in bias audits, as it lets us see how the AI performs across various demographic groups without needing to understand every aspect of its underlying code or algorithms. There are two methods of bias evaluation that we use:
1. Disparate Impact Analysis
This analysis helps us detect whether an AI system disproportionately affects certain demographic groups. It checks whether the AI inadvertently “discriminates” by causing certain groups to be less successful in the hiring process despite similar qualifications or experience.
2. Counterfactual Analysis
In counterfactual analysis, we assess how the AI system would respond if a specific attribute, such as age or gender, were hypothetically altered. For instance, if an older candidate were presented to the AI with the same qualifications as a younger one, would the AI make the same decision? This method allows us to see if outcomes shift based purely on changing demographic factors.
Building trust with transparent AI practices
Overlooking AI bias can carry several consequences. Companies found to have discriminated by age in AI-driven hiring may face reputational damage, legal repercussions, and a loss of trust among prospective candidates. In 2022, for instance, the EEOC (Equal Employment Opportunity Commission) sued iTutorGroup, a China-based tutoring company, for using application software that automatically rejected older applicants, in violation of the Age Discrimination in Employment Act: women aged 55 and over and men aged 60 and over were rejected outright. The case, which resulted in a $365,000 settlement, stands as a significant example of the legal consequences companies may face for age discrimination in AI-driven hiring.
Conduct continuous AI bias auditing
AI companies can mitigate reputational and legal risks by conducting regular, continuous AI audits to identify and address bias early. Through consistent auditing, companies can evaluate their products for bias and fairness and catch potential issues before they escalate, which helps create a safer, more equitable hiring environment.
Publish transparent AI audit and assurance reports
By transparently communicating the findings of an AI audit, a company shows its commitment to fairness and accountability. Companies like Beamery and Popp have demonstrated this commitment by publishing the findings of their AI audits and assurance through Warden’s Assurance Platform. Such transparency not only benefits the company’s reputation and builds trust with candidates, but also strengthens its position in meeting regulatory compliance and maintaining a competitive edge.
Conclusion
Incorporating fairness into AI-driven hiring processes is essential for non-discriminatory recruitment, especially in combating age bias. As this issue becomes more visible, the need for AI bias audits to ensure fair outcomes across all age demographics grows. Through regular audits and transparent reporting, companies not only safeguard themselves from potential legal and reputational repercussions but also foster trust with candidates and stakeholders alike.
By partnering with platforms like Warden AI, organisations can identify and mitigate bias, ensuring their hiring practices remain compliant, diverse, and inclusive. If you use AI in your hiring tools and are interested in learning more about AI audits and assurance, schedule a call with us today.