Learn about disability bias in hiring and how AI can help
Disability bias in hiring is a critical issue affecting equitable employment opportunities worldwide. According to the World Economic Forum, over 1 billion people across the globe live with a disability, yet the global employment rate for disabled individuals is roughly half that of non-disabled people.
This reality highlights the systemic barriers and discrimination preventing many disabled individuals from accessing meaningful employment opportunities globally. Exclusion is also expensive: the cost of excluding people with disabilities from the workforce can be as much as 7% of a country’s GDP.
Disability bias in hiring is a deeply ingrained issue stemming from stereotypes, misconceptions, and systemic barriers. Globally, around 1.3 billion people, or 16% of the global population, have a disability.
Despite their capabilities, disabled individuals are often excluded from the workforce. For instance, in the U.S., only 19.1% of people with disabilities were employed in 2022, compared to 63.7% of people without disabilities.
In an AI-driven hiring process, disability bias happens when the automated systems or algorithms involved blindly favour specific groups of people while penalising others.
Take AI interviewing software, for example. These tools typically score candidates on factors such as speech patterns, eye contact, and response times. Even when a candidate has disclosed a disability to their employer, an AI system may not fully account for how that disability could influence these factors.
As a result, candidates might be unfairly evaluated based on traits that are unrelated to their qualifications or ability to do the job.
This example underscores the importance of equitable and fair hiring solutions, especially when AI is part of the process. The need is all the more pressing given the widespread adoption of such technologies: Harvard Business Review reports that 86% of employers are reducing or removing human oversight in early hiring stages in favour of AI systems.
AI models may unintentionally incorporate biases present in historical data or societal patterns, which can lead to disability-based discrimination. For example, training data might be skewed toward candidates without disabilities if historical hiring data underrepresented individuals with disabilities.
If a company’s hiring history has generally overlooked candidates with disabilities by applying implicit or explicit filters that disadvantage them, the training data used to develop AI recruitment tools will reflect this trend.
The AI model learns from patterns in past hiring decisions, so if candidates without disabilities were more commonly hired, the system may interpret this as a desirable trend, reinforcing an unintended bias against individuals with disabilities.
For example, historical data might show fewer hires of candidates who requested workplace accommodations or disclosed disabilities, which the AI model could then treat as a negative indicator in applicants.
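To make this concrete, here is a minimal sketch in Python of how a model can absorb this kind of bias. It uses entirely synthetic data, not any real system’s training pipeline: the `disclosed_disability` feature and the simulated hiring history are hypothetical. The point is that a standard classifier trained on biased outcomes learns disclosure as a negative signal, even though it says nothing about ability to do the job.

```python
# A minimal sketch (synthetic data, not any vendor's real pipeline):
# a model trained on historical hiring data in which disclosed
# disabilities correlated with rejection learns that correlation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

qualification = rng.normal(size=n)               # job-relevant skill score
disclosed_disability = rng.binomial(1, 0.15, n)  # 1 = disclosed a disability

# Simulated biased history: past decisions weighted qualifications but
# also penalised disclosure (the -1.5 term is the historical bias).
hired = (qualification - 1.5 * disclosed_disability
         + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(
    np.column_stack([qualification, disclosed_disability]), hired
)

# The learned coefficient on disclosure comes out strongly negative:
# the model has absorbed the historical bias as a "desirable" pattern.
print("qualification coef:        ", model.coef_[0][0])
print("disability disclosure coef:", model.coef_[0][1])
```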
When left unchecked, AI has the potential to amplify these biases on a large scale, leading to hiring discrimination against individuals with disabilities.
The EU AI Act now includes a provision specifically addressing accessibility. Article 16 states that no “high-risk” AI system can be deployed in the EU unless it meets the required accessibility standards safeguarding people with disabilities from inherent bias and discrimination.
Additionally, recital 57 of the Act explicitly states that AI systems used in work-related scenarios must be scrutinised for their potential to discriminate against individuals with disabilities.
Since recruitment systems fall within the “high-risk” category, companies can work towards compliance by commissioning bias audits from providers like Warden AI. But more on that later…
In the US, Colorado SB205 requires transparency in the use of AI for recruitment, ensuring candidates are informed about how AI systems evaluate them and how discrimination risks are mitigated.
Disability status is among the 13 protected characteristics requiring risk monitoring and mitigation under this law.
Both acts provide robust legal frameworks to prevent workplace discrimination, including the use of AI systems in hiring.
They align with responsible AI principles by mandating that automated systems avoid outcomes that could lead to bias or discrimination.
AI assurance is the outcome of a bias audit: a structured review process designed to identify and evaluate biases in AI systems, especially in areas like recruitment. In the context of disability bias, an AI audit evaluates how the AI system’s outputs compare across candidates who do and do not identify as disabled.
At Warden AI, we test AI systems for bias using real-world data and counterfactual analysis. This black-box testing approach helps evaluate an AI system’s real-world impact, and Warden reports the results transparently through live dashboards and reports.
Black-box testing focuses on evaluating specific inputs and outputs rather than the internal mechanics of an AI system. This is valuable in bias audits, as it allows an evaluation of the AI’s performance across various groups without needing to understand the minutiae of how the system was developed.
Warden AI conducts technical bias audits in two ways:
The first is disparate impact analysis, which helps us detect whether an AI system disproportionately affects certain demographic groups. This approach checks if the AI inadvertently “discriminates” by causing certain groups to be less successful in the hiring process, despite similar qualifications or experience.
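As an illustration, here is a minimal sketch of a disparate impact check using the “four-fifths” rule of thumb common in US hiring guidance. The figures and the `impact_ratio` helper are hypothetical and do not represent Warden AI’s exact methodology.

```python
# A minimal, illustrative disparate impact check: compare selection
# rates across disability status and apply the four-fifths threshold.
def impact_ratio(selected_a, total_a, selected_b, total_b):
    """Selection rate of group A relative to group B."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical audit sample: outcomes of an AI screening step.
ratio = impact_ratio(
    selected_a=42, total_a=120,   # candidates who disclosed a disability
    selected_b=310, total_b=700,  # candidates who did not
)

print(f"Impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Potential adverse impact against disabled candidates")
```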
The second is counterfactual analysis, in which we assess how the AI system would respond if a specific attribute, such as disability status, age, or gender, were hypothetically altered. For instance, if an older candidate were presented to the AI with the same qualifications as a younger one, would the AI make the same decision? This method allows us to see whether outcomes shift based purely on changing demographic factors.
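Here is a minimal sketch of a counterfactual flip test applied to disability status. The candidate fields, the `score_candidate` stand-in for the system under audit, and the deliberately biased dummy model are all hypothetical; a real audit would call the production AI system itself.

```python
# A minimal, illustrative counterfactual flip test: score the same
# candidate twice, changing nothing but the disability disclosure,
# and measure any shift in the model's output.
from copy import deepcopy

def counterfactual_gap(candidate: dict, score_candidate) -> float:
    """Score difference caused purely by flipping disability status."""
    flipped = deepcopy(candidate)
    flipped["disclosed_disability"] = not candidate["disclosed_disability"]
    return score_candidate(candidate) - score_candidate(flipped)

candidate = {
    "years_experience": 6,
    "skills_match": 0.82,
    "disclosed_disability": True,
}

# Dummy model with a built-in penalty, so the test has something to find.
biased_model = lambda c: c["skills_match"] - (0.10 if c["disclosed_disability"] else 0.0)

gap = counterfactual_gap(candidate, score_candidate=biased_model)
print(f"Score gap from flipping disability status: {gap:.3f}")
# A non-zero gap means the outcome shifted based purely on a protected
# attribute, despite identical qualifications.
```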
Incorporating fairness into AI-driven hiring processes is essential for non-discriminatory recruitment, especially in combating disability bias.
As this issue becomes more visible, the need for AI bias audits to ensure fair outcomes across disability status grows. Through regular audits and transparent reporting, companies not only safeguard themselves from potential legal repercussions, but also foster trust with candidates and stakeholders alike.
By partnering with platforms like Warden AI, organizations can identify and mitigate bias, ensuring their hiring practices remain compliant, diverse, and inclusive.
If you use AI in your hiring tools and are interested in learning more about AI bias audits and assurance, schedule a call with our team today.