AI Bias

How the EU AI Act addresses AI bias

Learn about the EU AI Act's measures for mitigating bias in AI systems

Eduard Schikurski

Published on September 12, 2024

As AI systems are adopted across more industries, they are often seen as tools for enhancing efficiency and reducing human bias. By relying on data-driven decision-making, these systems have the potential to mitigate discrimination and produce fairer outcomes. However, this promise is only as good as the AI's design: a system that is not carefully built can replicate the very systemic biases present in its training data.

AI bias in action: the Workday lawsuit

A notable example is the recent class action lawsuit against Workday, a provider of applicant tracking systems (ATS), which accuses it of AI-driven hiring discrimination on the basis of race, age, and disability. The lawsuit claims that Workday trained its AI models on historical data without addressing potential discrimination already present in that dataset.

The EU AI Act’s stance on AI bias

Recognizing the risks of bias in AI, the European Union has introduced the EU AI Act, which came into force on August 1, 2024. While the Act does not explicitly define AI bias, it includes numerous provisions aimed at detecting and mitigating bias in AI systems. The Act identifies AI bias as a substantial risk to individuals’ rights and mandates that companies take active steps to ensure fairness and prevent discriminatory outcomes.

Provider obligations to mitigate AI bias under the EU AI Act

For high-risk AI systems, the Act sets out obligations for providers to identify and mitigate bias:

Design and data management

  • Risk management: Implement a system to assess and monitor AI systems for bias at every lifecycle stage.
  • Impact assessment: Assess AI’s potential negative impact on vulnerable groups, considering aspects such as gender equality and accessibility.
  • Inclusive design: Ensure diverse development teams and active stakeholder participation.
  • Data quality: Ensure training, validation, and testing datasets are relevant, sufficiently representative, and as error-free as possible.
  • Statistical properties: Ensure that data has the appropriate statistical properties to mitigate possible biases (see the sketch after this list).
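
The Act does not prescribe how to verify these statistical properties. As a minimal sketch of one common check, the snippet below compares group representation in a training set against reference population shares; the attribute names, values, and tolerance threshold are illustrative assumptions, not requirements of the Act.

```python
from collections import Counter

# Hypothetical training records; real ones would come from the provider's dataset.
training_records = [
    {"gender": "female", "hired": 1},
    {"gender": "female", "hired": 0},
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 0},
    {"gender": "male", "hired": 1},
]

# Assumed reference shares for the population the system will be used on.
reference_shares = {"female": 0.5, "male": 0.5}
TOLERANCE = 0.10  # flag deviations larger than 10 percentage points

counts = Counter(record["gender"] for record in training_records)
total = sum(counts.values())

for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    if abs(observed - expected) > TOLERANCE:
        print(f"{group}: observed share {observed:.0%} vs expected {expected:.0%}")
```

A check like this is only a starting point: whether a dataset is "sufficiently representative" also depends on the system's intended purpose and deployment context.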

Operations and documentation

  • Continual learning: For systems that continue to learn post-deployment, design them to minimize biased outputs.
  • Detailed documentation: Provide thorough documentation for deployers covering foreseeable misuse, system performance, and risks.
  • Sensitive data handling:
    • Data re-use: Implement technical measures to limit data re-use.
    • Security measures: Use state-of-the-art privacy methods such as pseudonymization (a minimal sketch follows this list).
    • Access controls: Secure data with strict access control and document all access.
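
To illustrate the kind of pseudonymization technique the Act refers to, here is a minimal sketch that replaces a direct identifier with a keyed hash, so records cannot be re-linked to individuals without the separately stored key. The field names and key handling are simplified assumptions for illustration.

```python
import hmac
import hashlib

# In practice the key lives in a secrets manager, never in source code.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier to a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "applicant@example.com", "score": 0.82}
record["email"] = pseudonymize(record["email"])
print(record)  # the same input always yields the same pseudonym
```

Unlike plain hashing, the keyed approach means someone who can guess the set of possible identifiers still cannot recompute the pseudonyms without the key.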

Deployer obligations to mitigate AI bias under the EU AI Act

Deployers of AI systems also have significant responsibilities to ensure the ethical and fair operation of AI:

  • Usage according to instructions: Ensure that AI systems are used in accordance with the providers’ instructions for use.
  • Data relevance: Make sure that input data is representative and relevant.
  • Monitoring: Continuously monitor and log the system’s operation, including checking for biased outcomes.
  • Data protection assessments: Carry out data protection impact assessments.
  • Incident reporting: Inform providers of any serious incidents including those related to AI bias or discrimination.

Many deployers also implement assurance mechanisms such as independent bias audits to further mitigate risks and enhance fairness and transparency; a common audit metric is sketched below.
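
One widely used metric in such audits is the adverse impact ratio, drawn from the US "four-fifths rule": each group's selection rate is divided by the highest group's rate, and ratios below 0.8 are flagged for review. Here is a minimal sketch, assuming a simple log of decisions with a hypothetical group attribute.

```python
from collections import defaultdict

# Hypothetical decision log: (group, selected) pairs recorded in production.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected

rates = {group: selected[group] / totals[group] for group in totals}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    status = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{status}]")
```

In a real audit, the log would capture enough context to investigate flagged disparities, and the threshold would follow whatever legal standard applies in the deployer's jurisdiction.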

The EU AI Act’s exception to GDPR

The AI Act introduces an exception allowing providers of high-risk AI systems to process special categories of personal data where this is strictly necessary to detect and correct bias. This is permitted only when alternatives such as anonymized or synthetic data are inadequate, and the processing must still comply with the GDPR, including deleting the data once the bias has been corrected or its retention period has expired.

This exception applies solely to high-risk systems, underscoring the need to balance AI capabilities with individuals' privacy rights.

How Warden can help with EU AI Act compliance

Warden’s auditing platform addresses many key requirements of the EU AI Act, helping AI vendors ensure compliance and ethical responsibility:

  • Pre-market testing: Warden rigorously tests AI systems before market release, identifying non-compliance and recommending corrective actions.
  • Bias detection and mitigation: Warden analyzes AI systems for biases and suggests modifications to improve fairness.
  • Transparency and explainability: Warden helps vendors provide clear explanations of how their AI systems make decisions, building trust with users.
  • Post-market monitoring: Warden’s platform continuously monitors AI systems, enabling providers and deployers to keep their systems in check over time.

Schedule a demo to learn how Warden AI can help your organization comply with the EU AI Act and stay ahead in an increasingly regulated market.

Start building AI trust today