When Algorithms Discriminate: AI Bias in the Real World

In 2018, Joy Buolamwini and Timnit Gebru published their groundbreaking Gender Shades study, revealing that commercial facial analysis systems from IBM, Microsoft, and Face++ misclassified the gender of darker-skinned women with error rates of up to 34.7%, compared with less than 1% for lighter-skinned men. The research exposed a systemic pattern: AI systems trained predominantly on data from certain demographics fail dramatically for others (Buolamwini & Gebru, 2018).
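
The Gender Shades methodology itself is simple to reproduce: instead of reporting one aggregate accuracy number, evaluate the classifier separately for each intersectional subgroup. A minimal sketch in Python, using illustrative made-up records rather than the actual benchmark data:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Disaggregate error rates by subgroup, as Gender Shades did.

    `records` is a list of (subgroup, prediction_correct) pairs.
    Aggregate accuracy can look fine while hiding large disparities.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, correct in records:
        totals[subgroup] += 1
        errors[subgroup] += 0 if correct else 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative data only -- not the published benchmark results.
records = (
    [("lighter-skinned men", True)] * 99
    + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 65
    + [("darker-skinned women", False)] * 35
)
for group, rate in sorted(subgroup_error_rates(records).items()):
    print(f"{group}: {rate:.1%} errors")
```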

The problem extends far beyond facial recognition. In healthcare, a widely used algorithm in US hospitals was found to systematically underestimate the health needs of Black patients. The system used healthcare spending as a proxy for illness, but because Black patients historically had less access to care, less money was spent on them at the same level of sickness; the algorithm therefore assigned them lower risk scores and steered treatment away from sicker patients. One 2025 estimate puts the share of medical AI models with bias traceable to skewed training data at around 45% (Obermeyer et al., 2019; Frontiers in AI, 2025).
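
The mechanism is worth spelling out: the model's target variable (spending) was a biased proxy for the thing that actually mattered (illness). A toy simulation, with entirely assumed numbers, shows how a spending-trained model ends up rating equally sick patients as less needy:

```python
import random

random.seed(0)

def simulate_patient(group):
    """Draw illness identically for both groups; spending differs.

    Assumed, illustrative parameters: the underserved group generates
    only 60% of the spending at the same true level of illness,
    reflecting reduced access to care.
    """
    illness = random.gauss(5.0, 1.5)          # true health need
    access = 0.6 if group == "underserved" else 1.0
    spending = illness * access * 1000        # the proxy the model sees
    return illness, spending

for group in ("well-served", "underserved"):
    sample = [simulate_patient(group) for _ in range(10_000)]
    mean_illness = sum(i for i, _ in sample) / len(sample)
    mean_spending = sum(s for _, s in sample) / len(sample)
    print(f"{group}: illness {mean_illness:.2f}, spending ${mean_spending:,.0f}")

# Any model trained to predict spending will score the underserved group
# as "healthier" at the same true illness level -- and recommend less care.
```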

In hiring and HR, the pattern repeats. Amazon famously scrapped its experimental AI recruiting tool, a decision first reported in 2018, after discovering that it penalized resumes containing the word "women's" (as in "women's chess club captain"). More recent research, published in 2024–2025, finds that AI-driven HR analytics continue to disadvantage women, non-binary people, racial minorities, and persons with disabilities, largely by encoding the historical biases present in their training data (ScienceDirect, 2025).
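
Mechanically, this failure is easy to reproduce: fit any model to historically biased hiring decisions and it will learn proxy features for the disadvantaged group. A toy sketch using scikit-learn on synthetic data (the feature name echoes the Amazon reports; the bias strength of -1.0 is an arbitrary assumption):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic resumes: a qualification score, plus a binary flag for
# whether the resume mentions "women's" (e.g., a club or college).
# Qualification is independent of the flag by construction.
qualification = rng.normal(0.0, 1.0, n)
mentions_womens = rng.integers(0, 2, n)

# Historical labels encode past bias: equally qualified candidates who
# mention "women's" were hired less often (assumed penalty: -1.0).
logit = qualification - 1.0 * mentions_womens
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([qualification, mentions_womens])
model = LogisticRegression().fit(X, hired)

print(f"weight on qualification:     {model.coef_[0][0]:+.2f}")
print(f"weight on 'women's' mention: {model.coef_[0][1]:+.2f}")
# The second weight comes out clearly negative: the model has faithfully
# learned the historical discrimination, not merit.
```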

The legal system is also affected. ProPublica's analysis of COMPAS, a recidivism prediction tool used by US courts, found that Black defendants who did not reoffend were nearly twice as likely as white defendants to be falsely labeled high risk. One 2025 estimate suggests that around 60% of criminal justice AI models exhibit bias, largely because they rely on historical crime data reflecting decades of discriminatory policing (ProPublica, 2016; AI Multiple, 2025).
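
ProPublica's headline number comes from comparing false positive rates: among defendants who did not go on to reoffend, how often was each group flagged as high risk? A sketch of that audit over hypothetical records (the real dataset, covering roughly 7,000 defendants from Broward County, Florida, is published at github.com/propublica/compas-analysis):

```python
def false_positive_rate(records):
    """Share of non-reoffenders who were flagged as high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    flagged = sum(1 for r in negatives if r["high_risk"])
    return flagged / len(negatives)

# Hypothetical rows shaped like the COMPAS data, not real outcomes.
records = [
    {"race": "Black", "high_risk": True,  "reoffended": False},
    {"race": "Black", "high_risk": False, "reoffended": False},
    {"race": "Black", "high_risk": True,  "reoffended": True},
    {"race": "white", "high_risk": False, "reoffended": False},
    {"race": "white", "high_risk": False, "reoffended": False},
    {"race": "white", "high_risk": True,  "reoffended": True},
]

for race in ("Black", "white"):
    group = [r for r in records if r["race"] == race]
    print(f"{race}: false positive rate = {false_positive_rate(group):.0%}")
```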

Legal consequences are mounting. In 2024, a $2.3 million class-action settlement was reached after plaintiffs alleged that an AI-based tenant screening system disproportionately excluded low-income Black and Hispanic applicants, in violation of the Fair Housing Act. The EU AI Act, which entered into force in 2024 with obligations phasing in from 2025, classifies AI systems used in employment and the administration of justice as high-risk, subjecting them to mandatory risk management, data governance, and bias testing (EU AI Act, 2024).
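
What a fairness audit might check in practice: one common screen is the disparate impact ratio, comparing selection rates between groups. A minimal sketch; note that the 0.8 threshold comes from the US EEOC's "four-fifths rule" and is used here as an assumption, not a figure taken from the EU AI Act:

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower selection rate to the higher one.

    Values below 0.8 are a conventional red flag under the US
    four-fifths rule; that threshold is an assumption here, not a
    requirement spelled out in the EU AI Act.
    """
    rate_a, rate_b = selected_a / total_a, selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical tenant-screening outcomes for two applicant groups.
ratio = disparate_impact_ratio(
    selected_a=120, total_a=400,   # group A: 30% approved
    selected_b=260, total_b=400,   # group B: 65% approved
)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.46 -> flag for review
```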

Key Sources

  • Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 77–91.
  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica.
  • EU AI Act (2024). Regulation (EU) 2024/1689.
