
Automation Bias: Why Humans Blindly Follow AI Recommendations — Even When Wrong

Automation bias — the tendency to over-rely on automated systems even when they produce errors — is one of the best-documented risks of human-AI interaction.

Key research findings across multiple studies:

  • Skitka et al. (1999): Participants performing simulated flight tasks with automated monitoring aids failed to notice critical errors because they trusted the automation
  • Goddard et al. (2012): Systematic review of clinical decision support systems found consistent omission errors (failing to act when AI doesn't flag a problem) and commission errors (following incorrect AI advice)
  • Automation bias increases with trust: the more reliable a system appears, the less users verify its outputs
  • Expert users are not immune: even experienced professionals exhibit automation bias

How Modern AI Amplifies the Bias

Modern LLMs amplify automation bias because they present information in confident, authoritative language — even when hallucinating. Unlike older systems, which presented raw data, LLMs present narratives and arguments, making their errors harder to detect.

Sources

Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: a systematic review of frequency, effect mediators, and mitigators. JAMIA, 19(1), 121-127.
Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991-1006.
