The Hallucination Problem: A Comprehensive Survey of Why AI Confidently Makes Things Up

Huang et al. (2023) published a comprehensive survey on the hallucination problem in large language models — the tendency of AI to generate plausible but factually incorrect information:

Types of Hallucination

  • Factual hallucination: Generating false facts with high confidence
  • Faithfulness hallucination: Contradicting the source material or prior context
  • Input-conflicting: Generating outputs that diverge from the user's input or instructions (one instance of each type is sketched below)
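
To make the categories concrete, the following minimal Python sketch pairs each type with a hypothetical source/output example. All texts are invented for illustration; none come from the survey:

    # Illustrative sketch: one hypothetical instance of each hallucination type
    # from the taxonomy above. The texts are invented for illustration and do
    # not come from the survey.
    from dataclasses import dataclass

    @dataclass
    class HallucinationExample:
        kind: str    # taxonomy label
        source: str  # input or knowledge available to the model
        output: str  # hypothetical model response
        note: str    # why the response counts as a hallucination

    EXAMPLES = [
        HallucinationExample(
            kind="factual",
            source="(none; the model answers from parametric knowledge)",
            output="The Eiffel Tower was completed in 1921.",
            note="Contradicts world knowledge (it opened in 1889), stated confidently.",
        ),
        HallucinationExample(
            kind="faithfulness",
            source="Article: revenue grew 4% year over year.",
            output="Summary: revenue fell sharply.",
            note="Fluent summary that contradicts the source document.",
        ),
        HallucinationExample(
            kind="input-conflicting",
            source="Instruction: translate this sentence into German.",
            output="A French translation of the sentence.",
            note="Output diverges from what the input asked for.",
        ),
    ]

    for ex in EXAMPLES:
        print(f"[{ex.kind:>17}] {ex.note}")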

Why It Matters

  • LLMs hallucinate in roughly 3-27% of responses, depending on the task and model (a back-of-the-envelope calculation follows this list)
  • Hallucinations are fluent and confident, making them difficult for users to detect
  • In high-stakes domains (healthcare, law, finance), even rare hallucinations can have severe consequences
  • Users prone to automation bias are particularly vulnerable to accepting hallucinated content as fact
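
To see why even the low end of that range matters at scale, here is a quick back-of-the-envelope calculation. The daily query volume is an assumed, illustrative figure; the two rates are the endpoints quoted above:

    # Back-of-the-envelope: expected hallucinated responses per day at the
    # 3% and 27% rates quoted above. The query volume is an assumption made
    # up for illustration, not a figure from the survey.
    queries_per_day = 1_000_000  # assumed deployment volume

    for rate in (0.03, 0.27):
        expected = queries_per_day * rate
        print(f"{rate:.0%} hallucination rate -> ~{expected:,.0f} hallucinated responses/day")

Even at the optimistic 3% rate, a high-volume deployment would produce tens of thousands of hallucinated responses a day, which is why high-stakes domains cannot rely on hallucinations being rare.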

Source

Huang, L., et al. (2023). A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv:2311.05232.
