Instances where AI systems, particularly generative models, produce content that appears plausible but is factually incorrect or fabricated. These outputs are presented with confidence despite having no basis in the model’s training data or factual reality.
AI hallucinations occur because generative models are trained to produce statistically likely outputs based on patterns in their training data, not to represent factual truth. When faced with uncertainty or gaps in their knowledge, these models may generate plausible-sounding but invented information rather than acknowledging that uncertainty. Hallucinations present significant challenges for enterprise applications where factual accuracy is critical, necessitating verification mechanisms and careful system design.
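One common verification mechanism is to check a model's draft answer against trusted reference text before showing it to a user, and to fall back to an explicit statement of uncertainty when no support is found. The sketch below is a minimal, illustrative example of that pattern, assuming the draft answer and the reference passages are already available; the function names, the lexical-overlap heuristic, and the threshold are assumptions for illustration, not a specific product's implementation.

```python
def is_supported(draft_answer: str, reference_passages: list[str],
                 min_shared_terms: int = 5) -> bool:
    """Crude lexical-overlap check: the draft counts as supported only if it
    shares enough terms with at least one trusted passage. Production systems
    would typically use entailment models or citation checking instead."""
    draft_terms = set(draft_answer.lower().split())
    return any(
        len(draft_terms & set(passage.lower().split())) >= min_shared_terms
        for passage in reference_passages
    )

def respond(draft_answer: str, reference_passages: list[str]) -> str:
    # Return the draft only when it is grounded in a trusted source;
    # otherwise admit uncertainty rather than risk a fabricated answer.
    if is_supported(draft_answer, reference_passages):
        return draft_answer
    return "I'm not certain about this; please check the official documentation."

# Example: a drafted return-policy answer checked against a trusted policy text.
policy = "Items may be returned within 30 days of purchase with a valid receipt."
print(respond("You can return items within 30 days of purchase if you have a receipt.",
              [policy]))  # supported, so the draft is returned
print(respond("Returns are accepted for 90 days, no receipt needed.",
              [policy]))  # unsupported, so the fallback message is returned
```

In practice the grounding check is usually paired with retrieval (so the reference passages are fetched per query) and with stronger semantic comparison, but the overall shape of the safeguard remains the same: generate, verify against authoritative sources, and prefer acknowledging uncertainty over presenting unverified content.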
A customer service AI confidently providing an incorrect product return policy or citing a non-existent company policy when asked about an edge case not covered in its training data.