Hallucination

Hallucination is when a model produces text that sounds correct and confident but is factually wrong. The model isn't lying; it's completing a plausible-sounding pattern that happens to be false. This is a direct consequence of how models are trained: they're optimized to generate fluent continuations, not verified facts. Fluency and accuracy are different properties, and nothing in the training objective teaches the model to tell them apart.
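A toy sketch can make the mechanism concrete. The miniature "language model" below is just a table of continuation frequencies (the counts are invented for illustration): it samples whichever continuation was most common in its "training data", so it confidently completes a prompt with a popular but wrong answer.

```python
import random

# A toy "language model": for each prompt, the continuations it saw in
# "training data" and how often it saw them. The counts are invented
# for illustration; the point is the mechanism, not the numbers.
CONTINUATIONS = {
    "The capital of Australia is": {
        "Sydney": 70,    # plausible, frequently written, and WRONG
        "Canberra": 25,  # correct, but written less often
        "Melbourne": 5,
    },
}

def complete(prompt: str) -> str:
    """Sample a continuation in proportion to how often it appeared
    in training text. Nothing here checks whether it's true."""
    counts = CONTINUATIONS[prompt]
    tokens = list(counts)
    weights = list(counts.values())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    print(prompt, complete(prompt))
    # Most runs print "Sydney": the most fluent continuation wins,
    # and the model has no signal that it is factually wrong.
```

Real models are vastly more sophisticated, but the failure mode is the same shape: the sampling step rewards what is statistically likely, and truth enters the picture only insofar as it correlates with likelihood.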
