Hallucination
Hallucination occurs when a model produces text that sounds correct and confident but is factually wrong. The model isn't lying; it is completing a plausible-sounding pattern that happens to be false. This is a direct consequence of how models work: they are optimized to generate fluent continuations, not verified facts. Fluency and accuracy are different properties, and the model has no way to tell them apart.
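To make that mechanism concrete, here is a minimal sketch in plain Python. It is a toy, not any real model or API: the "model" is just a table of invented next-token probabilities (the contexts, tokens, and numbers in NEXT_TOKEN_PROBS are all made up for illustration), and generation samples whatever continuation sounds most plausible. Nothing in the process checks truth.

```python
import random

# Toy sketch of next-token generation (invented numbers, not a real model).
# The "model" is a table mapping a context to next-token probabilities.
# Probabilities reflect how plausible a continuation sounds in training-like
# text, not whether it is true.
NEXT_TOKEN_PROBS = {
    "The Eiffel Tower is in": {
        "Paris": 0.90,   # common and true
        "France": 0.08,  # common and true
        "Rome": 0.02,    # false, but still a fluent continuation
    },
}

def complete(context: str) -> str:
    """Sample a next token weighted by plausibility; no fact check occurs."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Every output reads fluently; nothing here guarantees it is correct.
for _ in range(5):
    print("The Eiffel Tower is in", complete("The Eiffel Tower is in"))
```

Run it a few times: the false continuation ("Rome") occasionally appears with exactly the same confident fluency as the true ones, because the sampling step has no notion of accuracy, only plausibility.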
Videos explaining this concept
E008 · Notes on AI
Is AI a Student or an Actor?
We often mistake AI outputs for fact-checked answers from a reasoning process. In reality, AI generates completions—continuations of patterns, similar to an Improvisational Actor following the rule...
E009 · Notes on AI
What AI Is Good At vs Bad At
This episode introduces a practical framework for understanding when to use AI by framing it as a "special employee" with three distinct characteristics.
E019 · Notes on AI
Why AI Sounds Confident
LLMs are optimized to produce fluent, grammatically correct, structurally coherent language. This is what they were trained to do. But fluency and accuracy are two different things. The model does ...