AI Basics
E021

Grounding

Show how grounding reduces hallucinations by constraining sources.

Pasting a document into an AI conversation gives the model access to it — but the model still draws on everything it was trained on and fills gaps silently. Grounding adds a boundary with two instructions: tell the model to answer only from the document, and tell it to say "I don't know" when something is missing. Without the second instruction, the model shifts from your source to its training data without any signal. Two practical limits: documents larger than the context window can't be fully read, and scanned PDFs are images — the model can't read the words inside them.

Full Explanation

Grounding is the practice of constraining an AI model to a specific document by giving it explicit instructions. Without grounding, adding a document to a conversation gives the model access to that text — but the model still draws on everything it was trained on, fills gaps silently, and produces answers that sound like they came from your document when they did not.

Two instructions change this: "Answer only from the text above" creates the boundary. "If the answer is not in the text, say you don't know" closes the gap. Without the second sentence, the model transitions smoothly from your source to its training patterns without signalling that it has done so. With it, a missing answer becomes explicit — which is exactly the information you need when accuracy to a specific source matters.
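The two instructions can be packaged into a small prompt-building helper. This is a minimal sketch — the function name and wording are illustrative, not a prescribed API — showing the document placed first, followed by the boundary instruction and the "I don't know" escape hatch:

```python
def build_grounded_prompt(document: str, question: str) -> str:
    """Wrap a question in grounding instructions (illustrative helper).

    Two instructions constrain the model: answer only from the
    supplied document, and say "I don't know" when the answer
    is missing rather than filling the gap from training data.
    """
    return (
        f"{document}\n\n"
        "Answer only from the text above. "
        "If the answer is not in the text, say you don't know.\n\n"
        f"Question: {question}"
    )

# The resulting string is what you would send to the model:
prompt = build_grounded_prompt(
    document="The warranty covers parts for 12 months.",
    question="Does the warranty cover labour?",
)
print(prompt)
```

Order matters in practice: placing the instructions *after* the document keeps them close to the question, so they are less likely to be ignored when the document is long.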

---