Full Explanation
Grounding is the practice of constraining an AI model to a specific document through explicit instructions. Without grounding, adding a document to a conversation gives the model access to that text, but the model still draws on everything it was trained on, fills gaps silently, and produces answers that sound like they came from your document when they did not.
Two instructions change this. "Answer only from the text above" creates the boundary; "If the answer is not in the text, say you don't know" closes the gap. Without the second instruction, the model transitions smoothly from your source to its training patterns without signalling that it has done so. With it, a missing answer becomes explicit, which is exactly the information you need when fidelity to a specific source matters.
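The two instructions can be sketched as a prompt template. This is a minimal illustration, not a specific library's API; the wrapper text, function name, and sample document are hypothetical, while the two quoted instructions are the ones discussed above.

```python
def build_grounded_prompt(document: str, question: str) -> str:
    """Wrap a document and question in grounding instructions.

    Hypothetical helper for illustration; the delimiter style
    and wording around the two key sentences are assumptions.
    """
    return (
        "Use the text below to answer the question.\n\n"
        f"--- BEGIN TEXT ---\n{document}\n--- END TEXT ---\n\n"
        # Instruction 1: creates the boundary.
        "Answer only from the text above. "
        # Instruction 2: closes the gap, so a missing answer
        # is reported instead of silently filled in.
        "If the answer is not in the text, say you don't know.\n\n"
        f"Question: {question}"
    )

# Sample document and question (invented for the example).
doc = "The Q3 report lists revenue of $4.2M and a headcount of 37."
prompt = build_grounded_prompt(doc, "What was Q3 revenue?")
print(prompt)
```

The exact delimiter format matters less than the presence of both sentences: the first scopes the model's answer, the second makes the failure mode visible.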
---


