Hallucination
When an LLM generates plausible-sounding text that is factually wrong or invented.
Last updated: April 26, 2026
Definition
A hallucination is content the model produces with no basis in fact or in the provided context. The model is still doing statistical pattern matching; the pattern just produced something false. Common triggers: questions the model lacks training data for, ambiguous prompts that invite guessing, and requests for citations, which can yield nonexistent papers or URLs. The standard mitigation is RAG (grounding answers in retrieved real text) plus explicit instructions to refuse when unsure. Hallucination rates have dropped sharply with frontier models but have never reached zero.
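A minimal sketch of the grounding-plus-refusal mitigation described above. The function name and prompt wording are illustrative assumptions, not a standard API; the idea is simply that the prompt restricts the model to retrieved passages and gives it an explicit way out when the answer is not there.

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a RAG-style prompt that grounds the model in retrieved
    text and instructs it to refuse rather than guess. (Illustrative
    sketch; adapt the wording to your model and retrieval pipeline.)"""
    # Number the passages so the model can cite them.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer ONLY using the passages below. "
        "If they do not contain the answer, reply exactly: I don't know.\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The refusal instruction matters as much as the retrieved text: without an explicit escape hatch, many models will still guess when the passages are silent.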
When To Use
Always assume your model can hallucinate. Add grounding (RAG), explicit refusal instructions, and human review for high-stakes outputs.