AI hallucinations fall into four distinct categories, each with different root causes and mitigations.

Factual confabulation is the classic type: the model invents specific facts (a citation, a statistic, a person's biography) with no grounding in its training data. Root cause: the model has learned to produce fluent, contextually appropriate text, so it generates plausible-seeming specific claims when it lacks actual knowledge. Best mitigation: retrieval-augmented generation (RAG) with source citation requirements, plus post-hoc fact verification.

Reasoning hallucinations occur when the model's chain of thought contains logical errors that cascade into a wrong final answer: the steps look plausible, but a key inference is incorrect. Root cause: models learn to generate reasoning-sounding text, not to verify logical validity. Best mitigation: multi-step verification prompts, self-consistency sampling, and programmatic verification of checkable steps.

Instruction-following hallucinations happen when the output violates explicit constraints given in the prompt: wrong format, wrong length, or forbidden content. Root cause: instruction following competes with other training objectives and fails under high constraint density or unusual constraint combinations. Best mitigation: structured output schemas with validation, and breaking complex instruction sets into sequential prompts.

Temporal hallucinations are increasingly relevant for deployed models: the model confidently asserts facts that were true at training time but are now outdated. Root cause: the model's knowledge is frozen at its training cutoff, and it has no reliable signal for which facts have since changed. Best mitigation: web-grounded retrieval for time-sensitive claims and explicit disclosure of the knowledge cutoff.
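Self-consistency sampling, named above as a mitigation for reasoning hallucinations, can be sketched in a few lines: sample several independent answers at nonzero temperature and majority-vote on the final answer. This is a minimal sketch, not a specific library's API; `sample_fn` is a hypothetical stand-in for an LLM call that returns only the final answer string.

```python
from collections import Counter

def self_consistency(sample_fn, prompt, n=5):
    """Sample n independent answers and majority-vote.

    sample_fn is a hypothetical callable (prompt -> final answer string)
    standing in for a temperature > 0 LLM call. Returns the winning
    answer plus the agreement ratio, which can serve as a rough
    confidence signal: low agreement suggests a reasoning hallucination.
    """
    answers = [sample_fn(prompt) for _ in range(n)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / n
```

The agreement ratio matters as much as the vote: three different answers across five samples is itself evidence that the chain of thought is unreliable, regardless of which answer wins.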
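The structured-output mitigation for instruction-following hallucinations amounts to: parse, validate against a schema, and retry on failure. A minimal sketch, assuming a hypothetical `llm_call` that returns raw text and a hand-rolled schema (a real system might use JSON Schema or Pydantic instead):

```python
import json

# Hypothetical schema: required keys and their expected Python types.
REQUIRED = {"title": str, "year": int}

def validate(raw):
    """Return the parsed object if it satisfies the schema, else None."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(obj, dict):
        return None
    if not all(isinstance(obj.get(key), typ) for key, typ in REQUIRED.items()):
        return None
    return obj

def generate_validated(llm_call, prompt, max_retries=3):
    """Call the model until its output passes validation, or give up."""
    for _ in range(max_retries):
        obj = validate(llm_call(prompt))
        if obj is not None:
            return obj
    raise ValueError("model output failed schema validation")
```

The key design choice is that validation failures are handled in code rather than trusted to the model: a format violation becomes a retry (ideally with the error fed back into the prompt), not a silently malformed record downstream.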
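The RAG mitigation for factual confabulation relies on checking that cited material actually appears in the retrieved sources. A minimal sketch of that grounding check, assuming a hypothetical claim structure of `(claim_text, passage_id, quoted_span)` tuples extracted from the model's answer:

```python
def check_citations(answer_claims, retrieved_passages):
    """Flag claims whose citation does not ground out in the sources.

    answer_claims: list of (claim_text, passage_id, quoted_span) tuples,
        a hypothetical representation of the model's cited claims.
    retrieved_passages: dict mapping passage_id -> passage text.

    A claim is flagged if its cited passage was never retrieved, or if
    the quoted span does not occur in that passage (a likely confabulated
    citation). Exact substring match is deliberately strict; a real
    system might relax this to fuzzy or embedding-based matching.
    """
    flagged = []
    for claim_text, passage_id, quoted_span in answer_claims:
        passage = retrieved_passages.get(passage_id)
        if passage is None or quoted_span not in passage:
            flagged.append(claim_text)
    return flagged
```

Flagged claims can then be dropped, regenerated, or surfaced to the user as unverified, rather than shipped with a fabricated citation attached.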
Hallucination Taxonomy: Types, Causes, and Targeted Mitigations
Not all hallucinations are equal. Factual confabulation, reasoning errors, instruction deviation, and temporal confusion are distinct failure modes with different causes and different mitigations. Understanding the taxonomy lets you choose the right fix for the specific hallucination type your application encounters.