
Production & Evaluation

Detecting and Preventing Hallucinations


Types of Hallucinations

The Enemy of Production AI

A hallucination occurs when an LLM generates plausible-sounding but false information. In production, hallucinations destroy user trust.

Hallucination Categories

Type       Example                                       Severity
---------  --------------------------------------------  --------
Factual    "Your order shipped yesterday" (it didn't)    Critical
Entity     Inventing a product that doesn't exist        High
Temporal   Wrong dates, delivery estimates               High
Numeric    Wrong prices, quantities                      Critical
Policy     Making up return policies                     Critical

Why LLMs Hallucinate

LLMs are trained to predict the most plausible next token, not to verify facts. They have no built-in access to your order database, product catalog, or policies, so when a prompt asks for information the model doesn't have, it fills the gap with something that merely sounds right.

Detection Strategies

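A minimal sketch of claim-level checking, assuming a hypothetical CATALOG dict as the source of truth: extract concrete claims (prices, product names) from the response and verify each against data the LLM never generated. The regexes and the "Widget" naming pattern are illustrative stand-ins for real entity extraction.

```python
import re

# Hypothetical ground truth; in production this comes from your product
# database, never from the LLM itself.
CATALOG = {"widget pro": 49.99, "widget mini": 19.99}

def detect_numeric_hallucinations(response: str) -> list[str]:
    """Flag any dollar amount that doesn't match a verified catalog price."""
    verified = {f"{price:.2f}" for price in CATALOG.values()}
    return [
        f"Unverified price: ${amount}"
        for amount in re.findall(r"\$(\d+\.\d{2})", response)
        if amount not in verified
    ]

def detect_entity_hallucinations(response: str) -> list[str]:
    """Flag product names the catalog doesn't contain. The 'Widget X'
    pattern is a placeholder for real named-entity extraction."""
    return [
        f"Unknown product: {name}"
        for name in re.findall(r"Widget \w+", response)
        if name.lower() not in CATALOG
    ]

response = "Great news! The Widget Max is in stock for just $29.99."
print(detect_numeric_hallucinations(response))  # ['Unverified price: $29.99']
print(detect_entity_hallucinations(response))   # ['Unknown product: Widget Max']
```

Checks like these can run as a post-processing gate: if any issue comes back, the response is blocked or regenerated instead of being shown to the user.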

Prevention Strategies

The Grounding Principle: Never let the LLM generate facts. Only let it format and present facts from verified sources.
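A minimal sketch of the grounding principle, assuming a hypothetical get_order() lookup with placeholder data: the application fetches verified facts, and the LLM is asked only to phrase them, with an explicit instruction not to go beyond them.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    status: str
    shipped_date: str | None
    expected_delivery: str

def get_order(order_id: str) -> Order:
    """Hypothetical database lookup: facts come from here, not the model."""
    return Order(order_id, "in transit", "2024-05-01", "2024-05-06")

def build_grounded_prompt(order_id: str, question: str) -> str:
    """Hand the LLM only verified facts and forbid it from inventing more."""
    order = get_order(order_id)
    facts = "\n".join([
        f"- Order ID: {order.order_id}",
        f"- Status: {order.status}",
        f"- Shipped: {order.shipped_date or 'not yet shipped'}",
        f"- Expected delivery: {order.expected_delivery}",
    ])
    return (
        "Answer the customer using ONLY the facts below. If the facts "
        "do not cover the question, say you don't know.\n\n"
        f"FACTS:\n{facts}\n\nQUESTION: {question}"
    )

print(build_grounded_prompt("A-1042", "When will my order arrive?"))
```

The design choice that matters: the model formats facts it was handed and never asserts a status, date, or price on its own.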