
MCP & Context Engineering

Zero-Hallucination Patterns


Why Agents Hallucinate

The Hallucination Problem

At Alhena AI and similar companies, hallucination is the #1 enemy. A customer support agent that makes up order statuses or invents return policies is worse than no agent at all.

Types of Hallucinations

| Type | Example | Danger Level |
| --- | --- | --- |
| Fabrication | "Your order #999 shipped yesterday" (order doesn't exist) | 🔴 Critical |
| Conflation | Mixing up two different orders' details | 🟠 High |
| Extrapolation | "Based on similar cases..." (making up policies) | 🟡 Medium |
| Overconfidence | Stating guesses as facts | 🟡 Medium |

Root Causes

  1. No grounding: Agent responds from memory, not data
  2. Ambiguous context: Multiple interpretations possible
  3. Pressure to answer: Agent doesn't know how to say "I don't know"
  4. Poor tool design: Tools return vague data
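Root cause 4 is the easiest to see in code. The sketch below, using hypothetical `get_order_vague` and `get_order_grounded` functions and an in-memory store standing in for a real order database, contrasts a tool that returns vague prose (inviting the agent to guess) with one that returns complete, typed fields and represents missing data explicitly:

```python
from dataclasses import dataclass
from typing import Optional

# Vague tool output that invites hallucination: the agent must guess
# what "recent" means and which order is being referenced.
def get_order_vague(customer_id: str) -> str:
    return "Customer has a recent order that may have shipped."

# Grounded tool output: every field is explicit, and missing data is
# represented as None rather than omitted or guessed at.
@dataclass
class OrderStatus:
    order_id: str
    status: str                 # e.g. "shipped", "processing"
    shipped_at: Optional[str]   # ISO-8601 timestamp, or None if not shipped
    source: str                 # where this record came from

def get_order_grounded(order_id: str) -> Optional[OrderStatus]:
    # Hypothetical in-memory store standing in for a real order database.
    orders = {
        "A123": OrderStatus("A123", "shipped", "2024-05-01T10:00:00Z", "orders_db"),
    }
    return orders.get(order_id)  # None means the order does not exist

print(get_order_grounded("999"))  # unknown order -> None, never a fabricated status
```

Returning `None` for an unknown order forces the calling agent to confront the gap instead of papering over it.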

Zero-Hallucination Principles

Principle 1: Never respond without verifying against source data

Principle 2: Always cite where information came from

Principle 3: When uncertain, say so explicitly

Principle 4: Design tools to return complete, unambiguous data
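The first three principles can be composed into a single answer path. Here is a minimal sketch, assuming a hypothetical `KNOWN_POLICIES` dictionary as the source-of-truth policy store and an illustrative `answer_policy_question` helper:

```python
from typing import Optional

# Hypothetical source-of-truth policy store (stands in for a real database).
KNOWN_POLICIES = {
    "returns": "Items may be returned within 30 days of delivery.",
}

def answer_policy_question(topic: str) -> str:
    """Apply Principles 1-3: verify against source data, cite it, or admit uncertainty."""
    policy: Optional[str] = KNOWN_POLICIES.get(topic)
    if policy is None:
        # Principle 3: an explicit "I don't know" beats a confident guess.
        return "I don't have a documented policy on that. Let me check with the team."
    # Principles 1 and 2: the answer is the verified record, with its source cited.
    return f"{policy} (source: policy database, topic '{topic}')"

print(answer_policy_question("returns"))
print(answer_policy_question("warranties"))
```

The key design choice is that the fabrication path simply does not exist: every return statement either quotes a verified record or declares uncertainty.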

Grounding Pattern

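A minimal sketch of the grounding pattern in Python, assuming a hypothetical in-memory `ORDERS` store in place of a real backend: every claim in the agent's reply maps to a field retrieved from source data, and a failed lookup produces a refusal instead of an invented status.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    order_id: str
    status: str
    carrier: str

# Hypothetical order store standing in for the real backend.
ORDERS = {"A123": Order("A123", "shipped", "UPS")}

def grounded_reply(order_id: str) -> str:
    """Respond only from retrieved data; never from memory."""
    order: Optional[Order] = ORDERS.get(order_id)
    if order is None:
        # Grounding failed: refuse rather than fabricate (no made-up order #999).
        return f"I can't find order {order_id} in our system."
    # Each statement below is backed by a concrete retrieved field, with the
    # source cited so the claim can be audited.
    return (f"Order {order.order_id} is '{order.status}' "
            f"via {order.carrier} (source: orders_db).")

print(grounded_reply("A123"))
print(grounded_reply("999"))
```

Note that the refusal branch comes first: the reply-building code is only reachable once the data has actually been retrieved.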