# The Hallucination Problem
At Alhena AI and similar companies, hallucination is the #1 enemy. A customer support agent that makes up order statuses or invents return policies is worse than no agent at all.
## Types of Hallucinations
| Type | Example | Danger Level |
|---|---|---|
| Fabrication | "Your order #999 shipped yesterday" (order doesn't exist) | 🔴 Critical |
| Conflation | Mixing up two different orders' details | 🟠 High |
| Extrapolation | "Based on similar cases..." (making up policies) | 🟡 Medium |
| Overconfidence | Stating guesses as facts | 🟡 Medium |
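The critical case, fabrication, is also the easiest to guard against mechanically: refuse to describe any record that the source-of-truth lookup cannot find. A minimal sketch, assuming an in-memory `ORDERS` dict as the source of truth (`order_status` and the field names are illustrative, not a real API):

```python
# Hypothetical source of truth; in production this would be an order database.
ORDERS = {
    "1042": {"status": "shipped", "shipped_at": "2024-05-01"},
}

def order_status(order_id: str) -> str:
    record = ORDERS.get(order_id)
    if record is None:
        # Fabrication guard: never invent details for an unknown order.
        return f"I can't find order #{order_id} in our system."
    return f"Order #{order_id} is {record['status']} (shipped {record['shipped_at']})."
```

With this shape, "Your order #999 shipped yesterday" is impossible: an unknown ID can only produce the explicit not-found message.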
## Root Causes
- No grounding: Agent responds from memory, not data
- Ambiguous context: Multiple interpretations possible
- Pressure to answer: Agent doesn't know how to say "I don't know"
- Poor tool design: Tools return vague data
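The last root cause, vague tool output, can be addressed at the schema level: return a complete, typed record with an explicit "not found" flag instead of free text the model might pad out into an invented policy. A sketch under assumed names (`ReturnPolicyResult`, `lookup_return_policy`, and the fields are illustrative, not a real schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReturnPolicyResult:
    found: bool                  # did the lookup actually match a policy?
    policy_id: Optional[str]     # stable identifier the agent can cite
    window_days: Optional[int]   # explicit number, not "about a month"
    source_url: Optional[str]    # where the cited text lives

def lookup_return_policy(sku: str) -> ReturnPolicyResult:
    # Hypothetical catalog; in production this is a policy database query.
    policies = {
        "SKU-1": ReturnPolicyResult(True, "POL-7", 30, "https://example.com/returns"),
    }
    # Unknown SKUs get an explicit "not found" result rather than vague text.
    return policies.get(sku, ReturnPolicyResult(False, None, None, None))
```

Because every field is either present or explicitly `None`, the agent has nothing ambiguous to extrapolate from.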
## Zero-Hallucination Principles
- **Principle 1:** Never respond without verifying against source data
- **Principle 2:** Always cite where information came from
- **Principle 3:** When uncertain, say so explicitly
- **Principle 4:** Design tools to return complete, unambiguous data
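The first three principles compose into one response path: answer only from verified data, attach the source, and fall back to an explicit "I don't know" otherwise. A minimal sketch, assuming a dict-backed knowledge base (`KB` and `answer_from_source` are hypothetical names):

```python
# Hypothetical knowledge base: topic -> (verified text, citation).
KB = {
    "return_window": ("Returns are accepted within 30 days.", "policy.md#returns"),
}

def answer_from_source(topic: str) -> str:
    entry = KB.get(topic)
    if entry is None:
        # Principle 3: state uncertainty explicitly instead of guessing.
        return "I don't know; that isn't covered in my source data."
    text, source = entry
    # Principles 1 and 2: respond only from verified data, and cite it.
    return f"{text} (source: {source})"
```

The key design choice is that there is no code path that produces an answer without a citation: the agent either quotes grounded data or declines.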