- AI Hallucination
- A phenomenon where a large language model (LLM) generates plausible-sounding but factually incorrect or nonsensical information.
**Definition**: AI Hallucination occurs when an artificial intelligence model, particularly a Large Language Model (LLM), generates output that is factually incorrect, logically inconsistent, or entirely fabricated, while often maintaining a confident and persuasive tone.
**What is an AI Hallucination?**: An AI hallucination is when a language model produces information that sounds plausible but is not grounded in reality, training data, or provided context. The AI confidently presents false information as if it were true.
**Types of AI Hallucinations**:
1. Factual Hallucinations: The AI provides incorrect dates, names, prices, or specifications that do not match the source data.
2. Logical Hallucinations: The AI reaches incorrect conclusions or provides contradictory advice within the same conversation.
3. Source Hallucinations: The AI cites non-existent documents, links, or sources to support its claims.
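As an illustration of the third type, a verification step can compare the sources an agent cites against the documents that actually exist in its knowledge base. The sketch below is a minimal, hypothetical example: the `KNOWN_SOURCES` set, the `[KB-123]` citation format, and the helper names are assumptions for illustration, not part of any specific product.

```python
import re

# Hypothetical set of document IDs that actually exist in the knowledge base.
KNOWN_SOURCES = {"KB-101", "KB-204", "KB-317"}

def extract_citations(answer: str) -> list[str]:
    """Pull citation markers like [KB-123] out of a model answer."""
    return re.findall(r"\[(KB-\d+)\]", answer)

def find_source_hallucinations(answer: str) -> list[str]:
    """Return cited sources that do not exist in the knowledge base."""
    return [c for c in extract_citations(answer) if c not in KNOWN_SOURCES]

answer = "Refunds are processed within 3 days [KB-204], per policy [KB-999]."
print(find_source_hallucinations(answer))  # ['KB-999'] -- a fabricated source
```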
**Why AI Hallucinations Happen**: LLMs are probabilistic engines designed to predict the next most likely token. Without grounding techniques such as Retrieval-Augmented Generation (RAG), they may prioritize linguistic fluency over factual accuracy.
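The toy sketch below illustrates the point: the model scores possible continuations and emits the most probable one, and nothing in that objective checks whether the chosen continuation is true. The probability values here are invented purely for illustration.

```python
# Toy illustration: next-token prediction optimizes for likelihood, not truth.
# The probabilities below are invented; a real LLM scores a huge vocabulary
# of tokens with a neural network.
next_token_probs = {
    "2019": 0.46,      # fluent and plausible, but possibly wrong
    "2021": 0.31,
    "unknown": 0.05,   # the honest answer may be low-probability
}

prompt = "The product was first released in"
chosen = max(next_token_probs, key=next_token_probs.get)
print(f"{prompt} {chosen}")  # emits the most likely token, true or not
```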
**Mitigating Hallucinations in AI Agents**: To prevent hallucinations in customer-facing AI agents, organizations use techniques such as:
- RAG (Retrieval-Augmented Generation): Providing the model with specific context from a knowledge base (see the sketch after this list).
- Guardrails: Strict rules that limit the AI's response range.
- Real-time Observability: Platforms like Oversai that monitor and flag hallucinations as they happen.
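A minimal sketch of the first two techniques, under stated assumptions, is shown below: retrieved passages are injected into the prompt, and a simple guardrail refuses to answer when nothing relevant is retrieved. The `KNOWLEDGE_BASE` contents, the keyword-overlap `retrieve` function, and the `call_llm` stub are hypothetical stand-ins for a real retrieval pipeline and model API.

```python
# Minimal RAG + guardrail sketch (hypothetical retrieve/call_llm stand-ins).

KNOWLEDGE_BASE = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use vector search."""
    words = set(question.lower().split())
    return [text for text in KNOWLEDGE_BASE.values()
            if words & set(text.lower().split())]

def call_llm(prompt: str) -> str:
    """Stub for a model call; replace with a real LLM client."""
    return f"(model answer grounded in: {prompt!r})"

def answer(question: str) -> str:
    context = retrieve(question)
    # Guardrail: refuse rather than let the model improvise without grounding.
    if not context:
        return "I don't have that information in the knowledge base."
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How long do refunds take?"))
print(answer("What is the CEO's salary?"))  # guardrail triggers a refusal
```

The design choice worth noting is that the guardrail runs before the model is called: when retrieval returns nothing, the agent declines instead of letting the model fill the gap with a fluent but ungrounded answer.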
Detecting and mitigating hallucinations is a core component of AI Agent Quality Assurance.
