AI Hallucination
AI hallucination is the phenomenon in which a large language model generates plausible-sounding but factually incorrect information and presents it with high confidence. It is a known limitation of current LLMs.
LLMs work by predicting likely next tokens based on patterns in their training data, not by retrieving facts from a verified database. When the model lacks the relevant information, it can produce statistically plausible but invented content: fake citations, incorrect dates, fabricated quotes, non-existent products, or wrong technical details. Hallucination rates have decreased with newer models, but the behavior remains a fundamental limitation.
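As a rough intuition for why this happens, the toy sketch below (plain Python with invented probabilities, not any real model's API) shows next-token sampling: the generator picks whatever continuation is statistically likely given the recent context, and nothing in the loop consults a source of verified facts.

```python
import random

# Hypothetical next-token distributions standing in for patterns learned from
# training data. The contexts and probabilities are invented for this sketch.
NEXT_TOKEN_PROBS = {
    ("cited", "in"): {"Smith": 0.40, "Jones": 0.35, "Doe": 0.25},
    ("v.",): {"Smith": 0.50, "Board": 0.30, "State": 0.20},
}

def sample_next(context, probs):
    """Sample a next token in proportion to its learned probability.

    Nothing here checks whether the resulting citation, date, or name
    actually exists: plausibility and truth are different properties.
    """
    candidates = probs.get(tuple(context[-2:])) or probs.get(tuple(context[-1:]))
    if candidates is None:
        return None
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next(["cited", "in"], NEXT_TOKEN_PROBS))  # e.g. "Jones": fluent, not verified
```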
A user asks an LLM for a specific case-law citation, and the model responds with a confidently stated case name, year, and ruling that do not exist. Asked about an obscure scientific paper, it invents a title, author, and journal that match the query but are not real. Professionals in legal, medical, and academic contexts have been disciplined for citing AI-hallucinated sources.
Mitigations:
1. Use grounded AI tools that cite real sources (Perplexity, search-grounded ChatGPT) for factual queries.
2. Always verify AI-cited sources independently.
3. Use Retrieval-Augmented Generation (RAG) systems for domain-specific accuracy (see the sketch below).
4. For critical facts, treat AI as a draft generator, not a fact source.
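To make the RAG mitigation concrete, here is a minimal, illustrative sketch in Python. The `retrieve` function, the in-memory corpus, and the prompt wording are invented for the example and stand in for a real vector index and LLM call; the idea is simply to supply the model with trusted passages and instruct it to answer only from them.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Naive keyword retriever over a small in-memory corpus (a stand-in for a
    real vector or search index)."""
    scored = sorted(
        corpus.items(),
        key=lambda item: sum(word in item[1].lower() for word in query.lower().split()),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved passages and to
    say so when they do not contain the answer."""
    context = "\n\n".join(f"[Source {i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below. If they don't contain the answer, "
        f"say you don't know.\n\n{context}\n\nQuestion: {query}"
    )

corpus = {
    "doc1": "The 2019 policy update requires annual review of vendor contracts.",
    "doc2": "Vendor onboarding takes approximately three weeks on average.",
}
prompt = build_grounded_prompt(
    "How often are vendor contracts reviewed?",
    retrieve("vendor contracts review", corpus),
)
print(prompt)  # Pass this prompt to your LLM of choice; still verify cited sources.
```

Even with grounding in place, the final mitigation step applies: check the retrieved sources yourself before relying on the answer.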