Model hallucination refers to the phenomenon in which an artificial intelligence system, particularly a large language model (LLM), generates output that is fluent and confidently presented but factually incorrect, irrelevant, or entirely fabricated. These errors typically occur when the model lacks reliable context or predicts plausible-sounding responses without grounding in verifiable data. Hallucinations range from minor inaccuracies to misleading or even harmful content, particularly in business, legal, or healthcare settings.
Understanding and mitigating model hallucinations is critical for the responsible deployment of generative AI tools. Techniques such as retrieval-augmented generation (RAG), grounding responses in curated data, and post-processing validation can reduce their frequency, as sketched below. However, hallucinations remain one of the core limitations of current-generation AI models and a significant concern for enterprise use cases that require trust, accuracy, and compliance.
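To make the mitigation pattern concrete, the following is a minimal sketch of a RAG-style pipeline: retrieve supporting documents, constrain the prompt to that retrieved context, and run a simple post-generation check that flags answers with little support in the context. The keyword-overlap retrieval, the 50% support threshold, and the `call_llm` placeholder are illustrative assumptions, not a production design; real systems typically use embedding-based retrieval and model-specific APIs.

```python
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Rank documents by naive keyword overlap with the query.
    A real system would use embeddings and a vector index; keyword
    overlap keeps this sketch dependency-free."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc.text.lower().split())), doc)
        for doc in corpus
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]


def build_grounded_prompt(query: str, context_docs: list[Document]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in context_docs)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know'.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


def validate_answer(answer: str, context_docs: list[Document]) -> bool:
    """Post-processing check: flag answers whose content words rarely
    appear in the retrieved context (a crude proxy for grounding)."""
    context_terms = set(" ".join(d.text.lower() for d in context_docs).split())
    answer_terms = {t for t in answer.lower().split() if len(t) > 3}
    if not answer_terms:
        return False
    supported = sum(1 for t in answer_terms if t in context_terms)
    return supported / len(answer_terms) >= 0.5  # assumed threshold


if __name__ == "__main__":
    corpus = [
        Document("kb-1", "Our refund policy allows returns within 30 days of purchase."),
        Document("kb-2", "Support hours are 9am to 5pm, Monday through Friday."),
    ]
    query = "What is the refund window?"
    docs = retrieve(query, corpus)
    prompt = build_grounded_prompt(query, docs)
    # `call_llm` stands in for whatever model API is in use (hypothetical):
    # answer = call_llm(prompt)
    # if not validate_answer(answer, docs):
    #     answer = "I don't know."
```

The key design choice is that each stage narrows the model's room to fabricate: retrieval supplies verifiable source text, the prompt forbids answering beyond it, and the validation step catches responses that drift from the supplied context anyway.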