Overview

Hallucinations occur because LLMs are predictive models: they generate the statistically most likely next word based on patterns in their training data rather than retrieving facts from a database. As a result, a model can assert false information with complete confidence, because fluency and factual accuracy are optimized separately, if at all.
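The mechanism can be illustrated with a toy pattern-based predictor (this is a deliberately simplified bigram model, not a real LLM): the output reflects how often a word followed another in training text, with no notion of whether the continuation is true.

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram "model" that always emits the most
# frequent next word it saw in training. Its confidence reflects
# pattern frequency, never factual verification.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def predict(word):
    # Pick the statistically most likely continuation.
    return next_words[word].most_common(1)[0][0]

print(predict("is"))  # "paris" wins purely because it appeared more often
```

Scale the same idea up to billions of parameters and a web-sized corpus, and the failure mode is the same: a plausible continuation is emitted whether or not it is correct.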

Causes

  • Training Data Gaps: The model has little or no training data on a topic, so it interpolates from loosely related patterns.
  • Pattern Over-reliance: The model prioritizing linguistic patterns over factual accuracy.
  • Contextual Confusion: Misinterpreting complex or ambiguous prompts.

Mitigation

  • Using retrieval-augmented generation (RAG) to supply factual context at inference time.
  • Implementing verification techniques to check outputs against reliable sources.
  • Adjusting 'temperature' settings to control how creative or predictable the AI's responses are; lower temperatures make sampling more deterministic.
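The RAG idea above can be sketched in a few lines. This is a minimal, hypothetical illustration: `knowledge_base`, `retrieve`, and `build_grounded_prompt` are invented names, the retrieval is naive keyword overlap, and a production system would use an embedding model and a vector store instead.

```python
# Hypothetical in-memory document store standing in for a real corpus.
knowledge_base = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is 8,849 metres tall.",
    "The Great Wall of China is over 21,000 km long.",
]

def tokens(text):
    # Crude tokenizer: lowercase words with edge punctuation stripped.
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, documents, k=1):
    # Naive keyword-overlap retrieval; real systems rank by
    # embedding similarity instead.
    def score(doc):
        return len(tokens(query) & tokens(doc))
    return sorted(documents, key=score, reverse=True)[:k]

def build_grounded_prompt(query, documents):
    # Prepend retrieved facts so the model answers from supplied
    # context rather than from learned patterns alone.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_grounded_prompt("How tall is Mount Everest?", knowledge_base)
print(prompt)
```

Grounding the prompt this way does not eliminate hallucination, but it gives the model verifiable material to condition on, which is why RAG is typically the first mitigation tried.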

Related Terms