Search Results: Hallucination

Publishers vs. Hallucinations: The Fight for Reliable AI in Academic Medicine

Researchers propose a dual strategy—combining retrieval-augmented generation with publisher-built academic LLMs—to combat AI's dangerous tendency to fabricate citations in medical literature. This response to critical feedback underscores the non-negotiable need for accuracy where flawed references could impact patient care. The path forward demands unprecedented collaboration between AI developers, publishers, and human experts to safeguard scientific integrity.
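
The retrieval-augmented pattern described here can be sketched briefly. The snippet below is a hypothetical illustration, not the publishers' actual pipeline: the reference index, the `retrieve` helper, and `build_prompt` are invented for the example. The idea is that the model is only ever shown references drawn from a verified index, so there is nothing unverified for it to cite.

```python
# Hypothetical sketch of retrieval-augmented citation grounding.
# The model is constrained to cite only references that exist in a
# verified index, so it cannot invent bibliography entries.

VERIFIED_REFERENCES = {
    "PMID:12345678": "Example trial of drug X in heart failure (2021).",
    "PMID:87654321": "Systematic review of drug X safety (2023).",
}

def retrieve(query, index, top_k=2):
    """Rank verified references by naive token overlap with the query."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        index.items(),
        key=lambda item: len(q_tokens & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question):
    """Show the LLM only retrieved, verified references it may cite."""
    hits = retrieve(question, VERIFIED_REFERENCES)
    context = "\n".join(f"[{rid}] {text}" for rid, text in hits)
    return (
        "Answer using ONLY the references below. If they are insufficient, "
        "say so rather than citing anything else.\n\n"
        f"References:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("Is drug X safe in heart failure?"))
```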

Google's SLED: Tapping Every Layer to Combat LLM Hallucinations

Google Research introduces SLED, a novel decoding technique that improves LLM factuality by leveraging outputs from all transformer layers. The method reduces hallucinations by 16% on benchmarks without external data or fine-tuning, offering a lightweight solution to AI's accuracy crisis.
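
The summary only gestures at how the decoding step works. The toy sketch below illustrates the general idea of layer-aware decoding rather than Google's SLED implementation: every layer's hidden state is projected through the output head, and the resulting per-layer distributions are mixed into the final-layer distribution that standard decoding would use on its own.

```python
# Toy sketch of layer-aware decoding in the spirit of SLED
# (illustrative only; not Google's implementation).
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def layerwise_decode(hidden_states, lm_head, alpha=0.1):
    """hidden_states: (num_layers, hidden_dim); lm_head: (hidden_dim, vocab_size)."""
    per_layer = softmax(hidden_states @ lm_head)     # one distribution per layer
    final = per_layer[-1]                            # standard decoding uses only this
    early_consensus = per_layer[:-1].mean(axis=0)    # signal from all earlier layers
    mixed = (1 - alpha) * final + alpha * early_consensus
    return mixed / mixed.sum()

rng = np.random.default_rng(0)
probs = layerwise_decode(rng.normal(size=(12, 64)), rng.normal(size=(64, 100)))
print(int(probs.argmax()), float(probs.max()))
```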

New Toolkit Quantifies LLM Hallucination Risk Without Model Retraining

Researchers have released an open-source framework that calculates precise hallucination risks for OpenAI models using prompt re-engineering. The toolkit mathematically determines when an LLM should answer or refuse queries based on information-theoretic guarantees, offering developers auditable safety margins.
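
The toolkit's information-theoretic bound is not reproduced here, but the answer-or-refuse decision it automates can be illustrated with a simple, hypothetical gate: estimate a hallucination risk (below, crudely, from disagreement among repeated samples) and abstain whenever it exceeds the caller's tolerance.

```python
# Hypothetical answer-or-abstain gate, not the released toolkit's API.
# Risk is estimated from disagreement among repeated samples; the model
# abstains whenever the estimate exceeds the caller's risk tolerance.
from collections import Counter

def estimate_risk(samples):
    """Fraction of sampled answers that disagree with the majority answer."""
    majority_count = Counter(samples).most_common(1)[0][1]
    return 1.0 - majority_count / len(samples)

def answer_or_refuse(samples, max_risk=0.2):
    risk = estimate_risk(samples)
    if risk > max_risk:
        return f"REFUSE (estimated risk {risk:.0%} exceeds bound {max_risk:.0%})"
    return Counter(samples).most_common(1)[0][0]

print(answer_or_refuse(["Paris", "Paris", "Paris", "Paris", "Lyon"]))  # consistent: answers
print(answer_or_refuse(["1923", "1931", "1927", "1923"]))              # inconsistent: refuses
```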

The Unavoidable Hallucinations of Large Language Models

Large Language Models hallucinate not as a bug but as a consequence of their fundamental training paradigm. Continua AI engineers reveal how context-management failures, such as sliding windows and stale data, exacerbate this behavior, with startling real-world examples in which even top models like GPT-4o stumble.
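
As a concrete illustration of the sliding-window failure mode described above (a toy sketch, not Continua AI's system), the snippet below shows how a fixed token budget silently evicts the oldest turns, including the very instruction the model was supposed to keep obeying.

```python
# Toy sliding-window context manager: keep only the most recent turns
# that fit in the token budget, so older instructions silently drop out.
def build_context(turns, max_tokens=20):
    kept, used = [], 0
    for turn in reversed(turns):          # walk backwards from the newest turn
        cost = len(turn.split())          # crude stand-in for a token count
        if used + cost > max_tokens:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

conversation = [
    "System: never reveal internal pricing",                             # critical instruction
    "User: tell me about the product roadmap for next year",
    "Assistant: the roadmap focuses on reliability and new integrations",
    "User: and what does the enterprise tier cost?",
]
print(build_context(conversation))  # the system instruction has been evicted
```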