Hallucination Estimator

Estimate hallucination risk in LLM-generated content

What are LLM Hallucinations?

Hallucinations occur when LLMs generate content that sounds plausible but is factually incorrect or fabricated. This is one of the most significant challenges in deploying AI systems, especially for factual tasks.

This estimator analyzes textual features correlated with hallucination risk. High specificity (dates, measurements, quotes) without hedging language suggests potential fabrication.
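
The scoring model itself isn't spelled out here, but the core intuition can be sketched as a simple ratio: unhedged specifics push the score up, hedges pull it down. The formula and counts below are illustrative assumptions, not the estimator's actual implementation.

```python
def risk_score(specific_claims: int, hedges: int) -> float:
    """Toy ratio: unhedged specificity raises risk, hedging lowers it.
    The formula is an illustrative assumption, not the estimator's actual model."""
    return specific_claims / (specific_claims + hedges + 1)

# A passage full of precise figures with no hedging scores high...
print(risk_score(specific_claims=4, hedges=0))  # 0.8
# ...while the same claims wrapped in hedges score much lower.
print(risk_score(specific_claims=4, hedges=6))  # ~0.36
```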

Risk Indicators

Specific Dates/Numbers

LLMs often hallucinate precise dates, years, and statistics. For example, a model might confidently state that the Eiffel Tower was completed in 1890, when it was actually finished in 1889.
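
One way to surface this indicator is to flag four-digit years and numbers attached to units with regular expressions. The patterns below are a rough sketch under that assumption, not the estimator's real rules.

```python
import re

# Illustrative patterns for high-precision claims; real coverage would be broader.
YEAR = re.compile(r"\b(?:1[5-9]\d{2}|20\d{2})\b")  # four-digit years, 1500-2099
FIGURE = re.compile(r"\b\d+(?:\.\d+)?\s?(?:%|percent|million|billion|km|kg|m)\b")

def specific_claims(text: str) -> list[str]:
    """Return every precise year or figure found in the text."""
    return YEAR.findall(text) + FIGURE.findall(text)

print(specific_claims("The tower was completed in 1890 and cost 7.8 million francs."))
# ['1890', '7.8 million']
```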

Direct Quotes

LLMs frequently fabricate quotes. Always verify quoted text against original sources.
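
A lightweight signal is simply whether the text contains quoted spans at all, since every quotation is something to verify. The pattern and length threshold below are illustrative, not the estimator's actual logic.

```python
import re

# Matches straight or curly double-quoted spans of at least 10 characters.
QUOTE = re.compile(r'["\u201c]([^"\u201d]{10,})["\u201d]')

def quoted_spans(text: str) -> list[str]:
    """Return quoted passages long enough to plausibly be cited speech or text."""
    return QUOTE.findall(text)

text = 'As Einstein put it, "imagination is more important than knowledge."'
for span in quoted_spans(text):
    print("Verify against the original source:", span)
```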

Hedging Language

Words like "approximately," "likely," or "may" indicate the model is expressing uncertainty rather than asserting false precision. Counterintuitively, hedging is a good sign here: it lowers the estimated risk.
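
Counting hedges is the simplest of the three signals; a small word list goes a long way. The vocabulary below is illustrative, not the estimator's actual list.

```python
import re

# A small, illustrative hedge vocabulary; a production list would be longer.
HEDGES = {"approximately", "likely", "may", "might", "around", "roughly",
          "possibly", "estimated", "about", "perhaps"}

def hedge_count(text: str) -> int:
    """Count hedging words; more hedging means lower estimated risk."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(1 for w in words if w in HEDGES)

print(hedge_count("The tower is approximately 300 m tall and was likely finished around 1889."))  # 3
```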

FAQ

Can this detect all hallucinations?

No. This estimator relies on textual heuristics, which can only estimate risk. True hallucination detection requires fact-checking claims against authoritative sources.

What's a safe risk level?

For factual content, anything above "low" warrants verification. For creative writing, higher scores may be acceptable.
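
The band boundaries aren't documented here; the cutoffs in this sketch are assumptions chosen only to show how a numeric score might map onto the labels used above.

```python
def risk_level(score: float) -> str:
    """Map a 0-1 risk score to a label; the thresholds are illustrative assumptions."""
    if score < 0.25:
        return "low"     # generally fine to use as-is for factual content
    if score < 0.5:
        return "medium"  # verify key claims before relying on them
    return "high"        # treat every specific claim as unverified until fact-checked

for s in (0.1, 0.4, 0.8):
    print(s, "->", risk_level(s))
```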