The know-it-all AI

THE MODELS · CHATGPT · CLAUDE · EDUCATION

9/6/2025 · 1 min read

Based on this article by Lakshmi Varanasi at Business Insider.

OpenAI researchers claim they've found the root cause of AI hallucinations: the models are designed to guess. According to a new paper, large language models are trained to always produce an answer, even when they're unsure, because standard evaluation metrics reward confident guessing over honest uncertainty.

Essentially, these AIs are "faking it till they make it," leading them to confidently generate false information. The good news is that by redesigning these evaluation methods to penalize guessing, we may finally be able to teach AI to admit when it doesn't know the answer.
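To make the incentive concrete, here's a small sketch of the scoring arithmetic. The exact grading scheme is a hypothetical illustration, not the paper's actual proposal: under the usual right-or-wrong grading, a model that guesses can never score worse than one that abstains, while a scheme that deducts points for wrong answers only makes guessing pay off above a confidence threshold.

```python
# Hypothetical illustration of the incentive argument, not the paper's method.
# Assumption: binary grading gives 1 point for a correct answer and 0 for a
# wrong answer or "I don't know"; the penalized scheme deducts a point for
# each wrong answer but still gives 0 for abstaining.

def expected_score_binary(p_correct: float, guess: bool) -> float:
    """Binary grading: correct = 1, wrong or abstain = 0."""
    return p_correct if guess else 0.0

def expected_score_penalized(p_correct: float, guess: bool,
                             wrong_penalty: float = 1.0) -> float:
    """Hypothetical grading that penalizes wrong answers; abstaining scores 0."""
    if not guess:
        return 0.0
    return p_correct - (1.0 - p_correct) * wrong_penalty

if __name__ == "__main__":
    for p in (0.2, 0.5, 0.8):
        print(f"confidence {p:.0%}:")
        print(f"  binary:    guess={expected_score_binary(p, True):+.2f}, "
              f"abstain={expected_score_binary(p, False):+.2f}")
        print(f"  penalized: guess={expected_score_penalized(p, True):+.2f}, "
              f"abstain={expected_score_penalized(p, False):+.2f}")
    # Under binary grading, guessing never scores worse than abstaining, so a
    # model tuned to maximize the benchmark learns to always answer. Under the
    # penalized scheme, guessing only pays off above 50% confidence, which
    # rewards saying "I don't know" the rest of the time.
```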

Check out the article.
