
Author of an OpenAI research paper on hallucinations in language models; cited as an example in which a chatbot incorrectly stated his PhD dissertation title and birthday.
How media typically covers Adam Tauman Kalai
Referenced in coverage
Language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty; benchmarks should therefore credit abstention rather than penalizing it relative to incorrect answers.