
Georgia Tech's Santosh Vempala Explains Why Language Models Hallucinate, His Research With OpenAI
Deep Papers
How Pre‑training Encourages Hallucinations (04:04)
Santosh explains how maximum-likelihood pre-training drives a model to match its training distribution, and why that leads to hallucinations on facts it has not seen.
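
A rough sketch of the argument, in my own notation rather than the episode's: maximum-likelihood pre-training on a corpus $D$ minimizes cross-entropy against the empirical training distribution $\hat{p}_D$, and the unconstrained minimizer of that objective is $\hat{p}_D$ itself:

L(\theta) = -\,\mathbb{E}_{x \sim \hat{p}_D}\bigl[\log p_\theta(x)\bigr] = H(\hat{p}_D) + \mathrm{KL}\bigl(\hat{p}_D \,\|\, p_\theta\bigr)

The KL term vanishes only when $p_\theta = \hat{p}_D$, so a well-fit model reproduces the statistical patterns of its training data; for facts that appear rarely or not at all in the corpus, matching those patterns forces the model to spread probability over plausible-sounding but unverified completions.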


