Georgia Tech's Santosh Vempala Explains Why Language Models Hallucinate, His Research With OpenAI

Deep Papers

How Pre‑training Encourages Hallucinations

Santosh explains how maximum-likelihood pre-training pushes the model to match the training distribution, and why that leads to hallucinations on facts that never appeared in training.
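
As a quick sketch of the point (the notation below is illustrative and not taken from the episode), maximum-likelihood pre-training minimizes the cross-entropy between the model \(p_\theta\) and the data distribution:

\[
\mathcal{L}(\theta) = -\,\mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_\theta(x)\big],
\]

which is minimized exactly when \(p_\theta = p_{\text{data}}\). For a fact that never appears in the training data, matching the training distribution gives the model no way to single out the true answer, so it spreads probability over plausible-sounding completions and, when sampled, can state one of them with confidence.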
