AI Frontiers: The Physics of AI with Sébastien Bubeck

Microsoft Research Podcast

Hallucination and Factual Error in Language Models

When it's making arithmetic mistakes, you can also view that as a kind of hallucination: the model hallucinated that this step is not necessary and that it can move on to the next stage immediately. There could be many ways to resolve those hallucinations. Maybe we want to look inside the model a little bit more; maybe we want to change the training pipeline a little bit. Reinforcement learning with human feedback can help. We do not know the answer to that question.

Transcript excerpt from 41:30
