Why language models hallucinate, revisiting Amodei’s code prediction and AI in the job market

Mixture of Experts

Navigating Language Model Hallucinations

This chapter examines a recent OpenAI paper on language model hallucinations and their impact on reliability. The speakers argue that better training methods and evaluation metrics are needed to balance accuracy against uncertainty, and they explore the complexities of creativity in language models, suggesting that some hallucinations can yield genuinely innovative insights.
