
#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

80,000 Hours Podcast


Reassessing AI Alignment Literature

This chapter critiques prominent works on the AI alignment problem, arguing that key texts fail to address the challenges posed by modern machine learning. It also explores the speakers' personal journeys and the experiences that shaped their understanding of AI issues.

