#154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

80,000 Hours Podcast

Misconceptions in AI Safety: The Flaw of Worst-Case Reasoning

This chapter addresses common misunderstandings of AI safety that stem from relying on limited online sources. It emphasizes the need for context when assessing AI alignment strategies and cautions against drawing conclusions from worst-case scenarios alone.

