
His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore

Doom Debates

00:00

Navigating AI Alignment and Safety

This chapter explores the challenges of developing artificial intelligence safely, particularly the risk of misaligned AGI. It weighs the potential risks and benefits of monitoring AI companies and considers what it would mean to align AI with human values. The conversation highlights the difficulty of predicting AI behavior, the need to update beliefs in response to new evidence, and the geopolitical risks posed by competing superintelligences.
