
Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position

Doom Debates

00:00 · Why Would AI Diverge from Human Goals?

Robert asks why an AI would ever stop serving us; Liron argues that we've never achieved true alignment and explains the risks posed by feedback loops.
