
12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast


The Basic Organizing Framework for Doom

In the last three months, I've been working on a very particular case where I currently think existing techniques would lead to doom. And so while I have this hope, I'm kind of just willing to say, like, ah, here's a wild case, or like a very unrealistic thing that gradient descent might learn. But that's still enough of a challenge that I want to change, or like, to design an algorithm that addresses that case. Because my hope is, like, working with really simple cases like that helps guide us towards it, if there is any nice, simple algorithm that never tries to kill you.
