12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

Is It a Ranking Over Policies?

I'm most often thinking of it as: there's some set of problems that seem necessary for outer alignment. I don't really believe that the problems are going to split cleanly into "these are the outer alignment problems." It's more that the outer alignment problems, or the things that are obviously necessary for outer alignment, are more likely to be useful stepping stones, or a warm-up problem or something. Unlike the outer alignment part, which I'm approaching more from this warm-up problem perspective, I think of it in terms of high-stakes versus low-stakes decisions. If you have a reward function that captures what humans care about well enough, and…
