12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

CHAPTER

Is It a Ranking Over Policies?

I most often think of it as: there's some set of problems that kind of seem necessary for outer alignment. I don't really believe that the problems are going to split up as, like, these are the outer alignment problems. It's more like the outer alignment problems, or the things that are sort of obviously necessary for outer alignment, are more likely to be useful stepping stones, or like a warm-up problem or something. Unlike the outer alignment part, which I'm doing more in this warm-up problem perspective, I think of it in terms of high-stakes versus low-stakes decisions. If you have a reward function that captures what humans care about well enough, and…

