3min chapter

12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

CHAPTER

Is It a Ranking Over Policies?

I am most often thinking of it as: there's some set of problems that kind of seem necessary for outer alignment. I don't really believe that the problems are going to split cleanly into, like, "these are the outer alignment problems." It's more like the outer alignment problems, or the things that are sort of obviously necessary for outer alignment, are more likely to be useful stepping stones, or like a warm-up problem or something. And on the outer alignment part, which I'm doing more in this warm-up problem perspective, I think of it in terms of high-stakes versus low-stakes decisions. If you have a reward function that captures what humans care about well enough, and
