3-minute chapter

24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

CHAPTER

The Role of Humans in AI Alignment

I would be excited to be replaced by AI. I think humans should always stay in the loop somehow. There's this quote from "Planning for AGI and Beyond" that says it's possible that AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly. And then it says we think a slower takeoff is easier to make safe. So one thing I wonder is: if we make this really smart, you know, human-level alignment researcher that we then effectively 10x or 100x or something, does that end up playing into this recursive self-improvement loop? You can't have that recursion without also improving your alignment a lot. There's just no way that that...
