24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

The Alignment Problem

What I worry about the most is not that our systems aren't aligned enough, but that we don't actually know how aligned they are. I've significantly increased my optimism over the last two years that this is a very practical problem that we can just do. Even if I turned out to be wrong, and it did turn out to be much harder than we thought, I think that would still be really useful evidence about the problem.

