24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

The Importance of Scalable Oversight

I think we're actually in a place where we can measure a lot of progress on alignment, and so for RLHF specifically we could make various interventions. It has a good chance of getting us to the goal we actually want to reach, which is an automated alignment researcher that is roughly human-level. The current paradigm of language model pre-training is pretty well suited to the kind of alignment plan that I'm super excited about.
