AXRP - the AI X-risk Research Podcast

24 - Superalignment with Jan Leike



The Importance of Scalable Oversight

I think we're actually in a place where we can measure a lot of progress on alignment, and so for scalable oversight specifically we could make various interventions. That has a good chance of getting us to the goal we actually want to reach, which is an automated alignment researcher that is roughly human-level. The current paradigm of language model pre-training is pretty well suited to the kind of alignment plan that I'm super excited about.

