24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

How to Do Scalable Oversight with Alignment Research

I think there is a really important property that alignment research has that we can leverage for scalable oversight: I think it's fundamentally easier to evaluate alignment research than it is to do it. So, for example, consider recursive reward modeling, where the basic idea is that you have some kind of AI system that you use as an assistant to help you evaluate some other AI system. Because evaluation is easier than generation, the task the assistant has to do is a simpler task.
