
24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

CHAPTER

How to Do Scalable Oversight With Alignment Research

I think there is some really important property that alignment research has that we can leverage for scalable oversight. I think it's fundamentally easier to evaluate alignment research than it is to do it. And so, for example, if you think about something like recursive reward modeling, the basic idea is that you have some kind of AI system that you use as an assistant to help you evaluate some other AI system. And because evaluation is easier than generation, the task that the assistant has to do is a simpler task.
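To make the structure Leike is describing a bit more concrete, here is a minimal sketch of the assistant-aided evaluation loop: one model generates a candidate solution, an assistant model critiques it, and an overseer uses that critique to reach a verdict. This is an illustration only, not anything from the episode or from any real codebase; every function and name here is a hypothetical placeholder.

```python
# Sketch of assistant-aided oversight: a generator attempts a hard task,
# an assistant critiques the result, and an overseer judges with the
# critique's help. All model calls below are stand-in stubs.

from dataclasses import dataclass


@dataclass
class Judgment:
    approved: bool
    rationale: str


def generate(task: str) -> str:
    """Main AI system attempts the hard task (e.g. a piece of alignment research)."""
    return f"<candidate solution for: {task}>"


def critique(task: str, candidate: str) -> str:
    """Assistant AI evaluates the candidate: flags possible flaws, checks claims.
    The premise is that evaluation is an easier task than generation."""
    return f"<critique of the candidate for '{task}', noting possible errors>"


def judge(task: str, candidate: str, assistant_critique: str) -> Judgment:
    """Overseer (human or cheaper process) makes the final call, aided by the critique."""
    looks_ok = "possible errors" not in assistant_critique  # stand-in for human review
    return Judgment(approved=looks_ok, rationale=assistant_critique)


if __name__ == "__main__":
    task = "propose an experiment to measure reward hacking"
    candidate = generate(task)
    review = critique(task, candidate)
    print(judge(task, candidate, review))
```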
