24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

00:00

How Important Is the Human Level Qualifier in Alignment Research?

"The more you can accelerate what we do there, the larger the impact will be. We don't know how to rely on superintelligence, or even on systems that are significantly smarter than humans. The question is really: how risky is it to run that system on the task of alignment research? It's doing so much stuff that we can't look at all of it ourselves. I think another key capability here is self-exfiltration. So how good would the model be at breaking the security precautions, accessing its own weights, and trying to copy them somewhere else on the internet?"

