
13 - First Principles of AGI Safety with Richard Ngo

AXRP - the AI X-risk Research Podcast

CHAPTER

How to Automate Alignment Research

I don't think we have particularly strong candidates right now for ways in which you can use an AGI to prevent scaling up to dangerous regimes. I feel uncertain about how difficult or extreme governance interventions would need to be in order to actually get the world to think, "Hey, let's slow down a bit. Let's be much more careful." But it still feels plausible that "pivotal act" is a little bit of a misnomer, as the world sort of wakes up to the scale and scope of the problem.

