13 - First Principles of AGI Safety with Richard Ngo

AXRP - the AI X-risk Research Podcast

CHAPTER

Is There a Dog That Wants to Take Over the World?

I had previously underestimated how much Eliezer's views relied on a few sort of very deep abstractions that kind of all fitted together. And so any optimism that I have about AI needs to be grounded in much more specific details and plans for what might happen, and so on. I don't think you can really separate his views on intelligence, his views on consequentialism or agency, his views on recursive self-improvement, things like that. He keeps trying to explain in ways where people pick up on the particular thing he's trying to explain, but not without a good handle on the overall set of intuitions.
