
13 - First Principles of AGI Safety with Richard Ngo

AXRP - the AI X-risk Research Podcast


Is There a Dog That Wants to Take Over the World?

I had previously underestimated how much Eliezer's views relied on a few very deep abstractions that all fit together, and so any optimism that I have about AI governance needs to be grounded in much more specific details and plans for what might happen, and so on. I don't think you can really separate his views on intelligence, his views on consequentialism or agency, his views on recursive self-improvement, things like that. He keeps trying to explain them in ways where people pick up on the particular thing he's trying to explain, but without a good handle on the overall set of intuitions.

