13 - First Principles of AGI Safety with Richard Ngo

AXRP - the AI X-risk Research Podcast

The Limits of Intelligence

I think there are kind of two opposing mistakes that different groups of people are making. So Eliezer, and MIRI more generally, it really does feel like they're thinking about systems that are so idealized that they aren't very relevant. On the other hand, it feels like a bunch of the more machine learning focused alignment researchers don't take this idea of optimization pressure seriously enough. The key thing that I would like to see from these types of researchers is just a statement of what the core insight is, because I feel pretty uncertain about that for a lot of existing research.
