
13 - First Principles of AGI Safety with Richard Ngo

AXRP - the AI X-risk Research Podcast


Is There AGI?

"How good or bad should we expect AGI to be, in terms of its impact on the world and what we care about?"

"The overall effect is likely to be dominated by the possibility of these extreme risks and scenarios. It seems like there's a good chance that we're going to build an AGI that might, like, kill everyone or, like, enslave everyone."

