Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

Future of Life Institute Podcast

The Naive Safety Effort Hypothesis

The naive safety effort assumption is that companies will be racing forward to develop the best tech in the least amount of time. It's basically saying: I'm going to tell a story in this post in which the system is trained in a certain way, and that training causes it to want to take over the world. That's the scenario I call the naive safety effort assumption.
