Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

Future of Life Institute Podcast

CHAPTER

The Naive Safety Effort Hypothesis

Yeah, so I call this the naive safety effort assumption. The naive safety effort assumption is that companies will be racing forward to develop the best tech in the least amount of time. It's basically saying that, well, I'm going to tell a story in this post in which the system is trained in a certain way, and that training causes it to want to take over the world.
