LONGTERMISM: Why You Should Care About Future People

The Next Big Idea

The Risks of Artificial General Intelligence

Sally Kohn: I've been haunted by Nick Bostrom's vulnerable world hypothesis ever since I first read about it. She says we need to be thinking about these technologies well in advance, before they are already widely deployed, because the next pandemic could be a hundred times worse than COVID-19. The risks that arise when we get to AI systems that are as good as, or better than, human beings at basically all human tasks are really scary, she adds.
