Bonus: Preventing an AI-Related Catastrophe

Hear This Idea

Is There a Risk of an Existential Catastrophe in AI?

Experts disagree on the degree to which AI poses an existential risk. In one survey, over half of AI researchers put the chance of an existential catastrophe at greater than 5%. Two of the leading labs developing AI, DeepMind and OpenAI, have teams dedicated to solving technical safety issues. Even so, we think this problem remains highly neglected, with only around 300 people worldwide working directly on it.

