
Bonus: Preventing an AI-Related Catastrophe

Hear This Idea

CHAPTER

Is There a Risk of an Existential Catastrophe in AI?

Experts disagree on the degree to which AI poses an existential risk. Two of the leading AI labs, DeepMind and OpenAI, have teams dedicated to solving technical safety issues. In one survey, over half of AI researchers put the chance of an existential catastrophe from AI at greater than 5%. Even so, we think this problem remains highly neglected, with only around 300 people worldwide working directly on it.
