3min chapter

#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

80,000 Hours Podcast

CHAPTER

The Future of Artificial Intelligence

Coyak: We've been worrying about ways that artificial intelligence could go wrong for ten or 15 years. But I think the mainstream of culture has also gradually started to see that AI can be a really big deal. And when it's deployed in important functions, if you haven't properly figured out how it's going to behave, and fully understood all the consequences it can have, then things can really go awry.

Coyak: Is there anything more that humanity should urgently do, that we haven't already done, to address the risk of asteroids or comets, or supervolcanoes, or other natural phenomena like that?
