4min chapter

Bonus: Preventing an AI-Related Catastrophe

Hear This Idea

CHAPTER

AI Development

We think transformative AI is sufficiently likely in the next 20 to 80 years that it is well worth it, in expected value terms, to work on this issue now. You might object that future generations will take care of it, and all the work we do now will be in vain. We hope so, but it might not be prudent to take that risk. The next argument is that we might need to solve alignment anyway to make AI useful: making an AI system do anything useful and making its goals aligned with its human designers' ultimate objectives seem like closely related problems.
