Roman Yampolskiy on Objections to AI Safety

Future of Life Institute Podcast

CHAPTER

The Importance of AGI in Life

We don't have to have superintelligent AI. It's not a requirement for a happy existence; we can do all the things we want, including life extension, with much less intelligent systems. Maybe building them is a very bad idea, and we should not do that.

So is it because such a superintelligence will be running over a long period of time, increasing the cumulative risk of failure over, say, decades or centuries, that we can't accept even a tiny probability of failure for these systems?

I would suspect it would be a very quick process. Expecting something to be 100% safe is just unrealistic in any field.
