Roman Yampolskiy on Objections to AI Safety

Future of Life Institute Podcast

CHAPTER

The Importance of Long-Term Planning

All of the horrible things that are going on right now, which an AGI system might be able to help with, the running level of existential risk from other factors, so I mentioned nuclear risk and engineered pandemics: do you find that this pushes you in the direction of saying we should accept a higher level of risk when we're thinking about whether to deploy AGI?

If there was an asteroid coming and we could not stop it any other way, then maybe it would make sense, in nine and a half years, to press this button. When we have nothing left to lose, it becomes a very profitable bet. It's an interesting fact of the world that we haven't thought hard about these questions, what level...
