3min chapter

EP 63: Eliezer Yudkowsky (AI Safety Expert) Explains How AI Could Destroy Humanity

The Logan Bartlett Show

CHAPTER

The Future of Deep Learning

If we had 50 years and unlimited retries to figure out how to align a superintelligence, I actually wouldn't be all that worried about it. The thing I'm worried about with superintelligence is that you get it wrong and then you don't get to learn from your mistakes, because you're dead. If we had the textbook from the future with all the simple things that actually work for aligning superintelligence, we'd probably just do it and it would just work on the first try. It's horrifying to be told: get this right on the first try, or humanity dies. Why do we have to get this right? Because otherwise the superintelligence is unaligned in a way that kills you.
