EP 63: Eliezer Yudkowsky (AI Safety Expert) Explains How AI Could Destroy Humanity

The Logan Bartlett Show

Chapter: The Future of Deep Learning

If we had 50 years and unlimited retries to figure out how to align a superintelligence, I actually wouldn't be all that worried about it. The thing I'm worried about with superintelligence is that you get it wrong, and then you don't get to learn from your mistakes, because you're dead. If we had the textbook from the future with all the simple things that actually work for aligning superintelligence, we'd probably just do it, and it would just work on the first try. It's horrifying to be told: get this right on the first try or humanity dies. Why do we have to get this right? Because otherwise the superintelligence is unaligned, and that kills you.
