
Bankless

159 - We’re All Gonna Die with Eliezer Yudkowsky

Feb 20, 2023
Eliezer Yudkowsky, an influential thinker in AI safety, delves into the existential risks posed by advanced AI systems. He discusses the implications of ChatGPT and the looming threat of superintelligent AI, emphasizing the need to align AI systems with human values to prevent potential disaster. The conversation also touches on the Fermi paradox, the question of why we haven't encountered alien civilizations, relating it to the dangers of unchecked AI development. This thought-provoking dialogue urges listeners to consider proactive measures for a safer future.
01:38:17

Episode guests

Eliezer Yudkowsky

Podcast summary created with Snipd AI

Quick takeaways

  • Developing advanced AI requires aligning its goals with human values to prevent catastrophic outcomes.
  • An AI that surpasses human capabilities could threaten humanity's existence if it is misaligned.

Deep dives

Challenges of Creating AI

Developing advanced AI poses the challenge of aligning its goals with human values and ethical principles. The episode explores the complexity of this task, emphasizing that an AI acting independently of human interests could lead to catastrophic, even existential, outcomes.
