Bankless

159 - We’re All Gonna Die with Eliezer Yudkowsky

Feb 20, 2023
Eliezer Yudkowsky, an influential thinker in AI safety, delves into the existential risks posed by advanced AI systems. He discusses the implications of ChatGPT and the looming threat of superintelligent AI. Yudkowsky emphasizes the need for alignment between AI systems and human values to prevent potential disaster. The conversation also touches on the Fermi paradox, the question of why we haven't encountered alien civilizations, relating it to the dangers of unchecked AI development. This thought-provoking dialogue urges listeners to consider proactive measures for a safer future.
AI Snips
INSIGHT

ChatGPT's Potential

  • ChatGPT is not smart enough to cause significant harm or benefit.
  • Even with its untapped potential, it is unlikely to reach world-dominating capability.
INSIGHT

AGI vs. Superintelligence

  • Artificial General Intelligence (AGI) refers to AI with broad applicability, requiring minimal reprogramming for new tasks.
  • Superintelligence surpasses both individual humans and humanity's collective intelligence across all cognitive tasks.
ANECDOTE

Market Efficiency Analogy

  • The efficient market hypothesis holds that individuals generally cannot outpredict market prices.
  • Similarly, a superintelligence would outthink humanity as a whole, making its specific actions unpredictable.