
London Real

Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity

Jul 19, 2024
Dr Roman Yampolskiy discusses the risks of AI superintelligence surpassing humanity, the dangers of losing control over technology smarter than humans, and the potential benefits and drawbacks of universal basic income. The conversation delves into the challenges of programming ethics and human values into AI systems, the risks posed by superintelligent AI, and the evolution of AI safety research over time.
01:18:06

Podcast summary created with Snipd AI

Quick takeaways

  • Creating superintelligent machines poses existential and societal risks, as well as a threat to the meaning people derive from work.
  • Developing narrow AI solutions is crucial to avoid building uncontrollable superintelligent systems.

Deep dives

Potential Danger of Uncontrolled and Advanced AI

Creating superintelligent machines that may be smarter than us poses dangers that go beyond personal financial concerns or job loss. This paradigm shift means building autonomous agents rather than tools, and technology that acts without human intervention can become very dangerous.
