
London Real

Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity

Jul 19, 2024
Dr Roman Yampolskiy discusses the risks of AI superintelligence surpassing humanity, the dangers of losing control over technology smarter than humans, and the potential benefits and drawbacks of universal basic income. The conversation delves into the challenges of programming ethics and human values into AI systems, the risks posed by superintelligent AI, and the evolution of AI safety research over time.
01:18:06

Podcast summary created with Snipd AI

Quick takeaways

  • Creating superintelligent machines poses existential and societal risks, and threatens the meaning people derive from work.
  • Developing narrow AI solutions is crucial to avoid building superintelligent systems we cannot control.

Deep dives

Potential Danger of Uncontrolled and Advanced AI

Creating superintelligent machines that may be smarter than us poses dangers that go beyond personal financial concerns or employment. This paradigm shift means building autonomous agents rather than tools, and technology that acts without human intervention can be very dangerous.
