
Roman Yampolskiy

Computer scientist, AI safety researcher, and professor at the University of Louisville. Author of 'Considerations on the AI Endgame' and 'AI: Unexplainable, Unpredictable, Uncontrollable'.

Top 5 podcasts with Roman Yampolskiy

Ranked by the Snipd community
487 snips
Jun 2, 2024

#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Roman Yampolskiy, an AI safety and security researcher and author, shares his insights into the profound dangers of superintelligent AI. He discusses the chilling potential for AI to cause human extinction and the urgent need for error-proof safety measures. Roman explores the complexities of open-source AI, likening its risks to those of nuclear weapons, and considers the ethics of AI consciousness. He delves into the philosophical implications of AI integration, weighing its benefits against the existential threats it poses to humanity.
467 snips
Jul 3, 2025 • 2h 22min

#2345 - Roman Yampolskiy

Dr. Roman Yampolskiy, a computer scientist and AI safety researcher, discusses pressing concerns about artificial intelligence and its potential threats. He examines the dangers of superintelligent AI, including deepfakes and the risk of human dependency on technology. Yampolskiy also explores the case for global coordination on AI-induced challenges and urges careful oversight of AI development to prevent catastrophic outcomes. The conversation turns philosophical, proposing that future superintelligent beings could come to perceive humans as threats.
10 snips
May 26, 2023 • 1h 42min

Roman Yampolskiy on Objections to AI Safety

Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/

Timestamps:
00:00 Objections to AI safety
15:06 Will robots make AI risks salient?
27:51 Was early AI safety research useful?
37:28 Impossibility results for AI
47:25 How much risk should we accept?
1:01:21 Exponential or S-curve?
1:12:27 Will AI accidents increase?
1:23:56 Will we know who was right about AI?
1:33:33 Difference between AI output and AI model

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
Jul 19, 2024 • 1h 18min

Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity

Dr Roman Yampolskiy discusses the risks of AI superintelligence surpassing humanity, the dangers of losing control over technology smarter than humans, and the potential benefits and drawbacks of universal basic income. The conversation also covers the challenges of programming ethics and human values into AI systems and how AI safety research has evolved over time.
Sep 15, 2023 • 55min

[AI Futures] A Debate on What AGI Means for Society and the Species - with Roko Mijic and Roman Yampolskiy

In this 'AI Futures' debate, Roko Mijic and Roman Yampolskiy discuss the impact of Artificial General Intelligence (AGI) on society. They explore the controllability of superintelligence, the need for extensive research in AGI development, the predictability and control of complex systems, and the challenges of understanding power dynamics. The episode presents grounded insights from both optimists and skeptics, offering perspectives on the potential dangers and benefits of AGI.
