
Roman Yampolskiy

Latvian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety

Top 3 podcasts with Roman Yampolskiy

Ranked by the Snipd community
365 snips
Jun 2, 2024

#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Roman Yampolskiy, an AI safety researcher and author, discusses the existential risks of AGI, the dangers and complexities of superintelligent AI, the difficulty of aligning AI with human values, potential catastrophic consequences, and the challenges of controlling superintelligent AI systems. The conversation covers creating virtual universes for agents, technology's capacity to manipulate suffering, and the implications of open-sourcing AI. It also touches on the risks of AI surpassing human intelligence, the challenges of verifying AI systems, and the balance between AI capabilities and safety in a capitalist society.
Sep 15, 2023 • 55min

[AI Futures] A Debate on What AGI Means for Society and the Species - with Roko Mijic and Roman Yampolskiy

In this 'AI Futures' debate, Roko Mijic and Roman Yampolskiy discuss the impact of Artificial General Intelligence (AGI) on society. They explore the control and controllability of superintelligence, the need for extensive research in AGI development, the predictability and control of complex systems, and the challenge of understanding the power dynamics AGI would create. The episode offers grounded perspectives from both optimists and skeptics on the potential dangers and benefits of AGI.
Jul 19, 2024 • 1h 18min

Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity

Dr Roman Yampolskiy discusses the risks of AI superintelligence surpassing humanity, the dangers of losing control over technology smarter than we are, and the potential benefits and drawbacks of universal basic income. The conversation delves into the challenges of programming ethics and human values into AI systems, the risks posed by superintelligent AI, and how AI safety research has evolved over time.