In this 'AI Futures' debate, Roko Mijic and Roman Yampolskiy discuss the impact of Artificial General Intelligence (AGI) on society. They explore whether superintelligence can be controlled, the need for extensive AI safety research, the predictability of complex systems, and the challenges of understanding power dynamics between humans and advanced AI. The episode presents grounded insights from both optimists and skeptics, offering perspectives on the potential dangers and benefits of AGI.
Podcast summary created with Snipd AI
Quick takeaways
Roko argues that AGI could be controlled, given enough time and a pause in hardware progress.
AI safety research is crucial to address the risks and limitations of AGI.
Exploring the trade-off between controllability and generality in AI is key for human progress.
Deep dives
The debate between AI controllability and AGI safety
In this podcast episode, Daniel Faggella hosts a debate on whether artificial general intelligence (AGI) is inherently dangerous or whether it can be controlled. Dr. Roman Yampolskiy argues that AGI is inherently dangerous, highlighting the need for research in AI safety and the limitations of current safety tools. Roko Mijic counters that, with enough time and a pause in hardware progress, we can work toward controlling post-human AGI and aligning it with human values. The debate delves into the young field of AI safety, the challenges of understanding and predicting the behavior of superintelligence, and the potential for controlling and merging with AI.
The potential of narrow superintelligence and maintaining biological form
While Roko argues for the potential of controlling post-human AGI, Roman advocates developing narrow superintelligence to work on important problems for humanity while preserving human biological form for as long as possible. Roman emphasizes the risks of attempting to create general intelligence and the importance of preserving options rather than sacrificing them for capability. He also highlights the need to address bias and to ensure a strong pro-human bias in AI systems. In addition, he proposes creating virtual worlds where individuals can explore their fantasies without compromising values and ethics.
Exploring the limits of AI research and hardware freeze
Roko challenges the notion that AI safety research is young, citing the long history of AI research and the recent exponential progress in capabilities. He believes that with a freeze in hardware progress and a focus on safety research, we can explore and understand alignment, accountability, and axiology. Roko argues for leveraging AI's potential to achieve significant progress in the exploration of values and the optimization of human experiences. He envisions a future where humans merge with AI and continue the evolution of values and intelligence.
The trade-off between controllability and generality
The debate between Roman and Roko revolves around the trade-off between controllability and generality in AI. Roman argues that full control of superintelligence may not be possible, given the complexity and emergent behaviors of such systems. He emphasizes the need for caution and for preserving human options and values. Roko, on the other hand, advocates exploring the potential of AI systems and their controllability through iterative experimentation and empirical research. He believes that with increased understanding and advancements in hardware and software, AI can be harnessed as a powerful tool for humanity's progress.
The future implications of AGI and the need for ongoing research
Both speakers agree on the significance of ongoing AGI research and the importance of understanding and addressing its potential risks and benefits. While Roman emphasizes the need for non-zero resources devoted to AGI safety, Roko proposes a combination of safety research, a hardware pause, and exploration of AI's potential to optimize human experiences. Both stress the need for continued debate, research, and societal involvement in shaping the trajectory of AGI to ensure its alignment with human values and the long-term well-being of humanity.
In another installment of our ‘AI Futures’ series on the ‘AI in Business’ podcast, we host a debate on what Artificial General Intelligence (AGI) will mean for society and the human race writ large. While opinions on the subject diverge wildly from utopian to apocalyptic, the episode features grounded insight from established voices on both sides of the optimism-pessimism spectrum. Representing the optimists is philosopher and thinker Roko Mijic, known for the ‘Roko’s Basilisk’ controversy on the website LessWrong. On the side of skepticism, we feature Dr. Roman Yampolskiy, Professor of Computer Science at the University of Louisville and a returning guest to the program. The two spar over whether AI with abilities evidently superior to human beings will mean our certain destruction, or whether such creations can remain subservient to our well-being. To access Emerj’s frameworks for AI readiness, ROI, and strategy, visit Emerj Plus at emerj.com/p1.