In 'The Age of Spiritual Machines,' Ray Kurzweil presents a prophetic blueprint for a future in which the capabilities of computers and humans become increasingly intertwined. The book explores the exponential growth of technology, particularly in artificial intelligence, and predicts a future where computers will exceed human intelligence. Kurzweil discusses the blurring of the line between human and machine, the emergence of new forms of intelligence, and the potential for humans to migrate their consciousness into machines. The book is a thought-provoking analysis of human and artificial intelligence and their evolving relationship in the 21st century.
E.T. Jaynes's 'Probability Theory: The Logic of Science' offers a comprehensive and rigorous treatment of probability theory, emphasizing its logical foundations. Jaynes argues that probability is not merely a measure of subjective belief or long-run frequencies, but rather a framework for logical reasoning under conditions of incomplete information. The book presents a coherent and consistent approach to probability, integrating Bayesian methods and emphasizing the importance of prior knowledge in statistical inference. It challenges traditional frequentist interpretations and provides a powerful alternative for scientific modeling and decision-making. Jaynes's work has had a profound impact on various fields, including physics, statistics, and artificial intelligence.
David Duvenaud is a professor of Computer Science at the University of Toronto, co-director of the Schwartz Reisman Institute for Technology and Society, former Alignment Evals Team Lead at Anthropic, an award-winning machine learning researcher, and a close collaborator of Dr. Geoffrey Hinton. He recently co-authored Gradual Disempowerment.
We dive into David’s impressive career, his high P(Doom), his recent tenure at Anthropic, his views on gradual disempowerment, and the critical need for improved governance and coordination on a global scale.
00:00 Introducing David
03:03 Joining Anthropic and AI Safety Concerns
35:58 David’s Background and Early Influences
45:11 AI Safety and Alignment Challenges
54:08 What’s Your P(Doom)™
01:06:44 Balancing Productivity and Family Life
01:10:26 The Hamming Question: Are You Working on the Most Important Problem?
01:16:28 The PauseAI Movement
01:20:28 Public Discourse on AI Doom
01:24:49 Courageous Voices in AI Safety
01:43:54 Coordination and Government Role in AI
01:47:41 Cowardice in AI Leadership
02:00:05 Economic and Existential Doom
02:06:12 Liron’s Post-Show
Show Notes
David’s Twitter — https://x.com/DavidDuvenaud
Schwartz Reisman Institute for Technology and Society — https://srinstitute.utoronto.ca/
Jürgen Schmidhuber’s Home Page — https://people.idsia.ch/~juergen/
Ryan Greenblatt's LessWrong comment about a future scenario involving a one-time renegotiation of power, in which waste heat from superintelligent AI projects causes the oceans to boil: https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from?commentId=T7KZGGqq2Z4gXZsty
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com