In Uncontrollable, Darren McKee examines artificial intelligence as a concept and technology, describing what AI is, how image generators and language models work, and the risks posed by artificial superintelligence. The book is divided into three parts: 'What is Happening?' which provides an overview of the current AI landscape; 'What are the Problems?' which details potential existential risks; and 'What can we Do?' which discusses ways to mitigate these risks. McKee uses engaging analogies and relatable examples to make the topic accessible to readers without a technical background. Despite its warning about the dangers of AI, the book is ultimately hopeful and offers concrete suggestions for how society and individual citizens can create a safer future with AI.
In this book, Max Tegmark presents his mathematical universe hypothesis, which posits that reality is not just described by mathematics but is actually a mathematical structure. The book is divided into three parts: 'Zooming Out' explores our location in the cosmos and multiverse, 'Zooming In' delves into quantum mechanics and particle physics, and 'Stepping Back' discusses Tegmark's speculative ideas about the mathematical nature of reality. Tegmark introduces four levels of multiverse, culminating in the 'Level IV multiverse,' where all possible mathematical structures have physical existence. The book is written in an accessible and engaging style, using anecdotes and clear explanations to make complex scientific concepts understandable to a broad audience.
In this book, Gregory Clark addresses profound questions about global economic disparities. He argues that the Industrial Revolution and subsequent economic growth in eighteenth-century England were driven by cultural changes, such as the adoption of middle-class values like hard work, rationality, and education. Clark challenges prevailing theories by suggesting that these cultural shifts, rather than institutional or geographical factors, explain the wealth and poverty of nations. The book also discusses the Malthusian trap and how Britain's unique demographic and social dynamics allowed it to break out of this cycle and achieve significant economic growth.
In this book, Nick Bostrom delves into the implications of creating superintelligence, which could surpass human intelligence in all domains. He discusses the potential dangers, such as the loss of human control over such powerful entities, and presents various strategies to ensure that superintelligent systems align with human values. The book examines the 'AI control problem' and the need to endow future machine intelligence with positive values to prevent existential risks.
Dr. Peter Berezin is the Chief Global Strategist and Director of Research at BCA Research, the largest Canadian investment research firm. He’s known for his macroeconomics research reports and his frequent appearances on Bloomberg and CNBC.
Notably, Peter is one of the only macroeconomists in the world who's forecasting AI doom! He recently published a research report estimating a "more than 50/50 chance AI will wipe out all of humanity by the middle of the century".
00:00 Introducing Peter Berezin
01:59 Peter’s Economic Predictions and Track Record
05:50 Investment Strategies and Beating the Market
17:47 The Future of Human Employment
26:40 Existential Risks and the Doomsday Argument
34:13 What’s Your P(Doom)™
39:18 Probability of non-AI Doom
44:19 Solving Population Decline
50:53 Constraining AI Development
53:40 The Multiverse and Its Implications
01:01:11 Are Other Economists Crazy?
01:09:19 Mathematical Universe and Multiverse Theories
01:19:43 Epistemic vs. Physical Probability
01:33:19 Reality Fluid
01:39:11 AI and Moral Realism
01:54:18 The Simulation Hypothesis and God
02:10:06 Liron’s Post-Show
Show Notes
Peter’s Twitter: https://x.com/PeterBerezinBCA
Peter’s old blog — https://stockcoach.blogspot.com
Peter’s 2021 BCA Research Report: “Life, Death and Finance in the Cosmic Multiverse” — https://www.bcaresearch.com/public/content/GIS_SR_2021_12_21.pdf
M.C. Escher’s “Circle Limit IV” — https://www.escherinhetpaleis.nl/escher-today/circle-limit-iv-heaven-and-hell/
Zvi Mowshowitz’s Blog (Liron’s recommendation for best AI news & analysis) — https://thezvi.substack.com
My Doom Debates episode about why nuclear proliferation is bad — https://www.youtube.com/watch?v=ueB9iRQsvQ8
Robin Hanson’s “Mangled Worlds” paper — https://mason.gmu.edu/~rhanson/mangledworlds.html
Uncontrollable by Darren McKee (Liron’s recommended AI x-risk book) — https://www.amazon.com/dp/B0CNNYKVH1
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com