
80,000 Hours Podcast
#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared
Podcast summary created with Snipd AI
Quick takeaways
- Will MacAskill warns that the rapid advancement of AI could compress decades of change into years, demanding urgent societal preparation.
- The podcast highlights the ethical dilemmas posed by AGI, emphasizing the need for a structured framework for human–AI coexistence.
- MacAskill discusses the potential political and economic inequities caused by AI advancements, calling for international collaboration to address these disparities.
- Incremental development strategies are proposed as crucial for democratizing AI technology, fostering a more inclusive approach to its governance.
- The need for aligning AI with human values is emphasized, focusing on moral frameworks to mitigate risks and ensure ethical outcomes.
- MacAskill presents an optimistic perspective on AI's future, encouraging proactive engagement and innovation to enhance human flourishing amidst rapid change.
Deep dives
The Future of AGI and Society
Many companies are currently focusing on developing Artificial General Intelligence (AGI) that matches human capabilities. This transition raises crucial questions about what a societal framework should look like with both humans and a multitude of intelligent AI entities. The lack of a clear and ethical vision for coexistence presents a significant challenge, as society races towards this uncertain future. Without guiding principles, the integration of AGIs into daily life could lead to ethical dilemmas and a loss of control over both technology and governance.
AGI Preparedness and Ethical Challenges
Will MacAskill emphasizes the urgency of preparing for AGI by examining the implications of human-level AI arriving in the near future. His research explores not only the dynamics and challenges that AGI might introduce but also envisions a society where humans and AGIs can coexist ethically. One strategy for navigating potential challenges involves modeling what life would look like post-AGI and steering current technological development towards a preferred reality. The lack of serious discourse on how to achieve a morally acceptable society after the advent of AGI is alarming and signals a need for urgent attention.
The Urgency of Understanding AI Dynamics
MacAskill identifies the rapidly advancing capabilities of AI as a critical reason to reevaluate timelines for achieving AGI. Current technological strides suggest that we may be closer to human-level AI than many anticipate, prompting a need for preemptive planning. Ignoring the potential for an intelligence explosion could produce unforeseen consequences in which ethical considerations are sidelined. A deeper understanding of how AI might evolve can help society prepare for and mitigate possible risks.
Concerns About Historical Perspectives
MacAskill contrasts historical skepticism about living at a crucial moment in history with the need to recognize our current period as pivotal for AI development. Earlier arguments that humanity is unlikely to be living at a hinge of history fail to account for the unprecedented pace of technological evolution. By reevaluating that past reasoning, he underscores the need to anticipate the acceleration of change driven by AI. This invites both philosophical contemplation and practical action to align society with ethical AI deployment.
Rethinking Perspectives on AI and Governance
The discussion includes the importance of reexamining how global governance structures may or may not keep pace with rapid advancements in AI technology. There is potential for countries and corporations to seize power through advanced AI, creating a lopsided geopolitical landscape. To counteract this, collective international efforts toward establishing norms and agreements are necessary to prevent the monopolization of advanced technologies. Creating equitable spaces for all countries, including those with fewer resources, will be critical in preventing further disparities.
The Role of Incremental Advances
MacAskill notes the importance of incremental technological development in guiding society towards better futures. While people often focus on radical innovations, the cumulative impact of gradual advancements should not be overlooked. This approach can facilitate more democratic technological adoption and governance, allowing for broader participation. Additionally, fostering collaboration between stakeholders will be vital to ensuring that the benefits of AI are universally accessible.
Value Alignment in AI Development
The conversation explores the challenge of aligning AI with human values and the risks posed by value misalignment. MacAskill emphasizes the importance of investing in ways to ensure that AGI development considers ethical implications and promotes humane values. Various methodologies for training AI to adopt moral frameworks can be explored in anticipation of this technology's future role. Ultimately, AI design should focus not only on performance but also on embedding moral values.
Identifying Priorities and Approaches
Amid discussions about AI and society, a need arises to prioritize the most pressing challenges and actions. MacAskill highlights the significance of collaborative research and value alignment in promoting beneficial outcomes. His Better Futures work tackles the philosophical complexities of AI while considering how societal structures can capitalize on the positives and mitigate the negatives. A framework combining short-term action with long-term foresight is crucial to shaping an ethical future with AI.
The Case for Iterative Value Loading
One promising approach to improving the chances of beneficial outcomes from an eventual AGI is value loading, which entails instilling moral preferences into AI. While potential risks exist, such as AI taking irrational actions based on misaligned values, ensuring AI reflects humane principles could mitigate negative outcomes. A carefully balanced design is needed, allowing AI to maintain accountability while aligning with ethical standards. By incorporating diverse moral viewpoints, researchers can work toward more comprehensive value-loading strategies.
Opportunities for Entrepreneurship in AI
Amidst the rapid development of AI technologies, a significant opportunity arises for entrepreneurship focused on improving societal outcomes. Founding companies that leverage AI to enhance human capabilities and decision-making can create more ethical, integrative solutions. MacAskill encourages individuals to get involved in facilitating constructive uses of AI for societal benefit. This kind of innovation is crucial and can help establish a cultural foundation for future AI governance.
Reflections on the Future and Hope
Finally, MacAskill presents an optimistic view of the future, driven by the potential of competent AI to enhance human flourishing while reducing the risks of misalignment. By researching and nurturing technology that benefits humanity, we can steer towards a brighter outcome. He argues that caution should not lead to despair, as there are numerous avenues for positive engagement and proactive strategy. The evolving landscape of AI demands collaborative action and a commitment to improving the future for all.
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.
That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.
Links to learn more, highlights, video, and full transcript.
The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.
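To see how the arithmetic of "a century in a decade" can work, here is a toy compounding model (an illustration for this summary, not a calculation from MacAskill's paper): if automated AI research lets effective research capacity double each year, a century's worth of progress at today's pace arrives in about seven calendar years. The growth rates are assumed, purely illustrative figures.

```python
# Toy model of compressed scientific progress (illustrative only --
# not from "Preparing for the intelligence explosion").
# Assumption: once AI automates AI research, effective research
# capacity compounds at some annual multiplier.

def years_to_compress(target_years: float, annual_multiplier: float) -> int:
    """Calendar years until cumulative progress equals `target_years`
    of progress at today's baseline research speed."""
    progress = 0.0   # cumulative progress, in baseline "research-years"
    speed = 1.0      # current research speed relative to today
    calendar_years = 0
    while progress < target_years:
        progress += speed           # one calendar year at current speed
        speed *= annual_multiplier  # capacity compounds as AI improves AI
        calendar_years += 1
    return calendar_years

# A century of progress under different assumed growth rates:
for m in (2.0, 10.0):
    print(f"{m:>4}x/year -> {years_to_compress(100, m)} calendar years")
# Output: 2.0x/year -> 7 calendar years; 10.0x/year -> 3 calendar years
```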
Will compares the resulting situation to a medieval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he's never heard of, while simultaneously grappling with learning that he's descended from monkeys and his god doesn't exist.
What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.
In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:
- Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought
- The three types of intelligence explosion — software, technological, and industrial — and the order in which they occur
- Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
- How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
- Ways AI could radically improve human coordination and decision making
- Why we should aim for truly flourishing futures, not just avoiding extinction
Chapters:
- Cold open (00:00:00)
- Who’s Will MacAskill? (00:00:46)
- Why Will now just works on AGI (00:01:02)
- Will was wrong(ish) on AI timelines and hinge of history (00:04:10)
- A century of history crammed into a decade (00:09:00)
- Science goes super fast; our institutions don't keep up (00:15:42)
- Is it good or bad for intellectual progress to 10x? (00:21:03)
- An intelligence explosion is not just plausible but likely (00:22:54)
- Intellectual advances outside technology are similarly important (00:28:57)
- Counterarguments to intelligence explosion (00:31:31)
- The three types of intelligence explosion (software, technological, industrial) (00:37:29)
- The industrial intelligence explosion is the most certain and enduring (00:40:23)
- Is a 100x or 1,000x speedup more likely than 10x? (00:51:51)
- The grand superintelligence challenges (00:55:37)
- Grand challenge #1: Many new destructive technologies (00:59:17)
- Grand challenge #2: Seizure of power by a small group (01:06:45)
- Is global lock-in really plausible? (01:08:37)
- Grand challenge #3: Space governance (01:18:53)
- Is space truly defence-dominant? (01:28:43)
- Grand challenge #4: Morally integrating with digital beings (01:32:20)
- Will we ever know if digital minds are happy? (01:41:01)
- “My worry isn't that we won't know; it's that we won't care” (01:46:31)
- Can we get AGI to solve all these issues as early as possible? (01:49:40)
- Politicians have to learn to use AI advisors (02:02:03)
- Ensuring AI makes us smarter decision-makers (02:06:10)
- How listeners can speed up AI epistemic tools (02:09:38)
- AI could become great at forecasting (02:13:09)
- How not to lock in a bad future (02:14:37)
- AI takeover might happen anyway — should we rush to load in our values? (02:25:29)
- ML researchers are feverishly working to destroy their own power (02:34:37)
- We should aim for more than mere survival (02:37:54)
- By default the future is rubbish (02:49:04)
- No easy utopia (02:56:55)
- What levers matter most to utopia (03:06:32)
- Bottom lines from the modelling (03:20:09)
- People distrust utopianism; should they distrust this? (03:24:09)
- What conditions make eventual eutopia likely? (03:28:49)
- The new Forethought Centre for AI Strategy (03:37:21)
- How does Will resist hopelessness? (03:50:13)
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore