Highlights: #213 – Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared
Mar 25, 2025
Will MacAskill, a philosopher and AI safety researcher at the Forethought Centre, discusses the staggering potential of AI to compress a century's worth of change into a mere decade. He emphasizes the urgent need to prepare for rapid societal shifts and explores what a positive future with AGI might look like. MacAskill raises crucial concerns about the risks of societal lock-in and public distrust of utopian visions. He also delves into the ethical dilemmas surrounding AGI development and its profound impacts on governance and social values.
The podcast highlights the unprecedented pace of technological change that AI could bring, compressing a century's worth of progress into a single decade and creating serious governance challenges.
There is a pressing concern regarding the ethical coexistence of advanced AI with humanity, emphasizing the need for adaptable decision-making frameworks to avoid rigid ideologies.
Deep dives
Accelerated Technological Progress
The discussion emphasizes the rapid pace of technological advancement, envisioning a scenario where a century’s worth of progress occurs within a mere decade. This hypothetical situation would encompass groundbreaking innovations, such as the development of nuclear weapons, the internet, and AI, alongside significant social and political changes like decolonization and various ideological movements. It highlights the notion that human decision-making and institutions do not evolve at the same speed, creating potential governance challenges. Historical events like the Cuban Missile Crisis illustrate the dangers inherent in accelerated decision-making where critical choices must be made within drastically shortened timeframes.
Challenges of Human and AI Coexistence
A central concern arises about envisioning a future that incorporates advanced AI alongside humanity, questioning how ethical and moral coexistence can be achieved. The conversation reveals a lack of concrete visions for a society where humans and potentially sentient AI interact, often leading to dystopian scenarios. There's apprehension surrounding power dynamics, suggesting that if AI were to hold rights, it could diminish human control. This discussion reflects the complexity and urgency of developing frameworks that allow for a respectful and orderly integration of AI into society while addressing the ethical implications.
Moral Decision-Making and Lock-in Risks
The episode highlights the risk of 'lock-in,' where particular values or systems of governance become entrenched, limiting future choices and adaptive capacities. While there have been attempts throughout history to lock in ideals, the discussion points out that these efforts often dissipate over time due to changing societal values. The advent of AGI poses a new dimension to this risk, where established AI systems could perpetuate specific ideologies indefinitely, potentially leading to a loss of flexible moral reasoning. This necessitates a proactive approach to ensure that future decision-making remains adaptable and does not become rigidly defined by past choices.
The Role of AI Researchers
The podcast notes that AI researchers currently hold significant power over the trajectory of AI development, yet they are working towards their own obsolescence by automating AI research itself. This self-disempowerment raises questions about accountability and governance, and the discussion suggests that researchers could form informal coalitions or unions to advocate for responsible development practices. Such a collective could set standards to ensure that AI advances align with ethical considerations and do not harm society. The conversation underscores the need for the AI community to engage actively with the implications of its work while maintaining oversight against potential risks.
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.
A century of history crammed into a decade (00:00:17)
What does a good future with AGI even look like? (00:04:48)
AI takeover might happen anyway — should we rush to load in our values? (00:09:29)
Lock-in is plausible where it never was before (00:14:40)
ML researchers are feverishly working to destroy their own power (00:20:07)
People distrust utopianism for good reason (00:24:30)
Non-technological disruption (00:29:18)
The 3 intelligence explosions (00:31:10)
These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!
And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong