
The Valmy
#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
Podcast summary created with Snipd AI
Quick takeaways
- Technology opens doors to new possibilities, but its adoption is heavily driven by competitive military and economic forces.
- The Frontier Safety and Governance team at Google DeepMind focuses on evaluating AI capabilities and developing risk mitigation strategies.
- Allan Dafoe emphasizes the need for collaboration among AI teams to effectively address safety and governance challenges.
- Understanding the interplay between human agency and technological capabilities is critical for shaping effective AI policy and development.
- Cooperative AI models are essential for optimizing interactions while minimizing risks associated with multi-agent systems.
Deep dives
The Role of Technology in Shaping Society
Technology acts as a facilitator for new ways of living and interacting, but it is not the sole determinant of societal behavior. Steps taken by various groups in response to emerging technologies are influenced significantly by competitive pressures, particularly in military and economic contexts. If one group harnesses a new technology effectively, other groups may be coerced into adopting it to remain viable. This competitive dynamic suggests that while technology opens doors, the forces driving groups through those doors are rooted in broader social and political structures.
Frontier Safety and Governance at Google DeepMind
The Frontier Safety and Governance team focuses on three main pillars: frontier safety, governance, and planning. This group evaluates emerging capabilities of large general-purpose AI models, seeking to forecast potential risks and develop strategies for risk mitigation. They provide insights into norms, policies, and regulations that should guide the safe use of these powerful AI systems. Their planning efforts aim to identify upcoming considerations as AI technologies evolve toward artificial general intelligence.
Collaborative Culture Within Google DeepMind
Google DeepMind emphasizes collaboration among its various specialized teams to advance the goals of safety and governance effectively. This involves active partnerships with technical safety and policy teams to navigate the complex landscape of AI development. The integration between DeepMind and other Google entities enhances the overarching mission of addressing safety concerns and developing safe AI technologies. Currently, opportunities exist within the team for those interested in working on frontier AI challenges.
From Governance of AI to DeepMind
Allan Dafoe's transition from founding director of the Centre for the Governance of AI to his role at Google DeepMind was driven by the desire for greater impact. He recognized the importance of being embedded within a prominent AI organization to influence decision-making directly. Dafoe believes that advising key decision-makers during pivotal historical moments is essential for shaping the future trajectory of AI development. His role at DeepMind allows him to address challenges related to AI safety and governance more effectively.
The Complexity of Technological Determinism
Technological determinism is a contested lens for understanding how technology shapes historical progress and societal change. Dafoe identifies two perspectives: one that emphasizes technology's autonomous momentum in societal development, and another that centers human agency and decisions. The challenge lies in reconciling these viewpoints to understand how different technologies emerge and influence society. For effective policy development, it is critical to recognize where human decisions can meaningfully interact with technological capabilities to shape historical outcomes.
Differential Technological Development
Differential technological development refers to deliberately accelerating some technologies relative to others, so that protective capabilities arrive before the risks they are meant to address. On this view, timely advances in safety and alignment measures can foster beneficial outcomes. However, Dafoe cautions that identifying viable pathways amidst rapid technological change is difficult. He advocates for increased collaboration and for concentrating effort on technologies that benefit society as a whole.
Cooperative AI's Significance in the Future
Cooperative AI is poised to play a vital role in ensuring that AI models interact harmoniously, optimizing benefits while mitigating risks. Dafoe highlights the importance of investing in cooperative skills to improve how future AI systems interact, both with human communities and with other AI agents. Cooperatively intelligent models address the potential for unanticipated negative consequences arising from interactions between various AI systems. By committing to reinforcing these collaborative capabilities, society can better navigate the complexities of AI deployment.
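To make "cooperative skill" concrete, here is a minimal Python sketch that scores two agent policies on the iterated prisoner's dilemma, a standard stand-in task for multi-agent cooperation. The policies, payoff matrix, and scoring are illustrative assumptions for this summary, not DeepMind's actual methods.

```python
# Minimal sketch: scoring how 'cooperatively skilled' two agent policies are,
# using the iterated prisoner's dilemma as a stand-in task. The policies and
# payoff matrix below are illustrative assumptions, not DeepMind's methods.
from typing import Callable

Move = str  # "C" (cooperate) or "D" (defect)
Policy = Callable[[list[Move]], Move]  # maps opponent's move history to a move

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history: list[Move]) -> Move:
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history: list[Move]) -> Move:
    return "D"

def play(a: Policy, b: Policy, rounds: int = 100) -> tuple[float, float]:
    history_a: list[Move] = []  # moves made by a (visible to b)
    history_b: list[Move] = []  # moves made by b (visible to a)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(history_b), b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a / rounds, score_b / rounds

if __name__ == "__main__":
    # Joint welfare is higher when both policies are cooperatively skilled.
    print("TFT vs TFT:   ", play(tit_for_tat, tit_for_tat))
    print("TFT vs defect:", play(tit_for_tat, always_defect))
```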
Evaluating Frontier Models and Dangerous Capabilities
The evaluation of frontier models is pivotal in determining their capabilities and risks, particularly concerning dangerous abilities. Google's approach involves comprehensive assessments, including of self-reasoning and cyber capabilities, to discern models' potential to act autonomously or maliciously. Dafoe is optimistic about the prospects of AI systems performing a wide range of tasks well, while recognizing the inherent challenges in modeling human-like responses. Continuous observation and testing can measure and enhance capabilities safely.
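As a rough illustration of what such an assessment loop could look like, the sketch below aggregates pass rates by capability category and flags any category that crosses an alert threshold. The `EvalTask` structure, category names, threshold, and `generate` interface are assumptions for the example, not Google's actual evaluation suite.

```python
# Illustrative sketch of a dangerous-capability evaluation loop. Category
# names, the threshold, and the `generate` interface are assumptions for
# this example, not Google's actual evaluation suite.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalTask:
    category: str                  # e.g. "self-reasoning", "cyber"
    prompt: str
    passed: Callable[[str], bool]  # grader for the model's response

def run_evals(generate: Callable[[str], str], tasks: list[EvalTask],
              alert_threshold: float = 0.2) -> dict[str, float]:
    """Return per-category pass rates; flag categories above the threshold."""
    by_category: dict[str, list[bool]] = {}
    for task in tasks:
        response = generate(task.prompt)
        by_category.setdefault(task.category, []).append(task.passed(response))
    rates = {cat: sum(results) / len(results)
             for cat, results in by_category.items()}
    for cat, rate in rates.items():
        if rate >= alert_threshold:
            print(f"ALERT: '{cat}' pass rate {rate:.0%} exceeds threshold")
    return rates
```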
The Importance of External Validity in Evaluations
Ensuring external validity in AI evaluations is essential for determining how models will perform in real-world applications. Long-term observation and comparison against established industry benchmarks can provide insight into a model's adaptability and the mitigations available. Superforecasters and academics can also lend their expertise to predicting when technologies will demonstrate particular capabilities. This multidimensional analysis helps form a clearer understanding of the risks and benefits associated with AI technologies.
Global Coordination and Governance Challenges
Global coordination is crucial for ensuring that powerful AI technologies are developed and deployed responsibly, addressing scalability and societal impacts. Dafoe argues for a multi-layered approach to governance, involving industry, government entities, and civil society. Open discussion of the limitations and potential consequences of AI technologies will facilitate more informed policymaking. By involving broader coalitions in this debate, it is possible to create robust frameworks for the responsible management of AI's rapid evolution.
Episode: #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
Release date: 2025-02-14

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.
That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.
Links to learn more, highlights, video, and full transcript.
This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.
Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.
But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.
As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.
As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities, while good, is imperfect. If we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.
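To illustrate the elicitation problem, here is a hypothetical sketch: the same model is scored under several prompting strategies, and the reported result is the best across them, because any single strategy can understate what the model can do. The strategy names and the `generate` interface are invented for illustration.

```python
# Sketch of why capability measurements are elicitation-sensitive: a model
# can fail an eval under one prompting strategy and pass under another, so
# a single strategy can understate what it can do. The strategies and the
# `generate` interface are invented for this illustration.
from typing import Callable

def zero_shot(task: str) -> str:
    return task

def chain_of_thought(task: str) -> str:
    return task + "\nThink step by step before answering."

def with_tools_hint(task: str) -> str:
    return "You may use a scratchpad and any listed tools.\n" + task

def elicited_pass(generate: Callable[[str], str], task: str,
                  passed: Callable[[str], bool]) -> bool:
    # Report the *best* result across strategies: missing a working
    # elicitation would make the capability look absent when it isn't.
    strategies = [zero_shot, chain_of_thought, with_tools_hint]
    return any(passed(generate(s(task))) for s in strategies)
```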
Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.
That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.
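A toy sketch of how such staged gating could be encoded follows; the stage names mirror the description above, but the gating logic itself is an illustrative assumption, not DeepMind's actual release process.

```python
# Toy sketch of a "defence in depth" staged-release gate: a model advances
# one stage at a time and can be rolled back if monitoring surfaces problems.
# The gating logic is an illustrative assumption, not DeepMind's process.
from enum import Enum

class Stage(Enum):
    INTERNAL_TESTING = 1
    TRUSTED_EXTERNAL_TESTERS = 2
    LIMITED_RELEASE = 3
    GENERAL_AVAILABILITY = 4

def next_stage(current: Stage, safety_review_passed: bool,
               incidents_observed: bool) -> Stage:
    if incidents_observed:
        # Weights were never released, so we can retreat and add safeguards.
        return Stage(max(current.value - 1, Stage.INTERNAL_TESTING.value))
    if safety_review_passed and current is not Stage.GENERAL_AVAILABILITY:
        return Stage(current.value + 1)
    return current
```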
But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.
Host Rob and Allan also cover:
- The most exciting beneficial applications of AI
- Whether and how we can influence the development of technology
- What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
- Why cooperative AI may be as important as aligned AI
- The role of democratic input in AI governance
- What kinds of experts are most needed in AI safety and governance
- And much more
Chapters:
- Cold open (00:00:00)
- Who's Allan Dafoe? (00:00:48)
- Allan's role at DeepMind (00:01:27)
- Why join DeepMind over everyone else? (00:04:27)
- Do humans control technological change? (00:09:17)
- Arguments for technological determinism (00:20:24)
- The synthesis of agency with tech determinism (00:26:29)
- Competition took away Japan's choice (00:37:13)
- Can speeding up one tech redirect history? (00:42:09)
- Structural pushback against alignment efforts (00:47:55)
- Do AIs need to be 'cooperatively skilled'? (00:52:25)
- How AI could boost cooperation between people and states (01:01:59)
- The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
- Aren’t today’s models already very cooperative? (01:13:22)
- How would we make AIs cooperative anyway? (01:16:22)
- Ways making AI more cooperative could backfire (01:22:24)
- AGI is an essential idea we should define well (01:30:16)
- It matters what AGI learns first vs last (01:41:01)
- How Google tests for dangerous capabilities (01:45:39)
- Evals 'in the wild' (01:57:46)
- What to do given no single approach works that well (02:01:44)
- We don't, but could, forecast AI capabilities (02:05:34)
- DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
- How 'structural risks' can force everyone into a worse world (02:15:01)
- Is AI being built democratically? Should it? (02:19:35)
- How much do AI companies really want external regulation? (02:24:34)
- Social science can contribute a lot here (02:33:21)
- How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore