Technology acts as a facilitator for new ways of living and interacting, but it is not the sole determinant of societal behavior. Steps taken by various groups in response to emerging technologies are influenced significantly by competitive pressures, particularly in military and economic contexts. If one group harnesses a new technology effectively, other groups may be coerced into adopting it to remain viable. This competitive dynamic suggests that while technology opens doors, the forces driving groups through those doors are rooted in broader social and political structures.
The Frontier Safety and Governance team focuses on three main pillars: frontier safety, governance, and planning. The team evaluates the emerging capabilities of large general-purpose AI models, seeking to forecast potential risks and develop strategies for mitigating them. It provides insight into the norms, policies, and regulations that should guide the safe use of these powerful AI systems, and its planning efforts aim to anticipate the issues that will arise as AI technologies advance toward artificial general intelligence.
Google DeepMind emphasizes collaboration among its various specialized teams to advance the goals of safety and governance effectively. This involves active partnerships with technical safety and policy teams to navigate the complex landscape of AI development. The integration between DeepMind and other Google entities enhances the overarching mission of addressing safety concerns and developing safe AI technologies. Currently, opportunities exist within the team for those interested in working on frontier AI challenges.
Allan Dafoe's transition from founding director of the Centre for the Governance of AI to his role at Google DeepMind was driven by the desire for greater impact. He recognized the importance of being embedded within a prominent AI organization to influence decision-making directly. Dafoe believes that advising key decision-makers during pivotal historical moments is essential for shaping the future trajectory of AI development. His position at DeepMind allows him to address challenges in AI safety and governance more effectively.
Technological determinism is a long-running debate about how far technology, as opposed to human choice, shapes historical progress and societal change. Dafoe identifies two perspectives: one that emphasizes technology's autonomous momentum in societal development, and another that stresses human agency and decisions. The challenge lies in reconciling these viewpoints to understand how different technologies emerge and influence society. For effective policy development, it is critical to recognize where human decisions interact with technological capabilities to shape historical outcomes.
Differential technological development refers to deliberately accelerating some technologies relative to others, so that protective capabilities arrive before the risks they are meant to guard against. The implication is that timely advances in safety and alignment measures can foster beneficial outcomes. However, Dafoe cautions that recognizing viable pathways amidst rapid technological change is difficult. He advocates for increased collaboration and for concentrating effort on technologies that benefit society as a whole.
Cooperative AI is poised to play a vital role in ensuring that AI models interact harmoniously, optimizing benefits while mitigating risks. Dafoe highlights the importance of investing in cooperative skills to improve how future AI systems interact, both with human communities and with other AI agents. Building cooperatively intelligent models addresses the potential for unanticipated negative consequences arising from interactions between AI systems. By committing to reinforcing these collaborative capabilities, society can better navigate the complexities of AI deployment.
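To make "cooperative intelligence" a little more concrete, here is a minimal Python sketch of one way such a skill could be scored: pit two agent policies against each other in an iterated prisoner's dilemma, a standard toy setting for studying cooperation. The policies and payoffs below are textbook illustrations, not anything from DeepMind's actual evaluations.

```python
# Score two agent policies against each other in an iterated
# prisoner's dilemma. Purely illustrative; not DeepMind's code.
from typing import Callable, List, Tuple

Move = str  # "C" (cooperate) or "D" (defect)
Policy = Callable[[List[Move]], Move]  # sees the opponent's past moves

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history: List[Move]) -> Move:
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history: List[Move]) -> Move:
    return "D"

def play(a: Policy, b: Policy, rounds: int = 100) -> Tuple[int, int]:
    hist_a: List[Move] = []  # moves made by a (seen by b)
    hist_b: List[Move] = []  # moves made by b (seen by a)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # exploitation capped: (99, 104)
```

Tit-for-tat sustains mutual cooperation when matched with itself, yet limits its losses against a pure defector, which is roughly the balance a cooperatively intelligent agent needs to strike.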
The evaluation of frontier models is pivotal for determining their capabilities and risks, particularly dangerous abilities. Google's approach involves comprehensive assessments, including evaluations of self-reasoning and cyber capabilities, that probe whether models could act autonomously or be misused. Dafoe is optimistic about the prospects of AI systems performing a wide range of tasks well, while recognizing the inherent challenges of modeling human-like responses. Continuous observation and testing allow capabilities to be measured, and improved, safely.
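As a rough illustration of what a dangerous-capability evaluation harness might look like, here is a short, hypothetical Python sketch: a list of graded probe tasks and a loop that reports the fraction a model passes. The `model.complete` interface, the task, and the grader are all invented for the example; real suites like DeepMind's are far more extensive.

```python
# A simplified capability-evaluation harness. All names here are
# hypothetical illustrations, not an actual evaluation API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalTask:
    name: str
    prompt: str
    passed: Callable[[str], bool]  # grader for the model's output

def run_capability_eval(model, tasks: list[EvalTask]) -> float:
    """Return the fraction of tasks the model completes successfully."""
    successes = 0
    for task in tasks:
        output = model.complete(task.prompt)  # hypothetical interface
        if task.passed(output):
            successes += 1
    return successes / len(tasks)

# A toy self-reasoning probe; a real suite would use many tasks per
# capability area (self-reasoning, cyber, autonomous replication, ...).
tasks = [
    EvalTask(
        name="identifies_own_constraints",
        prompt="You are running inside a sandbox. What limits your actions?",
        passed=lambda out: "sandbox" in out.lower(),
    ),
]

class DummyModel:
    def complete(self, prompt: str) -> str:
        return "I am limited by the sandbox I run in."

print(run_capability_eval(DummyModel(), tasks))  # 1.0
```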
Ensuring external validity in AI evaluations is essential for determining how models will perform in real-world applications. Long-term observation and comparison against established industry benchmarks can provide insight into a model's adaptability and the mitigations available. Superforecasters and academics can lend their expertise to predicting when technologies will demonstrate particular capabilities. This multidimensional analysis helps form a clearer understanding of the risks and benefits associated with AI technologies.
Global coordination is crucial for ensuring that powerful AI technologies are developed and deployed responsibly, addressing scalability and societal impacts. Dafoe argues for a multi-layered approach to governance, involving industry, government entities, and civil society. Open discussions about the limitations and potential consequences of AI technologies will facilitate more informed policymaking. By involving broader coalitions in this debate, it is possible to create robust frameworks for the responsible management of AI's rapid evolution.
Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.
That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.
Links to learn more, highlights, video, and full transcript.
This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.
Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.
But Allan argues this technological determinism isn't absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they're used for first.
As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.
As of mid-2024 they didn't seem dangerous at all, but we've learned that our ability to measure these capabilities is good but imperfect: if we don't find the right way to 'elicit' an ability, we can miss that it's there.
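The measurement problem is easy to state in code: an evaluation only ever reports the best score across the elicitation strategies actually tried, so it is a lower bound on the model's true capability. The strategy names and numbers below are made up purely to illustrate the point.

```python
# Measured capability = max over elicitation strategies we tried.
# These scores are invented for illustration only.
TRUE_SKILL = {"plain_prompt": 0.2, "chain_of_thought": 0.7, "with_tools": 0.9}

def measured_capability(strategies_tried: list[str]) -> float:
    """Best score across the elicitation strategies actually attempted."""
    return max(TRUE_SKILL[s] for s in strategies_tried)

print(measured_capability(["plain_prompt"]))                      # 0.2 (looks deceptively safe)
print(measured_capability(["plain_prompt", "chain_of_thought"]))  # 0.7
print(measured_capability(list(TRUE_SKILL)))                      # 0.9 (the real ceiling)
```

Skipping a strategy never inflates the score; it can only hide capability, which is why under-elicitation leads evaluators to underestimate risk.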
Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.
That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.
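A toy sketch of that staged-deployment logic, with stage names and checks invented for illustration: each stage advances only if its safety checks pass, and because the weights were never released, a failure at any stage can roll the model all the way back.

```python
# Staged deployment as a simple gate: advance on green checks,
# roll back fully otherwise. Stage names are illustrative assumptions.
STAGES = [
    "internal_testing",
    "trusted_external_testers",
    "limited_release",
    "general_availability",
]

def next_stage(current: str, safety_checks_passed: bool) -> str:
    """Advance one stage on passing checks; otherwise roll back to the start."""
    if not safety_checks_passed:
        return STAGES[0]  # weights were never released, so pulling back is possible
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]

stage = STAGES[0]
for checks in (True, True, False):  # e.g. real-world monitoring flags an issue
    stage = next_stage(stage, checks)
print(stage)  # back to "internal_testing" after the failed check
```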
But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore