AI, US-China relations, and lessons from the OpenAI board (with Helen Toner)
Feb 26, 2025
Helen Toner, a director at Georgetown's Center for Security and Emerging Technology (CSET), dives into the US-China AI race and its implications for national security. She discusses how AI and autonomous drones are shifting the dynamics of warfare, and the growing ethical concerns around autonomous weapons. Toner also explores the societal impacts of AI, highlighting the disconnect between public sentiment and political action. Along the way, she draws an unexpected parallel between horse training and parenting, emphasizing emotional connection as the foundation for communication.
The dynamics of power and fear within organizations can suppress necessary criticism, leading to a collective action problem that hampers transparency and innovation.
The US-China competition in AI is framed as a matter of national security, pushing policymakers to prioritize innovation at the risk of missing opportunities for collaborative governance.
There is a growing public concern about AI technologies, yet this apprehension often fails to translate into substantial political action or regulation efforts.
Deep dives
Insights from Serving on the OpenAI Board
Serving on the OpenAI board for two and a half years gave Helen Toner valuable insight into the dynamics of power within organizations. She highlights the fear that surrounds criticizing powerful individuals: many people refrain from voicing dissent because of potential repercussions. This creates a collective action problem in which individuals choose silence, and necessary criticism is suppressed. Toner observed these dynamics not only at OpenAI but in broader societal contexts, where fear and power can likewise obstruct truth and accountability.
Governance Structures and Incentives
Toner reflects on how theoretically sound corporate governance structures can falter under real-world pressure. Although these structures are designed to prioritize the public interest, intense market forces and financial incentives often complicate decision-making in practice. This raises the question of whether corporate governance can withstand the pressures created by capital demands and investor expectations. As AI technologies progress, aligning incentives with ethical governance becomes increasingly critical, warranting a closer examination of current corporate practices.
Responding to Organizational Dynamics
The difficulty of building consensus within organizations often leads to decision-making paralysis, where individuals avoid expressing dissenting opinions. Toner discusses how voting with the majority appears to preserve social capital but can forestall the critical discussions needed for progress. This dynamic, while common in corporate environments, stifles innovation and transparency. By fostering acceptance of diverse opinions, organizations can encourage more productive conversations and ultimately reach better decisions.
US-China AI Competition Landscape
The competition between the United States and China in the realm of AI is framed as a fundamental aspect of national security discourse. Policymakers view the relationship through a competitive lens, which affects their outlook on AI advancements and innovations. Toner emphasizes the narrative that China is rapidly catching up to or even surpassing the US in AI technology, igniting discussions about the implications for national security. This competitive mindset encourages a focus on innovation while posing risks of underestimating collaborative opportunities that could enhance global AI governance.
The Importance of Hardware in AI Development
Discussions of the semiconductor supply chain reveal the strategic importance of hardware in AI advancement. Toner explains that the manufacturing of advanced chips, concentrated primarily in Taiwan, plays a crucial role in the tech landscape. As the US government steps up efforts to bolster domestic semiconductor production, the challenge of maintaining a competitive position is greater than ever. Understanding the interplay between software capabilities and hardware production is essential to the future of AI technology.
Public Perception and Regulation of AI
Public opinion surrounding AI technology is evolving, yet there remains a considerable gap between concern and actionable regulatory measures. Toner notes that while surveys indicate strong public apprehension about AI's societal impacts, these concerns often do not translate into significant political initiatives. The potential for state-level regulation may serve as an avenue for addressing public fears, but federal-level action remains elusive. Striking a balance between fostering innovation and ensuring safety through regulation becomes increasingly important as AI technologies continue to shape our world.
Is it useful to vote against a majority when you might lose political or social capital for doing so?
What are the various perspectives on the US / China AI race? How close is the competition?
How has AI been used in Ukraine?
Should we work towards a global ban on autonomous weapons? And if so, how should we define "autonomous"?
Is there any potential for the US and China to cooperate on AI?
To what extent do government officials — especially senior policymakers — worry about AI? Which particular worries are on their minds?
To what extent is the average person on the street worried about AI?
What's going on with the semiconductor industry in Taiwan?
How hard is it to get an AI model to "reason"?
How could animal training be improved? Do most horses fear humans?
How do we project ourselves onto the space around us?
Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne. Follow her on Twitter at @hlntnr.