Lewis Ho, an expert in international institutions, discusses how international collaborations can help realize the benefits of advanced AI systems and manage the risks they pose. The podcast explores the need for international governance, the challenges posed by advanced AI, standard setting, and the concept of a Frontier AI Collaborative. It also highlights the importance of international efforts to provide underserved societies with access to advanced AI systems.
An intergovernmental body could convene experts to assess the challenges and opportunities of advanced AI, promoting scientific consensus.
An intergovernmental or multi-stakeholder organization could set norms and standards to manage global threats from advanced AI systems, aiming to reduce misuse and accident risks.
Deep dives
Commission on Frontier AI: Fostering scientific consensus
A proposed intergovernmental body to develop expert consensus on the challenges and opportunities of advanced AI. It aims to facilitate scientific consensus by convening experts to assess key AI topics, such as interventions for sustainable development, the effects of regulation on innovation, the distribution of benefits, and the monitoring of dual-use capabilities. Challenges include a lack of relevant scientific research, the risk of politicization, and the significant cost of scientific work.
Advanced AI Governance Organization: Promoting norms and standards, supporting implementation, monitoring compliance
An intergovernmental or multi-stakeholder organization to set norms and standards and assist in their implementation to manage global threats from advanced AI systems. It aims to harmonize AI regulation, facilitate compliance monitoring, and reduce misuse and accident risks. Challenges include slow standard-setting processes, incentivizing participation, and scoping issues.
Frontier AI Collaborative: Enabling international access to AI
An international public-private partnership to ensure underserved societies can benefit from advanced AI. It aims to spread beneficial technology by acquiring or developing AI systems and distributing them. Challenges include obstacles to benefiting from AI access and managing the diffusion of dual-use technologies.
AI Safety Project: Conducting technical safety research
An international collaboration to conduct extensive research on AI safety. It focuses on improving reliability, reducing misuse risks, and developing safety protocols. Challenges include the risk of diverting safety researchers away from frontier AI development and ensuring adequate model access while managing security risks.
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity. International collaborations can unlock AI’s ability to further sustainable development, and coordination of regulatory efforts can reduce obstacles to innovation and the spread of benefits. Conversely, the potential dangerous capabilities of powerful and general-purpose AI systems create global externalities in their development and deployment, and international efforts to further responsible AI practices could help manage the risks they pose. This paper identifies a set of governance functions that could be performed at an international level to address these challenges, ranging from supporting access to frontier AI systems to setting international safety standards. It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations: 1) a Commission on Frontier AI that facilitates expert consensus on opportunities and risks from advanced AI, 2) an Advanced AI Governance Organization that sets international standards to manage global threats from advanced models, supports their implementation, and possibly monitors compliance with a future governance regime, 3) a Frontier AI Collaborative that promotes access to cutting-edge AI, and 4) an AI Safety Project that brings together leading researchers and engineers to further AI safety research. We explore the utility of these models and identify open questions about their viability.