Robert Trager on International AI Governance and Cybersecurity at AI Companies
Aug 20, 2023
Robert Trager, an expert on AI governance and cybersecurity, discusses the goals of AI governance, the incentives of governments and companies, regulatory diversity, the track record of anticipatory regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance.
AI governance involves addressing misuse risks, accident risks, and structural risks, including the potential for AI to facilitate harmful or dangerous activities.
International AI governance is crucial to tackle global risks and should establish common minimal standards while allowing for national variability.
Anticipatory regulation and assessing failure rates play a key role in designing effective AI governance and regulatory agreements.
Addressing the security dilemma in AI requires adapting regulatory approaches to factors like offense-defense balance and hidden AI capabilities.
Deep dives
AI Governance: Identifying Misuse, Accident, and Structural Risks
AI governance involves three categories of risk: misuse risks, accident risks, and structural risks. Misuse risks refer to the potential for AI technology to be misused, democratizing harmful or dangerous activities; examples include engineered viruses and cyberattacks enabled by advanced AI capabilities. Accident risks cover unintended consequences and negative outcomes resulting from the use of AI. Structural risks involve the profound changes and challenges that arise from the advancement and widespread adoption of AI, such as social justice issues and unequal access to technology. While AI offers immense positive potential, addressing and mitigating these risks is crucial to AI governance.
The Need for International AI Governance
International AI governance is essential given the global nature of AI development and the risks it poses. Regulatory standards and agreements are needed to address concerns that cross borders. While countries differ in societal values and regulatory preferences, there is a need to identify common minimal standards internationally. These standards should address risks that can be mitigated through feasible strategies, while leaving room for national variability that reflects local values. Determining the extent and scope of international governance is an ongoing discussion, with balanced incentives and considerations of offense, defense, and the global security dilemma as key factors.
Anticipatory Regulation and Balancing Failure Rates in AI Governance
Anticipatory regulation involves proactively considering the implications and potential risks of AI technology before they fully emerge. A crucial aspect of AI governance is weighing the failure rate of an AI system against the risks at stake: not all failures carry the same level of risk, and different use cases can tolerate different degrees of failure. Some applications may allow a certain failure rate with acceptable trade-offs, while high-stakes scenarios require a near-zero failure rate. Designing effective regulatory agreements and institutions involves assessing these risks and setting the appropriate level of failure tolerance.
AI's Impact on the Security Dilemma and Strategic Considerations
AI's implications for the security dilemma and strategic considerations are complex and depend on several factors. The security dilemma refers to situations where actions taken to enhance one country's security inadvertently threaten or destabilize others. In the AI context, the relevant factors include the offense-defense balance, the ability of AI systems to defend against other AI systems, and the potential for countries to hide their AI capabilities. While AI's effect on the security dilemma has yet to fully unfold, regulatory approaches and international agreements will need to adapt to these strategic considerations.
Verification and Compliance in Agreements
Verification plays a crucial role in ensuring compliance with agreements between governments. In the nuclear case, advanced monitoring technology allows reliable detection of weapons and tests, which makes mutually assured destruction credible and helps maintain stability. Similarly, in the AI realm, verification techniques are needed to detect and monitor the actions of actors in the field. However, technical challenges and the invasive nature of verification procedures pose obstacles. Balancing effective verification against privacy concerns and the risk of revealing national security information remains a significant challenge in international agreements.
Control over Computing Hardware Supply Chain
A key question is whether a small group of aligned states can control the inputs to advanced AI: computing hardware (chips), algorithms, and data. Currently, the chip supply chain is narrow, giving a small club of countries significant control over the latest computing technologies. The uncertainties lie in how long this control will last, and in the risks posed by existing compute stocks and potential modifications to them. Controlling access to the inputs for AI technologies would have implications for national power dynamics and would require more comprehensive regulation and cooperation among countries.
International AI Governance Framework
An international framework for governing AI is proposed, consisting of three parts: an international body responsible for setting AI standards, a jurisdictional certification body that audits and monitors adherence to those standards at the national level, and implementation of the standards through domestic regulation. This model draws on successful international regimes such as the International Civil Aviation Organization and the Financial Action Task Force. The proposed framework aims to strike a balance between global standards and the autonomy of jurisdictions, using incentives and trade-related consequences to encourage compliance. While challenges and concerns remain, this model offers a structured approach to international AI governance.
Robert Trager joins the podcast to discuss AI governance, the incentives of governments and companies, the track record of international regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance. We also discuss Robert's forthcoming paper International Governance of Civilian AI: A Jurisdictional Certification Approach. You can read more about Robert's work at https://www.governance.ai
Timestamps:
00:00 The goals of AI governance
08:38 Incentives of governments and companies
18:58 Benefits of regulatory diversity
28:50 The track record of anticipatory regulation
37:55 The security dilemma in AI
46:20 Offense-defense balance in AI
53:27 Failure rates and international agreements
1:00:33 Verification of compliance
1:07:50 Controlling AI supply chains
1:13:47 Cybersecurity at AI companies
1:21:30 The jurisdictional certification approach
1:28:40 Objections to AI governance