Lawfare Daily: Kevin Frazier on Prioritizing AI Research
Sep 3, 2024
Kevin Frazier, Assistant Professor of Law at St. Thomas University College of Law and Co-Director of the Center for Law and AI Risk, discusses his paper advocating for prioritizing AI research over regulation. He highlights the early state of AI regulation and the need for targeted research to understand specific AI risks. Drawing parallels to automotive safety, Frazier calls for international cooperation in AI governance, pointing to successful models such as CERN and the IPCC as templates for a robust framework for global research efforts.
Kevin Frazier emphasizes the urgent need for targeted AI risk research to address under-theorized regulatory challenges, akin to historical automotive safety efforts.
The importance of establishing an international institution for AI risk research is highlighted, promoting cooperative approaches to enhance legitimacy and regulatory coordination.
Deep dives
The Current State of AI Development
The capabilities of AI systems have progressed significantly, surpassing expectations set for 2024. Recent models like Llama 3.1 illustrate the rapid advancements made in just two years since the launch of technologies such as ChatGPT. Despite concerns that data and computational shortages could slow innovation, the continued emergence of AI agents capable of autonomous action poses new regulatory challenges. As AI becomes increasingly intertwined with daily life, the urgency of addressing these regulatory issues escalates.
Regulatory Efforts and Their Limitations
Regulatory attempts to address AI risks have varied, with mixed effectiveness across regions. The EU's AI Act categorizes AI systems by risk level but lacks the enforcement mechanisms needed to significantly reduce potential harms. In the U.S., regulatory efforts appear even less robust, marked by inconsistent messaging from lawmakers and a sense of urgency that has not yet materialized into effective legislation. Overall, the current regulatory landscape falls short in addressing harms ranging from algorithmic discrimination to existential risk.
The Importance of Risk-Oriented Research
A critical gap exists in research on the tangible harms linked to AI technologies, which lags far behind the capital invested in AI development. While private labs spend enormous sums advancing AI capabilities, the public sector struggles to match that investment, as exemplified by the European Commission's $100 billion proposal for risk research over seven years. Historical analogies, such as the response to safety problems in the automobile industry, highlight the need for proactive research to inform regulatory measures. Without a solid foundation of risk research, regulation of AI will remain reactive rather than preventive.
The Need for International Collaboration
To conduct AI risk research effectively, an international institution could consolidate expertise and resources across borders. Models such as CERN and the IPCC underscore the advantages of a cooperative approach to global challenges like those presented by AI. Including diverse stakeholders would enhance the legitimacy and acceptance of the resulting research, promoting coordinated regulatory efforts. Ultimately, this model would not only mitigate risks but also ensure that a broad range of perspectives shapes the future of AI governance.
Associate Professor at the University of Minnesota Law School and Lawfare Senior Editor Alan Rozenshtein sits down with Kevin Frazier, Assistant Professor of Law at St. Thomas University College of Law, Co-Director of the Center for Law and AI Risk, and a Tarbell Fellow at Lawfare. They discuss a new paper that Kevin has published as part of Lawfare’s ongoing Digital Social Contract paper series titled “Prioritizing International AI Research, Not Regulations.”
Frazier sheds light on the current state of AI regulation, noting that it's still in its early stages and is often under-theorized and under-enforced. He underscores the need for more targeted research to better understand the specific risks associated with AI models. Drawing parallels to risk research in the automobile industry, Frazier also explores the potential role of international institutions in consolidating expertise and establishing legitimacy in AI risk research and regulation.