Episode 32: Cary Coglianese on the Use of AI in Public Enforcement
Mar 12, 2025
Cary Coglianese, the Edward B. Shils Professor of Law and Political Science at the University of Pennsylvania, dives into the intriguing world of AI in public enforcement. He discusses the shift in AI regulation from rigid 'guardrails' to more flexible 'leashes' that embrace innovation while ensuring safety. Coglianese weighs AI's efficiencies against ethical standards, highlighting the necessity of human oversight and the implications of recent regulatory developments, including the EU's Digital Markets Act.
- AI regulation is evolving toward flexible frameworks, favoring responsive 'leashes' over static 'guardrails' to balance innovation and safety.
- Regulators need proactive risk management strategies to address antitrust issues related to AI, preventing violations before they occur.
Deep dives
Shifting Metaphors in AI Regulation
Thinking about AI regulation is evolving from fixed governance metaphors to more dynamic ones. The distinction between 'guardrails' and 'leashes' is particularly significant: guardrails imply static, protective barriers, while leashes suggest a responsive and flexible regulatory framework. The metaphor captures the inherent fluidity of AI technology, which requires oversight that adapts to its rapid development. Viewing regulation as a 'leash' leaves room for innovation while preserving necessary human oversight, allowing AI to explore new avenues without straying into harmful territory.
The Balance of AI Safety and Innovation
Discussions of AI governance must carefully navigate the balance between safety and innovation, especially in light of recent global summits focused on AI. Although the current trend favors unleashing AI capabilities, safety standards remain crucial for sustainable advancement. Past experience suggests that irresponsible practices, if they result in mishaps, can trigger severe industry backlash and heavy-handed regulation. A proactive approach that pairs innovation with safety considerations through regulatory leashes can therefore support the responsible evolution of AI technologies.
AI Regulation and Antitrust Approaches
Regulators can adopt management-based strategies to address antitrust concerns arising from the use of AI in market transactions. Rather than reacting to violations after they occur, the focus should be on preventing problems through proactive risk management requirements for firms. This shift mirrors practices in other regulatory spheres and aims to establish a framework in which organizations are responsible for monitoring their own AI systems to avoid anticompetitive behavior. With such a framework, antitrust authorities can develop more effective strategies for regulating the increasingly complex interactions generated by AI algorithms in commerce.
In episode 32, Thibault Schrepel and Teodora Groza speak with Cary Coglianese, Edward B. Shils Professor of Law and Professor of Political Science at the University of Pennsylvania Law School. They talk about Cary's research on the use of AI in public enforcement. Subscribe to our newsletter at https://law.stanford.edu/computationalantitrust for regular updates on the Stanford Computational Antitrust project.
References:
- Leashes, Not Guardrails: A Management-Based Approach to Artificial Intelligence Risk Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5137081
- Antitrust by Algorithm, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3985553