The State and Local AI Regulation Landscape with Dean Ball
Mar 19, 2025
Dean Ball, a Research Fellow at George Mason University’s Mercatus Center and author of Hyperdimensional, dives into the complexities of AI regulation at state and local levels. He discusses the surge in legislation following lessons learned from social media, including over 800 proposed bills. The conversation covers the challenges of regulating malicious deepfakes, algorithmic discrimination, and the bipartisan effort in shaping laws. Ball also highlights California's SB 1047 and the necessity for clear standards to protect innovation while addressing safety concerns.
State-level AI legislation is surging with 833 bills introduced in early 2025, driven by proactive governance to avoid past regulatory mistakes.
Deepfake technology prompts extensive legislative debates focused on addressing misuse and balancing regulation with First Amendment rights amid rising political concerns.
Legislation on algorithmic discrimination emphasizes preemptive measures and AI impact assessments, contrasting with federal approaches that often react to existing biases.
Deep dives
The Surge in AI Legislation
There has been a significant increase in state-level AI legislation, with 833 bills introduced across various states in early 2025, more than double the previous year's activity. This legislative surge is partly driven by policymakers' desire to avoid repeating the past mistakes associated with social media regulation. Policymakers view AI as a transformative technology that needs preemptive measures to avoid potential pitfalls observed in other digital domains. The bipartisan concern across political lines reflects a unified sentiment: proactive governance is necessary in the face of rapidly evolving AI capabilities.
The Role of Deepfake Legislation
Deepfake technology has prompted a wave of legislative debate, largely focused on laws addressing its misuse, particularly in relation to revenge pornography and misinformation. Many states are introducing bills that classify the malicious production and distribution of deepfakes as a criminal or civil offense, although some proposals are seen as redundant given existing laws. Notably, political deepfakes are a growing concern, raising alarms about potential impacts on public perception and elections. Policymakers are weighing how to enforce regulations while safeguarding First Amendment rights in this complex legislative landscape.
Algorithmic Discrimination and Its Complexities
Legislation addressing algorithmic discrimination represents a significant and complex area of focus, with 15 to 20 critical bills currently under consideration. These bills seek to ensure that AI systems used in various sectors do not perpetuate biases, requiring developers to conduct algorithmic impact assessments before deploying AI. A key distinction in state legislation is the push for preemptive regulations that address potential discrimination before AI systems are deployed, in contrast with federal actions that often enforce existing laws retroactively. This proactive approach indicates a growing awareness among lawmakers of the risks of unwarranted bias in AI-assisted decision-making.
Frontier AI Regulation and Liability Concerns
The regulation of frontier AI—particularly high-risk systems with catastrophic potential—has emerged as a critical focus, with bills like California's SB 1047 highlighting the need for accountability among AI developers. These regulations would require companies to publish safety frameworks outlining their risk management strategies, with an additional layer of negligence liability potentially incentivizing responsible practices. However, a significant concern remains about how to enforce such regulations effectively, especially through public accountability mechanisms like auditing and whistleblower protections. The catastrophic risks attributed to frontier AI models add urgency to the effort to establish liability standards that balance innovation with public safety.
The Influence of the EU AI Act
The discussions around U.S. state AI bills reveal notable influences from the European Union's AI Act, particularly regarding algorithmic assessments and regulatory frameworks. Many state proposals have adopted language and concepts directly from the EU legislation, demonstrating a convergence in thinking about responsible AI deployment. However, disparities in enforcement mechanisms and penalties highlight critical differences between the two regulatory approaches, raising questions about their practical implications. As states navigate their legislative paths, there is an ongoing need to find a harmonious balance between regulatory consistency and the flexibility required by an evolving technology landscape.
In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Dean Ball, Research Fellow in the Artificial Intelligence & Progress Project at George Mason University’s Mercatus Center. They discuss how state and local governments are approaching AI regulation, what factors are shaping these efforts, where state and local efforts intersect, and how a fractured approach to governance might affect the AI policy landscape.
In addition to his role at George Mason University’s Mercatus Center, Dean Ball is the author of the Substack Hyperdimensional. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative. Before joining the Hoover Institution, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont, and Washington, D.C. He also worked as the Deputy Director of State and Local Policy at the Manhattan Institute for Policy Research from 2014 to 2018.