

Scaling Laws
Lawfare & University of Texas Law School
Scaling Laws explores (and occasionally answers) the questions that keep OpenAI's policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also offer rapid-response analysis of breaking AI governance news.
Episodes

Nov 18, 2025 • 37min
Anthropic's General Counsel, Jeff Bleich, Explores the Intersection of Law, Business, and Emerging Technology
Jeff Bleich, General Counsel at Anthropic and former U.S. Ambassador, dives into the legal intricacies of AI with host Kevin Frazier. He discusses how his background in autonomous vehicles shaped his insights into AI governance. Bleich emphasizes the need for tech optimism, arguing that responsible use of AI can enhance lives. He touches on the role of democracy in managing innovation, public skepticism toward new technologies, and how law students should embrace AI to stay ahead in their careers.

Nov 11, 2025 • 44min
The AI Economy and You: How AI Is, Will, and May Alter the Nature of Work and Economic Growth with Anton Korinek, Nathan Goldschlag, and Bharat Chandar
Anton Korinek, professor of economics at the University of Virginia and newly appointed member of Anthropic's Economic Advisory Council, Nathan Goldschlag, Director of Research at the Economic Innovation Group, and Bharat Chandar, economist at the Stanford Digital Economy Lab, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. They discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity.

These three are prolific researchers. Give them a follow to find their latest work:
Anton: @akorinek on X
Nathan: @ngoldschlag and @InnovateEconomy on X
Bharat: @BharatKChandar on X, @bharatchandar on LinkedIn and Substack

Nov 4, 2025 • 49min
Anthropic's Gabriel Nicholas Analyzes AI Agents
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. AI agents, defined as AI tools capable of autonomously completing tasks on your behalf, are widely expected to soon become ubiquitous. Their integration into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become "a thing."

Oct 28, 2025 • 55min
The GoLaxy Revelations: China's AI-Driven Influence Operations, with Brett Goldstein, Brett Benson, and Renée DiResta
Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy.

The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades.

Mentioned in this episode:
"The Era of A.I. Propaganda Has Arrived, and America Must Act" by Brett J. Goldstein and Brett V. Benson (New York Times, August 5, 2025)
"China Turns to A.I. in Information Warfare" by Julian E. Barnes (New York Times, August 6, 2025)
"The GoLaxy Papers: Inside China's AI Persona Army" by Dina Temple-Raston and Erika Gajda (The Record, September 19, 2025)
"The supply of disinformation will soon be infinite" by Renée DiResta (The Atlantic, September 2020)

Oct 21, 2025 • 49min
Sen. Scott Wiener on California Senate Bill 53
California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI.

The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53's key provisions, and forecast what may be coming next in Sacramento and D.C.

Oct 14, 2025 • 52min
AI and Energy: What do we know? What are we learning?
Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy Lab, and Dan Zhao, an AI researcher at MIT, Google X, and Microsoft focused on AI for science and sustainable, energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy a single ChatGPT query consumes, why that figure is difficult to pin down, how we might improve energy efficiency, and what kinds of policies might minimize AI's growing energy and environmental costs.

Leo Wu provided excellent research assistance on this podcast.

Read more from Mosharaf:
https://ml.energy/
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

Read more from Dan:
https://arxiv.org/abs/2310.03003
https://arxiv.org/abs/2301.11581

Oct 7, 2025 • 47min
AI Safety Meets Trust & Safety with Ravi Iyer and David Sullivan
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. You'll "like" (bad pun intended) this one.

Leo Wu provided excellent research assistance to prepare for this podcast.

Read more from David:
https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/
https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/

Read more from Ravi:
https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design
https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Read more from Kevin:
https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach

Sep 30, 2025 • 36min
Rapid Response: California Governor Newsom Signs SB-53
In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB 53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed into law on September 29.

Sep 30, 2025 • 43min
The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow at Penn Carey Law, dive into the challenges of AI governance. They discuss the muddled state of AI policy and the reactions driven by past regulatory mistakes. The duo critiques academic selection biases that skew tech policy debates, while exploring the need for engineers to understand legal complexities. They call for interdisciplinary collaboration in education and emphasize the importance of hands-on AI experience to inform better regulations.

Sep 23, 2025 • 59min
AI and Young Minds: Navigating Mental Health Risks with Renée DiResta and Jess Miers
In this engaging discussion, Renée DiResta, an expert in information operations, and Jess Miers, a technology law scholar, dive into the mental health risks generative AI poses for children. They highlight how chatbots can amplify mental health issues and the critical role of media literacy and parental involvement. The conversation also touches on recent developments in AI safety, the implications of proposed age verification measures, and ongoing legal battles, providing a comprehensive look at the future of AI regulation.


