Scaling Laws

Lawfare & University of Texas Law School
Oct 28, 2025 • 55min

The GoLaxy Revelations: China's AI-Driven Influence Operations, with Brett Goldstein, Brett Benson, and Renée DiResta

Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy. The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades.

Mentioned in this episode:
"The Era of A.I. Propaganda Has Arrived, and America Must Act" by Brett J. Goldstein and Brett V. Benson (New York Times, August 5, 2025)
"China Turns to A.I. in Information Warfare" by Julian E. Barnes (New York Times, August 6, 2025)
"The GoLaxy Papers: Inside China's AI Persona Army" by Dina Temple-Raston and Erika Gajda (The Record, September 19, 2025)
"The supply of disinformation will soon be infinite" by Renée DiResta (The Atlantic, September 2020)

Hosted on Acast. See acast.com/privacy for more information.
Oct 21, 2025 • 49min

Sen. Scott Wiener on California Senate Bill 53

California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI. The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53's key provisions, and forecast what may be coming next in Sacramento and D.C.
Oct 14, 2025 • 52min

AI and Energy: What do we know? What are we learning?

Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy Lab, and Dan Zhao, an AI researcher at MIT, GoogleX, and Microsoft focused on AI for science and sustainable, energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI's growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast.

Read more from Mosharaf:
https://ml.energy/
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

Read more from Dan:
https://arxiv.org/abs/2310.03003
https://arxiv.org/abs/2301.11581
Oct 7, 2025 • 47min

AI Safety Meets Trust & Safety with Ravi Iyer and David Sullivan

David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. You'll "like" (bad pun intended) this one. Leo Wu provided excellent research assistance to prepare for this podcast.

Read more from David:
https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/
https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/

Read more from Ravi:
https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design
https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not

Read more from Kevin:
https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach
Sep 30, 2025 • 36min

Rapid Response: California Governor Newsom Signs SB-53

In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB-53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed into law on September 29.
Sep 30, 2025 • 43min

The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)

Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow at Penn Carey Law, dive into the challenges of AI governance. They discuss the muddled state of AI policy and the reactions driven by past regulatory mistakes. The duo critiques academic selection biases that skew tech policy debates, while exploring the need for engineers to understand legal complexities. They call for interdisciplinary collaboration in education and emphasize the importance of hands-on AI experience to inform better regulations.
Sep 23, 2025 • 59min

AI and Young Minds: Navigating Mental Health Risks with Renée DiResta and Jess Miers

In this engaging discussion, Renée DiResta, an expert in information operations, and Jess Miers, a technology law scholar, dive into the mental health risks generative AI poses for children. They highlight how chatbots can amplify mental health issues and the critical role of media literacy and parental involvement. The conversation also touches on recent developments in AI safety, the implications of proposed age verification measures, and ongoing legal battles, providing a comprehensive look at the future of AI regulation.
Sep 16, 2025 • 59min

AI Copyright Lawsuits with Pam Samuelson

Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at UC Berkeley, specializes in copyright law and AI's legal implications. She discusses recent court rulings like Bartz v. Anthropic, probing whether training AI on copyrighted material constitutes fair use. The conversation highlights the balance between protecting creators' rights and promoting innovation, while also exploring the transformative nature of AI outputs. Key cases like Warhol v. Goldsmith are examined for their impact on copyright law, making this a must-listen for anyone interested in the future of intellectual property.
Sep 11, 2025 • 58min

AI and the Future of Work: Joshua Gans on Navigating Job Displacement

Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction," discusses the complexities of AI-induced job displacement. He analyzes how recent regulations, like updates to New York's WARN Act, impact transparency in layoffs. Gans speculates on AI's influence on entry-level jobs and emphasizes the essential human skills still needed in an AI-driven world. He advocates for adaptable AI regulations that foster innovation while addressing ethical concerns, ultimately revealing the nuanced dynamics of technology and employment.
Sep 9, 2025 • 47min

The State of AI Safety with Steven Adler

Steven Adler, a former OpenAI safety researcher and author of Clear-Eyed AI, joins Kevin Frazier to dive into AI safety. They explore the importance of pre-deployment safety measures and the challenges of ensuring trust in AI systems. Adler emphasizes the critical need for international cooperation in tackling AI threats, especially amid U.S.-China tensions. He discusses how commercial pressures have transformed OpenAI's safety culture and stresses the necessity of rigorous risk assessment as AI technologies continue to evolve.