

Scaling Laws
Lawfare & University of Texas Law School
Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news.
Episodes

Sep 11, 2025 • 58min
AI and the Future of Work: Joshua Gans on Navigating Job Displacement
Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction," discusses the complexities of AI-induced job displacement. He analyzes how recent regulations, like updates to New York's WARN Act, impact transparency in layoffs. Gans speculates on AI's influence on entry-level jobs and emphasizes the essential human skills still needed in an AI-driven world. He advocates for adaptable AI regulations that foster innovation while addressing ethical concerns, ultimately revealing the nuanced dynamics of technology and employment.

Sep 9, 2025 • 47min
The State of AI Safety with Steven Adler
Steven Adler, a former OpenAI safety researcher and author of Clear-Eyed AI, joins Kevin Frazier to dive into AI safety. They explore the importance of pre-deployment safety measures and the challenges of ensuring trust in AI systems. Adler emphasizes the critical need for international cooperation in tackling AI threats, especially amid U.S.-China tensions. He discusses how commercial pressures have transformed OpenAI's safety culture and stresses the necessity of rigorous risk assessment as AI technologies continue to evolve.

Sep 2, 2025 • 46min
Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. US
Anu Bradford, a Columbia Law School professor renowned for her insights on the Brussels Effect, and Kate Klonick, a Lawfare senior editor who specializes in content moderation, dive into the tumultuous world of Big Tech regulation. They contrast the EU’s progressive AI Act and its global influence with the US's interventionist stance on tech. The duo explores the geopolitical ramifications of these differing approaches, including the EU's challenges in technological sovereignty and the ongoing US-China tech rivalry, shedding light on the future landscape of digital governance.

Aug 28, 2025 • 48min
Uncle Sam Buys In: Examining the Intel Deal
Peter E. Harrell, an Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier to discuss the White House taking a 10% stake in Intel. They analyze the policy rationale and legality behind this move and consider its implications for the semiconductor industry. The conversation dives into the CHIPS Act, historical precedents of government interventions, and the complexities of federal authority in corporate equity transactions. They also address concerns over government favoritism and the impact on competition with other tech giants like NVIDIA.

Aug 26, 2025 • 1h 21min
AI in the Classroom with MacKenzie Price, Alpha School co-founder, and Rebecca Winthrop, leader of the Brookings Global Task Force on AI in Education
MacKenzie Price, co-founder of Alpha School, advocates for personalized learning with AI, discussing innovative classroom practices. Rebecca Winthrop of the Brookings Institution shares global efforts and challenges in integrating AI into education. They dive into the balance of data privacy and educational benefits, exploring technology's impact on student engagement, particularly in under-resourced areas. With nuanced perspectives on generative AI, they call for collaborative approaches to enhance learning while mitigating risks.

Aug 21, 2025 • 45min
The Open Questions Surrounding Open Source AI with Nathan Lambert and Keegan McBride
Nathan Lambert, a post-training lead at the Allen Institute for AI, and Keegan McBride, a lecturer at the Oxford Internet Institute, delve into the evolving landscape of open source AI. They discuss the shift towards open-source models and the implications for AI policy and global competition. The conversation highlights challenges in monetization and contrasts the dynamics of open versus closed models, with specific insights on China's advancements. They also address federal funding issues and the critical role of collaboration between government and academia in fostering innovation.

Aug 19, 2025 • 54min
Export Controls: Janet Egan, Sam Winter-Levy, and Peter Harrell on the White House's Semiconductor Decision
Peter Harrell, former senior director for international economics at the White House, Janet Egan, senior fellow at the Center for a New American Security, and Sam Winter-Levy, fellow at Carnegie, delve into the recent export controls on AI semiconductors to China. They discuss the implications of allowing companies like Nvidia to export advanced chips, the constitutional legality of export taxes, and the strategic risks to international coalitions. The conversation highlights the tension between national security and the global tech race.

Aug 14, 2025 • 58min
Navigating AI Policy: Dean Ball on Insights from the White House
Dean Ball, former Senior Policy Advisor for AI at the White House, shares his insider perspective on the Trump administration's AI Action Plan. He discusses the challenges of federal AI policy-making and the clash between conservatism and techno-libertarianism. The conversation highlights the need for thoughtful regulatory frameworks and proactive governance. Ball outlines essential steps for America to lead in AI, including regulatory innovation and workforce development, while reflecting on his future aspirations in AI policy at a D.C. think tank.

Aug 12, 2025 • 47min
The Legal Maze of AI Liability: Anat Lior on Bridging Law and Emerging Tech
Anat Lior makes the case for using agency law to address the legal challenges posed by AI agents. She explains how analogies such as the principal-agent relationship can help courts navigate the complexities of AI liability, and why it is crucial that someone be held accountable when AI systems cause harm. The conversation also looks ahead to the future of AI governance and the evolving landscape of legal responsibility.

Aug 7, 2025 • 50min
Values in AI: Safety, Ethics, and Innovation with OpenAI's Brian Fuller
Brian Fuller, a product policy leader at OpenAI, discusses the pressing need for ethical AI policies that prioritize safety and societal benefit. He shares insights on balancing innovation with ethical responsibility amidst the rapidly evolving AI landscape. The conversation touches on engaging diverse community values in AI development and the ethical implications of data labeling practices. Fuller candidly reflects on the complexities of navigating AI risks and the importance of stakeholder collaboration in shaping responsible technology.