

Scaling Laws
Lawfare & University of Texas Law School
Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news.
Episodes

Oct 20, 2023 • 57min
The Crisis Facing Efforts to Counter Election Disinformation
Over the course of the last two presidential elections, efforts by social media platforms and independent researchers to prevent falsehoods about election integrity from spreading have become increasingly central to civic health. But the warning signs are flashing as we head into 2024, and platforms are arguably in a worse position to counter falsehoods today than they were in 2020. How could this be? On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Dean Jackson, who previously joined the Lawfare Podcast to discuss his work as a staffer on the Jan. 6 committee. He worked with the Center for Democracy and Technology to put out a new report on the challenges facing efforts to prevent the spread of election disinformation. They talked through the political, legal, and economic pressures that are making this work increasingly difficult—and what it means for 2024.

Oct 5, 2023 • 46min
Talking AI with Data and Society’s Janet Haven
Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we’re discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it’s going to kill us all—so we’re glad to share an interview that hopefully cuts through some of that noise. Janet Haven is the Executive Director of the nonprofit Data and Society and a member of the National Artificial Intelligence Advisory Committee, which provides guidance to the White House on AI issues. Lawfare Senior Editor Quinta Jurecic sat down alongside Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, to talk through their questions about AI governance with Janet. They discussed how she evaluates the dangers and promises of artificial intelligence, how to weigh the possible future existential risks AI poses to society against its immediate potential downsides in our everyday lives, and what kind of regulation she’d like to see in this space. If you’re interested in reading further, Janet mentions this paper from Data and Society on “Democratizing AI” in the course of the conversation.

Sep 11, 2023 • 45min
What Impact Did Facebook Have on the 2020 Elections?
How much influence do social media platforms have on American politics and society? It’s a tough question for researchers to answer—not just because it’s so big, but also because platforms rarely, if ever, provide all the data that would be needed to address the problem. A new batch of papers released in the journals Science and Nature marks the latest attempt to tackle this question, with access to data provided by Facebook’s parent company Meta. The 2020 Facebook & Instagram Research Election Study, a partnership between Meta researchers and outside academics, studied the platforms’ impact on the 2020 election—and uncovered some nuanced findings, suggesting that these impacts might be smaller than you’d expect. Today on Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic are joined by the project’s co-leaders, Talia Stroud of the University of Texas at Austin and Joshua A. Tucker of NYU. They discussed their findings, what it was like to work with Meta, and whether this is a model for independent academic research on platforms going forward. (If you’re interested in more on the project, you can find links to the papers and an overview of the findings here, and an FAQ, provided by Tucker and Stroud, here.)

May 12, 2023 • 1h 4min
Brian Fishman on Violent Extremism and Platform Liability
Earlier this year, Brian Fishman published a fantastic paper with Brookings thinking through how technology platforms grapple with terrorism and extremism, and how any reform to Section 230 must allow those platforms space to continue doing that work. That’s the short description, but the paper is really about so much more—about how the work of content moderation actually takes place, how contemporary analyses of the harms of social media fail to address the history of how platforms addressed Islamist terror, and how we should understand “the original sin of the internet.” For this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down to talk with Brian about his work. Brian is the co-founder of Cinder, a software platform for the kind of trust and safety work discussed in this episode, and he was formerly a policy director at Meta, where he led the company’s work on dangerous individuals and organizations.

May 2, 2023 • 30min
Cox and Wyden on Section 230 and Generative AI
Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content. On this episode of Arbiters of Truth, Lawfare’s occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996—and they’re skeptical that its protections apply to generative AI. Disclosure: Matt consults on tech policy issues, including with platforms that work on generative artificial intelligence products and have interests in the issues discussed.

Apr 28, 2023 • 46min
An Interview with Meta’s Chief Privacy Officers
In 2018, news broke that Facebook had allowed third-party developers—including the controversial data analytics firm Cambridge Analytica—to obtain large quantities of user data in ways that users probably didn’t anticipate. The fallout fueled a controversy over whether Cambridge Analytica had in some way swung the 2016 election for Trump (spoiler: it almost certainly didn’t), but it also led the FTC to impose a record-breaking $5 billion fine on Facebook for violating users’ privacy. Along with that fine, the FTC imposed a number of requirements on Facebook to improve its approach to privacy. It’s been four years since that settlement, and Facebook is now Meta. So how much has really changed within the company? For this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic interviewed Meta’s co-chief privacy officers, Erin Egan and Michel Protti, about the company’s approach to privacy and its response to the FTC’s settlement order. At one point in the conversation, Quinta mentions a class action settlement over the Cambridge Analytica scandal. You can read more about the settlement here. Information about Facebook’s legal arguments regarding user privacy interests is available here and here, and you can find more details in the judge’s ruling denying Facebook’s motion to dismiss. Note: Meta provides support for Lawfare’s Digital Social Contract paper series. This podcast episode is not part of that series, and Meta does not have any editorial role in Lawfare.

Apr 26, 2023 • 54min
Eugene Volokh on AI Libel
If someone lies about you, you can usually sue them for defamation. But what if that someone is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow it? What does it even mean for a large language model to act with “malice”? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, accountable for false statements they make? And what’s the best way to deal with this problem: private lawsuits or government regulation? On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled “Large Libel Models.”

Apr 14, 2023 • 47min
A TikTok Ban and the First Amendment
Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok’s vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. There’s even talk of banning the app altogether. But would that be legal? In particular, does the First Amendment allow the government to ban an application that’s used by millions to communicate every day? On this episode of Arbiters of Truth, our series on the information ecosystem, Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and Alan Z. Rozenshtein, Lawfare Senior Editor and Associate Professor of Law at the University of Minnesota, spoke with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law, to think through the legal and policy implications of a TikTok ban.

Mar 27, 2023 • 45min
Ravi Iyer on How to Improve Technology Through Design
On the latest episode of Arbiters of Truth, Lawfare’s series on the information ecosystem, Quinta Jurecic and Alan Rozenshtein spoke with Ravi Iyer, the Managing Director of the Psychology of Technology Institute at the University of Southern California’s Neely Center. Earlier in his career, Ravi held a number of positions at Meta, where he worked to make Facebook’s algorithm provide actual value, not just “engagement,” to users. Quinta and Alan spoke with Ravi about why he thinks that content moderation is a dead end and why thinking about the design of technology is the way forward to make sure that technology serves us and not the other way around.

Mar 9, 2023 • 50min
Does Section 230 Protect ChatGPT?
During recent oral arguments in Gonzalez v. Google, a Supreme Court case concerning the scope of liability protections for internet platforms, Justice Neil Gorsuch asked a thought-provoking question. Does Section 230, the statute that shields websites from liability for third-party content, apply to a generative AI model like ChatGPT? Luckily, Matt Perault of the Center on Technology Policy at the University of North Carolina at Chapel Hill had already been thinking about this question and published a Lawfare article arguing that 230’s protections wouldn’t extend to content generated by AI. Lawfare Senior Editors Quinta Jurecic and Alan Rozenshtein sat down with Matt and Jess Miers, legal advocacy counsel at the Chamber of Progress, to debate whether ChatGPT’s output constitutes third-party content, whether companies like OpenAI should be immune for the output of their products, and why you might want to sue a chatbot in the first place.