Ep. 237: A tech policy bonanza! The FCC, FTC, AI regulations, and more
Mar 12, 2025
Ari Cohn, lead counsel for tech policy at FIRE, Adam Thierer from the R Street Institute, and Jennifer Huddleston from the Cato Institute delve into the critical intersection of technology and free speech. They dissect the implications of Section 230 and recent FCC moves that could challenge online expression. The trio also debates the complexities of content moderation and the rise of AI regulations, emphasizing the need for balance between innovation and oversight. Their insights into the evolving landscape of digital policy are both thought-provoking and entertaining!
The FCC's potential efforts to extend its influence over social media content moderation raise significant concerns about free speech and government overreach.
Section 230 serves as a foundational protection for online platforms, allowing them to foster free expression while shielding against liability for user-generated content.
Proposed AI regulations reflect a shift toward fear-based policymaking that could stifle innovation and dilute the principles of free speech.
Deep dives
The Historical Context of FCC's Authority
The Federal Communications Commission (FCC) has a checkered history, characterized by intimidation and backdoor censorship tactics that stretch back decades. Recently, concerns have been raised about the FCC's potential attempt to extend its influence over digital platforms through ambiguous advisory opinions, which could serve as a lever for controlling online speech by strong-arming social media companies into compliance with governmental demands. This pattern, likened to mafia-style politics, raises serious questions about the implications for free speech and the role of government in regulating discourse.
Understanding Section 230
Section 230, a pivotal law from 1996, provides essential protections for online platforms regarding user-generated content. It shields social media companies from being treated as publishers of the information posted by users, thereby fostering an environment where free expression can flourish without fear of liability. The law includes provisions that protect content moderation decisions, allowing platforms to restrict access to harmful content without facing legal repercussions. As discussions around its interpretation continue, there remains substantial concern about potential governmental overreach threatening the law's foundational protections.
The Limits of FCC's Regulatory Power
There is broad consensus among legal experts that the FCC lacks the regulatory authority to enforce or modify Section 230. Historical context reveals that Section 230 was designed to reduce government interference in media and online spaces, so using the FCC as a regulatory tool contradicts its original intent. Experts argue that empowering the FCC to oversee such matters would set a dangerous precedent, transforming the law from a protective measure into a weapon of censorship. This potential shift reflects a concerning trend of leveraging government power for partisan gain, undermining the principles of free speech.
Concerns Over Regulating Content Moderation
The ongoing debate about content moderation extends to concerns over how governmental oversight could enforce specific speech standards on private platforms. Critics argue that any attempt by the government to regulate speech based on its content would inherently infringe upon First Amendment rights. This tension stems from the challenge of defining subjective terms like 'hate speech' or 'obscene content', which could result in an arbitrary and politically motivated application of regulations. Such actions could lead to an environment where private editorial discretion is limited and government preferences dictate what content remains accessible.
Fear-Based Regulation in the Age of AI
The surge in proposed AI regulations marks a shift toward fear-based policymaking reminiscent of earlier media regulatory efforts. The introduction of numerous bills indicates a frantic search for solutions to perceived threats posed by AI technologies, a departure from the previously embraced model of innovation and freedom. Such an approach risks stifling creativity and development within technology sectors, reversing decades of progress achieved through deregulation and the promotion of user-generated content. There is a tangible fear that these rigid regulations may dilute the essence of free speech and exacerbate existing challenges in content moderation and discourse.
The answer to that question may tell you all you need to know about the government involving itself in social media content moderation.
On today’s show, we cover the latest tech policy developments involving the Federal Communications Commission, Federal Trade Commission, AI regulation, and more.
- Adam Thierer, a resident senior fellow in technology and innovation at the R Street Institute
- Jennifer Huddleston, a technology policy senior fellow at the Cato Institute
Timestamps:
00:00 Intro
01:30 Section 230
06:55 FCC and Section 230
14:32 Brendan Carr and “faith-based programming”
28:24 Media companies’ settlements with the Trump administration
30:24 Brendan Carr at Semafor event
38:37 FTC and social media companies
48:09 AI regulations
01:03:43 Outro
Enjoy listening to the podcast? Donate to FIRE today and get exclusive content like member webinars, special episodes, and more. If you became a FIRE Member through a donation to FIRE at thefire.org and would like access to Substack’s paid subscriber podcast feed, please email sotospeak@thefire.org.