Regulating Social Media — Is it Lawful, Feasible, and Desirable? (NYU Law Forum)
Mar 26, 2025
Daphne Keller, a leading voice on platform regulation at Stanford Law School, joins Michael Posner, a professor of business ethics at NYU Stern. They delve into the urgent need for social media regulation to combat disinformation and protect democracy. The discussion covers the balance between free speech and community responsibility and explores social media's role in polarization. They also examine the challenges of reforming laws like Section 230 while addressing corporate interests and user rights.
Regulating social media is deemed essential for protecting democracy, particularly against the misinformation and targeted harassment that erode public trust.
The debate surrounding Section 230 highlights the tension between protecting free speech and holding tech companies accountable for the harmful content they host and how they moderate it.
Transparency in social media operations is crucial; advocates call for disclosure of content moderation practices and stronger user data rights to ensure corporate accountability.
Deep dives
The Need for Regulation in Social Media
Current discussions highlight a pressing need for regulations on social media platforms to mitigate their harmful impact on democracy. Specific harms include election disinformation, health misinformation, and targeted harassment campaigns that undermine trust in institutions. The debate centers on whether aggressive government intervention is necessary or whether it risks stifling free speech and consolidating government control over speech. This complex landscape involves stakeholders at every level, with divergent views on the best course of action.
The Role of Section 230 and State Experimentation
Section 230 of the Communications Decency Act, which shields tech companies from liability for user-generated content, sits at the center of the regulatory debate. With federal action limited, states such as Florida and Texas are experimenting with laws that restrict platforms' content moderation practices. These state-level approaches reflect differing philosophies on how to balance free speech against harmful content. Recent actions by the FTC also signal growing federal interest in investigating platform practices and potential reforms.
Challenges of Government Regulation
The question of whether to trust government bodies with the power to regulate social media raises significant concerns. History shows that governments can misuse such authority, leading to broader censorship and control over lawful speech. While there is a push for government intervention to ensure accountability, any such regulation demands careful consideration of its First Amendment implications. The panel highlights the importance of establishing clear limits and frameworks within which government actors can operate without infringing on individual rights.
The Business Model of Tech Companies
The business models of major tech companies, built on engagement and advertising, create inherent conflicts with efforts to curb harmful content. These platforms often reward engagement driven by negative emotions, which has been shown to exacerbate societal polarization. Critics argue that the companies could take far more responsibility for moderating harmful content but are disincentivized by their profit motives. There are calls to disrupt these business models through regulation to promote healthier online environments conducive to democracy.
Transparency and User Rights
Transparency in social media operations is vital for effective regulation and user protection, ensuring companies are held accountable for their actions. Proposals include requiring platforms to disclose their content moderation practices and algorithms to foster trust among users and regulators alike. Strengthening users' rights over their own data would further empower individuals against corporate overreach in data privacy. Calls for improved legislation in this area emphasize that well-informed users can demand the content and privacy standards they deserve.
2025 will be a pivotal year for technology regulation in the United States and around the world. The European Union has begun regulating social media platforms under its Digital Services Act. In the United States, regulatory proposals at the federal level will likely include renewed efforts to repeal or reform Section 230 of the Communications Decency Act. Meanwhile, states such as Florida and Texas have tried to restrict content moderation by major platforms, but those laws have faced constitutional challenges.
On March 19, NYU Law hosted a Forum on whether it is lawful, feasible, and desirable for government actors to regulate social media platforms to reduce harmful effects on U.S. democracy and society. The expert guests were Daphne Keller, Director of the Program on Platform Regulation at Stanford Law School’s Cyber Policy Center, and Michael Posner, Director of the Center for Business and Human Rights at NYU Stern School of Business. Tess Bridgeman and Ryan Goodman, co-editors-in-chief of Just Security, moderated the event, which was co-hosted by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.