Brian Fishman, co-founder of the trust and safety platform Cinder and a former policy director at Meta, discusses the intricate relationship between violent extremism and social media. He explores how content moderation has evolved, particularly regarding the shift from ISIS to far-right extremism in the U.S. The conversation dives into the challenges of regulating harmful content while maintaining free speech, the complexities surrounding Section 230, and the importance of transparency in fighting extremism online.
01:05:03
ANECDOTE
CasaPound Case
An Italian court forced Facebook to reinstate CasaPound, a neo-fascist group, after the platform had removed it.
This case highlights how foreign court rulings can shape global platform policies.
INSIGHT
Section 230's Importance
Section 230 shields platforms from certain legal liabilities arising from their content moderation decisions.
Removing this protection could stifle smaller platforms that lack the resources for extensive legal battles.
INSIGHT
Early Internet Harms
Critics often overlook the online harms of the early internet, constructing a mythical harm-free past.
Extremists were using digital platforms long before social media and recommendation engines existed.
Dual-Use Regulation: Managing Hate and Terrorism Online, Before and After Section 230 Reform
Brian Fishman
This work examines how proposed reforms to Section 230 of the Communications Decency Act would affect the regulation of online content related to hate and terrorism. It analyzes how changes to the statute would alter platforms' liability for content moderation decisions and their role in preventing the spread of harmful content, with particular attention to dual-use regulations and their application to managing online hate and terrorism.
From May 12, 2023: Earlier this year, Brian Fishman published a fantastic paper with Brookings thinking through how technology platforms grapple with terrorism and extremism, and how any reform to Section 230 must allow those platforms space to continue doing that work. That’s the short description, but the paper is really about so much more—about how the work of content moderation actually takes place, how contemporary analyses of the harms of social media fail to address the history of how platforms addressed Islamist terror, and how we should understand “the original sin of the internet.”
For this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down to talk with Brian about his work. Brian is the co-founder of Cinder, a software platform for the kind of trust and safety work described here, and he was formerly a policy director at Meta, where he led the company's work on dangerous individuals and organizations.