Lawfare Archive: Brian Fishman on Violent Extremism and Platform Liability
Jan 11, 2025
Brian Fishman, co-founder of the trust and safety platform Cinder and a former policy director at Meta, discusses the intricate relationship between violent extremism and social media. He explores how content moderation has evolved, particularly the shift in focus from ISIS to far-right extremism in the U.S. The conversation dives into the challenges of regulating harmful content while maintaining free speech, the complexities surrounding Section 230, and the importance of transparency in fighting extremism online.
Brian Fishman emphasizes that effective content moderation requires a nuanced understanding of how different extremist groups operate online.
Poorly crafted reform of Section 230 risks inadvertently restricting free speech and curtailing platforms' ability to manage harmful content.
Fishman points out that advancements in AI can enhance content moderation efforts but also present new challenges as adversarial actors may exploit these technologies.
Deep dives
Meta's Approach to Content Moderation
Meta recently announced a significant shift in its content moderation strategy, replacing its traditional fact-checking program with user-generated community notes. This change has reignited debates about the responsibilities of social media platforms in regulating harmful content. Historically, platforms like Meta have faced criticism for their handling of extremist content, particularly during the rise of organizations like ISIS. Brian Fishman discusses the need for more nuanced approaches to moderation, emphasizing the importance of understanding how different extremist groups operate in order to develop effective policies.
Regulatory Challenges and Bad Policy Risks
Fishman highlights the complexities of regulating social media platforms, particularly in light of existing laws such as Section 230, which limits the liability of online platforms for user-generated content. He expresses concern that poorly crafted regulations may inadvertently harm free speech and restrict platforms' ability to moderate effectively. He also points to the case of CasaPound in Italy, a far-right group that successfully challenged its removal from the platform on free speech grounds, illustrating how legal constraints on moderation can produce unintended consequences. This raises important questions about the balance between government intervention and the autonomy of private companies.
The Historical Context of Online Extremism
Fishman stresses the importance of examining the history of online extremism, noting that extremist groups have exploited the internet since its inception, long before the advent of modern social media. He argues that many contemporary criticisms of social media overlook this historical context, leading to a misdiagnosis of the issues at hand. He warns that while platforms evolve, the fundamental reality remains: adversarial actors will always seek to exploit new technologies for their agendas. Understanding this dynamic is essential for accurately addressing the root causes of online harms and improving systemic responses.
The Role of Generative AI in Content Moderation
Discussing the potential of generative AI, Fishman notes that AI tools like ChatGPT can enhance content moderation efforts by enabling platforms to accelerate their response to emerging threats. These technologies can facilitate quicker policy adjustments, helping companies remain agile in a landscape where adversarial actors continuously adapt and evolve. However, Fishman cautions that these advancements come with their own set of challenges, as bad actors may also leverage AI for malicious purposes. Ultimately, the implementation of AI in trust and safety work could reshape the dynamics of content moderation across various platforms.
Navigating the Future of Trust and Safety
As trust and safety practices evolve in response to regulatory pressures and public scrutiny, Fishman sees a dual trend: an increased focus on accountability alongside the risk of retrenchment in the face of those challenges. Amid financial constraints and ideological shifts exemplified by platforms like Twitter, the industry faces pressures that could hinder progress toward safer online environments. Yet initiatives like the Trust and Safety Professional Association indicate a growing sophistication within the field, promoting best practices and shared understanding across companies. The interplay of regulatory frameworks, public concern, and internal policies will significantly shape how platforms respond to the complex landscape of online safety.
From May 12, 2023: Earlier this year, Brian Fishman published a fantastic paper with Brookings thinking through how technology platforms grapple with terrorism and extremism, and how any reform to Section 230 must allow those platforms space to continue doing that work. That’s the short description, but the paper is really about so much more—about how the work of content moderation actually takes place, how contemporary analyses of the harms of social media fail to address the history of how platforms addressed Islamist terror, and how we should understand “the original sin of the internet.”
For this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down to talk with Brian about his work. Brian is the cofounder of Cinder, a software platform for the kind of trust and safety work we describe here, and he was formerly a policy director at Meta, where he led the company’s work on dangerous individuals and organizations.