
MLOps.community

Detecting Harmful Content at Scale // Matar Haller // #246

Jul 9, 2024
Matar Haller, VP of Data & AI at ActiveFence, discusses detecting harmful content online using AI, the challenges platforms face, leveraging content moderation APIs to flag harmful content, the importance of continuous model retraining, and efficiently moving hate speech models from notebooks to production APIs.
51:27


Podcast summary created with Snipd AI

Quick takeaways

  • ActiveFence uses AI for online safety, focusing on detecting harmful content at scale.
  • Content moderation faces challenges from evolving harmful content types and the need for continuous monitoring.

Deep dives

Using AI to Combat Online Harm

ActiveFence uses AI to combat hate speech and harmful content online. Its AI safety technology detects and removes unwanted content to create a safer online environment. By flagging content for platforms and providing risk scores, ActiveFence supports content moderation and helps prevent harmful material from spreading.
