
MLOps.community
Detecting Harmful Content at Scale // Matar Haller // #246
Jul 9, 2024
Matar Haller, VP of Data & AI at ActiveFence, discusses detecting harmful content online with AI, the challenges platforms face, using Content Moderation APIs to flag harmful content, the importance of continuous model retraining, and moving hate speech models from notebooks to production APIs efficiently.
51:27
Podcast summary created with Snipd AI
Quick takeaways
- ActiveFence uses AI for online safety, focusing on detecting harmful content at scale.
- Content moderation faces challenges from evolving harmful content types and the need for continuous monitoring.
Deep dives
Using AI to Combat Online Harm
ActiveFence uses AI to combat hate speech and other harmful content online. Its AI safety technology detects and removes unwanted content, helping create a safer online environment. By flagging content for platforms and providing risk scores, ActiveFence supports content moderation and helps prevent harmful material from spreading.
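The flow described above (a platform submits content, receives per-category risk scores, and flags anything above a threshold) can be sketched as follows. This is a minimal illustration, not ActiveFence's actual API: the function names, categories, and keyword-based scoring are all hypothetical stand-ins for real classifier-backed services.

```python
def get_risk_scores(text: str) -> dict:
    """Hypothetical stand-in for a Content Moderation API call.

    A real service would run trained classifiers; here we fake
    risk scores in [0, 1] with keyword matching for illustration.
    """
    keywords = {"hate_speech": ["hate"], "spam": ["buy now"]}
    return {
        label: (1.0 if any(k in text.lower() for k in kws) else 0.0)
        for label, kws in keywords.items()
    }

def moderate(text: str, threshold: float = 0.8) -> list:
    """Return the abuse categories whose risk score meets the threshold."""
    scores = get_risk_scores(text)
    return [label for label, score in scores.items() if score >= threshold]

# A platform would route flagged items to its moderation pipeline.
flagged = moderate("I hate this group")  # → ["hate_speech"]
```

The threshold is the policy knob: platforms with stricter rules lower it, trading more false positives for higher recall on harmful content.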