The Tech Policy Press Podcast

Evaluating Instagram's Promises to Protect Teens

Oct 19, 2025
Laura Edelson, an assistant professor and cybersecurity advocate, joins Arturo Béjar, a former Facebook safety director turned whistleblower, to discuss Instagram's failures in protecting teens. They show how many of the platform's safety tools can be easily circumvented and stress that product design flaws, not just moderation lapses, drive these failures. The pair highlight the inadequacy of adult-to-teen messaging limits, call for stronger age assurance measures, and argue that regulators should hold Meta accountable for transparency and independently verify its safety claims.
AI Snips
INSIGHT

Framework For Evaluating Safety Tools

  • The report uses a multi-dimensional methodology to evaluate safety tools across target audience, prevention versus mitigation, scope, risk type, and implementation style.
  • This framework reveals where tools fail by design rather than through gaps in content moderation.
INSIGHT

Red-Teaming Safety Tools Reveals Gaps

  • The team applied red-team testing to safety features, revealing trivial circumvention and cases where teens bypassed protections by accident.
  • They argue that independent safety testing is as necessary for social media products as it is for cars, toys, or food.
INSIGHT

Most Teen Safety Tools Are Broken

  • The report rated 64% of Instagram's teen safety tools red, meaning they had been removed or were ineffective, while only 17% were fully functional.
  • This points to systemic product and maintenance failures rather than isolated moderation misses.