The Data Exchange with Ben Lorica

Teaching AI How to Forget

Jan 15, 2026
In this engaging discussion, Ben Luria, CEO of Hirundo, dives into the critical concept of machine unlearning for AI. He explains how AI deployments often falter due to risks like bias and PII leakage, and why models must be taught to forget undesirable behaviors, contrasting behavioral unlearning with data removal. The conversation also covers practical unlearning workflows, plans for multimodal support, and the potential to harden AI models against vulnerabilities like jailbreaks. Luria's insights illuminate a pathway to safer AI systems.
INSIGHT

Forgetting Is The Missing AI Capability

  • AI models can learn but cannot easily forget, which creates persistent risks for enterprise deployments.
  • Hirundo targets removing learned behaviors and information from models to make AI trustworthy for mission-critical tasks.
ANECDOTE

Customer Interviews Sparked The Startup

  • Ben Luria described interviewing many data teams and finding a common pain: model problems are typically discovered and fixed only after deployment, a moment too late.
  • That insight led his team to focus on techniques that remove issues from the model itself rather than patching around it.
INSIGHT

Fix The Model, Not Just The Perimeter

  • Guardrails and context engineering operate outside the model, so they can often be bypassed and typically add latency.
  • Unlearning instead edits the model internally to remove risks at the source, rather than relying solely on external filters.