The Information Bottleneck

EP21: Privacy in the Age of Agents with Niloofar Mireshghallah

Jan 7, 2026
Niloofar Mireshghallah, an incoming assistant professor at Carnegie Mellon University, dives into the intriguing world of AI privacy and model behavior. She discusses the surprising reliance of models on context over memorization and highlights modern privacy threats like aggregation and inference attacks. The conversation touches on linguistic colonialism in AI, the challenges faced by non-English languages, and the importance of academic research in preserving the nuances of learning and cultural representation. Niloofar calls for innovative AI tools for science and education while emphasizing the need for privacy-aware designs.
INSIGHT

When Context Beats The Weights

  • Reasoning-enabled models rely more on context than parametric memory for in-domain tasks.
  • They still fall back to internal knowledge for novel or out-of-domain tasks where context can't create durable learning.
INSIGHT

Mix Memory, Context, And Occasional Weight Updates

  • The ideal system mixes parametric memory, context, and online updates with periodic weight consolidation.
  • Purely external memories that never update internal weights will miss necessary learning and drift over time.
ADVICE

Use Stepwise Prompts And Human-in-Loop Checks

  • Use human-in-the-loop workflows that expose failures and guide the model through simpler precursor tasks.
  • Prompt models to solve simpler subproblems first, so they then carry out more robust analysis on the real task.
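The advice above can be sketched as a small workflow: decompose the real task into simpler precursor subtasks, let a human review each intermediate answer, then pose the final task with the verified steps as context. This is a minimal illustration, not an implementation discussed in the episode; `ask_model` is a hypothetical stand-in for any LLM call and is stubbed here so the example runs.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM API.
    return f"[model answer to: {prompt}]"

def stepwise_solve(task: str, subtasks: list[str], review=input) -> str:
    """Solve simpler precursor subtasks first, exposing each intermediate
    answer for human review, then pose the real task with that context."""
    verified_steps = []
    for sub in subtasks:
        answer = ask_model(f"Subtask: {sub}")
        # Human-in-the-loop check: the reviewer can correct the step here,
        # or press Enter to accept the model's answer as-is.
        corrected = review(f"Review '{answer}' (Enter to accept): ") or answer
        verified_steps.append(f"{sub} -> {corrected}")
    final_prompt = (
        "Given these verified steps:\n"
        + "\n".join(verified_steps)
        + f"\nNow solve: {task}"
    )
    return ask_model(final_prompt)
```

For unattended runs, pass a no-op reviewer such as `review=lambda _: ""` to auto-accept every intermediate answer.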