The Information Bottleneck

EP12: Adversarial attacks and compression with Jack Morris

Nov 3, 2025
Join Jack Morris, a PhD student at Cornell and creator of the TextAttack library, as he dives into the world of adversarial examples in language models. Jack discusses the evolution of TextAttack, the complexities of open-source AI, and the security implications of embedding inversion attacks. He explains the Platonic representation hypothesis and what it implies for understanding model embeddings, and he connects compression in language models to their efficiency. Get ready for a fascinating exploration of the future of AI!
AI Snips
ADVICE

Prioritize Tunability Over Tiny Gains

  • Fine-tune or distill large models for practical tasks rather than chasing tiny benchmark gains (a minimal distillation sketch follows below).
  • Choose models that are easy to adapt and deploy to solve concrete business use cases.
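
As a concrete reference point, here is a minimal knowledge-distillation sketch, assuming PyTorch. The toy teacher and student models, the distillation_loss helper, and the temperature value are all illustrative choices, not details from the episode:

```python
# Minimal knowledge-distillation sketch: a small "student" is trained to
# match a larger, frozen "teacher"'s softened output distribution.
# Models, sizes, and temperature below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between temperature-softened distributions; scaling by
    # T^2 keeps gradient magnitudes comparable across temperature settings.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(16, 32)  # a dummy batch of inputs

with torch.no_grad():
    teacher_logits = teacher(x)  # the teacher stays frozen during distillation
loss = distillation_loss(student(x), teacher_logits)
loss.backward()
optimizer.step()
```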
INSIGHT

Shift From Dense Scaling To Sparse MoEs

  • Scaling has shifted from massive dense models to very large but highly sparse mixtures of experts (MoEs) for cheaper inference.
  • Sparse MoE designs give big capacity with much lower serving cost and likely more deployment variety (see the top-k MoE sketch below).
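
A minimal top-k sparse MoE layer, again assuming PyTorch. The layer sizes, expert count, and per-expert loop are illustrative; real implementations batch tokens per expert rather than looping:

```python
# Top-k sparse mixture-of-experts layer: each token is routed to only k of
# the n experts, so per-token compute stays roughly constant while total
# parameter count grows with the number of experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # small learned gate
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                       # x: (tokens, d_model)
        gate_logits = self.router(x)
        weights, indices = gate_logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Dense loop for clarity; production code groups tokens per expert.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(SparseMoE()(tokens).shape)  # torch.Size([10, 64])
```

Only k of the n_experts expert networks run for each token, which is why parameter count can grow without a matching growth in per-token serving cost.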
INSIGHT

Model Routing Faces Practical Trade-Offs

  • Perfect model routing is hard because the router itself often needs to be a large model, negating the cost savings.
  • Efficient routing introduces a trade-off between routing accuracy and serving overhead (see the sketch below).
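
A toy illustration of that trade-off, assuming PyTorch. The Router class, the threshold, and the stand-in small/large models are hypothetical; the gate here costs one linear layer, and the practical question is whether so cheap a gate can route accurately enough:

```python
# Model-routing sketch: a cheap classifier decides whether a query goes to
# a small, inexpensive model or a large, expensive one. Misroutes either
# waste the big model on easy queries or send hard queries to the weak one.
import torch
import torch.nn as nn

class Router(nn.Module):
    """Tiny binary classifier over a query embedding: small vs. large."""
    def __init__(self, d_embed=64):
        super().__init__()
        self.score = nn.Linear(d_embed, 1)  # routing overhead: one linear layer

    def forward(self, query_embedding, threshold=0.5):
        p_large = torch.sigmoid(self.score(query_embedding))
        return "large" if p_large.item() > threshold else "small"

def answer(query_embedding, router, small_model, large_model):
    choice = router(query_embedding)
    model = large_model if choice == "large" else small_model
    return choice, model(query_embedding)

router = Router()
small_model = nn.Linear(64, 64)   # stand-in for a cheap model
large_model = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 64))
choice, _ = answer(torch.randn(64), router, small_model, large_model)
print("routed to:", choice)
```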