The Information Bottleneck

EP10: Geometric Deep Learning with Michael Bronstein

Oct 20, 2025
Michael Bronstein, a Professor of AI at Oxford and scientific director at AITHYRA, dives deep into the realm of geometric deep learning. He discusses the debate between small neural networks and scaling, emphasizing the significance of geometric structures in enhancing AI efficiency. The conversation spans challenges in building effective tabular models and the role of instruction tuning. Michael also explores exciting applications in drug design, detailing how AI can revolutionize protein modeling and therapeutic strategies. His insights bridge the gap between theory and practical innovations in science.
INSIGHT

Scaling Remains Central

  • Scaling has driven deep learning for 15+ years and repeatedly finds new avenues rather than being exhausted.
  • Small specialized models can excel in narrow tasks but don't negate the broad power of scaling.
ADVICE

Respect Marginal Gains From Scale

  • Don't dismiss scaling based on a single small-model result; imagine what could be done with more resources.
  • Treat marginal gains from scaling as highly valuable for complex abilities.
INSIGHT

Geometry Cuts The Curse Of Dimensionality

  • Geometric deep learning exploits data's underlying domain and symmetry to avoid the curse of dimensionality.
  • Encoding invariances like translation yields massive data efficiency by design.
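The translation case can be made concrete with a tiny sketch (a hypothetical illustration, not from the episode): a convolution commutes with shifts, so a single learned filter automatically covers every translated copy of a pattern instead of having to see each position in the training data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)   # 1-D signal
w = rng.standard_normal(5)    # filter weights (values are illustrative)

def conv(signal, kernel):
    # Circular convolution, so translations wrap around cleanly.
    n = len(signal)
    return np.array([
        sum(kernel[k] * signal[(i + k) % n] for k in range(len(kernel)))
        for i in range(n)
    ])

shift = 7
x_shifted = np.roll(x, shift)

# Translation equivariance: convolving a shifted input equals
# shifting the convolved output.
lhs = conv(x_shifted, w)
rhs = np.roll(conv(x, w), shift)
assert np.allclose(lhs, rhs)
```

Because this symmetry is baked into the architecture, the model never spends capacity or data relearning the same feature at every position, which is the "data efficiency by design" point above.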