The Information Bottleneck

EP10: Geometric Deep Learning with Michael Bronstein

Oct 20, 2025
Michael Bronstein, a Professor of AI at Oxford and scientific director at AITHYRA, dives deep into the realm of geometric deep learning. He discusses the debate between small neural networks and scaling, emphasizing the significance of geometric structures in enhancing AI efficiency. The conversation spans challenges in building effective tabular models and the role of instruction tuning. Michael also explores exciting applications in drug design, detailing how AI can revolutionize protein modeling and therapeutic strategies. His insights bridge the gap between theory and practical innovations in science.
AI Snips
ADVICE

Mix Inductive Bias And Augmentation

  • Use pragmatic trade-offs: hardwire easy symmetries in the architecture and cover the rest with data augmentation (a rough sketch follows this list).
  • Choose invariances you can implement efficiently rather than ideal but intractable ones.
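
As a rough illustration of this trade-off (not code from the episode), the PyTorch sketch below hardwires translation equivariance through ordinary convolutions, which is cheap, and covers rotation approximately with random-rotation augmentation instead of an exact rotation-equivariant architecture. The layer sizes and augmentation choices are hypothetical.

```python
import torch.nn as nn
from torchvision import transforms

# Translation equivariance is cheap to hardwire: convolutions give it for free.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # global pooling adds translation invariance
    nn.Flatten(),
    nn.Linear(64, 10),
)

# Exact rotation equivariance would need a specialised, costlier architecture,
# so rotation is only covered approximately via augmentation at training time.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=180),
    transforms.RandomHorizontalFlip(),
])
```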
INSIGHT

Approximate Equivariance Often Wins

  • Hardwiring exact equivariance can hurt when the real world only approximately respects that symmetry.
  • Approximate or learned equivariance often achieves better performance than strict hardwiring (one soft-constraint approach is sketched below).
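
One way to realise this in practice (my own illustration, not a method described in the episode) is to encourage the symmetry with a soft penalty in the loss rather than constraining the architecture. The minimal sketch below assumes a `model`, a `transform`, and a `weight` that are all placeholders.

```python
import torch

def invariance_penalty(model, x, transform, weight=0.1):
    """Soft penalty ||f(T(x)) - f(x)||^2 that nudges f toward invariance to T."""
    y = model(x)
    y_t = model(transform(x))
    return weight * torch.mean((y_t - y) ** 2)

# Hypothetical training step (model, task_loss, targets, random_rotation assumed):
# loss = task_loss(model(x), targets) + invariance_penalty(model, x, random_rotation)
```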
INSIGHT

Let Data Structure Dictate Symmetry

  • Data structure dictates symmetry: sets and graphs have no inherent element ordering, so they call for permutation invariance by nature.
  • Choose architectures that respect these ordering properties; a permutation-invariant set model is sketched below.
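
For sets, a standard way to obtain permutation invariance is a DeepSets-style model: encode each element with a shared network, then pool with an order-agnostic sum. The sketch below is a minimal PyTorch version with hypothetical dimensions, illustrating the idea rather than any specific model discussed in the episode.

```python
import torch
import torch.nn as nn

class SetModel(nn.Module):
    """Permutation-invariant set model: shared per-element encoder + sum pooling."""
    def __init__(self, in_dim=8, hidden=64, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # applied per element
        self.rho = nn.Linear(hidden, out_dim)                           # applied to pooled sum

    def forward(self, x):            # x: (batch, set_size, in_dim)
        h = self.phi(x)              # encode each element independently
        pooled = h.sum(dim=1)        # summation ignores element order
        return self.rho(pooled)

# Reordering the elements of a set leaves the output unchanged:
# model = SetModel()
# x = torch.randn(2, 5, 8)
# perm = torch.randperm(5)
# assert torch.allclose(model(x), model(x[:, perm]), atol=1e-6)
```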