Brain Inspired

BI 229 Tomaso Poggio: Principles of Intelligence and Learning

Jan 14, 2026
Tomaso Poggio, a renowned MIT professor and director of the Center for Biological and Computational Learning, dives into the principles of intelligence and learning. He compares the current stage of AI to historical breakthroughs in electricity, advocating for a theory-first approach. Poggio explores how learning can be integrated into existing models, shares insights from early machine learning developments, and discusses the significance of sparse compositionality. He also reflects on the evolving relationship between neuroscience and machine learning, emphasizing the need for a theoretical foundation in both fields.
INSIGHT

Between Engineering And Foundational Theory

  • We are between engineering breakthroughs and a unifying theory for AI, like Volta before Maxwell.
  • Tomaso Poggio argues theory will enable many more systematic advances beyond current engineering wins.
INSIGHT

Why Depth Matters: Sparse Compositionality

  • Sparse compositionality explains why deep networks generalize efficiently: they approximate target functions as compositions of many simple, low-dimensional functions.
  • Poggio argues that this principle makes depth a theoretical requirement for computational efficiency and generalization, not just an empirical convenience.
ANECDOTE

Volta To Maxwell As An Analogy

  • Poggio recounts Volta's invention of the battery and the roughly 60-year arc to Maxwell's electromagnetic theory.
  • He uses this history as a metaphor for how engineering breakthroughs can precede deep theoretical understanding.