Machine Learning Street Talk (MLST)

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

Jan 4, 2022
Yann LeCun, Meta's Chief AI Scientist and Turing Award winner, joins Randall Balestriero, a researcher at Meta AI, to dive into the complexities of interpolation and extrapolation in neural networks. They discuss how high-dimensional data challenges traditional views, presenting their paper on high-dimensional extrapolation. Yann critiques the notion of interpolation in deep learning, while Randall emphasizes the geometric principles that can redefine our understanding of neural network behavior. Expect eye-opening insights into AI's evolving landscape!
AI Snips
INSIGHT

Extrapolation in High Dimensions

  • Yann LeCun argues that neural networks extrapolate, not interpolate, in high dimensions.
  • The usual notion of interpolation, membership in the convex hull of the training set, almost never holds in high dimensions, as the sketch below illustrates.
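
A minimal sketch of that claim, assuming numpy and scipy are available: convex-hull membership is posed as a linear-programming feasibility problem, and the fraction of fresh Gaussian samples falling inside the hull of a fixed-size training set collapses toward zero as the dimension grows. The sample sizes and distributions here are illustrative, not the paper's experimental setup.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """LP feasibility test: x is in the hull iff there exist weights
    lam >= 0 with sum(lam) == 1 and points.T @ lam == x."""
    n = len(points)
    A_eq = np.vstack([points.T, np.ones(n)])  # stack hull and simplex constraints
    b_eq = np.append(x, 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success  # feasible <=> x lies inside the convex hull

rng = np.random.default_rng(0)
n_train, n_test = 500, 100
for d in (2, 4, 8, 16, 32):
    train = rng.standard_normal((n_train, d))
    test = rng.standard_normal((n_test, d))
    inside = sum(in_convex_hull(x, train) for x in test) / n_test
    # The fraction drops toward 0 as d grows: new samples are
    # "extrapolation" by the convex-hull definition.
    print(f"d={d:2d}  fraction of new points inside the hull: {inside:.2f}")
```
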
INSIGHT

Neural Networks as Recursive Partitions

  • Randall Balestriero suggests neural networks recursively partition their input space into convex cells, similar to decision trees, computing a single affine map on each cell; see the sketch after this list.
  • The faces of these polytopes come from the hyperplanes of individual units, and because adjacent cells share those hyperplanes, information is shared between regions rather than each region being fit independently.
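
A minimal sketch of this partition, using numpy and a small random ReLU network (the layer sizes and random weights are arbitrary illustrations): grouping grid points by their binary ReLU activation pattern recovers the convex cells, since the network reduces to one affine map on each cell.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)   # layer 1
W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)  # layer 2

# Dense 2D grid of inputs.
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = np.stack([xx.ravel(), yy.ravel()], axis=1)                # (40000, 2)

# On/off pattern of every ReLU unit for every input. Two inputs with the
# same pattern lie in the same convex cell of the partition, where the
# whole network is a single affine map.
H1 = grid @ W1.T + b1
H2 = np.maximum(H1, 0) @ W2.T + b2
patterns = np.hstack([H1 > 0, H2 > 0])

print("convex cells hit by the grid:", len(np.unique(patterns, axis=0)))
```
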
INSIGHT

Domain Knowledge in Neural Networks

  • Neural networks are not blank slates: human-crafted domain knowledge, such as the translation equivariance built into convolutional architectures, is incorporated by design; a sketch follows below.
  • This connects to François Chollet's concept of developer-aware generalization: handling situations that neither the system nor its developer has encountered before.
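
A minimal sketch of one such hand-crafted prior, using numpy with an illustrative toy setup: a convolution's shared weights make it commute with shifts (translation equivariance), whereas an unconstrained dense layer carries no such built-in knowledge.

```python
import numpy as np

rng = np.random.default_rng(0)
kernel = rng.standard_normal(3)
x = rng.standard_normal(32)

def circular_conv(signal):
    """Circular 1D convolution with a 3-tap kernel: the same weights are
    applied at every position, so the layer commutes with shifts."""
    return (kernel[0] * np.roll(signal, -1)
            + kernel[1] * signal
            + kernel[2] * np.roll(signal, 1))

shift = 5
print(np.allclose(circular_conv(np.roll(x, shift)),
                  np.roll(circular_conv(x), shift)))   # True: equivariant

W = rng.standard_normal((32, 32))                      # unconstrained dense layer
print(np.allclose(W @ np.roll(x, shift),
                  np.roll(W @ x, shift)))              # False: no such prior
```
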