Machine Learning Street Talk (MLST)

Dr. Paul Lessard - Categorical/Structured Deep Learning

Apr 1, 2024
Dr. Paul Lessard, a Principal Scientist at Symbolica, dives into making neural networks more interpretable through category theory. He discusses the limits of current architectures in reasoning and generalization, suggesting they're not fundamental flaws but rather artifacts of training methods. The discussion explores mathematical abstractions as tools for structuring neural networks, with Paul enthusiastically explaining core concepts like functors and monads. His insights illuminate the potential of these frameworks to enhance AI's reliability and understanding.
ANECDOTE

George Hotz Walks Away from Tesla

  • George Hotz walked away from a deal to build Autopilot's vision system for Tesla over disagreements about its development.
  • He believed overreliance on data was inefficient and sought alternative approaches.
INSIGHT

Limitations of GDL

  • Geometric deep learning (GDL) is built on group actions: transformations that are invertible and composable.
  • Generalizing GDL means handling computations that are neither invertible nor freely composable, which is what capturing general algorithms requires (see the sketch below).
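To make the contrast concrete, here is a minimal Haskell sketch (the functions `rotate` and `relu` are illustrative examples, not from the episode): a planar rotation is a group action, so it composes and inverts cleanly, while a ReLU nonlinearity discards information and has no inverse.

```haskell
-- Illustrative contrast (hypothetical example, not from the episode):
-- GDL's setting is group actions, which compose and invert; a typical
-- network nonlinearity does neither.

-- A planar rotation is a group action on R^2.
rotate :: Double -> (Double, Double) -> (Double, Double)
rotate t (x, y) = (x * cos t - y * sin t, x * sin t + y * cos t)
-- Composable: rotate a . rotate b behaves as rotate (a + b).
-- Invertible: rotate (-t) undoes rotate t.

-- ReLU collapses every negative input to 0, so it has no inverse.
relu :: Double -> Double
relu = max 0

main :: IO ()
main = do
  print (rotate (-0.5) (rotate 0.5 (1, 0)))  -- ~(1.0, 0.0): the rotation undone
  print (map relu [-2, -1, 0, 1])            -- [0.0,0.0,0.0,1.0]: -2 and -1 now indistinguishable
```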
INSIGHT

Composable Computations

  • Not all computations compose: a pipeline is only meaningful when each step's output type matches the next step's input type.
  • Type theory makes that compatibility checkable, as with tree and list data structures (see the sketch below).
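A minimal Haskell sketch of that point, using the standard `Data.Tree` module from the containers package (the `treeSum` pipeline itself is a made-up example): function composition `(.)` type-checks only when the output type of one step is the input type of the next.

```haskell
import Data.Tree (Tree (Node), flatten)

-- Composition only type-checks when adjacent types align:
--   (.) :: (b -> c) -> (a -> b) -> (a -> c)
--
-- flatten :: Tree a -> [a]       (tree in, list out)
-- sum     :: Num a => [a] -> a   (list in, number out)

-- Well-typed pipeline: Tree Int -> [Int] -> Int.
treeSum :: Tree Int -> Int
treeSum = sum . flatten

-- The reverse order is rejected by the compiler, because sum yields a
-- number, not the Tree that flatten expects:
-- badSum = flatten . sum   -- type error

main :: IO ()
main = print (treeSum (Node 1 [Node 2 [], Node 3 []]))  -- prints 6
```

Swapping the order, `flatten . sum`, fails at compile time, which is exactly the compatibility guarantee the snip describes.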