

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644
Aug 28, 2023
Sophia Sanborn, a postdoctoral scholar at UC Santa Barbara, blends neuroscience and AI in her research. She dives into the universality of neural representations, showing how biological systems and deep networks converge on the same efficient features. The conversation also covers her work on Bispectral Neural Networks, which connect the Fourier transform to group theory, and explores how geometric deep learning generalizes CNNs. Throughout, Sanborn draws out the striking similarities between artificial and biological neural structures.
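As a rough illustration of the idea behind Bispectral Neural Networks: the bispectrum multiplies three Fourier coefficients so that the phase a translation introduces cancels out, yielding a representation that is shift-invariant yet, unlike the power spectrum, still retains phase structure. Below is a minimal NumPy sketch of this property for 1-D signals; the `bispectrum` helper, test signal, and shift are illustrative, and the networks discussed in the episode learn a generalization of this over groups.

```python
import numpy as np

def bispectrum(x):
    """Bispectrum of a 1-D signal: B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2))."""
    X = np.fft.fft(x)
    n = len(x)
    f1, f2 = np.meshgrid(np.arange(n), np.arange(n))
    # A circular shift by t multiplies X(f) by exp(-2j*pi*f*t/n); the three
    # phase factors cancel in this product, so B is unchanged by translation.
    return X[f1] * X[f2] * np.conj(X[(f1 + f2) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
print(np.allclose(bispectrum(x), bispectrum(np.roll(x, 5))))  # True
```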
AI Snips
Efficient Coding in the Brain
- Biological systems, like brains, are resource-constrained and prioritize efficiency.
- This principle of efficient coding can explain why certain features, like edge detectors, emerge in the visual cortex (see the sketch after this list).
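A minimal sketch of the efficient-coding result the snip alludes to (not the episode's own code): sparse dictionary learning on natural-image patches, in the spirit of Olshausen and Field, tends to recover localized, oriented, Gabor-like filters, so edge detectors fall out of an efficiency objective alone. The patch size, sparsity penalty, and scikit-learn sample image below are illustrative choices:

```python
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Grayscale natural-image patches, normalized so the code must capture
# structure (edges, textures) rather than mean luminance.
img = load_sample_image("china.jpg").mean(axis=2)
patches = extract_patches_2d(img, (12, 12), max_patches=5000, random_state=0)
patches = patches.reshape(len(patches), -1).astype(float)
patches -= patches.mean(axis=1, keepdims=True)
patches /= patches.std(axis=1, keepdims=True) + 1e-8

# Sparse coding: reconstruct each patch from a few active dictionary atoms.
# The learned rows of `components_` are typically oriented edge filters.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   batch_size=256, random_state=0)
filters = dico.fit(patches).components_
print(filters.shape)  # (64, 144): 64 learned 12x12 filters
```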
Kitten Experiments
- Hubel and Wiesel's experiments with kittens revealed neurons in the primary visual cortex that act as feature detectors.
- These neurons are selective for oriented edges of specific widths, a discovery Hubel and Wiesel made by accident.
Universality of Features
- The same features consistently appear in biological and artificial neural networks, suggesting underlying principles govern both.
- These features, such as Gabor features, exhibit mathematical structure and symmetries, particularly relating to Fourier analysis (see the sketch below).
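To make that mathematical structure concrete, here is a hedged NumPy sketch of a Gabor filter: a sinusoidal plane wave windowed by a Gaussian envelope, parameterized by orientation, spatial frequency, and envelope width (all parameter values below are illustrative). In the Fourier domain the filter is concentrated around its preferred frequency and orientation, which is the connection to Fourier analysis, and rotating one filter generates a whole family, which is where the symmetry (group) structure enters:

```python
import numpy as np

def gabor(size=31, theta=0.0, freq=0.15, sigma=5.0):
    """2-D Gabor filter: a plane-wave sinusoid under a Gaussian window.

    theta is the preferred orientation, freq the preferred spatial
    frequency, sigma the envelope width -- the quantities a V1 simple
    cell (or an early CNN filter) is selective for.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# A bank of copies differing only by rotation: one filter plus a symmetry
# group generates the whole family.
bank = [gabor(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(bank[0].shape)  # (31, 31)
```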