

#032 - Simon Kornblith / GoogleAI - SimCLR and Paper Haul!
Dec 6, 2020
Simon Kornblith, a research scientist at Google Brain with a background in neuroscience, dives deep into the world of neural networks. He discusses the unique relationship between neural networks and biological brains, shedding light on how architecture affects learning. Kornblith explains the significance of loss functions in image classification and reveals insights from the SimCLR framework. He also touches on data augmentation strategies, self-supervised learning, and the programming advantages of Julia for machine learning tasks.
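For concreteness, here is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) contrastive objective that SimCLR optimizes. This is an illustrative reconstruction, not code from the episode; the function name and the default temperature of 0.5 are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (a sketch of the SimCLR objective).

    z1, z2: (N, dim) embeddings of two augmented views of the same N images.
    Returns the mean loss over all 2N anchors.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize for cosine sim
    sim = z @ z.T / temperature                         # (2N, 2N) scaled similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # an anchor can't match itself
    # Positive pairs: view i matches view i+N, and vice versa.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Softmax cross-entropy over the 2N-1 remaining candidates per anchor.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

The key design point is that every other example in the batch serves as a negative, which is why SimCLR benefits from large batch sizes.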
AI Snips
Neuroscience to Machine Learning
- Simon Kornblith moved from neuroscience to machine learning because progress in neuroscience felt slow.
- He initially believed artificial neural networks would be easier to understand than biological brains, but found them challenging as well.
Centered Kernel Alignment
- Centered kernel alignment (CKA) is an effective method for comparing neural network representations.
- Comparing every layer against every other layer yields a self-similarity matrix that reveals how representations evolve with depth (see the sketch after this list).
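Here is a minimal NumPy sketch of linear CKA, following the formulation in Kornblith et al., "Similarity of Neural Network Representations Revisited" (2019); variable names are illustrative.

```python
import numpy as np

def linear_cka(x, y):
    """Linear centered kernel alignment between two activation matrices.

    x: (n_examples, n_features_x) activations from one layer or network
    y: (n_examples, n_features_y) activations from another
    Returns a similarity in [0, 1]; 1 means identical up to rotation and scale.
    """
    # Center each feature so the implied Gram matrices are centered too.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # HSIC-style numerator and normalizers, computed in feature space.
    numerator = np.linalg.norm(y.T @ x, ord="fro") ** 2
    denominator = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return numerator / denominator
```

Because CKA is invariant to orthogonal transformations and isotropic scaling, it can meaningfully compare layers of different widths, which simpler correlation-based measures cannot.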
Blockiness in Self-Similarity Matrices
- Blocky patterns in a network's CKA self-similarity matrix indicate that representations stop evolving across many consecutive layers, a signature of over-parameterization.
- Sufficiently deep and wide networks exhibit this block structure, suggesting that layers beyond a certain point contribute little new representational content (see the sketch below).
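To visualize the block structure the snip describes, one can compute CKA between every pair of layers of a single network. This sketch reuses `linear_cka` from the block above; `layer_activations` is a hypothetical list of per-layer activations for the same batch of inputs, e.g. collected with forward hooks.

```python
import numpy as np

def cka_self_similarity(layer_activations):
    """Pairwise linear CKA across a network's layers.

    layer_activations: list of arrays, each (n_examples, n_features_i),
    all computed on the same batch of inputs.
    Returns an (n_layers, n_layers) symmetric similarity matrix.
    """
    n = len(layer_activations)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            s = linear_cka(layer_activations[i], layer_activations[j])
            sim[i, j] = sim[j, i] = s
    return sim
```

Plotting this matrix as a heatmap makes the diagnosis visual: large bright square blocks along the diagonal mean runs of consecutive layers whose representations are nearly identical.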