

Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize
Jun 22, 2023
Chapters
Introduction
00:00 • 2min
The Challenge of Machine Learning
02:14 • 2min
How I Got Started in Deep Learning
04:39 • 2min
The Problem With Neural Networks
07:00 • 2min
The Connection Between Gaussian Processes and Machine Learning
09:14 • 3min
The Value of Finding Critical Points in a Deep Neural Network
12:36 • 3min
The Importance of Mode Connectivity in Complex Systems
15:13 • 2min
Percolation Theory and the Loss Landscape
16:57 • 3min
The Percolation Theory of Neural Networks
19:27 • 3min
Connectivity Matters From a Machine Learning Perspective
21:58 • 2min
The Importance of Mode Connectivity in Deep Learning
24:06 • 3min
The Importance of Qualitative Agreement in Neural Networks
26:53 • 3min
Why Convolutional Networks Do Better Than Fully Connected Networks on Image Data
29:47 • 4min
The Power of Deep Learning
33:58 • 2min
The Four-Hidden-Layer Network at Infinite Width
36:28 • 2min
The Inductive Bias of Neural Networks
38:31 • 3min
The Importance of Learning Complex Functions
41:04 • 3min
The Limits of Neural Tangent Kernels in Deep Learning
44:33 • 5min
The Conservation of Learnability
49:30 • 4min
How the Eigenlearning Framework Describes the Learning of Kernel Methods
53:40 • 2min
The Importance of Generalization in Neural Regression Models
55:40 • 4min
The Future of Deep Learning and Neural Architecture Search
59:56 • 2min