Generally Intelligent

Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

Jun 22, 2023

Chapters
1. Introduction (00:00 • 2min)
2. The Challenge of Machine Learning (02:14 • 2min)
3. How I Got Started in Deep Learning (04:39 • 2min)
4. The Problem With Neural Networks (07:00 • 2min)
5. The Connection Between Gaussian Processes and Machine Learning (09:14 • 3min)
6. The Value of Finding Critical Points in a Deep Neural Network (12:36 • 3min)
7. The Importance of Mode Connectivity in Complex Systems (15:13 • 2min)
8. Percolation Theory and the Loss Landscape (16:57 • 3min)
9. The Percolation Theory of Neural Networks (19:27 • 3min)
10. Connectivity Matters From a Machine Learning Perspective (21:58 • 2min)
11. The Importance of Mode Connectivity in Deep Learning (24:06 • 3min)
12. The Importance of Qualitative Agreement in Neural Networks (26:53 • 3min)
13. Why Convolutional Networks Do Better Than Fully Connected Networks on Image Data (29:47 • 4min)
14. The Power of Deep Learning (33:58 • 2min)
15. The Four-Hidden-Layer Network at Infinite Width (36:28 • 2min)
16. The Inductive Bias of Neural Networks (38:31 • 3min)
17. The Importance of Learning Complex Functions (41:04 • 3min)
18. The Limits of Neural Tangent Kernels in Deep Learning (44:33 • 5min)
19. The Conservation of Learnability (49:30 • 4min)
20. How the Eigenlearning Framework Describes the Learning of Kernel Methods (53:40 • 2min)
21. The Importance of Generalization in Kernel Regression (55:40 • 4min)
22. The Future of Deep Learning and Neural Architecture Search (59:56 • 2min)