Chapters
Introduction
00:00 • 2min
Are You Inspired by Facts About the Brain?
02:07 • 3min
The Intersection Between AI and Neuroscience
04:59 • 2min
Computer Science: "I'm a Computer Scientist"
07:06 • 2min
Neuroscience and Scaling Laws
09:04 • 4min
What Is Enough?
12:54 • 2min
Is Philosophy Unusual?
14:53 • 3min
How to Translate Philosophy and Buddhism to Machine Learning?
17:41 • 3min
What Is AGI?
20:25 • 3min
The Main Purpose of Debates Is to Talk Past One Another
23:15 • 3min
Is Back Propagation a Good Model for Learning?
25:49 • 1min
How to Optimize a Neural Net?
27:13 • 2min
Adaptive Adaptation for Neural Net Learning
29:00 • 2min
Is There a Difference Between Auxiliary Variables and Target Propagation Methods?
31:13 • 3min
Optimizations for Architectures
34:15 • 3min
Scaling Scalable Networks
37:09 • 4min
Inductive Bias Versus Scaling
40:55 • 3min
Scaling and Inductive Bias?
43:31 • 2min
How Does the Brain Do It?
45:42 • 3min
Neurogenesis in the Hippocampus
49:10 • 2min
Is Continual Learning a Synonym for Continuous Learning?
51:16 • 3min
Transfer Learning and Meta-Adaptation
53:56 • 4min
The Meta Learning Approach to Machine Learning
57:38 • 3min
Neurogenesis in Adults
01:00:27 • 2min
Do We Understand Human Behavior Enough?
01:01:59 • 3min
Is Human Learning Transfer Learning or Continual Learning in Particular Settings?
01:04:32 • 3min
Learning Trajectories Matter a Lot
01:07:15 • 3min
Are You Optimistic About Continual Learning?
01:09:52 • 3min
Scaling and Continuous Learning for a Fixed Set of Tasks?
01:13:12 • 2min
The Ultimate Test for Anything That Is Called AGI
01:15:21 • 4min