Brain Inspired

BI 123 Irina Rish: Continual Learning

Dec 26, 2021
Chapters
1
Introduction
00:00 • 2min
2
Are You Inspired by Brain Facts?
02:07 • 3min
3
The Intersection Between AI and Neuroscience
04:59 • 2min
4
Computer Science: "I'm a Computer Scientist"
07:06 • 2min
5
Neuroscience and Scaling Laws
09:04 • 4min
6
What Is Enough?
12:54 • 2min
7
Is Philosophy Unusual?
14:53 • 3min
8
How to Translate Philosophy and Buddhism to Machine Learning?
17:41 • 3min
9
What Is AGI?
20:25 • 3min
10
The Main Purpose of Debates Is to Talk Past One Another
23:15 • 3min
11
Is Back Propagation a Good Model for Learning?
25:49 • 1min
12
How to Optimize a Neural Net?
27:13 • 2min
13
Adaptive Adaptation for Neural Net Learning
29:00 • 2min
14
Is There a Difference Between Auxiliary Variables and Target Propagation Methods?
31:13 • 3min
15
Optimizations for Architectures
34:15 • 3min
16
Scaling Scalable Networks
37:09 • 4min
17
Inductive Bias Versus Scaling
40:55 • 3min
18
Scaling and Inductive Biases?
43:31 • 2min
19
How Does the Brain Do It?
45:42 • 3min
20
Neurogenesis in the Hippocampus
49:10 • 2min
21
Is Continual Learning a Synonym for Continuous Learning?
51:16 • 3min
22
Transfer Learning and Meta-Adaptation
53:56 • 4min
23
The Meta Learning Approach to Machine Learning
57:38 • 3min
24
Neurogenesis in Adults
01:00:27 • 2min
25
Do We Understand Human Behavior Enough?
01:01:59 • 3min
26
Do Humans Do Transfer Learning or Continual Learning in Particular Settings?
01:04:32 • 3min
27
Learning Trajectories Matter a Lot
01:07:15 • 3min
28
Are You Optimistic About Continual Learning?
01:09:52 • 3min
29
Scaling and Continual Learning for a Fixed Set of Tasks?
01:13:12 • 2min
30
The Ultimate Test for Anything That Is Called AGI
01:15:21 • 4min