Generally Intelligent

Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology

Feb 28, 2022
Chapters
1
Introduction
00:00 • 4min
2
Can You Adapt to New Tasks?
04:09 • 2min
3
How "Simple" Is Relative to Prior Knowledge
05:49 • 3min
4
Understanding Deep Learning Rethinking Generalization
08:55 • 2min
5
How Do We Know if We're Learning Something New?
11:09 • 2min
6
The Language Prediction Task
13:24 • 2min
7
The Type of Explanation Is Supervised, and That's a Critical Point
15:46 • 2min
8
Observability Matters in Terms of Wik Ar
18:04 • 2min
9
Is Reward Learning Better Than Supervised Learning?
19:35 • 3min
10
Is the RL Agent Learning How to Solve a Task?
22:37 • 2min
11
The Inductive Bias of the Architecture
24:38 • 3min
12
How Has Your Work Influenced You?
28:01 • 2min
13
How Does Gesso Link to Learning Mathematics?
29:57 • 3min
14
What's the Difference?
33:18 • 1min
15
Is There Anything Undervalued in Machine Learning?
34:41 • 4min
16
Is There a Way to Train a System to Look Around a Scene?
38:15 • 2min
17
Is Hard Attention Helping You?
40:20 • 2min
18
Learning in an ImageNet-Like Way Is Not Enough
42:05 • 3min
19
Object More
44:41 • 4min
20
How Do You Know if Someone Has Done This?
48:25 • 2min
21
Is There a Need for More Review Articles in Machine Learning?
50:33 • 2min
22
Is There a Way to Improve Machine Learning?
52:28 • 3min
23
What You Can't Learn Through Language
55:42 • 2min
24
Isoconbisom, Is It Possible?
57:39 • 2min
25
I Think It's a Good Idea to Apply a Decision Transformer to a Language Problem
01:00:07 • 3min
26
Symbolic Language
01:02:55 • 3min
27
Symbols and Ai From a Culturally Constructed Perspective
01:05:50 • 3min
28
What Is the Relationship Between Language and the Structure of the World?
01:08:38 • 3min
29
Machine Learning
01:11:35 • 2min
30
Language Meets Continual Learning
01:13:15 • 3min
31
Language as a Means of Compression
01:15:46 • 2min
32
Is There a Difference Between Memory and Visual Memory?
01:17:16 • 3min
33
Is There a Meta-Learning Problem?
01:20:18 • 2min
34
The Meta-Learning Idea Is Adaptable Across a Lot of Different Domains
01:22:32 • 2min
35
Adapting to a More Complicated Setting
01:24:29 • 3min
36
Do You Want to Just Give a Brief Description of That?
01:27:02 • 2min
37
How to Recall Information From the Past
01:29:20 • 3min
38
Debugging a Model in a Supervised Setting
01:32:26 • 1min
39
The Art Memory Architecture
01:33:53 • 3min
40
When Will a System Learn How to Use Memory?
01:36:46 • 2min
41
Deep Learning
01:39:02 • 2min
42
Is There a Transfer Benefit to Learning the Same Task?
01:40:45 • 4min
43
Is It Possible to Transform a Convolutional Architecture?
01:45:01 • 3min
44
What Makes Great Researchers Great?
01:48:25 • 4min
45
Is There a Future for Machine Learning?
01:52:40 • 2min
46
I Think the Field Is Really Struggling With Constructing the Right Experiences for Our Systems to Learn From
01:54:42 • 3min
47
How to Put That Data Into the Environment?
01:57:40 • 2min