

Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology
Feb 28, 2022
Chapters
Introduction
00:00 • 4min
Can You Adapt to New Tasks?
04:09 • 2min
Simplicity Is Relative to Prior Knowledge
05:49 • 3min
Understanding Deep Learning Requires Rethinking Generalization
08:55 • 2min
How Do We Know if We're Learning Something New?
11:09 • 2min
The Language Prediction Task
13:24 • 2min
The Type of Explanation Is Supervised, and That's a Critical Point
15:46 • 2min
Observability Matters in Terms of RL
18:04 • 2min
Is Reinforcement Learning Better Than Supervised Learning?
19:35 • 3min
Is the RL Agent Learning How to Solve a Task?
22:37 • 2min
The Inductive Bias of the Architecture
24:38 • 3min
How Has Your Work Influenced You?
28:01 • 2min
How Does Gesso Link to Learning Mathematics?
29:57 • 3min
What's the Difference?
33:18 • 1min
Is There Anything Undervalued in Machine Learning?
34:41 • 4min
Is There a Way to Train a System to Look Around a Scene?
38:15 • 2min
Is Hard Attention Helping You?
40:20 • 2min
Learning in an ImageNet-Like Way Is Not Enough
42:05 • 3min
Object More
44:41 • 4min
How Do You Know if Someone Has Done This?
48:25 • 2min
Is There a Need for More Review Articles in Machine Learning?
50:33 • 2min
Is There a Way to Improve Machine Learning?
52:28 • 3min
What You Can't Learn Through Language
55:42 • 2min
Isomorphism, Is It Possible?
57:39 • 2min
I Think It's a Good Idea to Apply a Decision Transformer to a Language Problem
01:00:07 • 3min
Symbolic Language
01:02:55 • 3min
Symbols and AI From a Culturally Constructed Perspective
01:05:50 • 3min
What Is the Relationship Between Language and the Structure of the World?
01:08:38 • 3min
Machine Learning
01:11:35 • 2min
Language Meets Continual Learning
01:13:15 • 3min
Language as a Means of Compression
01:15:46 • 2min
Is There a Difference Between Memory and Visual Memory?
01:17:16 • 3min
Is There a Meta-Learning Problem?
01:20:18 • 2min
The Meta-Learning Idea Is Adaptable Across a Lot of Different Domains
01:22:32 • 2min
Adapting to a More Complicated Setting
01:24:29 • 3min
Do You Want to Just Give a Brief Description of That?
01:27:02 • 2min
How to Recall Information From the Past
01:29:20 • 3min
Debugging a Model in a Supervised Setting
01:32:26 • 1min
The Art Memory Architecture
01:33:53 • 3min
When Will a System Learn How to Use Memory?
01:36:46 • 2min
Deep Learning
01:39:02 • 2min
Is There a Transfer Benefit to Learning the Same Task?
01:40:45 • 4min
Is It Possible to Transform a Convolutional Architecture?
01:45:01 • 3min
What Makes Great Researchers Great?
01:48:25 • 4min
Is There a Future for Machine Learning?
01:52:40 • 2min
I Think the Field Is Really Struggling With Constructing the Right Experiences for Our Systems to Learn From
01:54:42 • 3min
How to Put That Data Into the Environment?
01:57:40 • 2min