The Robot Brains Podcast

Geoff Hinton on revolutionizing artificial intelligence... again

Jun 1, 2022
Chapters
1. Introduction (00:00 • 3min)
2. What Are Neural Nets? And Why Should We Care? (03:09 • 2min)
3. Is Back Propagation Better Than What the Brain Is Doing? (05:37 • 2min)
4. Is the Brain Using Local Objective Functions? (07:19 • 2min)
5. The SimCLR Paper and Contrastive Learning (09:36 • 2min)
6. Is Back Propagation a Good Mechanism for Perceptual Learning? (11:20 • 3min)
7. Is There a Way to Distill Knowledge From One Location to Another? (14:14 • 3min)
8. Is It Just an Engineering Difference? (17:26 • 6min)
9. Does the Retina Use Spiking Neurons? (23:24 • 4min)
10. The Importance of Spiking Neural Nets (27:29 • 6min)
11. Applying Convolutional Nets to a Big Data Set (32:59 • 2min)
12. What Triggered It? (34:34 • 4min)
13. The End of My Principles (38:23 • 3min)
14. How Do You Go From a Computer Science Degree to Carpentry? (40:58 • 3min)
15. How to Do Recursion With Neural Networks (44:27 • 4min)
16. How Did Turing and von Neumann Die Early? (48:10 • 2min)
17. Is Deep Learning All We Need? (50:20 • 5min)
18. What Do You Think About Large Language Models? (55:45 • 4min)
19. Are You Getting It Right? (01:00:13 • 2min)
20. Sleep Has a Computational Function (01:02:11 • 2min)
21. Using Contrastive Learning, You Can Separate the Positive and Negative Phases (01:04:39 • 5min)
22. Can You Learn From That? (01:09:09 • 4min)
23. How to Get the Right Labels in the Classroom (01:13:16 • 2min)
24. Neural Net Learning (01:14:55 • 5min)
25. Using t-SNE to Find the Distances of Galaxies (01:19:39 • 3min)