Generally Intelligent

Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning

Sep 10, 2021
Chapters
1. Introduction (00:00 • 2min)
2. Machine Learning - What's the Story of Your Research Interests? (02:04 • 4min)
3. Did You Have Research Interests? (06:31 • 3min)
4. Can You Be a Great Researcher Without Being the World's Greatest Mathematician? (09:28 • 3min)
5. Why Can't You Train a Pruned Network? (12:24 • 4min)
6. Is There a Way to Start Small? (16:47 • 3min)
7. How Do You Preserve Performance With Network Pruning? (19:25 • 4min)
8. Is Early in Training Really Early? (23:38 • 4min)
9. Pruning Networks at Initialization - Is This a Good Technique? (28:00 • 3min)
10. Deep Learning - What Are Some of the Most Important Open Questions? (30:55 • 3min)
11. Machine Learning (33:38 • 2min)
12. Those Roses After You Found an Idea That Worked (36:07 • 5min)
13. Deep Learning (40:56 • 2min)
14. I'm Not Trying to Be the First Person to Have the Next Advance (43:01 • 2min)
15. How Do You Find More Papers Like This? (44:43 • 3min)
16. I Think We Need More Than That in the Field of Science (47:47 • 4min)
17. Manifesto for Papers (51:34 • 2min)
18. The Original Lottery Ticket Paper (53:10 • 2min)
19. Grat Lt, Bu Wa O Yotot? (54:43 • 4min)
20. Should We Use Facial Recognition? (58:48 • 2min)
21. Why Do We Use Ensembles? (01:00:22 • 3min)
22. I'm a Scientist, and I Build on What I See in Front of Me (01:02:53 • 2min)
23. Object-Oriented Programming (01:04:29 • 2min)
24. The Hardest Thing for a Junior Researcher (01:06:48 • 2min)
25. Publication Doesn't Have to Be Significant (01:08:42 • 2min)
26. The Story Behind the BatchNorm Paper (01:10:28 • 2min)
27. Why Is Sparsity Everywhere? (01:12:20 • 2min)
28. I'm Not a Lawyer, but I Want to Know. (01:13:56 • 2min)
29. Is There Any Scale Set for Machine Learning? (01:15:30 • 2min)
30. How to Train 700 Networks on ImageNet? (01:17:23 • 4min)