

Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning
Sep 10, 2021
Chapters
Introduction
00:00 • 2min
Machine Learning - What's the Story of Your Research Interests?
02:04 • 4min
Did You Have Research Interests?
06:31 • 3min
Can You Be a Great Researcher Without Being the World's Greatest Mathematician?
09:28 • 3min
Why Can't You Train a Pruned Network?
12:24 • 4min
Is There a Way to Start Small?
16:47 • 3min
How Do You Preserve Your Performance With Network Pruning?
19:25 • 4min
Is Early in Training Really Early?
23:38 • 4min
Pruning Networks at Initialization - Is This a Good Technique?
28:00 • 3min
Deep Learning - What Are Some of the Most Important Open Questions?
30:55 • 3min
Machine Learning
33:38 • 2min
Those Roses After You Found an Idea That Worked
36:07 • 5min
Deep Learning
40:56 • 2min
I'm Not Trying to Be the First Person to Have the Next Advance
43:01 • 2min
How Do You Find More Papers Like This?
44:43 • 3min
I Think We Need More Than That in the Field of Science
47:47 • 4min
Manifesto for Papers
51:34 • 2min
The Original Lottery Ticket Paper
53:10 • 2min
Grat Lt, Bu Wa O Yotot?
54:43 • 4min
Should We Use Facial Recognition?
58:48 • 2min
Why Do We Use Ensembles?
01:00:22 • 3min
I'm a Scientist, and I Build on What I See in Front of Me
01:02:53 • 2min
Object Oriented Programming
01:04:29 • 2min
The Hardest Thing for a Junior Researcher
01:06:48 • 2min
Publication Doesn't Have to Be Significant
01:08:42 • 2min
The Story Behind the Batchnorm Paper
01:10:28 • 2min
Why Is Sparsity Everywhere?
01:12:20 • 2min
I'm Not a Lawyer, but I Want to Know.
01:13:56 • 2min
Is There Any Scale Set for Machine Learning?
01:15:30 • 2min
How to Train 700 Networks on ImageNet?
01:17:23 • 4min