NVIDIA AI Podcast

MIT’s Jonathan Frankle on “The Lottery Ticket Hypothesis” - Ep. 115

Apr 14, 2020
24:17
Chapters
1. Introduction (00:00 • 3min)
2. How to Train a Neural Network Smaller (03:18 • 2min)
3. The Importance of Capacity in Learning (05:06 • 5min)
4. The Haze in Deep Learning (09:48 • 2min)
5. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (11:22 • 2min)
6. The Implications of Running a Neural Network (12:59 • 4min)
7. The Importance of a PhD in Machine Learning (17:21 • 2min)
8. The Importance of Being Aware of Conflicts of Interest (19:15 • 2min)
9. The Future of Neural Networks (21:02 • 3min)
We spoke with Jonathan Frankle, a PhD student at MIT and coauthor of a seminal paper outlining a technique known as the “Lottery Ticket Hypothesis,” which promises to help advance our understanding of why neural networks, and deep learning, work so well.
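For readers unfamiliar with the paper, the core procedure it describes is iterative magnitude pruning with weight rewinding: train a dense network, remove the smallest-magnitude weights, reset the surviving weights to their original initialization, and retrain. Below is a minimal PyTorch sketch of one such prune-and-rewind round; the function name find_winning_ticket, the train(model) callback, and the prune_fraction parameter are hypothetical names for illustration, not code from the episode or the paper.

import copy
import torch
import torch.nn as nn

def find_winning_ticket(model: nn.Module, train, prune_fraction: float = 0.2):
    # Save the initial weights so the pruned subnetwork can be "rewound".
    init_state = copy.deepcopy(model.state_dict())

    # Train the dense network to completion (any ordinary training loop).
    train(model)

    # Build per-layer masks that zero out the smallest-magnitude weights.
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and normalization parameters
            continue
        k = max(1, int(prune_fraction * param.numel()))
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] = (param.abs() > threshold).float()

    # Rewind the surviving weights to their original initialization.
    model.load_state_dict(init_state)
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

    return model, masks

In the paper's iterative variant, this round is repeated several times, pruning a modest fraction (around 20%) of the remaining weights each round; during retraining, the masks must be re-applied after every optimizer step so pruned weights stay at zero.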