
MIT’s Jonathan Frankle on “The Lottery Ticket Hypothesis” - Ep. 115

NVIDIA AI Podcast


How to Train a Smaller Neural Network

It's an expensive proposition to run a research group doing neural networks. So wouldn't it be exciting if, you know, we could just train smaller networks? Sure. A lot of people see the pruning results and they say, well, doesn't that mean we can just train smaller networks from the beginning? And the answer is no.

(Transcript excerpt from 03:18.)
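The clip refers to the procedure behind Frankle's lottery ticket work: train a network, prune its smallest-magnitude weights, rewind the surviving weights to their original initialization, and retrain. Below is a minimal sketch of that iterative magnitude pruning with rewinding, not code from the episode; `build_model`, the stubbed `train` loop, the 20% per-round pruning rate, and the three rounds are all illustrative assumptions.

```python
# Sketch of iterative magnitude pruning with weight rewinding (the "lottery
# ticket" experiment). Hypothetical model, training loop, and pruning rate.
import copy
import torch
import torch.nn as nn


def build_model() -> nn.Module:
    # Placeholder architecture; substitute any network.
    return nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))


def train(model: nn.Module, steps: int = 1000) -> None:
    # Stand-in for a full training loop on your dataset.
    pass


def prune_by_magnitude(model: nn.Module, masks: dict, rate: float = 0.2) -> dict:
    """Zero out the smallest-magnitude surviving weights in each weight matrix."""
    new_masks = {}
    for name, param in model.named_parameters():
        if "weight" not in name:
            continue
        mask = masks.get(name, torch.ones_like(param))
        alive = param[mask.bool()].abs()          # magnitudes of unpruned weights
        k = int(rate * alive.numel())             # how many more to prune
        if k == 0:
            new_masks[name] = mask
            continue
        threshold = alive.kthvalue(k).values      # k-th smallest surviving magnitude
        new_masks[name] = mask * (param.abs() > threshold).float()
    return new_masks


def apply_masks(model: nn.Module, masks: dict) -> None:
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])


# 1. Save the random initialization.
model = build_model()
init_state = copy.deepcopy(model.state_dict())

# 2. Repeatedly train, prune by magnitude, then rewind the surviving weights to
#    their original initial values. The rewind is what distinguishes finding a
#    "winning ticket" from simply training a small network from scratch, which
#    is the point Frankle is making in the excerpt above.
masks = {}
for round_ in range(3):
    train(model)
    masks = prune_by_magnitude(model, masks, rate=0.2)
    model.load_state_dict(init_state)   # rewind to initialization
    apply_masks(model, masks)           # keep only the sparse structure
```

The key design choice is that the pruning mask is computed from the trained weights but applied to the rewound initial weights, so the sparse subnetwork is retrained from its original initialization rather than from scratch with a fresh random init.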
