
Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning
Generally Intelligent
Why Is Sparsity Everywhere?
Sparsity is an important concept in machine learning, and it shows up everywhere. In Frankle's experiments training only the BatchNorm layers, with every other weight frozen at its random initialization, it turns out that the batch norm parameters learn to disable a large number of features. "I won't claim that this paper should tell us how we should train our networks or anything like that," he says.
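A minimal sketch of that setup, assuming PyTorch and torchvision (not code from the episode): freeze all parameters except the BatchNorm scale and shift terms, train as usual, then check how many learned scales (gamma) end up near zero, i.e. how many features the network has effectively disabled.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)

# Freeze everything, then re-enable gradients only for the
# BatchNorm affine parameters (gamma and beta).
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.requires_grad = True   # gamma: per-feature scale
        m.bias.requires_grad = True     # beta: per-feature shift

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)

# ... train on your dataset as usual ...

# After training, count gammas driven close to zero; each one
# effectively disables its feature map, which is the sparsity
# discussed above. The 0.01 threshold is an illustrative choice.
with torch.no_grad():
    gammas = torch.cat([m.weight.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    frac_disabled = (gammas < 1e-2).float().mean().item()
    print(f"fraction of BatchNorm scales |gamma| < 0.01: {frac_disabled:.2%}")
```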