Balancing a PhD program with a startup career

The Stack Overflow Podcast

The Power of Knowledge Distillation

The ability to compress the size, complexity, and cost of models has been some of the most interesting work I've seen percolating up recently. If you make something great by training it on a really big dataset and using reinforcement learning from human feedback, then that model itself becomes a set of instructions, like a parent, like a blueprint. You can train a smaller model on less data, in less time, with fewer parameters, and still get reasonably high accuracy.
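The technique described here is knowledge distillation: a large "teacher" model's output distribution supervises a smaller "student." Below is a minimal sketch in PyTorch of the standard distillation loss from Hinton et al. (2015); the teacher and student models, the temperature, and the mixing weight `alpha` are illustrative assumptions, not details from the episode.

```python
# Minimal knowledge-distillation sketch, assuming PyTorch.
# The teacher/student models and data loader are hypothetical placeholders;
# the loss itself follows the soft-target formulation of Hinton et al. (2015).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: match the student's softened distribution to the teacher's.
    # kl_div expects log-probabilities as input and probabilities as target.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale gradients, as in the original paper
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def train_step(student, teacher, batch, optimizer):
    inputs, labels = batch
    with torch.no_grad():        # the teacher is frozen; it only supplies targets
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this framing, the big model is exactly the "parent" or "blueprint" from the quote: the student never sees the teacher's full training set, only the teacher's output distribution on whatever smaller dataset it is trained on.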
