Jordan Edwards: ML Engineering and DevOps on AzureML

Machine Learning Street Talk (MLST)

00:00

Exploring Model Ensembling and Knowledge Distillation in Machine Learning

This chapter explores model ensembling in machine learning, likening it to the collaborative function of synapses in the human brain. It discusses combining the outputs of diverse models and the role of knowledge distillation in reducing model size and improving inference efficiency.
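The chapter itself contains no code, but the two techniques it names can be sketched briefly. The snippet below is an illustrative sketch only (function names, temperature values, and the use of averaged softmax outputs are my own assumptions, not taken from the episode): an ensemble that averages the predicted distributions of several models, and the temperature-scaled KL-divergence loss commonly used for knowledge distillation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened by a temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def ensemble_average(logit_list):
    """Combine diverse models by averaging their predicted distributions (one common scheme)."""
    probs = [softmax(logits) for logits in logit_list]
    return np.mean(probs, axis=0)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions.

    The T^2 factor is the standard scaling from Hinton et al.'s distillation setup,
    keeping gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q)))) * temperature ** 2
```

A larger ensemble (or a single large teacher) can serve as the "teacher" here: its averaged output distribution becomes the soft target that a smaller student model is trained to match, trading a little accuracy for a much cheaper model at inference time.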
