
Jonas Hübotter (ETH) - Test Time Inference

Machine Learning Street Talk (MLST)

00:00

Interpreting Models with Linear Approaches

This chapter examines the challenges of representation bias and overfitting when fine-tuning machine learning models, emphasizing the need to balance local information against prior knowledge. It introduces linear surrogate models, such as linear probes and LIME, as tools for interpreting complex models and their predictions. The discussion also covers Bayesian methods for uncertainty estimation, highlighting how they can improve predictions while keeping the required computations tractable.
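To make the linear-surrogate idea concrete, here is a minimal LIME-style sketch: perturb an input locally, query a black-box model, and fit a proximity-weighted linear model to its outputs. The `black_box` function, `local_linear_surrogate` name, and all parameters are illustrative assumptions, not code from the episode.

```python
import numpy as np

# Hypothetical black-box model: any function mapping feature rows to scores.
def black_box(X):
    return np.tanh(2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)

def local_linear_surrogate(model, x, n_samples=500, scale=0.1, seed=0):
    """LIME-style sketch: fit a weighted linear model to the black box
    in a neighborhood of a single point x."""
    rng = np.random.default_rng(seed)
    # Sample perturbations around x and query the black box.
    Z = x + scale * rng.standard_normal((n_samples, x.size))
    y = model(Z)
    # Proximity kernel: samples closer to x get higher weight.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1], coef[-1]  # per-feature weights, intercept

x0 = np.array([0.2, -0.3])
weights, intercept = local_linear_surrogate(black_box, x0)
```

The fitted `weights` act as a local, interpretable explanation: each entry approximates how sensitive the black-box output is to that feature near `x0`.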
