
Zachary Lipton: Where Machine Learning Falls Short

The Gradient: Perspectives on AI


Is This Robust to Distribution Shift?

A lot of claims that just aren't given the appropriate qualifiers end up confusing people; they get taken as true. The distribution shift literature is full of methods that obviously don't work in general, because these are settings where it's obvious that no method works in general. So yeah, take "label smoothing causes calibration." It's like, all I need is one problem: X is a binary feature. Now what if I do label smoothing? The classifier was going to be right a hundred percent of the time, but if you do label smoothing, everything fails. And so there's this weird group of collaborators in different areas
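
A minimal sketch (not from the episode) of the kind of counterexample described above: a binary feature X that perfectly determines the label Y. For an unconstrained model, cross-entropy is minimized when the predicted distribution equals the target distribution, so without smoothing the optimal probabilities are 0 and 1, while label smoothing (epsilon of 0.2 is an assumed value here) pulls them toward 0.5 even though X still determines Y exactly.

```python
import numpy as np

def smooth(y_onehot: np.ndarray, eps: float) -> np.ndarray:
    """Standard label smoothing over K classes: (1 - eps) * y + eps / K."""
    k = y_onehot.shape[-1]
    return (1.0 - eps) * y_onehot + eps / k

# Toy data: X = 0 always has label 0, X = 1 always has label 1.
y_for_x0 = np.array([1.0, 0.0])   # one-hot label when X = 0
y_for_x1 = np.array([0.0, 1.0])   # one-hot label when X = 1

for eps in (0.0, 0.2):
    # With an unconstrained model, the cross-entropy-optimal prediction
    # is exactly the (possibly smoothed) target distribution.
    p0, p1 = smooth(y_for_x0, eps), smooth(y_for_x1, eps)
    print(f"eps={eps}: optimal P(y=1 | x=0) = {p0[1]:.2f}, P(y=1 | x=1) = {p1[1]:.2f}")
```

With eps = 0.0 the optimal probabilities are 0.00 and 1.00; with eps = 0.2 they become 0.10 and 0.90, so the reported confidences no longer match the true (deterministic) label frequencies.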

