“Distillation Robustifies Unlearning” by Bruce W. Lee, Addie Foote, alexinf, leni, Jacob G-W, Harish Kamath, Bryce Woodworth, cloud, TurnTrout

Optimizing Model Robustness Through Unlearning and Distillation

This chapter explores the 'Unlearn and Distill' approach, which makes unlearning more robust than conventional unlearning methods applied on their own. It introduces UNDO, a technique that combines unlearning, weight noising, and distillation, trading additional compute for greater resistance to the unlearned behavior being recovered.
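
The description above names a three-step pipeline: unlearn, noise, distill. Below is a minimal sketch of that pipeline in PyTorch, under stated assumptions: the `unlearn_step` callback, the noise scheme (interpolating each weight toward a fresh random draw by a mixing factor `alpha`), the `0.02` init scale, and the KL-based distillation loss are all illustrative choices, not the authors' exact implementation.

```python
# Hedged sketch of an Unlearn -> Noise -> Distill pipeline.
# Assumptions (not from the source): the noise step mixes weights with a
# random re-initialization, and distillation matches logits via KL divergence.
import copy

import torch
import torch.nn.functional as F


def perturb_toward_random(model: torch.nn.Module, alpha: float) -> torch.nn.Module:
    """Move each weight a fraction `alpha` of the way toward a random draw,
    partially destroying the network so distillation must rebuild it."""
    noised = copy.deepcopy(model)
    for p in noised.parameters():
        random_init = 0.02 * torch.randn_like(p)  # assumed init scale
        p.data = (1.0 - alpha) * p.data + alpha * random_init
    return noised


def undo(model, unlearn_step, retain_loader, alpha=0.5, epochs=1, lr=1e-4):
    # 1. Unlearn: suppress the forget-set behavior in a teacher copy.
    #    `unlearn_step` is a user-supplied callback (e.g. gradient ascent
    #    on the forget set); its details are outside this sketch.
    teacher = unlearn_step(copy.deepcopy(model))
    teacher.eval()

    # 2. Noise: damage the student so that the unlearned capability cannot
    #    simply survive in the copied weights.
    student = perturb_toward_random(teacher, alpha)
    student.train()

    # 3. Distill: train the student to match the unlearned teacher's
    #    output distribution on retained data only.
    opt = torch.optim.AdamW(student.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in retain_loader:  # assumed: batch is an input tensor
            with torch.no_grad():
                t_logits = teacher(batch)
            s_logits = student(batch)
            loss = F.kl_div(
                F.log_softmax(s_logits, dim=-1),
                F.softmax(t_logits, dim=-1),
                reduction="batchmean",
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

In this framing, `alpha` controls the trade-off the description mentions: larger values destroy more of the copied weights, requiring more distillation compute to recover retained capabilities but making the unlearning harder to reverse.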
