

High-Dimensional Robust Statistics with Ilias Diakonikolas - #351
Feb 24, 2020
Ilias Diakonikolas, a faculty member at the University of Wisconsin-Madison, shares insights from his impactful research on robust learning algorithms. They tackle the challenges of high-dimensional data, exploring how noise affects model reliability and introducing new strategies for robust statistics. The discussion highlights the importance of median over mean for outlier management and delves into the evaluation of model performance under adversarial conditions. Diakonikolas also reflects on distribution-independent learning and its implications for machine learning applications.
High-Dimensional Robust Learning
- Ilias Diakonikolas's main research area is high-dimensional robust learning.
- This area focuses on making machine learning algorithms robust to deviations from assumed data models.
Test Time vs. Training Time Attacks
- Diakonikolas distinguishes between test time attacks and training time attacks in robust machine learning.
- Adversarial examples perturb data at test time, while training time attacks involve training on corrupted data.
Median vs. Average in High Dimensions
- The median is a robust alternative to the mean when dealing with outliers in one dimension.
- However, generalizing this robustness to high dimensions is challenging due to noise accumulation across dimensions.
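The contrast above can be sketched numerically. The snippet below (a minimal illustration, not from the episode; the 10%-corruption setup and the specific outlier value are assumptions for demonstration) shows how a few outliers drag the 1D mean far from the true center while the median barely moves, and how the coordinate-wise median's small per-coordinate bias accumulates into a large error in high dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D case: 10% of the points are extreme outliers at 50.
clean = rng.normal(loc=0.0, scale=1.0, size=900)
outliers = np.full(100, 50.0)
data = np.concatenate([clean, outliers])

# The mean is dragged toward the outliers; the median stays near 0.
print(f"mean:   {data.mean():.2f}")
print(f"median: {np.median(data):.2f}")

# High-dimensional case: the same corruption in every coordinate.
# The coordinate-wise median incurs a small bias per coordinate,
# but those biases accumulate across d dimensions (roughly sqrt(d)),
# which is the accumulation problem the snip refers to.
d = 1000
clean_hd = rng.normal(size=(900, d))
outliers_hd = np.full((100, d), 50.0)
data_hd = np.concatenate([clean_hd, outliers_hd])

cw_median = np.median(data_hd, axis=0)
print(f"coordinate-wise median L2 error: {np.linalg.norm(cw_median):.2f}")
```

Per coordinate the median's bias is tiny, yet the L2 error of the coordinate-wise median grows with the dimension, while a truly robust high-dimensional estimator would keep the error dimension-independent.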