

Fairness and Robustness in Federated Learning with Virginia Smith - #504
Jul 26, 2021
Virginia Smith, an assistant professor at Carnegie Mellon University, delves into her work on federated learning. She discusses her research on fairness and robustness, highlighting the challenges of maintaining model performance across diverse data inputs. The conversation touches on her paper 'Ditto' and the trade-offs it explores between fairness and robustness. She also shares insights on leveraging data heterogeneity in federated clustering to enhance model effectiveness, and on the balance between privacy and robust learning.
Fairness in Federated Learning
- Fairness in federated learning differs from fairness in the AI-ethics sense.
- It focuses on consistent model performance across diverse devices, addressing representation disparity (one common way to quantify this is sketched below).
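To make the uniformity notion concrete, here is a hedged sketch using standard notation from the fair federated learning literature (not a formula quoted in the episode): with N devices and local losses F_1, ..., F_N, a model w is often called more fair than w' when its per-device performance is more uniform.

```latex
% Illustrative uniformity-based fairness criterion (standard notation,
% not quoted from the episode): model w is "more fair" than w' if the
% per-device losses F_k are more uniformly distributed, e.g.
\[
  \operatorname{Var}\bigl(F_1(w), \ldots, F_N(w)\bigr)
  \;\le\;
  \operatorname{Var}\bigl(F_1(w'), \ldots, F_N(w')\bigr).
\]
```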
Failure Modes in Federated Learning
- Standard federated learning, which minimizes average loss, can sacrifice performance on some devices.
- Prioritizing average performance can lead to catastrophically poor results on diverse subsets of devices (see the objective sketched below).
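For reference, the standard federated objective averages device losses weighted by data volume; this is the textbook FedAvg-style formulation rather than anything specific to the episode.

```latex
% Standard (FedAvg-style) federated objective: n_k is device k's sample
% count and n = \sum_k n_k. Minimizing the weighted average loss
\[
  \min_{w} \; F(w) \;=\; \sum_{k=1}^{N} \frac{n_k}{n}\, F_k(w)
\]
% can still leave the worst-case device loss \max_k F_k(w) very large,
% which is the failure mode described above.
```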
Fairness vs. Robustness
- Federated learning must balance fairness and robustness, which are often conflicting goals.
- Removing diverse or outlier data improves robustness but can harm fairness by excluding important device data; the Ditto-style personalization sketched below is one way to navigate this tension.
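For context, the Ditto paper referenced above addresses this tension through personalization. A simplified sketch of its objective (details and the global solver are elided here): each device k learns a personal model v_k that is regularized toward a globally trained model w*.

```latex
% Ditto's per-device personalization objective (simplified sketch).
% w^* solves a standard global federated objective; each device k then
% fits a personal model v_k kept close to w^*:
\[
  \min_{v_k} \; h_k(v_k; w^*) \;=\; F_k(v_k) \;+\; \frac{\lambda}{2}\,\lVert v_k - w^* \rVert^2
\]
% \lambda interpolates between purely local training (\lambda = 0) and the
% shared global model (\lambda \to \infty), trading off reliance on a
% possibly corrupted global model against each device's limited local data.
```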