Practical AI

Federated learning in production (part 1)

May 30, 2025
Patrick Foley, Lead AI architect at Intel and a key player in OpenFL’s development, delves into the transformative potential of federated learning. He highlights how this approach ensures data privacy by training models where the data resides, particularly in sensitive areas like healthcare. The discussion covers real-world applications like brain tumor segmentation, the challenges of trust and verification, and the role of frameworks in enhancing collaboration. Foley also hints at the future of federated learning in breaking down data silos while maintaining security.
INSIGHT

How Federated Learning Works

  • Federated learning trains models by sending the model to multiple data locations instead of centralizing the data.
  • Updated model weights from each location are aggregated centrally without exposing raw data, preserving privacy (a minimal aggregation sketch follows below).
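To make that aggregation step concrete, here is a minimal FedAvg-style sketch in plain NumPy. It is an illustration under assumed names, not OpenFL's actual API: the two hospital sites, the toy two-parameter model, and the local_update step are all hypothetical. The point is that raw data never leaves a site; only updated weights travel back to be averaged, weighted by each site's sample count.

```python
# Minimal sketch of federated averaging (FedAvg-style aggregation).
# Hypothetical illustration only -- site names, sample counts, and the
# two-parameter "model" are made up for this example.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """Stand-in for local training: nudge weights toward the local data mean."""
    return weights + lr * (local_data.mean(axis=0) - weights)

def fed_avg(client_weights, client_sizes):
    """Average client weights, weighting each site by its number of samples."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# The global model lives at the aggregator; raw data stays at each site.
global_weights = np.zeros(2)
sites = {
    "hospital_a": np.random.randn(100, 2) + 1.0,   # data never leaves this site
    "hospital_b": np.random.randn(300, 2) - 0.5,
}

for round_num in range(5):
    # Each site receives the current global model and trains on its own data.
    updates = [local_update(global_weights, data) for data in sites.values()]
    sizes = [len(data) for data in sites.values()]
    # Only the updated weights travel back to the aggregator.
    global_weights = fed_avg(updates, sizes)
    print(f"round {round_num}: global weights = {global_weights}")
```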
ANECDOTE

Healthcare Federated Learning Example

  • Intel collaborated with the University of Pennsylvania to deploy federated learning for brain tumor segmentation across 70 hospitals worldwide.
  • This real-world healthcare deployment reached roughly 99% of the accuracy of a model trained on centrally pooled data.
INSIGHT

Model Types Suited for Federated Learning

  • Most federated learning applications currently focus on neural networks, because their weights form a shared representation that can be averaged across sites.
  • Techniques like PEFT and quantization help make federated training of large generative AI models feasible despite their size (see the sketch after this list).
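As a rough illustration of why PEFT matters at this scale, the sketch below freezes a large base weight matrix at every site and exchanges only small LoRA-style adapter factors. The shapes, the toy local update, and the factor-averaging step are assumptions for this example (averaging the factors separately is itself a simplification), not how OpenFL or any particular framework implements it.

```python
# Sketch of why PEFT helps federated training of large models: only small
# low-rank adapter matrices (LoRA-style) are exchanged and averaged, while the
# frozen base weights stay put at every site. All shapes and the toy
# "training" step are assumptions for illustration.
import numpy as np

d, r = 4096, 8                      # base layer width vs. adapter rank
base_W = np.random.randn(d, d)      # frozen pretrained weight, never transmitted

def new_adapter():
    # LoRA-style factors: the effective update is A @ B, with far fewer parameters.
    return np.zeros((d, r)), np.random.randn(r, d) * 0.01

def local_adapter_update(A, B):
    """Stand-in for local fine-tuning: perturb the adapter factors slightly."""
    return A + 0.01 * np.random.randn(*A.shape), B + 0.01 * np.random.randn(*B.shape)

clients = [new_adapter() for _ in range(3)]
trained = [local_adapter_update(A, B) for A, B in clients]

# The aggregator averages only the adapters (a simplification -- averaging the
# factors separately is not identical to averaging A @ B); base_W, about 16.8M
# parameters here, never crosses the network.
avg_A = np.mean([A for A, _ in trained], axis=0)
avg_B = np.mean([B for _, B in trained], axis=0)

adapter_params = avg_A.size + avg_B.size
print(f"exchanged per client: {adapter_params:,} params "
      f"vs. {base_W.size:,} in the frozen base layer")
```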