

Responsible AI in Practice with Sarah Bird - #322
Dec 4, 2019
In this engaging conversation, Sarah Bird, a Principal Program Manager at Microsoft specializing in Azure Machine Learning and responsible AI, shares insights on new tools for ethical machine learning. She discusses the InterpretML toolkit and its user-friendly interface for model insights. The conversation also delves into the challenges of differential privacy, emphasizing the trade-off between data accuracy and individual privacy. Finally, Sarah highlights the importance of fairness in AI through the FairLearn toolkit, showcasing collaborative strategies for responsible AI development.
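The accuracy/privacy trade-off mentioned above is usually illustrated with the Laplace mechanism. This is a minimal stdlib-only sketch, not code from the episode or from Microsoft's tooling; the names `laplace_noise` and `private_count` and the choice of epsilon are purely illustrative.

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Inverse-CDF sample from Laplace(0, scale), stdlib only.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1 (adding or removing one
    # person changes the count by at most 1), so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy for this
    # single query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Smaller epsilon -> stronger privacy, but a noisier (less accurate)
# answer: exactly the balance discussed in the episode.
ages = [23, 35, 41, 52, 29, 60, 44, 38]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

With epsilon=0.5 the noise scale is 2, so the noisy answer typically lands within a few counts of the true value (4 here), while a much smaller epsilon would drown the signal entirely.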
AI Snips
Research to Product Journey
- Sarah Bird's career started in machine learning research, focusing on systems at Berkeley.
- She transitioned to product-focused roles, aiming to bring cutting-edge research to practical applications.
Collaborative Responsibility
- Responsible AI isn't one person's job; everyone needs to consider it, similar to security practices.
- Involve user research and design teams early to address the human element and broader implications.
Abstraction vs. Understanding
- Abstracting complex algorithms simplifies development but can hinder responsible AI by narrowing developers' focus to inputs and outputs rather than model behavior.
- Tools like InterpretML help analyze models and identify potential issues, promoting deeper understanding, not just compliance.