Machine Learning Street Talk (MLST)

#92 - SARA HOOKER - Fairness, Interpretability, Language Models

Dec 23, 2022
Sara Hooker, founder of Cohere For AI and a leader in machine learning research, discusses pivotal topics in the field. She explores her 'hardware lottery' concept: the observation that a research idea often succeeds because it suits the available hardware, not because it is inherently superior. The conversation covers fairness, including annotator bias and the case for building fairness objectives into model training; model efficiency versus sheer size; the capabilities of self-supervised learning; and the nuances of prompting language models, with an eye toward making machine learning more accessible and trustworthy.
INSIGHT

Fairness Drifts

  • Fairness in machine learning isn't static; it drifts over time, much as humor evolves.
  • Current fairness research typically relies on explicitly labeled perspectives and struggles with unlabeled data.
INSIGHT

Model Bias and Memorization

  • Model bias is crucial because larger models tend to memorize their training data, which impacts fairness.
  • Memorization disproportionately affects low-frequency attributes, protected attributes among them (see the sketch below).
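The memorization point echoes Hooker's work on what compressed networks forget: examples a heavily pruned model misclassifies, while the dense model classifies correctly, skew toward low-frequency, long-tail inputs. Below is a minimal PyTorch sketch of that diagnostic; the toy model, random data, and 90% pruning level are illustrative stand-ins, not the episode's prescription.

import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Toy classifier and synthetic inputs stand in for a real model and dataset.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(256, 20)

with torch.no_grad():
    dense_preds = model(x).argmax(dim=1)

# Prune 90% of weights by magnitude in every Linear layer.
pruned = copy.deepcopy(model)
for module in pruned.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

with torch.no_grad():
    pruned_preds = pruned(x).argmax(dim=1)

# Examples whose predictions flip under compression are candidates for
# memorized, low-frequency points the dense model was quietly carrying.
flipped = (dense_preds != pruned_preds).nonzero(as_tuple=True)[0]
print(f"{len(flipped)} of {len(x)} predictions changed after pruning")

On real data, the flipped set can then be audited for overlap with rare or protected attributes.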
INSIGHT

Optimizing for Interpretability

  • Post-hoc interpretability methods are constrained by how the model was originally trained.
  • Optimizing for interpretability during training offers more control and a clearer view of how features emerge (see the sketch below).
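One concrete reading of the training-time idea is to fold an interpretability objective directly into the loss. The sketch below uses an L1 activation-sparsity penalty as a stand-in objective; the episode doesn't prescribe a specific one, and the architecture, random data, and 1e-3 penalty weight are all illustrative.

import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss = nn.CrossEntropyLoss()

x = torch.randn(256, 20)            # toy data standing in for a real dataset
y = torch.randint(0, 2, (256,))

for step in range(100):
    hidden = model[1](model[0](x))  # hidden-layer activations
    logits = model[2](hidden)
    # Task loss plus a sparsity penalty, so sparser (more inspectable)
    # features are encouraged to emerge during training itself.
    loss = task_loss(logits, y) + 1e-3 * hidden.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Other objectives (orthogonality penalties, concept bottlenecks, neuron-level supervision) slot into the same pattern of shaping features at training time rather than explaining them after the fact.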