Last Week in AI

The way we train AI is fundamentally flawed, bias, the compute divide

Nov 26, 2020
This discussion dives into critical flaws in how AI models are trained, centering on the concept of 'underspecification.' Facebook's struggles with moderating harmful content highlight the challenges of supervising AI at scale. The exploration of AI bias reveals stark inequalities, as the 'compute divide' widens disparities in research output. The episode stresses the need for greater diversity in AI research and covers initiatives aimed at creating a more equitable landscape for future innovation. It's a thought-provoking look at the current state of AI.
INSIGHT

AI Training Flaw: Underspecification

  • AI models suffer from underspecification: many different models can fit the same training dataset equally well.
  • Which of those solutions training lands on depends on arbitrary factors like random weight initialization, so real-world generalization varies unpredictably.
INSIGHT

Unreliable Training Performance

  • Training performance alone doesn't predict how a model will behave in the real world.
  • Models with identical training scores can vary significantly on new data, which complicates evaluation; see the sketch after this list.
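To make this concrete, here is a minimal sketch of underspecification, assuming scikit-learn is installed (the synthetic dataset and architecture are illustrative, not from the episode): the same network trained on the same data with different random seeds often reaches near-identical training accuracy while test accuracy varies from seed to seed.

```python
# Underspecification sketch: identical architecture, identical data --
# only the random seed for weight initialization changes per run.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative synthetic classification problem.
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for seed in range(5):
    # Same model family each time; the seed controls weight initialization.
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                          random_state=seed)
    model.fit(X_train, y_train)
    # Training accuracies tend to cluster tightly; test accuracies spread out.
    print(f"seed={seed}  train acc={model.score(X_train, y_train):.3f}  "
          f"test acc={model.score(X_test, y_test):.3f}")
```

The spread in the test column across seeds, despite near-equal training scores, is exactly why training performance alone is an unreliable predictor of real-world behavior.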
INSIGHT

Facebook's Hate Speech Claims vs. Reality

  • Facebook claims its new AI model, the Linformer, lets it proactively detect 94.7% of the hate speech it removes.
  • Critics counter that harmful content still spreads widely despite these claimed improvements.