Machine Learning Street Talk (MLST)

#046 The Great ML Stagnation (Mark Saroufim and Dr. Mathew Salvaris)

Mar 6, 2021
Mark Saroufim, author of "Machine Learning: The Great Stagnation," joins Mathew Salvaris, a lead ML scientist at iRobot, to dissect stagnation in machine learning research. They discuss how academia's incentive structures stifle innovation and the costs of chasing "state-of-the-art" results. They also explore the rise of the "gentleman scientist," the difficulty of defining measurable success, and the need for a user-focused approach in research. The duo emphasizes collaboration and the importance of embracing failure as part of the learning process.
INSIGHT

SOTA Chasing vs. Breakthroughs

  • Machine learning research often prioritizes short-term gains and hype over genuine scientific breakthroughs.
  • This focus on "SOTA chasing" stifles innovation and leads to incremental improvements rather than groundbreaking discoveries.
ADVICE

Honesty and Risk-Taking

  • Be honest about your motivations, whether it's scientific curiosity or financial gain.
  • Take risks and pursue ambitious projects once you have financial security.
INSIGHT

The Peer Review System and Individual Incentives

  • SOTA chasing is driven by individual incentives within the current peer review system.
  • Publishing innovative work outside of established venues requires accepting potential career downsides.