The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen - #356

Mar 12, 2020
Beidi Chen, a PhD candidate in Computer Science at Rice University, discusses groundbreaking research that challenges the dominance of GPUs in deep learning. The conversation dives into their innovative algorithmic approach, SLIDE, which uses locality-sensitive hashing to optimize extreme classification tasks. Chen highlights how randomized algorithms enhance computational efficiency, often outperforming conventional hardware solutions. The importance of collaboration and the evolution of machine learning systems are also key themes, showcasing a new path forward for AI development.
AI Snips
INSIGHT

Computation Bottlenecks

  • Computational bottlenecks are a major challenge in computer science, often limiting the scale of problems that can be solved.
  • Randomized algorithms and approximations can help overcome these limitations by reducing time and memory complexity (see the sketch below).
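One concrete instance of this trade-off (a minimal sketch of my own, not code from the episode; all sizes are made up): a random projection compresses high-dimensional vectors into far fewer dimensions while approximately preserving pairwise distances, so downstream computations get cheaper in both time and memory at the cost of a small, controllable error.

```python
import numpy as np

# Sketch: a random projection as a randomized approximation.
# Projecting d-dimensional vectors down to k << d dimensions roughly preserves
# pairwise distances, so later computations cost O(k) instead of O(d) per pair.
rng = np.random.default_rng(0)
d, k, n = 10_000, 256, 1_000                          # hypothetical sizes

X = rng.standard_normal((n, d)).astype(np.float32)    # original high-dimensional data
R = rng.standard_normal((d, k)).astype(np.float32) / np.sqrt(k)  # random projection
Z = X @ R                                             # compressed data, n x k

# One exact distance vs. its cheap approximation (typically within a few percent).
exact = float(np.linalg.norm(X[0] - X[1]))
approx = float(np.linalg.norm(Z[0] - Z[1]))
print(f"exact distance {exact:.1f} vs. projected distance {approx:.1f}")
```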
INSIGHT

Matrix Multiplication Bottleneck

  • Matrix multiplication is a major bottleneck in neural networks, especially in extreme classification, where the output layer spans a very large label space.
  • GPUs accelerate the multiplications themselves, but memory access remains a problem, prompting exploration of algorithmic alternatives (see the sketch below).
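To make the bottleneck concrete, here is a small sketch (sizes are hypothetical and mine, not from the SLIDE paper): in extreme classification the output layer multiplies the hidden activation against one weight vector per label, so cost and memory traffic grow linearly with the number of labels even though only a handful of labels end up with meaningful scores.

```python
import numpy as np

# Sketch of the dense output-layer bottleneck in extreme classification.
# With hidden dimension d and K labels, every forward pass pays O(d * K)
# multiply-adds and streams the full K x d weight matrix through memory.
rng = np.random.default_rng(0)
d, K = 128, 100_000                                   # hypothetical sizes

W = rng.standard_normal((K, d)).astype(np.float32)    # one weight row per label (~50 MB)
x = rng.standard_normal(d).astype(np.float32)         # hidden activation for one example

logits = W @ x                     # ~12.8M multiply-adds, most producing negligible scores
top5 = np.argsort(logits)[-5:]     # only a few labels actually matter for the prediction
```

Most of that work contributes almost nothing to the prediction, which is the waste the LSH-based selection in the next insight tries to avoid.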
INSIGHT

LSH for Efficient Computation

  • Locality-sensitive hashing (LSH) identifies which computations are worth doing by finding approximate nearest neighbors efficiently.
  • In neural networks, LSH picks out the weight vectors with high inner products against the input, so redundant computations can be skipped (see the sketch below).
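A minimal SimHash-style sketch of that idea (my own illustration with signed random projections; it is not the SLIDE implementation, and all sizes and names are made up): weight vectors are hashed once into a few tables, the input is hashed at query time, and full inner products are computed only for the weight vectors that collide with the input.

```python
import numpy as np

# Sketch: signed-random-projection (SimHash) LSH to pick the neurons whose
# weight vectors are likely to have a high inner product with the input.
rng = np.random.default_rng(0)
d, K = 128, 100_000                                   # hypothetical layer sizes
n_tables, n_bits = 4, 12                              # a few small hash tables
powers = 1 << np.arange(n_bits)                       # packs hash bits into an integer

W = rng.standard_normal((K, d)).astype(np.float32)               # weight row per neuron
planes = rng.standard_normal((n_tables, n_bits, d)).astype(np.float32)

# Preprocessing: hash every weight row into one bucket per table.
bits = (np.einsum("kd,tbd->tkb", W, planes) > 0).astype(np.int64)
codes = bits @ powers                                 # shape (n_tables, K)
tables = [{} for _ in range(n_tables)]
for t in range(n_tables):
    for i, c in enumerate(codes[t]):
        tables[t].setdefault(int(c), []).append(i)

# Query: hash the input, gather colliding neurons, compute only those inner products.
x = rng.standard_normal(d).astype(np.float32)
q_codes = ((planes @ x > 0).astype(np.int64)) @ powers            # one code per table
candidates = sorted({i for t in range(n_tables)
                     for i in tables[t].get(int(q_codes[t]), [])})
logits = W[candidates] @ x      # a small fraction of the full K inner products
print(f"evaluated {len(candidates)} of {K} neurons")
```

SLIDE builds on this kind of selection with more tables, adaptive hash functions, and table maintenance during training, so that each forward and backward pass touches only a small, changing set of active neurons per layer.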