The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Vector Quantization for NN Compression with Julieta Martinez - #498

Jul 5, 2021
Julieta Martinez, a senior research scientist at Waabi, dives into the fascinating world of AI and self-driving technology. She highlights how insights from large-scale visual search can bolster neural network compression techniques. The conversation also covers the intricacies of using product quantization to enhance performance while managing vast datasets. Additionally, Julieta discusses her research on deep multitask learning, demonstrating how integrating localization, perception, and prediction can revolutionize autonomous systems and improve real-world applications.
ANECDOTE

Latinx in AI Community

  • Julieta Martinez and a few Latin American researchers started informally meeting at conferences.
  • This led to a more formal Latinx in AI presence, encouraging submissions and highlighting speakers.
INSIGHT

Common Ground in Large-Scale Problems

  • Large-scale visual search and neural network compression share similar computational challenges, especially memory.
  • Both require clever compression for usability due to large datasets and high dimensionality.
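To make the memory pressure concrete, here is a back-of-the-envelope calculation (the numbers are illustrative, not from the episode): a billion-scale visual search index of raw float32 vectors quickly outgrows RAM, while compact quantized codes do not.

```python
# Illustrative memory arithmetic; all figures are hypothetical examples.
n = 1_000_000_000            # one billion database vectors
dim = 128                    # feature dimensionality
raw_bytes = n * dim * 4      # float32: 4 bytes per dimension
code_bytes = n * 8           # e.g. 8 one-byte quantizer codes per vector

print(f"raw:   {raw_bytes / 2**30:.1f} GiB")   # hundreds of GiB
print(f"codes: {code_bytes / 2**30:.1f} GiB")  # fits on one machine
print(f"compression ratio: {raw_bytes // code_bytes}x")
```

At these (hypothetical) settings the codes are 64x smaller than the raw vectors, which is the difference between needing a cluster and fitting the index in a single machine's memory.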
INSIGHT

Product Quantization Explained

  • Product quantization compresses a large set of vectors by splitting each vector into subvectors and running k-means separately on each subspace.
  • Each vector is then stored as a short tuple of centroid indices (codes) plus a set of shared codebooks, enabling fast approximate nearest-neighbor search.
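The steps above can be sketched in NumPy. This is a minimal, self-contained illustration of product quantization (not code from the episode): split each vector into `m` subvectors, learn a k-means codebook per subspace, and store each vector as `m` one-byte centroid indices.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    """Plain Lloyd's k-means; returns (k, d) centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dist = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids

def pq_train(X, m, k):
    """Learn m codebooks, one per subspace of dimension D/m."""
    D = X.shape[1]
    assert D % m == 0, "dimensionality must divide evenly into m subspaces"
    d = D // m
    return [kmeans(X[:, i * d:(i + 1) * d], k) for i in range(m)]

def pq_encode(X, codebooks):
    """Encode each vector as m centroid indices (requires k <= 256 for uint8)."""
    d = codebooks[0].shape[1]
    codes = np.empty((len(X), len(codebooks)), dtype=np.uint8)
    for i, C in enumerate(codebooks):
        sub = X[:, i * d:(i + 1) * d]
        dist = ((sub[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        codes[:, i] = dist.argmin(1)
    return codes

def pq_decode(codes, codebooks):
    """Approximate reconstruction: concatenate the selected centroids."""
    return np.hstack([C[codes[:, i]] for i, C in enumerate(codebooks)])
```

With, say, `m=8` subspaces and `k=256` centroids each, a vector collapses to 8 bytes while nearest-neighbor distances can still be approximated from the codes and codebooks via lookup tables.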