

Cross-Device AI Acceleration, Compilation & Execution with Jeff Gehlhaar - #500
Jul 12, 2021
Jeff Gehlhaar, VP of Technology at Qualcomm, dives into the world of AI compilers and their importance in managing parallelism. He highlights Qualcomm's latest innovations, including AI Engine Direct, which bridges AI capabilities across devices. The conversation covers how research on compression and quantization is translated into real products, as well as the competitive landscape of ML compilers such as Glow and TVM. Jeff also discusses advancements in benchmarking and the integration of AI frameworks that enhance smartphone performance.
AI Snips
ML Compilers: Tiling and Code Generation
- ML compilers schedule and tile neural networks for efficient hardware mapping.
- Code generation translates this plan into instructions, optimizing performance.
Tiling vs. Code Generation
- Tiling breaks down tensors, optimizes memory, and parallelizes operations.
- Code generation produces the final instructions for hardware execution.
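The tiling idea above can be sketched with a generic loop-tiled matrix multiply. This is an illustrative example of the general technique, not Qualcomm's compiler output; the tile size and matrix shapes are assumptions.

```python
# Minimal sketch of loop tiling: an ML compiler breaks a large tensor
# operation into tiles small enough to fit fast local memory, then emits
# code for each tile. TILE is an illustrative assumption, not a real target's size.

TILE = 4

def matmul_tiled(A, B, n):
    """Tiled C = A @ B for n x n matrices given as lists of lists."""
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, TILE):          # tiles of C's rows
        for j0 in range(0, n, TILE):      # tiles of C's columns
            for k0 in range(0, n, TILE):  # tiles of the reduction dimension
                # The inner loops touch only one tile of each operand at a
                # time (better locality); distinct (i0, j0) tiles are
                # independent, so they could be dispatched in parallel.
                for i in range(i0, min(i0 + TILE, n)):
                    for k in range(k0, min(k0 + TILE, n)):
                        a = A[i][k]
                        for j in range(j0, min(j0 + TILE, n)):
                            C[i][j] += a * B[k][j]
    return C
```

In a real ML compiler, the loop structure above is the "schedule," and code generation would then lower each tile's inner loops to hardware instructions (vector or tensor ops) for the target accelerator.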
AIC 100 Architecture
- Qualcomm's AIC 100 uses a finer-grained, highly parallel architecture.
- Unlike devices built around larger monolithic compute arrays, this finer granularity lets parallelism be exploited more efficiently.