
Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling - TWiML Talk #267
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
00:00
Optimizing Neural Networks for Mobile AI
This chapter explores research on neural network compression and quantization aimed at efficient deployment of AI on hardware, specifically mobile devices with Snapdragon AI chipsets. It emphasizes the use of Bayesian optimization to find good quantization configurations, balancing the trade-off between direct on-device measurements and cheaper simulator approximations.
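
As an illustrative sketch only (not code from the episode), the snippet below shows how Bayesian optimization could search per-layer quantization bit-widths against a cheap simulator-style objective. The functions `simulate_accuracy` and `estimate_latency`, the 0.1 latency weight, and the 4-layer toy setup are hypothetical placeholders standing in for real simulator or on-device measurements.

```python
# Hypothetical sketch: Bayesian optimization over per-layer bit-widths,
# trading simulated accuracy loss against a latency proxy.
from skopt import gp_minimize
from skopt.space import Integer

N_LAYERS = 4  # assumed toy network depth

def simulate_accuracy(bits):
    # Placeholder: accuracy improves with more bits, with diminishing returns.
    return 1.0 - sum(2.0 ** -b for b in bits) / len(bits)

def estimate_latency(bits):
    # Placeholder: latency grows roughly with average bit-width.
    return sum(bits) / (8.0 * len(bits))

def objective(bits):
    # Minimize a weighted sum of accuracy loss and estimated latency.
    return (1.0 - simulate_accuracy(bits)) + 0.1 * estimate_latency(bits)

# Each layer may use 2 to 8 bits; the Gaussian-process surrogate picks
# which configurations to evaluate next.
space = [Integer(2, 8, name=f"bits_layer_{i}") for i in range(N_LAYERS)]
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best bit-widths:", result.x, "objective:", result.fun)
```

In practice the objective would call a hardware simulator or measure directly on the device, which is the measurement-versus-approximation trade-off the chapter describes.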