

Advances in Neural Compression with Auke Wiggers - #570
May 2, 2022
Auke Wiggers, an AI research scientist at Qualcomm, dives into the exciting realm of neural data compression. He discusses how generative models and transformer architectures are revolutionizing image and video coding. The conversation highlights the shift from traditional techniques to neural codecs that learn from examples, and the impressive real-time performance on mobile devices. Auke also touches on innovations like transformer-based transform coding and shares insights from recent ICLR papers, showcasing the future of efficient data compression.
Auke's Path to Qualcomm
- Auke Wiggers's journey into machine learning began with an AI course at the University of Amsterdam (UvA) in 2012.
- He joined the startup Scyfer, which was later acquired by Qualcomm, where he now works on neural data compression.
Neural Data Compression Basics
- Neural data compression leverages generative models to estimate the likelihood of data points like images or audio.
- Entropy coding then removes statistical redundancy, compressing the data toward the bit rate implied by the likelihood model.
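The link between likelihood and compression can be sketched with Shannon's ideal code length, where a symbol with probability p costs -log2(p) bits. This is a minimal illustration, not the codec discussed in the episode; the symbol alphabet and probabilities are invented for the example.

```python
import math

# Hypothetical likelihood model over four symbols (assumed values for
# illustration only; a neural codec would learn these from data).
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

def ideal_code_length(symbol, model):
    """Shannon code length in bits: -log2 p(symbol)."""
    return -math.log2(model[symbol])

# Entropy coding approaches this bound: likely symbols get short codes.
message = "aaabbcd"
total_bits = sum(ideal_code_length(s, probs) for s in message)

# A fixed 2-bit code would spend 14 bits on these 7 symbols; the
# model-based code needs only 3*1 + 2*2 + 3 + 3 = 13 bits.
```

The better the likelihood model matches the true data distribution, the closer the achieved rate gets to the data's entropy, which is why stronger generative models translate directly into better compression.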
Quantization's Role
- Quantization makes the latent representation discrete, which is essential so that it can be entropy-coded and transmitted losslessly.
- While only the latent space is quantized in research settings, full model quantization (weights and activations) is applied during on-device deployment for efficiency.
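Latent quantization in this setting typically amounts to uniform scalar quantization: continuous latents are mapped to integer indices (which can be entropy-coded) and mapped back at the decoder. A minimal sketch, with the step size and latent values chosen arbitrarily for illustration:

```python
def quantize(latents, step=1.0):
    """Uniform scalar quantization: map each latent to an integer index."""
    return [round(z / step) for z in latents]

def dequantize(indices, step=1.0):
    """Reconstruct approximate latents from the integer indices."""
    return [i * step for i in indices]

# Toy latent vector (made-up values); step controls the rate/distortion
# trade-off: smaller steps mean less error but more bits to transmit.
z = [0.3, -1.7, 2.5, 0.0]
idx = quantize(z, step=0.5)        # integers, safe to entropy-code
z_hat = dequantize(idx, step=0.5)  # decoder's reconstruction
```

Only the integer indices cross the channel, so the discrete stage is lossless end to end; the reconstruction error comes entirely from the rounding itself.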