

[28] Karen Ullrich - A Coding Perspective on Deep Latent Variable Models
Jul 16, 2021
Karen Ullrich, a Research Scientist at FAIR, works at the intersection of information theory and machine learning. She discusses her PhD work, highlighting the minimum description length principle and its impact on neural network compression. The conversation delves into the ties between data compression and cognitive processes, and explores methods for tackling imaging problems. Ullrich also shares insights on making image reconstruction pipelines differentiable and offers practical advice for new researchers navigating complex data landscapes.
AI Snips
Compression, Generative Models, and Intelligence
- A connection exists between compression, generative models, and intelligence, though it is a restrictive one.
- A generative model that compresses data better can be viewed as more intelligent, since arithmetic coding turns any probability model directly into a compressor (see the sketch after this list).
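As a rough illustration of that link, here is a minimal Python sketch: arithmetic coding spends about -log2 p(symbol) bits per symbol, so the model that assigns the data higher probability also compresses it into fewer bits. The toy string, the symbol alphabet, and both model distributions are assumptions for illustration, not anything from the episode.

```python
# Ideal arithmetic-coding cost: roughly -log2 p(symbol) bits per symbol,
# so a model that predicts the data better also compresses it better.
import math

data = list("abracadabra")

# Hypothetical model A: uniform over the alphabet {a, b, c, d, r}.
model_a = {s: 1 / 5 for s in "abcdr"}

# Hypothetical model B: close to the empirical symbol frequencies.
model_b = {"a": 0.45, "b": 0.18, "r": 0.18, "c": 0.10, "d": 0.09}

def code_length_bits(seq, model):
    """Ideal code length in bits: sum of -log2 p(symbol) over the sequence."""
    return sum(-math.log2(model[s]) for s in seq)

print(f"model A: {code_length_bits(data, model_a):.1f} bits")
print(f"model B: {code_length_bits(data, model_b):.1f} bits")
```

Running it, the uniform model needs roughly 25.5 bits for the string while the better-fitting model B needs roughly 22.5, which is the sense in which a better generative model is a better compressor.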
Real-World Approach of Machine Learning
- Karen Ullrich was initially drawn to machine learning by its practical, real-world approach.
- This contrasted with her physics background, where simplifying assumptions often led to less meaningful results.
Reproducing a Cognitive Science Paper
- Karen Ullrich's research reproduced a 25-year-old cognitive science paper that had received few citations.
- The study explored binding neurons for better generalization, an idea similar to the discrete codes used in modern VQ-VAEs (see the sketch below).
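To make the VQ-VAE comparison concrete, below is a minimal sketch of the vector-quantization step such models use: each continuous latent vector is "bound" to the nearest entry of a discrete codebook. The array shapes, codebook size, and random inputs are illustrative assumptions, not details from the paper discussed in the episode.

```python
# Minimal VQ-VAE-style quantization: snap each continuous latent vector
# to its nearest codebook entry, yielding a discrete code index.
import numpy as np

rng = np.random.default_rng(0)

num_codes, dim = 8, 4
codebook = rng.normal(size=(num_codes, dim))    # learned jointly in a real VQ-VAE

def quantize(z, codebook):
    """Replace each latent vector in z with its nearest codebook entry."""
    # Squared Euclidean distance between every z[i] and every codebook row.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)                  # discrete code index per vector
    return codebook[idx], idx

z = rng.normal(size=(3, dim))                   # stand-in encoder outputs
z_q, idx = quantize(z, codebook)
print("code indices:", idx)
```

In a trained VQ-VAE the codebook is learned together with the encoder and decoder; here it is random so the lookup can be run standalone.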