

Learning to Ponder: Memory in Deep Neural Networks with Andrea Banino - #528
Oct 18, 2021
Andrea Banino, a research scientist at DeepMind, dives into the fascinating world of artificial general intelligence and episodic memory. He discusses how past experiences shape intelligent behavior and the challenges of integrating memory into neural networks. The conversation highlights his innovative work on PonderNet, a model that optimizes computational resources based on problem complexity. Banino also touches on the importance of grid cells in navigation and the synergy between transformers and reinforcement learning for enhanced performance.
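The core idea behind PonderNet is that the network decides, step by step, whether to keep computing or halt, so easy inputs cost fewer steps than hard ones. A minimal sketch of that halting loop (assuming a user-supplied `step_fn` that returns a new state, an output, and a halting probability — this is an illustration, not DeepMind's implementation):

```python
import random

def ponder(step_fn, x, max_steps=10):
    """PonderNet-style adaptive computation loop (toy sketch).

    step_fn(state) -> (new_state, output, halt_prob) is a hypothetical
    step function; in the real model it is a learned network and the
    halting distribution is trained with a regularized loss.
    """
    state = x
    for n in range(1, max_steps + 1):
        state, output, halt_prob = step_fn(state)
        # Sample a Bernoulli with this step's halting probability;
        # always halt once the step budget is exhausted.
        if random.random() < halt_prob or n == max_steps:
            return output, n
```

With a step function that always emits `halt_prob=1.0` the loop stops after one step; with `halt_prob=0.0` it runs to `max_steps`, mimicking how problem difficulty modulates compute.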
AGI and Episodic Memory
- Andrea Banino's interest in AGI stems from his neuroscience background, specifically episodic memory research.
- Episodic memory, distinct from semantic memory, involves remembering specific past events and their context ("what, when, and where").
Episodic Memory and Generalization
- Episodic memory facilitates generalization by linking related experiences.
- The brain automatically connects related events, enabling inferences and knowledge transfer, crucial for AGI.
Memory in AI
- Current forms of memory in AI include working memory (as realized in RNNs and LSTMs), memory-augmented networks, and retrieval-augmented models.
- These approaches fall short of true episodic memory; for example, they consolidate knowledge inefficiently.
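The memory-augmented and retrieval-augmented approaches mentioned above share a basic mechanism: episodes are written to an external store as key-value pairs and later retrieved by similarity to a query. A toy sketch of that idea (class name, list-based storage, and cosine similarity are all illustrative choices, not any specific published architecture):

```python
import math

class EpisodicBuffer:
    """Toy external memory: write (key, value) episodes, retrieve the
    value whose key is most similar to a query vector."""

    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        # Store one episode; real systems would also manage capacity.
        self.keys.append(key)
        self.values.append(value)

    def read(self, query):
        # Nearest-neighbor lookup by cosine similarity.
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        best = max(range(len(self.keys)),
                   key=lambda i: cos(self.keys[i], query))
        return self.values[best]
```

Unlike biological episodic memory, nothing here links related episodes or consolidates them into general knowledge, which is exactly the limitation the snip points at.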