
 "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
 "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Titans: Neural Long-Term Memory for LLMs, with author Ali Behrouz
May 15, 2025

In this discussion, Ali Behrouz, a PhD student at Cornell University, shares insights from his research on enhancing memory in large language models. He introduces the Titans architecture, which mimics human-like memory to tackle coherence challenges in AI. Key topics include the limitations of current models and solutions to catastrophic forgetting. Behrouz also highlights the potential of specialized LLMs in corporate settings, arguing that improved memory mechanisms are needed to unlock AI's full capabilities in the workplace.
AI Snips
Neural Network as Memory Module
- Titans introduces a neural network as the memory module for LLMs, with its weights updated by gradient descent at runtime.
- This approach differs fundamentally from previous models, where memory is a plain vector or matrix that is read and written linearly (a minimal sketch follows below).
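
As a concrete illustration, here is a minimal sketch of that idea in PyTorch. This is hypothetical code, not the Titans implementation: `NeuralMemory`, its layer sizes, and the learning rate are all invented for the example. The point is that the memory is itself an MLP, and a "write" is one gradient-descent step taken at runtime.

```python
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    """Memory as a small MLP that maps keys to values (illustrative sketch)."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def read(self, key: torch.Tensor) -> torch.Tensor:
        # Query: recall the value currently associated with this key.
        return self.net(key)

    def write(self, key: torch.Tensor, value: torch.Tensor, lr: float = 1e-2) -> None:
        # A "write" is one runtime gradient-descent step pulling M(key)
        # toward value; the parameter count (the memory size) never grows.
        loss = (self.net(key) - value).pow(2).mean()
        grads = torch.autograd.grad(loss, list(self.net.parameters()))
        with torch.no_grad():
            for p, g in zip(self.net.parameters(), grads):
                p -= lr * g

# Usage: unlike a matrix memory updated by a fixed linear rule, reads and
# writes here pass through a nonlinear network.
mem = NeuralMemory(dim=64)
k, v = torch.randn(64), torch.randn(64)
mem.write(k, v)          # update memory at runtime
recalled = mem.read(k)   # approximate recall of v
```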
Human Memory Inspires Neural Memory
- Human memory likely operates as a network of interconnected neurons rather than as isolated units.
- Treating memory as a neural network motivates moving beyond vector/matrix memories toward richer, more dynamic long-term memory.
Runtime Gradient Updates in Memory
- Recurrent gradient-descent updates train the memory MLP at runtime to approximate attention's values given their keys.
- This runtime update mirrors human-like continual learning within a fixed, finite memory size (see the equations sketched below).
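
In symbols, the simplest form of that update can be sketched as follows. This matches the associative-memory objective of the Titans paper as I read it; the paper additionally layers a momentum-like "surprise" term and a decay-based forgetting gate on top of plain gradient descent.

```latex
% Keys and values are linear projections of the incoming token x_t:
%   k_t = x_t W_K, \qquad v_t = x_t W_V
% The memory M is trained online to map keys to values:
\ell(M_{t-1}; x_t) = \lVert M_{t-1}(k_t) - v_t \rVert_2^2
% One gradient-descent step per token, with step size \theta_t:
M_t = M_{t-1} - \theta_t \, \nabla \ell(M_{t-1}; x_t)
```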




