Dev Propulsion Labs

Sarah Wooders on why LLMs are like Memento and building the infrastructure for stateful AI agents

Sep 11, 2025
Sarah Wooders, CTO and co-founder of Letta AI, dives into the world of stateful AI agents. She compares current LLMs to the amnesiac protagonist of 'Memento', emphasizing their lack of persistent memory. Surveying the AI landscape, she suggests 2025 mirrors the early internet in its untapped potential. Wooders critiques the blurred line between true agents and marketing hype, stresses the need for better standardization in AI protocols, and argues that open-source tools are vital for fostering genuine advances in AI capabilities.
Episode notes
INSIGHT

LLMs Need Persistent Memory

  • LLMs are extremely capable but fundamentally limited by lack of persistent memory and identity.
  • Letta builds context management around models so agents can learn and retain memory over time.
ANECDOTE

How MemGPT Proved The Idea

  • Sarah and her co-founder built MemGPT at Berkeley as an early stateful agent prototype.
  • MemGPT combined tool calling, context reads/writes, and vector DB memory to manage agent memory.
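The memory hierarchy described above can be sketched in a few lines. This is an illustrative toy, not the actual MemGPT or Letta API: `core` stands in for the always-in-context memory the agent edits via tool calls, and `archival` is an out-of-context store searched by keyword overlap as a crude stand-in for vector-DB similarity.

```python
class AgentMemory:
    """Toy MemGPT-style two-tier memory (hypothetical names)."""

    def __init__(self, core_limit=3):
        self.core = []        # always rendered into the prompt (limited slots)
        self.archival = []    # unbounded out-of-context store
        self.core_limit = core_limit

    def core_append(self, fact):
        # Tool call: write a fact into core memory, evicting the oldest
        # fact to archival when the context budget is exceeded.
        self.core.append(fact)
        while len(self.core) > self.core_limit:
            self.archival.append(self.core.pop(0))

    def archival_search(self, query, k=2):
        # Tool call: retrieve archival facts by keyword overlap
        # (a stand-in for embedding similarity in a real vector DB).
        q = set(query.lower().split())
        scored = sorted(
            self.archival,
            key=lambda f: len(q & set(f.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def render_context(self):
        # What would be prepended to the LLM prompt each turn.
        return "\n".join(f"[core] {f}" for f in self.core)
```

With `core_limit=2`, appending a third fact pushes the oldest into archival, where a later search can recall it back into context.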
INSIGHT

Plans In Context Prevent Drift

  • Long-horizon tasks cause agents to lose track mid-task because of context limits.
  • Writing plans and checking off actions in agent context maintains progress and improves long runs.
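The checklist idea above can be sketched as follows. This is a minimal illustration under assumed names (not Letta's API): the plan is written once, steps are checked off as tools complete, and the rendered checklist is re-inserted into every prompt so a long run does not lose its place.

```python
class PlanTracker:
    """Hypothetical helper that keeps a plan checklist in agent context."""

    def __init__(self, steps):
        # Each step is (description, done?).
        self.steps = [(s, False) for s in steps]

    def check_off(self, step):
        # Mark a step complete after its tool call succeeds.
        self.steps = [(s, done or s == step) for s, done in self.steps]

    def next_step(self):
        # The first unchecked step, or None when the plan is finished.
        for s, done in self.steps:
            if not done:
                return s
        return None

    def render(self):
        # Checklist block to prepend to the agent's prompt each turn.
        lines = ["PLAN:"]
        for s, done in self.steps:
            lines.append(f"  [{'x' if done else ' '}] {s}")
        return "\n".join(lines)
```

Because `render()` runs every turn, progress survives context truncation: even if earlier turns scroll out of the window, the checklist re-anchors the agent to the next unchecked step.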