Software Engineering Daily

Redis and AI Agent Memory with Andrew Brookins

Aug 26, 2025
In this engaging discussion, Andrew Brookins, a Principal Applied AI Engineer at Redis, shares insights into the challenges of building AI agents. He explains how large language models' statelessness affects continuity and the critical role of memory management. Topics include the significance of fast data retrieval in AI systems, advancements in Redis like vector search and semantic caching, and the comparison between hybrid search and vector-only methods. Andrew also touches on the complexities of maintaining relevant memory and the development of effective world models for dynamic environments.
INSIGHT

LLMs Can't Predict Environment State

  • LLMs excel at generating from context but fail at predicting environment state transitions.
  • Agents break down when they must change the world and predict outcomes rather than just produce language.
ADVICE

Design Agents With Persistent Storage

  • Always design LLM systems assuming you must store data externally; treat memory as integral.
  • Use a database to persist messages and state so interactions can continue across sessions.
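The advice above can be sketched in a few lines. This is a minimal, hypothetical illustration: it uses an in-memory dict as a stand-in for a Redis server so it runs anywhere, but mirrors the Redis list commands (`RPUSH`/`LRANGE`) a real implementation with redis-py would use. The `SessionStore` class and key scheme are illustrative, not from the episode.

```python
import json

class SessionStore:
    """Persist chat messages per session so conversations survive restarts.

    Stand-in for a Redis-backed store: self._lists mimics Redis lists,
    where real code would call r.rpush(key, ...) and r.lrange(key, 0, -1).
    """

    def __init__(self):
        self._lists = {}  # in-memory stand-in for the Redis server

    def append_message(self, session_id, role, content):
        # Serialize each message so state can be rebuilt in a later session.
        key = f"session:{session_id}:messages"
        self._lists.setdefault(key, []).append(
            json.dumps({"role": role, "content": content})
        )

    def load_messages(self, session_id):
        # Rebuild the conversation to seed the next LLM call's context.
        key = f"session:{session_id}:messages"
        return [json.loads(m) for m in self._lists.get(key, [])]

store = SessionStore()
store.append_message("abc123", "user", "What is semantic caching?")
store.append_message("abc123", "assistant", "Caching keyed on meaning, not exact text.")
history = store.load_messages("abc123")
```

Because the LLM itself is stateless, every turn rebuilds the prompt from this external store rather than relying on the model to remember anything.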
INSIGHT

Three Layers Of Agent Memory

  • Memory for agents has layers: raw message history, summarized context, and extracted long-term facts.
  • Each layer serves different retrieval and compaction roles in building model context.
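A rough sketch of how those three layers might fit together, under the assumption that older raw messages are compacted into a summary while extracted facts persist independently (class and method names are illustrative, not the Redis implementation):

```python
class AgentMemory:
    """Three-layer agent memory sketch: raw message history,
    summarized context, and extracted long-term facts."""

    def __init__(self, raw_window=4):
        self.raw_messages = []   # layer 1: recent raw message history
        self.summary = ""        # layer 2: compacted older context
        self.facts = []          # layer 3: extracted long-term facts
        self.raw_window = raw_window

    def add_message(self, role, content):
        self.raw_messages.append((role, content))
        # Compaction: fold messages beyond the window into the summary.
        # (A real system would summarize with an LLM, not concatenate.)
        while len(self.raw_messages) > self.raw_window:
            old_role, old_content = self.raw_messages.pop(0)
            self.summary += f"{old_role} said: {old_content}. "

    def remember_fact(self, fact):
        self.facts.append(fact)

    def build_context(self):
        # Each layer plays a different retrieval role in the prompt.
        parts = []
        if self.facts:
            parts.append("Known facts: " + "; ".join(self.facts))
        if self.summary:
            parts.append("Earlier conversation: " + self.summary.strip())
        for role, content in self.raw_messages:
            parts.append(f"{role}: {content}")
        return "\n".join(parts)

mem = AgentMemory(raw_window=2)
mem.remember_fact("User prefers Python")
for i in range(4):
    mem.add_message("user", f"message {i}")
context = mem.build_context()
```

The point of the layering is that context budget is spent deliberately: verbatim recall for recent turns, lossy summaries for older ones, and durable facts retrieved regardless of recency.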