A key challenge in designing AI agents is that large language models are stateless and have limited context windows, which requires careful engineering to maintain continuity and reliability across sequential LLM interactions. To perform well, agents need fast systems for storing and retrieving short-term conversation history, summaries, and long-term facts.
Redis is an open-source, in-memory data store widely used for high-performance caching, analytics, and message brokering. Recent advances have extended Redis’ capabilities to vector search and semantic caching, which has made it an increasingly popular part of the agentic application stack.
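As background for the discussion, the short-term memory pattern described above is often built on Redis list commands: LPUSH to prepend the newest turn and LTRIM to cap the window. Below is a minimal, hypothetical sketch of that pattern; a plain Python dict of deques stands in for a Redis server, and the class name and method names are illustrative, not from any Redis library (a real implementation would issue LPUSH/LTRIM/LRANGE through a client such as redis-py).

```python
from collections import defaultdict, deque


class ShortTermMemory:
    """Hypothetical sketch of windowed agent memory.

    Mirrors the Redis pattern LPUSH (prepend newest turn) + LTRIM
    (cap the list length); a dict of deques stands in for Redis here.
    """

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self._store = defaultdict(deque)  # session_id -> recent turns, newest first

    def add(self, session_id: str, role: str, content: str) -> None:
        # Roughly: LPUSH session:<id> <turn>; LTRIM session:<id> 0 max_turns-1
        turns = self._store[session_id]
        turns.appendleft({"role": role, "content": content})
        while len(turns) > self.max_turns:
            turns.pop()  # drop the oldest turn beyond the window

    def recent(self, session_id: str) -> list:
        # Roughly: LRANGE session:<id> 0 -1 (newest first)
        return list(self._store[session_id])
```

With `max_turns=2`, adding three turns to a session keeps only the two most recent, newest first, which is how a bounded conversation window stays within an LLM's context limit.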
Andrew Brookins is a Principal Applied AI Engineer at Redis. He joins the show with Sean Falconer to discuss the challenges of building AI agents, the role of memory in agents, hybrid search versus vector-only search, the concept of world models, and more.
Full Disclosure: This episode is sponsored by Redis.

Sean’s been an academic, startup founder, and Googler. He has published works covering a wide range of topics from AI to quantum computing. Currently, Sean is an AI Entrepreneur in Residence at Confluent where he works on AI strategy and thought leadership. You can connect with Sean on LinkedIn.
Please click here to see the transcript of this episode.
Sponsorship inquiries: sponsor@softwareengineeringdaily.com
The post Redis and AI Agent Memory with Andrew Brookins appeared first on Software Engineering Daily.