
Today in Tech: How AI really remembers, and why agents will keep forgetting
Most people assume AI “remembers everything”: every chat, every command, every prior conversation. But that’s not how today’s systems actually work. On this episode of Today in Tech, Keith Shaw talks with Manifest AI CEO Jacob Buckman about how AI memory really works under the hood, why chatbots feel so different from humans, and what has to change for true long-running digital agents to become reality.
Jacob explains concepts like short-term vs. long-term AI memory, context windows, KV caches, and “scratchpad” summaries in plain language. He uses analogies from medicine and the movie Memento to show why current AI tools can ace a single conversation but struggle to stay on task across hours, days, or entire projects. They also dig into hallucinations, why simply “making models bigger” isn’t enough, and how new architectures like power retention aim to give AI a more human-like ability to remember what actually matters over time.
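To make the context-window and scratchpad ideas concrete, here’s a minimal Python sketch of the trade-off Jacob describes: once a conversation outgrows the window, old turns get evicted and survive only as a lossy summary. Everything here (the function names, the one-word-per-token count, the 50-token limit) is a hypothetical illustration, not Manifest AI’s code.

```python
# Toy illustration: why a chat model "remembers" inside a session
# but loses detail once the context window fills up.

MAX_TOKENS = 50  # stand-in for a model's context-window limit


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one word = one token.
    return len(text.split())


def fit_to_window(history: list[str], scratchpad: str) -> list[str]:
    """Drop the oldest turns until everything fits in the window.

    Evicted turns survive only inside the one-line scratchpad summary,
    which is the "memory trick" current systems lean on.
    """
    context = [f"SUMMARY: {scratchpad}"] + history
    while sum(count_tokens(t) for t in context) > MAX_TOKENS and len(context) > 1:
        evicted = context.pop(1)             # oldest real turn after the summary
        scratchpad += " | " + evicted[:20]   # lossy: the full detail is gone
        context[0] = f"SUMMARY: {scratchpad}"
    return context


history = [f"turn {i}: user asked about step {i} of the project" for i in range(10)]
print(fit_to_window(history, "project kickoff notes"))
```

Run it and the early turns disappear from the returned context, leaving only truncated fragments in the summary line, which is roughly why an agent can nail a single exchange yet lose the thread over a long project.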
You’ll learn:
* Why AI remembers everything inside a chat window but almost nothing between sessions
* How today’s memory tricks (summaries, scratchpads, huge context windows) still fall short
* How memory limits hold back reliable AI agents for coding, research, and creative work
* Why better long-term memory could cut hallucinations and boost trust in business use cases
* What “power retention” is — and how it could reshape the next generation of AI systems
