AI agents offer unprecedented power, but mastering reliability is the central challenge in getting agentic systems to actually work in production.
Mikiko Chandrashekar, Staff Developer Advocate at MongoDB, whose background spans the entire data-to-AI pipeline, unveils MongoDB's vision as the memory store for agents, supporting complex multi-agent systems from data storage and vector search to debugging chat logs. She highlights how MongoDB, reinforced by the acquisition of Voyage, empowers developers to build production-scale agents across industries, from solo projects to major enterprises. This robust data layer is foundational to ensuring agent performance and improving the end-user experience.
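As a rough illustration of the agent-memory pattern discussed here (not code from the episode), the sketch below stores agent memories as documents with embeddings in MongoDB and recalls them with an Atlas Vector Search aggregation stage. It assumes pymongo, an Atlas cluster with a vector search index, and placeholder names ("agent_db", "memories", "memory_vector_index") plus a stand-in embed() helper where a real embedding model (such as one from Voyage) would go.

```python
from pymongo import MongoClient


def embed(text: str) -> list[float]:
    """Placeholder: swap in a real embedding provider (e.g. a Voyage AI model)."""
    raise NotImplementedError("Call your embedding model here.")


# Connection string and database/collection names are illustrative placeholders.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")
memories = client["agent_db"]["memories"]

# Write a memory: the raw text plus its embedding vector.
memories.insert_one({
    "agent_id": "support-agent-1",
    "text": "User prefers email follow-ups over phone calls.",
    "embedding": embed("User prefers email follow-ups over phone calls."),
})

# Recall the most relevant memories for a new query with the $vectorSearch stage.
# Assumes an Atlas vector search index named "memory_vector_index" on "embedding".
results = memories.aggregate([
    {"$vectorSearch": {
        "index": "memory_vector_index",
        "path": "embedding",
        "queryVector": embed("How does this user like to be contacted?"),
        "numCandidates": 100,
        "limit": 5,
    }},
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
])
for doc in results:
    print(doc["score"], doc["text"])
```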
Mikiko advocates for treating agents as software products, applying rigorous engineering best practices to ensure reliability even in non-deterministic systems. She details how MongoDB is uniquely positioned to balance GPU/CPU loads and manage data for performance and observability, including integrations with Galileo.
The conversation emphasizes the profound need to rethink observability, evaluations, and guardrails in the era of agents, showcasing Luna-2, Galileo's family of small language models for real-time guardrailing, and the Insights Engine for automated failure analysis. Discover how building trustworthiness through systematic evaluation, beyond just "vibe checks," is essential for AI agents to scale and deliver value in high-stakes use cases.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Mikiko on LinkedIn
Follow Mikiko on X/Twitter
Explore Mikiko's YouTube channel
Check out Mikiko's Substack
Connect with MongoDB on LinkedIn
Connect with MongoDB on YouTube
Check out Galileo
Try Galileo
Agent Leaderboard