
AI Explained: Lessons Learned from Building Agentic Systems With Jayeeta Putatunda
Aug 16, 2025

Jayeeta Putatunda, Director of AI Center of Excellence at Fitch Group, shares her expertise in building AI agent systems. She digs into the hurdles of moving from concept to production, covering critical evaluation metrics and the role of observability in reliable AI. The conversation highlights the hybrid approach finance applications require and the developer-business partnership needed to define customized metrics. She also examines the evolution from MLOps to AgentOps, unpacking the new challenges it poses for AI operational frameworks.
AI Snips
Measure Model Usage And System Observability
- Trace requests end to end and monitor token usage, latency, error rates, and model-call counts across multi-model agent pipelines.
- Add drift detection and infrastructure observability to diagnose cost, reliability, and performance issues (a minimal instrumentation sketch follows this list).
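A minimal instrumentation sketch in Python, assuming each model call is a plain callable and that responses expose a total_tokens attribute; PipelineMonitor and all names here are hypothetical, not from the episode.

```python
import time
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ModelCallMetrics:
    """Aggregated per-model counters for one agent pipeline."""
    calls: int = 0
    errors: int = 0
    total_latency_s: float = 0.0
    total_tokens: int = 0


class PipelineMonitor:
    """Tracks token usage, latency, error rates, and call counts per model."""

    def __init__(self) -> None:
        self.per_model: dict[str, ModelCallMetrics] = defaultdict(ModelCallMetrics)

    def record(self, model: str, fn, *args, **kwargs):
        """Wrap one model call, accruing metrics even when the call fails."""
        m = self.per_model[model]
        m.calls += 1
        start = time.monotonic()
        try:
            response = fn(*args, **kwargs)
        except Exception:
            m.errors += 1
            raise
        finally:
            m.total_latency_s += time.monotonic() - start
        # Assumption: the provider's response carries a token count; adapt to your SDK.
        m.total_tokens += getattr(response, "total_tokens", 0)
        return response


# Hypothetical usage: route every provider call through one shared monitor.
# monitor = PipelineMonitor()
# response = monitor.record("model-a", client.generate, prompt)
```

Drift detection and infrastructure observability would layer on top of these counters, for instance by exporting them to an existing metrics backend rather than keeping them in process.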
Log Checkpoints Liberally At First
- Log inputs, outputs, tool calls, and intermediate steps at each agent checkpoint to enable root-cause analysis (see the sketch below).
- Over-log at first; pare back once you understand the failure patterns and the workflow has matured.
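A sketch of liberal checkpoint logging, assuming structured JSON lines; the step names and the log_checkpoint helper are hypothetical. Emitting checkpoints at DEBUG and promoting only the essentials to INFO makes the later "reduce" step a one-line threshold change.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Start at DEBUG (over-log); raise the threshold to INFO once failure patterns are understood.
logging.basicConfig(level=logging.DEBUG, format="%(message)s")
logger = logging.getLogger("agent.checkpoints")


def log_checkpoint(run_id: str, step: str, payload: dict, level: int = logging.DEBUG) -> None:
    """Emit one structured checkpoint record for root-cause analysis."""
    record = {
        "run_id": run_id,
        "step": step,
        "ts": datetime.now(timezone.utc).isoformat(),
        **payload,
    }
    logger.log(level, json.dumps(record, default=str))


# Hypothetical agent run: every input, tool call, and intermediate step gets a checkpoint.
run_id = str(uuid.uuid4())
log_checkpoint(run_id, "user_input", {"input": "Summarize the latest filings"})
log_checkpoint(run_id, "tool_call", {"tool": "search_filings", "args": {"quarter": "Q2"}})
log_checkpoint(run_id, "tool_result", {"tool": "search_filings", "n_results": 12})
log_checkpoint(run_id, "final_output", {"output": "..."}, level=logging.INFO)
```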
Observability Must Include Human Patterns
- Observability still covers latency and reliability, but for non-deterministic outputs it must also include human-in-the-loop pattern detection.
- Use sampled human reviews to surface recurring failure modes, then scale monitoring by automating detection of those patterns (a sampling sketch follows).
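A sketch of sampled human review, under the assumption that a fixed fraction of outputs is queued for reviewers who tag failure modes; REVIEW_RATE, the queue shape, and the example detectors are all hypothetical.

```python
import random

REVIEW_RATE = 0.05  # assumed: sample ~5% of outputs; tune to reviewer capacity


def maybe_queue_for_review(run_id: str, output: str, review_queue: list) -> bool:
    """Randomly sample agent outputs for human review.

    Reviewers attach failure-mode labels; once a mode recurs often enough,
    it is promoted to an automated detector like the one below.
    """
    if random.random() < REVIEW_RATE:
        review_queue.append({"run_id": run_id, "output": output, "labels": []})
        return True
    return False


def pattern_detectors(output: str) -> list[str]:
    """Hypothetical automated checks distilled from human-labeled failure modes."""
    flags = []
    if "i cannot" in output.lower():
        flags.append("refusal")
    if len(output) < 20:
        flags.append("truncated_output")
    return flags
```

The point of the loop is that humans find the patterns and machines then watch for them, so monitoring scales without reviewing every non-deterministic output.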
