

AI governance: Building smarter AI agents from the fundamentals, part 4
Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.
Show notes:
• Agentic AI systems require governance at every step: perception, reasoning, action, and learning
• Error rates compound dramatically in multi-step processes: a model that is 90% accurate at each step is only about 66% accurate end to end over four steps (0.9^4 ≈ 0.656; see the worked example after this list)
• Two-way information flow creates new security and confidentiality vulnerabilities; for example, targeted prompting that improves an agent's risk awareness can come at the cost of task performance (arXiv, May 24, 2025)
• Traditional governance approaches are insufficient for the complexity of agentic systems
• Organizations must implement granular monitoring, logging, and validation for each component (a sketch of what per-step checks might look like follows this list)
• Human-in-the-loop oversight is not a substitute for robust governance frameworks
• The true cost of agentic systems includes governance overhead, monitoring tools, and human expertise
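A quick back-of-the-envelope check on the compounding-error claim above. This is a minimal sketch that assumes each step succeeds independently with the same probability, which is a simplification; real agent steps are often correlated.

```python
# Compounded accuracy of a multi-step agent pipeline, assuming each
# step succeeds independently with the same per-step probability.
per_step_accuracy = 0.90
steps = 4

end_to_end = per_step_accuracy ** steps
print(f"End-to-end accuracy over {steps} steps: {end_to_end:.1%}")  # ~65.6%
```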
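On the monitoring point above: one common pattern is to wrap every agent step in a logging-and-validation harness so failures surface where they occur instead of compounding downstream. This is a minimal Python sketch, not the hosts' implementation; the step and function names (run_step, perceive) are hypothetical placeholders.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent_pipeline")

@dataclass
class StepResult:
    name: str
    output: str
    passed_validation: bool

def run_step(name: str, fn: Callable, payload, validator: Callable) -> StepResult:
    """Run one agent step, log its input/output, and validate the result."""
    logger.info("step=%s input=%r", name, payload)
    output = fn(payload)
    passed = validator(output)
    logger.info("step=%s output=%r valid=%s", name, output, passed)
    if not passed:
        # Fail fast instead of letting the error propagate to later steps.
        raise ValueError(f"Validation failed at step '{name}'")
    return StepResult(name, output, passed)

# Hypothetical usage: each stage (perception, reasoning, action, learning)
# gets its own logged, validated call:
# result = run_step("perception", perceive, raw_input, lambda o: o != "")
```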
Make sure you check out Part 1: Mechanism design, Part 2: Utility functions, and Part 3: Linear programming. If you're building agentic AI systems, we'd love to hear your questions and experiences. Contact us.
What we're reading:
- We took a reading "break" this episode to celebrate Sid! This month, he successfully defended his Ph.D. thesis, "Psychological Health and Belief Measurement at Scale Through Language." Say congrats!
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - see past episodes and submit your feedback! Your input continues to inspire future episodes.