Is your organization stuck in "read-only" mode with AI agents? You're not alone. In this episode, Dev Rishi (GM of AI at Rubrik, formerly CEO of Predibase) joins Ashish and Caleb to dissect why enterprise AI adoption is stalling at the experimentation phase and how to move safely to production.
Dev reveals the three biggest fears holding IT leaders back: shadow agents, lack of real-time governance, and the inability to "undo" catastrophic mistakes. We dive deep into the concept of "Agent Rewind", a capability to roll back changes made by rogue AI agents (like deleting a production database), and why this remediation layer is critical for trust.
The conversation also explores the technical architecture needed for safe autonomous agents, including the debate between the MCP (Model Context Protocol) and A2A (Agent-to-Agent) standards. Dev explains why traditional anomaly detection fails for AI and proposes a new model of AI-driven policy enforcement using small language models (SLMs) as judges.
Questions asked:
(00:00) Introduction
(02:50) Who is Dev Rishi? From Predibase to Rubrik
(04:00) The Shift from Fine-Tuning to Foundation Models
(07:20) Enterprise AI Use Cases: Background Checks & Call Centers
(11:30) The 4 Phases of AI Adoption: Where are most companies?
(13:50) The 3 Biggest Fears of IT Leaders: Shadow Agents, Governance, & Undo
(18:20) "Agent Rewind": How to Undo a Rogue Agent's Actions
(23:00) Why Agents are Stuck in "Read-Only" Mode
(27:40) Why Anomaly Detection Fails for AI Security
(30:20) Using AI Judges (SLMs) for Real-Time Policy Enforcement
(34:30) LLM Firewalls vs. Bespoke Policy Enforcement
(44:00) Identity for Agents: Scoping Permissions & Tools
(46:20) MCP vs. A2A: Which Protocol Wins?
(48:40) Why A2A is Technically Superior but MCP Might Win