The Chief AI Officer Show

Front Lines
Dec 18, 2025 • 45min

ACC’s Dr. Ami Bhatt: AI Pilots Fail Without Implementation Planning

Dr. Ami Bhatt's team at the American College of Cardiology found that most FDA-approved cardiovascular AI tools sit unused within three years. The barrier isn't regulatory approval or technical accuracy. It's implementation infrastructure. Without deployment workflows, communication campaigns, and technical integration planning, even validated tools fail at scale. Bhatt distinguishes "collaborative intelligence" from "augmented intelligence" because collaboration acknowledges that physicians must co-design algorithms, determine deployment contexts, and iterate on outputs that won't be 100% correct. Augmentation falsely suggests AI works flawlessly out of the box, setting unrealistic expectations that kill adoption when tools underperform in production. Her risk stratification approach prioritizes low-risk patients with high population impact over complex diagnostics. Newly diagnosed hypertension patients (affecting 1 in 2 people, 60% undiagnosed) are clinically low-risk today but drive massive long-term costs if untreated. These populations deliver better ROI than edge cases but require moving from episodic hospital care to continuous monitoring infrastructure that most health systems lack.

Topics discussed:
- Risk stratification methodology prioritizing low-risk, high-impact patient populations
- Infrastructure gaps between FDA approval and scaled deployment
- Real-world evidence approaches for AI validation in lower-risk categories
- Synthetic data sets from cardiovascular registries for external company testing
- Administrative workflow automation through voice-to-text and prior authorization tools
- Apple Watch data integration protocols solving wearable ingestion problems
- Three-part startup evaluation: domain expertise, technical iteration capacity, implementation planning
- Real-time triage systems reordering diagnostic queues by urgency
Dec 4, 2025 • 40min

UserTesting's Michael Domanic: Hallucination Fears Mean You're Building Assistants, Not Thought Partners

UserTesting deployed 700+ custom GPTs across 800 employees, but Michael Domanic's core insight cuts against conventional wisdom: organizations fixated on hallucination risks are solving the wrong problem. That concern reveals they're building assistants for summarization when transformational value lives in using AI as a strategic thought partner. This reframe shifts evaluation criteria entirely. Michael connects today's moment to 2015's Facebook Messenger bot collapse, when Wit.ai integration promised conversational commerce that fell flat. The inversion matters: that cycle failed because NLP couldn't meet expectations shaped by decades of sci-fi. Today foundation models outpace organizational capacity to deploy responsibly, creating an obligation to guide employees through transformation rather than just chase efficiency. His vendor evaluation cuts through conference floor noise. When teams pitch solutions, the first question: can we build this with a custom GPT in 20 minutes? Most pitches are wrappers that don't justify $40K spend. For legitimate orchestration needs, security standards and low-code accessibility matter more than demos.

Topics discussed:
- Using AI as thought partner for strategic problem-solving versus summarization and content generation tasks
- Deploying custom GPTs at scale through OKR-building tools that demonstrated broad organizational application
- Treating conscientious objectors as essential partners in responsible deployment rather than adoption blockers
- Filtering vendor pitches by testing whether custom GPT builds deliver equivalent functionality first
- Prioritizing previously impossible work over operational efficiency when setting transformation strategy
- Building agent chains for customer churn signal monitoring while maintaining human decision authority
- Implementing security-first evaluation for enterprise orchestration platforms with low-code requirements
- Creating automated AI news digests using agent workflows and Notebook LM audio synthesis
Nov 6, 2025 • 43min

Extreme's Markus Nispel On Agent Governance: 3 Controls For Production Autonomy

Extreme Networks architected their AI platform around a fundamental tension: deploying non-deterministic generative models to manage deterministic network infrastructure where reliability is non-negotiable. Markus Nispel, CTO EMEA and Head of AI Engineering, details their evolution from 2018 AI ops implementations to production multi-agent systems that analyze event correlations impossible for human operators and automatically generate support tickets. Their ARC framework (Acceleration, Replacement, Creation) separates mandatory automation from competitive differentiation by isolating truly differentiating use cases in the "creation" category, where ROI discussions become simpler and competitive positioning strengthens. The governance architecture solves the trust problem for autonomous systems in production environments. Agents inherit user permissions with three-layer controls: deployment scope (infrastructure boundaries), action scope (operation restrictions), and autonomy level (human-in-loop requirements). Exposing the full reasoning and planning chain before execution creates audit trails while building operator confidence. Their organizational shift from centralized AI teams to an "AI mesh" structure pushes domain ownership to business units while maintaining unified data architecture, enabling agent systems that can leverage diverse data sources across operational, support, supply chain, and contract domains. 
Topics discussed:
- ARC framework categorizing use cases by Acceleration, Replacement, and Creation to focus resources on differentiation
- Three-dimension agent governance: deployment scope, action scope, and autonomy levels with inherited user permissions
- Exposing agent reasoning, planning, and execution chains for production transparency and audit requirements
- AI mesh organizational model distributing domain ownership while maintaining centralized data architecture
- Pre-production SME validation versus post-deployment behavioral analytics for accuracy measurement
- 90% reduction in time-to-knowledge through RAG systems accessing tens of thousands of documentation pages
- Build versus buy decisions anchored to competitive differentiation and willingness to rebuild every six months
- Strategic data architecture enabling cross-domain agent capabilities combining operational, support, and business data
- Agent interoperability protocols including MCP and A2A for cross-enterprise collaboration
- Production metrics tracking user rephrasing patterns, sentiment analysis, and intent understanding for accuracy
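The three-dimension governance pattern Nispel describes (deployment scope, action scope, autonomy level) can be sketched as a simple permission check. This is a hedged illustration, not Extreme's actual implementation: the field names, site identifiers, and actions below are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    # Three hypothetical control layers, mirroring the pattern described:
    deployment_scope: frozenset  # infrastructure boundaries the agent may touch
    action_scope: frozenset      # operations the agent may perform
    autonomy: str                # "autonomous" or "human_in_loop"

def check(policy: AgentPolicy, target: str, action: str) -> tuple[bool, bool]:
    """Return (allowed, needs_human_approval) for a proposed agent action."""
    allowed = target in policy.deployment_scope and action in policy.action_scope
    needs_human = allowed and policy.autonomy == "human_in_loop"
    return allowed, needs_human

policy = AgentPolicy(
    deployment_scope=frozenset({"site-eu-lab"}),
    action_scope=frozenset({"read_telemetry", "restart_ap"}),
    autonomy="human_in_loop",
)
print(check(policy, "site-eu-lab", "restart_ap"))   # (True, True)
print(check(policy, "site-us-core", "restart_ap"))  # (False, False)
```

Logging each (allowed, needs_human) decision alongside the agent's exposed reasoning chain would yield the kind of audit trail the episode describes.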
Oct 16, 2025 • 44min

Edge AI Foundation's Pete Bernard on an Edge-First Framework: Eliminate Cloud Tax Running AI On Site

Pete Bernard, CEO of Edge AI Foundation, breaks down why enterprises should default to running AI at the edge rather than the cloud, citing real deployments where QSR systems count parking lot cars to auto-trigger french fry production and medical implants autonomously adjust deep brain stimulation for Parkinson's patients. He shares contrarian views on IoT's past failures and how they shaped today's cloud-native approach to managing edge devices.

Topics discussed:
- Edge-first architectural decision framework: run AI where data is created to eliminate cloud costs (ingress, egress, connectivity, latency)
- Market growth projections reaching $80 billion annually by 2030 for edge AI deployments across industries
- Hardware constraints driving deployment decisions: fanless systems for dusty environments, intrinsically safe devices for hazardous locations
- Self-tuning deep brain stimulation implants measuring electrical signals and adjusting treatment autonomously, powered for decades without external intervention
- Why Bernard considers Amazon Alexa "the single worst thing to ever happen to IoT" for creating widespread skepticism
- Solar-powered edge cameras reducing pedestrian fatalities in San Jose and Colorado without infrastructure teardown
- Generative AI interpreting sensor fusion data, enabling natural language queries of hospital telemetry and industrial equipment health
Oct 2, 2025 • 38min

PATH's Bilal Mateen on the measurement problem stalling healthcare AI

PATH's Chief AI Officer Bilal Mateen reveals how a computer vision tool that digitizes lab documents cut processing time from 90 days to 1 day in Kenya, yet vendors keep pitching clinical decision support systems instead of these operational solutions that actually move the needle. After 30 years between FDA approval of breast cancer AI diagnostics and the first randomized controlled trial proving patient benefit, Mateen argues we've been measuring the wrong things: diagnostic accuracy instead of downstream health outcomes. His team's Kenya pilot with Penda Health demonstrated cash-releasing ROI through an LLM co-pilot that prevented inappropriate prescriptions, saving patients and insurers $50,000 in unnecessary antibiotics and steroids. What looks like lost revenue to the clinic represents system-wide healthcare savings.

Topics discussed:
- The 90-day to 1-day document digitization transformation in Kenya
- Research showing only 1 in 20 improved diagnostic tests benefit patients
- Cash-releasing versus non-cash-releasing efficiency gains framework
- The 30-year gap between FDA approval and proven patient outcomes
- Why digital infrastructure investment beats diagnostic AI development
- Hidden costs of scaling pilots across entire health systems
- How inappropriate prescription prevention creates system-wide savings
- Why operational AI beats clinical decision support in resource-constrained settings
Sep 26, 2025 • 39min

Dr. Lisa Palmer on "Resistance-to-ROI": Why business metrics break through organizational fear

Dr. Lisa Palmer brings a rare "jungle gym" career perspective to enterprise AI, having worked as a CIO, negotiated from inside Microsoft and Teradata, led Gartner's executive programs, and completed her doctorate in applied AI just six months after ChatGPT hit the market. In this conversation, she challenges the assumption that heavily resourced enterprises are best positioned for AI success, unpacks the MIT study showing 95% of AI projects fail to impact P&L, and explains what successful organizations do differently.

Key Topics Discussed:
- Why Heavily Resourced Organizations Are Actually Disadvantaged in AI: Large enterprises lack nimbleness; power companies now partner with 12+ startups, and two $500M-$1B companies are removing major SaaS providers using AI replacements.
- The "Show AI, Don't Tell It" Framework for Overcoming Resistance: Palmer built an interactive LLM-powered hologram for stadium executives instead of presentations, addressing seven resistance layers from board skepticism to frontline job fears. It got immediate funding.
- Breaking "Pilot Purgatory" Through Organizational Redesign: Pilots create a "false reality" with cross-functional collaboration absent in siloed organizations. The solution: replicate the pilot's collaborative structure organizationally, not just deploy the technology.
- The Four-Stage AI Performance Flywheel: Foundation (data readiness, break silos), Execution (visual dartboarding for co-ownership), Scale (redesign processes), Innovation (AI surfaces new use cases).
- Why You Need a Business Strategy Fueled by AI, Not an AI Strategy: MIT shows 95% failure stems from lacking business focus. Start with metrics (competitive advantage, cost reduction), not technology. Stakeholders often confuse AI types.
- The Coming Shift, Agentic Layers Replacing SaaS GUIs: Organizations are building agent layers above SaaS platforms. Vendors opening APIs survive; those protecting walled gardens lose decades-old accounts.
- Building Courageous Leadership for AI Transformation: The "Bold AI Leadership" framework calls for complete work redesign requiring personal career risk; Palmer is launching certifications. One insurance company reduced complaints 26% through a human-AI process rebuild.
Sep 19, 2025 • 37min

Virtuous’ Nathan Chappell on the CAIO shift: From technical oversight to organizational conscience

Nathan Chappell's first ML model in 2017 outperformed his organization's previous fundraising techniques by 5x—but that was just the beginning. As Virtuous's first Chief AI Officer, he's pioneering what he calls "responsible and beneficial" AI deployment, going beyond standard governance frameworks to address long-term mission alignment. His radical thesis: the CAIO role has evolved from technical oversight to serving as the organizational conscience in an era where AI touches every business process.

Topics Discussed:
- The Conscience Function of the CAIO Role: Nathan positions the CAIO as "the conscience of the organization" rather than technical oversight, given that "AI is among in and through everything within the organization"—a fundamental redefinition as AI becomes ubiquitous across all business processes
- "Responsible and Beneficial" AI Framework: Moving beyond standard responsible AI to include beneficial impact—where responsible covers privacy and ethics, but beneficial requires examining long-term consequences, particularly critical for organizations operating in the "currency of trust"
- Hiring Philosophy Shift: Moving from "subject matter experts that had like 15 years domain experience" to "scrappy curious generalists who know how to connect dots"—a complete reversal of traditional expertise-based hiring for the AI era
- The November 30, 2022 Best Practice Reset: Nathan's framework that "if you have a best practice that predates November 30th, 2022, then it's an outdated practice"—using ChatGPT's launch as the inflection point for rethinking organizational processes
- Strategic AI Deployment Pattern: Organizations succeeding through narrow, specific, and intentional AI implementation versus those failing with broad "we just need to use AI" approaches—includes practical frameworks for identifying appropriate AI applications
- Solving Aristotle's 2,300-Year Philanthropic Problem: Using machine learning to quantify connection and solve what Aristotle identified as the core challenge of philanthropy—determining "who to give it to, when, and what purpose, and what way"
- Failure Days as Organizational Learning Architecture: Monthly sessions where teams present failed experiments to incentivize risk-taking and cross-pollination—an operational framework for building curiosity culture in traditionally risk-averse nonprofit environments
- Information Doubling Acceleration Impact: Connecting Eglantyne Jebb's 1927 observation that "the world is not unimaginative or ungenerous, it's just very busy" to today's 12-hour information doubling cycle, with AI potentially reducing this to hours by 2027
Aug 26, 2025 • 42min

Zayo Group's David Sedlock on Building Gold Data Sets Before Chasing AI Hype

What happens when a Chief Data & AI Officer tells the board "I'm not going to talk about AI" on day two of the job? At Zayo Group, the largest independent connectivity company in the United States with around 145,000 route miles, it sparked a systematic approach that generated tens of millions in value while building enterprise AI foundations that actually scale. David Sedlock inherited a company with zero data strategy and a single monolithic application running the entire business. His counterintuitive move: explicitly refuse AI initiatives until data governance matured. The payoff came fast—his organization flipped from cost center to profit center within two months, delivering tens of millions in year one savings while constructing the platform architecture needed for production AI. The breakthrough insight: encoding all business logic in portable Python libraries rather than embedding it in vendor tools. This architectural decision lets Zayo pivot between AI platforms, agentic frameworks, and future technologies without rebuilding core intelligence, a critical advantage as the AI landscape evolves.

Topics Discussed:
- Implementing "AI Quick Strikes" methodology with controlled technical debt to prove ROI during platform construction: Sedlock ran a small team of three to four people focused on churn, revenue recognition, and service delivery while building foundational capabilities, accepting suboptimal data usage to generate tens of millions in savings within the first year.
- Architecting business logic portability through Python libraries to eliminate vendor lock-in: All business rules and logic are encoded in Python libraries rather than embedded in ETL tools, BI tools, or source systems, enabling seamless migration between AI vendors, agentic architectures, and future platforms without losing institutional intelligence.
- Engineering 1,149 critical data elements into 176 business-ready "gold data sets": Rather than attempting to govern millions of data elements, Zayo identified and perfected only the most critical ones used to run the business, combining them with business logic and rules to create reliable inputs for AI applications.
- Achieving an 83% confidence level for service delivery SLA predictions using text mining and machine learning: Combining structured data with crawling of open text fields, the model predicts at contract signing whether committed timeframes will be met, enabling proactive action on service delivery challenges ranked by confidence level.
- Democratizing data access through citizen data scientists while maintaining governance on certified data sets: Business users gain direct access to gold data sets through the data platform, enabling front-line innovation on clean, verified data while technical teams focus on deep, complex, cross-organizational opportunities.
- Compressing business requirements gathering from months to hours using generative AI frameworks: Recording business stakeholder conversations and processing them through agentic frameworks generates business cases, user stories, and test scripts in real time, condensing traditional PI planning cycles that typically involve hundreds of people over months.
- Scaling from idea to 500 users in 48 hours through data platform readiness: Network inventory management evolved from an Excel spreadsheet to a live dashboard updated every 10 minutes, demonstrating how proper foundational architecture enables rapid application development when business needs arise.
- Reframing AI workforce impact as capability multiplication rather than job replacement: A strategic approach of hiring 30-50 people to perform like 300-500 people, with humans expanding roles as agent managers while maintaining accountability for agent outcomes and providing business context feedback loops.
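Sedlock's "business logic as portable Python libraries" idea can be made concrete with a minimal sketch: a rule lives in plain Python with no vendor dependencies, so any ETL tool, BI layer, or agent framework that can call Python reuses it unchanged. The rule, function name, and thresholds below are invented for illustration and are not Zayo's actual logic.

```python
# A hedged sketch of vendor-neutral business logic: pure functions,
# standard library only, importable from any platform that runs Python.
# The churn rule and its thresholds are invented for illustration.

def churn_risk(months_tenure: int, open_tickets: int, auto_renews: bool) -> str:
    """Toy churn-risk rule kept outside any ETL/BI vendor tool."""
    score = open_tickets * 2 + (0 if auto_renews else 3)
    if months_tenure < 6:
        score += 2  # early-tenure accounts are assumed to churn more often
    return "high" if score >= 5 else "low"

# The same function can back a dashboard, a batch scoring job, or an agent tool:
print(churn_risk(months_tenure=3, open_tickets=2, auto_renews=False))  # high
print(churn_risk(months_tenure=24, open_tickets=0, auto_renews=True))  # low
```

Because the rule is a plain function rather than configuration inside a vendor tool, swapping the surrounding AI platform leaves the institutional logic untouched, which is the portability argument the episode makes.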
Aug 5, 2025 • 49min

Intelagen and Alpha Transform Holdings’ Nicholas Clarke on How Knowledge Graphs Are Your Real Competitive Moat

When foundation models commoditize AI capabilities, competitive advantage shifts to how systematically you encode organizational intelligence into your systems. Nicholas Clarke, Chief AI Officer at Intelagen and Alpha Transform Holdings, argues that enterprises rushing toward "AI first" mandates are missing the fundamental differentiator: knowledge graphs that embed unique operational constraints and strategic logic directly into model behavior. Clarke's approach moves beyond basic RAG implementations to comprehensive organizational modeling using domain ontologies. Rather than relying on prompt engineering that competitors can reverse-engineer, his methodology creates knowledge graphs that serve as proprietary context layers for model training, fine-tuning, and runtime decision-making—turning governance constraints into competitive moats. The core challenge? Most enterprises lack sufficient self-knowledge of their own differentiated value proposition to model it effectively, defaulting to PowerPoint strategies that can't be systematized into AI architectures.

Topics discussed:
- Building comprehensive organizational models using domain ontologies that create proprietary context layers competitors can't replicate through prompt copying.
- Embedding company-specific operational constraints across model selection, training, and runtime monitoring to ensure organizationally unique AI outputs rather than generic responses.
- Why enterprises operating strategy through PowerPoint lack the systematic self-knowledge required to build effective knowledge graphs for competitive differentiation.
- GraphOps methodology where domain experts collaborate with ontologists to encode tacit institutional knowledge into maintainable graph structures preserving operational expertise.
- A nano governance framework that decomposes AI controls into the smallest operationally implementable modules mapping to specific business processes with human accountability.
- Enterprise architecture integration using tools like Truu to create systematic traceability between strategic objectives and AI projects for governance oversight.
- Multi-agent accountability structures where every autonomous agent traces to named human owners, with monitoring agents creating systematic liability chains.
- Neuro-symbolic AI implementation combining symbolic reasoning systems with neural networks to create interpretable AI operating within defined business rules.
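Clarke's knowledge-graph-as-context-layer argument can be illustrated with a toy triple store. The entities, relations, and policy names below are hypothetical; a production system would use a real graph database and ontology tooling rather than a list of tuples.

```python
# Minimal sketch of a knowledge graph as (subject, predicate, object) triples.
# All entity and relation names are invented for illustration.
triples = [
    ("OrderService", "depends_on", "BillingDB"),
    ("BillingDB", "governed_by", "PCI-Policy"),
    ("OrderService", "owned_by", "Payments-Team"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Which governance constraints reach OrderService through its dependencies?
deps = [o for _, _, o in query("OrderService", "depends_on")]
policies = [o for d in deps for _, _, o in query(d, "governed_by")]
print(policies)  # ['PCI-Policy']
```

Injecting the result of such traversals as runtime context is one way organization-specific constraints can shape model outputs, which is the kind of proprietary context layer the episode describes, in contrast to prompt text a competitor could copy.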
Jun 4, 2025 • 48min

AutogenAI’s Sean Williams on How Philosophy Shaped an AI Proposal Writing Success

A philosophy student turned proposal writer turned AI entrepreneur, Sean Williams, Founder & CEO of AutogenAI, represents a rare breed in today's AI landscape: someone who combines deep theoretical understanding with pinpointed commercial focus. His approach to building AI solutions draws from Wittgenstein's 80-year-old insights about language games, proving that philosophical rigor can be the ultimate competitive advantage in AI commercialization. Sean's journey to founding a company that helps customers win millions in government contracts illustrates a crucial principle: the most successful AI applications solve specific, measurable problems rather than chasing the mirage of artificial general intelligence. By focusing exclusively on proposal writing — a domain with objective, binary outcomes — AutogenAI has created a scientific framework for evaluating AI effectiveness that most companies lack.

Topics discussed:
- Why Wittgenstein's "language games" theory explains LLM limitations and the fallacy of general language engines across different contexts and domains.
- The scientific approach to AI evaluation using binary success metrics, measuring 60 criteria per linguistic transformation against actual contract wins.
- How philosophical definitions of truth led to early adoption of retrieval augmented generation and human-in-the-loop systems before they became mainstream.
- The "Boris Johnson problem" of AI hallucination and building practical truth frameworks through source attribution rather than correspondence theory.
- Advanced linguistic engineering techniques that go beyond basic prompting to incorporate tacit knowledge and contextual reasoning automatically.
- Enterprise AI security requirements including FedRAMP compliance for defense customers and the strategic importance of on-premises deployment options.
- Go-to-market strategies that balance technical product development with user delight, stakeholder management, and objective value demonstration.
- Why the current AI landscape mirrors the Internet boom in 1996, with foundational companies being built in the "primordial soup" of emerging technology.
- The difference between AI as search engine replacement versus creative sparring partner, and why factual question-answering represents suboptimal LLM usage.
- How domain expertise combined with philosophical rigor creates sustainable competitive advantages against both generic AI solutions and traditional software incumbents.

Intro Quote: “We came up with a definition of truth, which was something is true if you can show where the source came from. So we came to retrieval augmented generation, we came to sourcing. If you looked at what people like Perplexity are doing, like putting sources in, we come to that and we come to it from a definition of truth. Something's true if you can show where the source comes from. And two is whether a human chooses to believe that source. So that took us then into deep notions of human in the loop.” 26:06-26:36
