The Chief AI Officer Show

Front Lines
Oct 16, 2025 • 44min

Edge AI Foundation's Pete Bernard on an Edge-First Framework: Eliminate the Cloud Tax by Running AI On Site

Pete Bernard, CEO of Edge AI Foundation, breaks down why enterprises should default to running AI at the edge rather than in the cloud, citing real deployments: quick-service restaurant (QSR) systems that count cars in the parking lot to auto-trigger french fry production, and medical implants that autonomously adjust deep brain stimulation for Parkinson's patients. He also shares contrarian views on IoT's past failures and how they shaped today's cloud-native approach to managing edge devices.

Topics discussed:
- Edge-first architectural decision framework: run AI where data is created to eliminate cloud costs (ingress, egress, connectivity, latency)
- Market growth projections reaching $80 billion annually by 2030 for edge AI deployments across industries
- Hardware constraints driving deployment decisions: fanless systems for dusty environments, intrinsically safe devices for hazardous locations
- Self-tuning deep brain stimulation implants measuring electrical signals and adjusting treatment autonomously, powered for decades without external intervention
- Why Bernard considers Amazon Alexa "the single worst thing to ever happen to IoT" for creating widespread skepticism
- Solar-powered edge cameras reducing pedestrian fatalities in San Jose and Colorado without infrastructure teardown
- Generative AI interpreting sensor fusion data, enabling natural language queries of hospital telemetry and industrial equipment health
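The edge-first framework above is a cost argument as much as an architectural one. As a very rough illustration (not a calculation from the episode; all rates, names, and thresholds below are made-up placeholders), the default-to-edge decision can be sketched like this:

```python
# Back-of-envelope sketch of an "edge-first" default: estimate recurring cloud
# costs (egress plus connectivity) against on-site inference, and keep the
# workload at the edge unless the cloud clearly wins. All numbers are
# placeholder assumptions, not figures from the episode.
from dataclasses import dataclass

@dataclass
class Workload:
    gb_generated_per_day: float
    egress_cost_per_gb: float = 0.09          # assumed cloud egress rate
    connectivity_cost_per_month: float = 300.0
    latency_sensitive: bool = True

def monthly_cloud_cost(w: Workload) -> float:
    return w.gb_generated_per_day * 30 * w.egress_cost_per_gb + w.connectivity_cost_per_month

def recommend_placement(w: Workload, edge_hw_monthly: float) -> str:
    """Default to the edge when data is created on site; consider the cloud only
    when it is cheaper and latency doesn't matter."""
    if w.latency_sensitive:
        return "edge"
    return "edge" if monthly_cloud_cost(w) >= edge_hw_monthly else "cloud"

cameras = Workload(gb_generated_per_day=120)
print(recommend_placement(cameras, edge_hw_monthly=250.0))
```

The point of the sketch is the default: latency-sensitive workloads never leave the site, and everything else has to beat the recurring cloud tax to justify moving.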
Oct 2, 2025 • 38min

PATH's Bilal Mateen on the measurement problem stalling healthcare AI

PATH's Chief AI Officer Bilal Mateen reveals how a computer vision tool that digitizes lab documents cut processing time from 90 days to 1 day in Kenya, yet vendors keep pitching clinical decision support systems instead of the operational solutions that actually move the needle. With 30 years between FDA approval of breast cancer AI diagnostics and the first randomized controlled trial proving patient benefit, Mateen argues we've been measuring the wrong things: diagnostic accuracy instead of downstream health outcomes. His team's Kenya pilot with Penda Health demonstrated cash-releasing ROI through an LLM co-pilot that prevented inappropriate prescriptions, saving patients and insurers $50,000 in unnecessary antibiotics and steroids. What looks like lost revenue to the clinic represents system-wide healthcare savings.

Topics discussed:
- The 90-day to 1-day document digitization transformation in Kenya
- Research showing only 1 in 20 improved diagnostic tests benefit patients
- Cash-releasing versus non-cash-releasing efficiency gains framework
- The 30-year gap between FDA approval and proven patient outcomes
- Why digital infrastructure investment beats diagnostic AI development
- Hidden costs of scaling pilots across entire health systems
- How inappropriate prescription prevention creates system-wide savings
- Why operational AI beats clinical decision support in resource-constrained settings
Sep 26, 2025 • 39min

Dr. Lisa Palmer on "Resistance-to-ROI": Why business metrics break through organizational fear

Dr. Lisa Palmer brings a rare "jungle gym" career perspective to enterprise AI, having worked as a CIO, negotiated from inside Microsoft and Teradata, led Gartner's executive programs, and completed her doctorate in applied AI just six months after ChatGPT hit the market. In this conversation, she challenges the assumption that heavily resourced enterprises are best positioned for AI success, unpacks the MIT study showing 95% of AI projects fail to impact P&L, and explains what successful organizations do differently.

Key Topics Discussed:
- Why Heavily Resourced Organizations Are Actually Disadvantaged in AI: Large enterprises lack nimbleness; power companies now partner with 12+ startups. Two $500M-$1B companies are removing major SaaS providers using AI replacements.
- The "Show AI, Don't Tell It" Framework for Overcoming Resistance: Built an interactive LLM-powered hologram for stadium executives instead of presentations. Addresses seven resistance layers, from board skepticism to frontline job fears. Got immediate funding.
- Breaking "Pilot Purgatory" Through Organizational Redesign: Pilots create a "false reality" with cross-functional collaboration absent in siloed organizations. Solution: replicate the pilot's collaborative structure organizationally, not just deploy technology.
- The Four-Stage AI Performance Flywheel: Foundation (data readiness, break silos), Execution (visual dartboarding for co-ownership), Scale (redesign processes), Innovation (AI surfaces new use cases).
- Why You Need a Business Strategy Fueled by AI, Not an AI Strategy: MIT shows 95% failure from lacking business focus. Start with metrics (competitive advantage, cost reduction), not technology. Stakeholders confuse AI types.
- The Coming Shift from SaaS GUIs to Agentic Layers: Organizations are building agent layers above SaaS platforms. Vendors opening APIs survive; those protecting walled gardens lose decades-old accounts.
- Building Courageous Leadership for AI Transformation: "Bold AI Leadership" framework: complete work redesign requiring personal career risk. Launching certifications. An insurance company reduced complaints 26% through a human-AI process rebuild.
Sep 19, 2025 • 37min

Virtuous’ Nathan Chappell on the CAIO shift: From technical oversight to organizational conscience

Nathan Chappell's first ML model in 2017 outperformed his organization's previous fundraising techniques by 5x, but that was just the beginning. As Virtuous's first Chief AI Officer, he's pioneering what he calls "responsible and beneficial" AI deployment, going beyond standard governance frameworks to address long-term mission alignment. His radical thesis: the CAIO role has evolved from technical oversight to serving as the organizational conscience in an era where AI touches every business process.

Topics Discussed:
- The Conscience Function of the CAIO Role: Nathan positions the CAIO as "the conscience of the organization" rather than technical oversight, given that "AI is among, in, and through everything within the organization," a fundamental redefinition as AI becomes ubiquitous across all business processes.
- "Responsible and Beneficial" AI Framework: Moving beyond standard responsible AI to include beneficial impact, where responsible covers privacy and ethics but beneficial requires examining long-term consequences, particularly critical for organizations operating in the "currency of trust."
- Hiring Philosophy Shift: Moving from "subject matter experts that had like 15 years domain experience" to "scrappy curious generalists who know how to connect dots," a complete reversal of traditional expertise-based hiring for the AI era.
- The November 30, 2022 Best Practice Reset: Nathan's framework that "if you have a best practice that predates November 30th, 2022, then it's an outdated practice," using ChatGPT's launch as the inflection point for rethinking organizational processes.
- Strategic AI Deployment Pattern: Organizations succeeding through narrow, specific, and intentional AI implementation versus those failing with broad "we just need to use AI" approaches, including practical frameworks for identifying appropriate AI applications.
- Solving Aristotle's 2,300-Year Philanthropic Problem: Using machine learning to quantify connection and solve what Aristotle identified as the core challenge of philanthropy: determining "who to give it to, when, and what purpose, and what way."
- Failure Days as Organizational Learning Architecture: Monthly sessions where teams present failed experiments to incentivize risk-taking and cross-pollination, an operational framework for building a curiosity culture in traditionally risk-averse nonprofit environments.
- Information Doubling Acceleration Impact: Connecting Eglantyne Jebb's 1927 observation that "the world is not unimaginative or ungenerous, it's just very busy" to today's 12-hour information doubling cycle, with AI potentially reducing this to hours by 2027.
Aug 26, 2025 • 42min

Zayo Group's David Sedlock on Building Gold Data Sets Before Chasing AI Hype

What happens when a Chief Data & AI Officer tells the board "I'm not going to talk about AI" on day two of the job? At Zayo Group, the largest independent connectivity company in the United States with around 145,000 route miles, it sparked a systematic approach that generated tens of millions in value while building enterprise AI foundations that actually scale. David Sedlock inherited a company with zero data strategy and a single monolithic application running the entire business. His counterintuitive move: explicitly refuse AI initiatives until data governance matured. The payoff came fast: his organization flipped from cost center to profit center within two months, delivering tens of millions in year-one savings while constructing the platform architecture needed for production AI. The breakthrough insight: encoding all business logic in portable Python libraries rather than embedding it in vendor tools. This architectural decision lets Zayo pivot between AI platforms, agentic frameworks, and future technologies without rebuilding core intelligence, a critical advantage as the AI landscape evolves.

Topics Discussed:
- Implementing an "AI Quick Strikes" methodology with controlled technical debt to prove ROI during platform construction: Sedlock ran a small team of three to four people focused on churn, revenue recognition, and service delivery while building foundational capabilities, accepting suboptimal data usage to generate tens of millions in savings within the first year.
- Architecting business logic portability through Python libraries to eliminate vendor lock-in: All business rules and logic are encoded in Python libraries rather than embedded in ETL tools, BI tools, or source systems, enabling seamless migration between AI vendors, agentic architectures, and future platforms without losing institutional intelligence (see the sketch after this list).
- Engineering 1,149 critical data elements into 176 business-ready "gold data sets": Rather than attempting to govern millions of data elements, Zayo identified and perfected only the most critical ones used to run the business, combining them with business logic and rules to create reliable inputs for AI applications.
- Achieving an 83% confidence level for service delivery SLA predictions using text mining and machine learning: Combining structured data with crawling of open text fields, the model predicts at contract signing whether committed timeframes will be met, enabling proactive action on service delivery challenges ranked by confidence level.
- Democratizing data access through citizen data scientists while maintaining governance on certified data sets: Business users gain direct access to gold data sets through the data platform, enabling front-line innovation on clean, verified data while technical teams focus on deep, complex, cross-organizational opportunities.
- Compressing business requirements gathering from months to hours using generative AI frameworks: Recording business stakeholder conversations and processing them through agentic frameworks generates business cases, user stories, and test scripts in real time, condensing traditional PI planning cycles that typically involve hundreds of people over months.
- Scaling from idea to 500 users in 48 hours through data platform readiness: Network inventory management evolved from an Excel spreadsheet to a live dashboard updated every 10 minutes, demonstrating how proper foundational architecture enables rapid application development when business needs arise.
- Reframing AI workforce impact as capability multiplication rather than job replacement: A strategic approach of hiring 30-50 people to perform like 300-500 people, with humans expanding their roles as agent managers while maintaining accountability for agent outcomes and providing business context feedback loops.

Listen to more episodes: Apple | Spotify | YouTube
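The portability idea Sedlock describes, business rules living in one importable Python package rather than inside any ETL, BI, or agent tool, can be sketched roughly as below. Everything here (ServiceOrder, sla_risk_score, the rule thresholds) is an illustrative stand-in, not Zayo's actual library or model:

```python
# Illustrative sketch only: business logic lives in one importable package,
# so swapping the surrounding ETL/BI/agent tooling never means re-encoding rules.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ServiceOrder:
    signed_on: date
    committed_days: int          # SLA window promised at contract signing
    site_survey_done: bool
    permits_pending: int

def promised_delivery_date(order: ServiceOrder) -> date:
    """Single source of truth for how the SLA date is computed."""
    return order.signed_on + timedelta(days=order.committed_days)

def sla_risk_score(order: ServiceOrder) -> float:
    """Toy risk heuristic standing in for the ML model described in the episode."""
    score = 0.2
    if not order.site_survey_done:
        score += 0.4
    score += min(order.permits_pending * 0.15, 0.4)
    return min(score, 1.0)

# Any consumer -- an ETL job, a dashboard, or an agent -- imports the same rules:
if __name__ == "__main__":
    order = ServiceOrder(date(2025, 1, 6), committed_days=90,
                         site_survey_done=False, permits_pending=2)
    print(promised_delivery_date(order), f"risk={sla_risk_score(order):.2f}")
```

The design point is that the rules travel with the package, not with whichever vendor platform happens to call them this year.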
Aug 5, 2025 • 49min

Intelagen and Alpha Transform Holdings’ Nicholas Clarke on How Knowledge Graphs Are Your Real Competitive Moat

When foundation models commoditize AI capabilities, competitive advantage shifts to how systematically you encode organizational intelligence into your systems. Nicholas Clarke, Chief AI Officer at Intelagen and Alpha Transform Holdings, argues that enterprises rushing toward "AI first" mandates are missing the fundamental differentiator: knowledge graphs that embed unique operational constraints and strategic logic directly into model behavior.

Clarke's approach moves beyond basic RAG implementations to comprehensive organizational modeling using domain ontologies. Rather than relying on prompt engineering that competitors can reverse-engineer, his methodology creates knowledge graphs that serve as proprietary context layers for model training, fine-tuning, and runtime decision-making, turning governance constraints into competitive moats. The core challenge? Most enterprises lack sufficient self-knowledge of their own differentiated value proposition to model it effectively, defaulting to PowerPoint strategies that can't be systematized into AI architectures.

Topics discussed:
- Building comprehensive organizational models using domain ontologies that create proprietary context layers competitors can't replicate through prompt copying (a rough sketch follows below).
- Embedding company-specific operational constraints across model selection, training, and runtime monitoring to ensure organizationally unique AI outputs rather than generic responses.
- Why enterprises operating strategy through PowerPoint lack the systematic self-knowledge required to build effective knowledge graphs for competitive differentiation.
- A GraphOps methodology where domain experts collaborate with ontologists to encode tacit institutional knowledge into maintainable graph structures that preserve operational expertise.
- A nano-governance framework that decomposes AI controls into the smallest operationally implementable modules, mapping to specific business processes with human accountability.
- Enterprise architecture integration using tools like Truu to create systematic traceability between strategic objectives and AI projects for governance oversight.
- Multi-agent accountability structures where every autonomous agent traces to named human owners, with monitoring agents creating systematic liability chains.
- Neuro-symbolic AI implementation combining symbolic reasoning systems with neural networks to create interpretable AI operating within defined business rules.

Listen to more episodes: Apple | Spotify | YouTube
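As a minimal illustration of the context-layer idea, and assuming nothing about Intelagen's actual tooling, organizational constraints can live in a small graph that is queried and rendered into grounding text at runtime instead of being hard-coded into prompts. All facts and names below are hypothetical:

```python
# Minimal sketch: a tiny in-memory "knowledge graph" of organizational
# constraints used as a context layer for an AI call. Purely illustrative;
# a real deployment would use an ontology store, not a list of tuples.
TRIPLES = [
    ("pricing_quote", "requires_approval_by", "regional_finance_lead"),
    ("pricing_quote", "max_discount_pct", "15"),
    ("pricing_quote", "governed_by", "rev_rec_policy_2024"),
    ("rev_rec_policy_2024", "owned_by", "jane_doe"),   # named human owner
]

def constraints_for(process: str) -> list[tuple[str, str]]:
    """Return (predicate, object) pairs that apply to a business process."""
    return [(p, o) for s, p, o in TRIPLES if s == process]

def context_block(process: str) -> str:
    """Render graph facts as grounding text prepended to a model request."""
    lines = [f"- {pred.replace('_', ' ')}: {obj}" for pred, obj in constraints_for(process)]
    return f"Constraints for {process}:\n" + "\n".join(lines)

print(context_block("pricing_quote"))
```

In practice this layer would also feed training, fine-tuning, and runtime monitoring; the moat Clarke describes is the graph itself, not the prompt text wrapped around it.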
Jun 4, 2025 • 48min

AutogenAI’s Sean Williams on How Philosophy Shaped an AI Proposal Writing Success

A philosophy student turned proposal writer turned AI entrepreneur, Sean Williams, Founder & CEO of AutogenAI, represents a rare breed in today's AI landscape: someone who combines deep theoretical understanding with pinpoint commercial focus. His approach to building AI solutions draws from Wittgenstein's 80-year-old insights about language games, proving that philosophical rigor can be the ultimate competitive advantage in AI commercialization.

Sean's journey to founding a company that helps customers win millions in government contracts illustrates a crucial principle: the most successful AI applications solve specific, measurable problems rather than chasing the mirage of artificial general intelligence. By focusing exclusively on proposal writing, a domain with objective, binary outcomes, AutogenAI has created a scientific framework for evaluating AI effectiveness that most companies lack.

Topics discussed:
- Why Wittgenstein's "language games" theory explains LLM limitations and the fallacy of general language engines across different contexts and domains.
- The scientific approach to AI evaluation using binary success metrics, measuring 60 criteria per linguistic transformation against actual contract wins.
- How philosophical definitions of truth led to early adoption of retrieval augmented generation and human-in-the-loop systems before they became mainstream (a rough sketch of this pattern follows the quote below).
- The "Boris Johnson problem" of AI hallucination and building practical truth frameworks through source attribution rather than correspondence theory.
- Advanced linguistic engineering techniques that go beyond basic prompting to incorporate tacit knowledge and contextual reasoning automatically.
- Enterprise AI security requirements, including FedRAMP compliance for defense customers and the strategic importance of on-premises deployment options.
- Go-to-market strategies that balance technical product development with user delight, stakeholder management, and objective value demonstration.
- Why the current AI landscape mirrors the Internet boom of 1996, with foundational companies being built in the "primordial soup" of emerging technology.
- The difference between AI as a search engine replacement versus a creative sparring partner, and why factual question-answering represents suboptimal LLM usage.
- How domain expertise combined with philosophical rigor creates sustainable competitive advantages against both generic AI solutions and traditional software incumbents.

Listen to more episodes: Apple | Spotify | YouTube

Intro Quote: “We came up with a definition of truth, which was something is true if you can show where the source came from. So we came to retrieval augmented generation, we came to sourcing. If you looked at what people like Perplexity are doing, like putting sources in, we come to that and we come to it from a definition of truth. Something's true if you can show where the source comes from. And two is whether a human chooses to believe that source. So that took us then into deep notions of human in the loop.” 26:06-26:36
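Williams' working definition of truth (a claim is true if you can show its source, and a human chooses to believe that source) maps naturally onto a retrieve-attribute-review loop. Below is a minimal sketch with a hypothetical corpus and naive keyword retrieval standing in for a real retriever; it is not AutogenAI's implementation:

```python
# Rough sketch of "true if you can show the source, and a human chooses to
# believe it": every drafted claim carries its source passage, and nothing
# ships until a reviewer accepts it. Hypothetical names and data throughout.
from dataclasses import dataclass

@dataclass
class SourcedClaim:
    text: str
    source_id: str
    accepted: bool = False

CORPUS = {
    "bid-2023-14": "The incumbent delivered 98.2% on-time performance in 2023.",
    "bid-2023-07": "Contract value is capped at 4.5m GBP over three years.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval standing in for a real retriever."""
    terms = set(query.lower().split())
    return [doc_id for doc_id, text in CORPUS.items()
            if terms & set(text.lower().split())]

def draft_claims(query: str) -> list[SourcedClaim]:
    """Only emit claims that can point back to a retrieved source."""
    return [SourcedClaim(text=CORPUS[doc_id], source_id=doc_id)
            for doc_id in retrieve(query)]

def human_review(claims: list[SourcedClaim], approved_ids: set[str]) -> list[SourcedClaim]:
    """Human-in-the-loop: a claim counts as 'true' once a reviewer believes its source."""
    for claim in claims:
        claim.accepted = claim.source_id in approved_ids
    return [c for c in claims if c.accepted]

print(human_review(draft_claims("on-time performance"), {"bid-2023-14"}))
```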
May 20, 2025 • 35min

Doubleword's Meryem Arik on Why AI Success Starts With Deployment, Not Demos

From theoretical physics to transforming enterprise AI deployment, Meryem Arik, CEO & Co-founder of Doubleword, shares why most companies are overthinking their AI infrastructure and how adoption can be smoothed by prioritizing deployment flexibility over model sophistication. She also explains why most companies don't need expensive GPUs for LLM deployment and how focusing on business outcomes leads to faster value creation.

The conversation explores everything from navigating regulatory constraints in different regions to building effective go-to-market strategies for AI infrastructure, offering a comprehensive look at both the technical and organizational challenges of enterprise AI adoption.

Topics discussed:
- Why many enterprises don't need expensive GPUs like H100s for effective LLM deployment, dispelling common misconceptions about hardware requirements.
- How regulatory constraints in different regions create unique challenges for AI adoption.
- The transformation of AI buying processes from product-led to consultative sales, reflecting the complexity of enterprise deployment.
- Why document processing and knowledge management will create more immediate business value than autonomous agents.
- The critical role of change management in AI adoption and why technological capability often outpaces organizational readiness.
- The shift from early experimentation to value-focused implementation across different industries and sectors.
- How to navigate organizational and regulatory bottlenecks that often pose bigger challenges than technical limitations.
- The evolution of AI infrastructure as a product category and its implications for future enterprise buying behavior.
- Managing the balance between model performance and deployment flexibility in enterprise environments.

Listen to more episodes: Apple | Spotify | YouTube

Intro Quote: “We're going to get to a point — and I don't actually, I think it will take longer than we think, so maybe, three to five years — where people will know that this is a product category that they need and it will look a lot more like, 'I'm buying a CRM,' as opposed to, 'I'm trying to unlock entirely new functionalities for my organization,' as it is at the moment. So that's the way that I think it'll evolve. I actually kind of hope it evolves in that way. I think it'd be good for the industry as a whole for there to be better understanding of what the various categories are and what problems people are actually solving.” 31:02-31:39
May 6, 2025 • 43min

Gentrace’s Doug Safreno on Escaping POC Purgatory with Collaborative AI Evaluation

The reliability gap between AI models and production-ready applications is where countless enterprise initiatives die in POC purgatory. In this episode, Doug Safreno, Co-founder & CEO of Gentrace, describes the testing infrastructure that helped customers escape the Whac-A-Mole cycle plaguing AI development. Having experienced this firsthand when building an email assistant with GPT-3 in late 2022, Doug explains why traditional evaluation methods fail with generative AI, where outputs can be wrong in countless ways beyond simple classification errors. With Gentrace positioned as a "collaborative LLM testing environment" rather than just a visualization layer, Doug shares how they've transformed companies from isolated engineering testing to cross-functional evaluation that increased velocity 40x and enabled successful production launches. His insights from running monthly dinners with bleeding-edge AI engineers reveal how the industry conversation has evolved from basic product questions to sophisticated technical challenges with retrieval and agentic workflows.

Topics discussed:
- Why asking LLMs to grade their own outputs creates circular testing failures, and how giving evaluator models access to reference data or expected outcomes the generating model never saw leads to meaningful quality assessment (a rough sketch follows this list).
- How Gentrace's platform enables subject matter experts, product managers, and educators to contribute to evaluation without coding, increasing test velocity by 40x.
- Why aiming for 100% accuracy is often a red flag, and how to determine the right threshold based on recoverability of errors, stakes of the application, and business model considerations.
- Testing strategies for multi-step processes where the final output might be an edit to a document rather than text, requiring inspection of entire traces and intermediate decision points.
- How engineering discussions have shifted from basic form factor questions (chatbot vs. autocomplete) to specific technical challenges in implementing retrieval with LLMs and agentic workflows.
- How converting user feedback on problematic outputs into automated test criteria creates continuous improvement loops without requiring engineering resources.
- Using monthly dinners with 10-20 bleeding-edge AI engineers and broader events with 100+ attendees to create learning communities that generate leads while solving real problems.
- Why 2024 was about getting basic evaluation in place, while 2025 will expose the limitations of simplistic frameworks that don't use "unfair advantages" or collaborative approaches.
- How to frame AI reliability differently from traditional software while still providing governance, transparency, and trust across organizations.
- Signs a company is ready for advanced evaluation infrastructure: when it's playing Whac-A-Mole with fixes, when product managers easily break AI systems despite engineering evals, and when a lack of organizational trust is blocking deployment.
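The non-circular evaluation pattern Doug describes, where the grader sees reference data the generating model never saw and user corrections become permanent test cases, can be sketched roughly as follows. The overlap metric, threshold, and names are illustrative assumptions, not Gentrace's API:

```python
# Illustrative sketch of the evaluation pattern discussed: the evaluator scores
# model output against reference data the generating model never saw, and user
# feedback becomes new regression tests. Not Gentrace's actual API.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    prompt: str
    reference: str               # held-out expected answer, hidden from the generator

@dataclass
class EvalReport:
    passed: int = 0
    failed: int = 0
    failures: list[str] = field(default_factory=list)

def score(output: str, reference: str) -> float:
    """Toy word-overlap metric; real graders get richer access to the reference."""
    ref, out = set(reference.lower().split()), set(output.lower().split())
    return len(ref & out) / max(len(ref), 1)

def run_suite(generate, cases: list[TestCase], threshold: float = 0.8) -> EvalReport:
    """Pass/fail per case; a 100% target is usually the wrong goal."""
    report = EvalReport()
    for case in cases:
        if score(generate(case.prompt), case.reference) >= threshold:
            report.passed += 1
        else:
            report.failed += 1
            report.failures.append(case.prompt)
    return report

def case_from_feedback(prompt: str, corrected_answer: str) -> TestCase:
    """Convert a user's correction of a bad output into a permanent test case."""
    return TestCase(prompt=prompt, reference=corrected_answer)

if __name__ == "__main__":
    cases = [TestCase("summarize the churn report", "churn fell 3% quarter over quarter")]
    print(run_suite(lambda p: "churn fell 3% quarter over quarter", cases))
```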
Apr 10, 2025 • 42min

Eloquent AI’s Tugce Bulut on Probabilistic Architecture for Deterministic Business Outcomes

Tugce Bulut, Co-founder and CEO of Eloquent AI, dives into the world of probabilistic architecture aimed at achieving deterministic business outcomes. She discusses the methods her team employs, such as using up to 11 specialized agents for real-time responses and teaching AI when to admit uncertainty. Tugce shares insights on optimizing AI for customer service, addressing the importance of regulations, and transforming knowledge structures for efficiency. Get ready to explore the cutting edge of conversational AI!
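The "teach AI when to admit uncertainty" idea can be pictured as a confidence gate in front of specialized agents: if no agent clears a threshold, the system declines to answer rather than guessing. The agents and threshold below are hypothetical, not Eloquent AI's architecture:

```python
# Tiny sketch of an uncertainty gate over specialized agents. If the best
# available confidence is too low, admit uncertainty instead of guessing.
# Agents, scores, and the threshold are made-up placeholders.
from typing import Callable

Agent = Callable[[str], tuple[str, float]]   # returns (answer, confidence in 0..1)

def billing_agent(q: str) -> tuple[str, float]:
    return ("Refunds post within 5 business days.", 0.9 if "refund" in q.lower() else 0.2)

def kyc_agent(q: str) -> tuple[str, float]:
    return ("Upload a government ID to verify your account.", 0.85 if "verify" in q.lower() else 0.1)

def answer(question: str, agents: list[Agent], min_confidence: float = 0.75) -> str:
    best_answer, best_conf = max((agent(question) for agent in agents), key=lambda r: r[1])
    if best_conf < min_confidence:
        # Admitting uncertainty is itself a deterministic, designed outcome.
        return "I'm not certain about this one; routing you to a human specialist."
    return best_answer

print(answer("How long does a refund take?", [billing_agent, kyc_agent]))
print(answer("Can I change my plan retroactively?", [billing_agent, kyc_agent]))
```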
