AI Engineering Podcast

Tobias Macey
5 snips
Dec 29, 2025 • 54min

Beyond the Chatbot: Practical Frameworks for Agentic Capabilities in SaaS

Preeti Shukla, a seasoned product and engineering leader with a focus on generative AI and SaaS, dives into the operational challenges of integrating agentic capabilities. She discusses crucial factors like latency, cost control, and data privacy in multi-tenant environments. Preeti emphasizes the importance of starting with internal pilots and outlines frameworks for choosing models and deployment strategies. She also tackles the complexities of evaluation and monitoring in AI systems, offering valuable insights on avoiding confident hallucinations and ensuring reliability.
28 snips
Dec 16, 2025 • 1h 8min

MCP as the API for AI‑Native Systems: Security, Orchestration, and Scale

Craig McLuckie, co-creator of Kubernetes and CEO of Stacklok, dives into the pivotal role of the Model Context Protocol (MCP) as the API layer for AI-native applications. He discusses the importance of securing AI agents through optimized MCP deployments and highlights common adoption pitfalls like tool pollution and security risks. Craig also stresses the need for continuous evaluations in stochastic systems and shares insights on ToolHive's innovative approach to orchestration and semantic search for better developer experiences.
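For a rough sense of the "MCP as the API layer" idea discussed in this episode, here is a minimal tool-server sketch using the official MCP Python SDK's FastMCP helper; the server name and the stubbed tool are hypothetical examples, not anything from the conversation.

```python
# Minimal MCP tool server sketch (official Python SDK, FastMCP helper).
# The server name and the example tool are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # hypothetical server name

@mcp.tool()
def check_stock(sku: str) -> int:
    """Return the on-hand quantity for a SKU (stubbed for illustration)."""
    return {"ABC-123": 42}.get(sku, 0)

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-aware client or agent can call it.
    mcp.run()
```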
45 snips
Nov 24, 2025 • 60min

Context as Code, DevX as Leverage: Accelerating Software with Multi‑Agent Workflows

Max Beauchemin, a data engineering veteran and creator of Apache Airflow and Superset, discusses his shift to multi-agent development with Agor. He explores the concept of an 'AI-first reflex,' where humans orchestrate tasks while agents accelerate workflows. Max highlights how bottlenecks that shift downstream, such as code review, can be addressed through improved developer experiences and 'context as code.' He also describes Agor, a platform for managing git worktrees and collaborative environments, which enables richer visibility and parallelization in software engineering.
Nov 16, 2025 • 1h 1min

Inside the Black Box: Neuron-Level Control and Safer LLMs

Vinay Kumar, Founder and CEO of Arya.ai and head of Lexsi Labs, dives into the nuances of AI interpretability and alignment. He contrasts interpretability with explainability, highlighting the evolution of these concepts into tools for model alignment. Vinay shares insights on leveraging neuron-level editing for safer LLMs and discusses practical techniques like pruning and unlearning. He emphasizes the need for concrete metrics in alignment and explores the future role of AI agents in enhancing model safety, aiming for advanced AI that is both effective and responsible.
15 snips
Nov 10, 2025 • 1h 7min

Building the Internet of Agents: Identity, Observability, and Open Protocols

Guillaume de Saint Marc, VP of Engineering at Cisco Outshift, dives into the exciting realm of multi-agent systems. He contrasts rigid workflows with dynamic, self-forming agents that enhance trust in enterprise settings. The discussion touches on the Internet of Agents and the importance of open protocols like A2A and MCP for collaboration. Guillaume highlights the challenges of identity and observability, sharing successes in IT operations. He also introduces Slim, a next-gen communication layer tailored for efficient agent collaboration.
16 snips
Nov 2, 2025 • 59min

Agents, IDEs, and the Blast Radius: Practical AI for Software Engineers

In this discussion, Will Vincent, a Python developer advocate at JetBrains, dives into the evolution of software engineering alongside AI. He contrasts 'vibe coding' with a more structured 'vibe engineering,' highlighting the importance of collaboration between developers and AI. Will shares practical strategies for utilizing AI tools effectively within IDEs, discusses the role of human oversight in architectural decisions, and addresses the challenges of context loss in code reviews. He emphasizes experimentation and ethical considerations in AI implementation.
9 snips
Oct 27, 2025 • 49min

From MRI to World Models: How AI Is Changing What We See

Daniel Sodickson, Chief of Innovation in Radiology at NYU Grossman School of Medicine, shares his expertise in AI and medical imaging. He traces MRI's evolution from linear reconstruction methods to deep learning, emphasizing the distinction between upstream AI that shapes how measurements are acquired and downstream AI that interprets the resulting images. Their discussion includes the challenges of cross-disciplinary knowledge, ethical implications of decoding brain activity, and innovative concepts like 'imaging without images.' Daniel highlights the necessity of human oversight as AI transforms healthcare and visual understanding.
51 snips
Oct 19, 2025 • 1h 6min

Specs, Tests, and Self‑Verification: The Playbook for Agentic Engineering Teams

Andrew Filev, CEO and founder of Zencoder, shares his expertise on architecting AI-first engineering workflows. He discusses the evolution from simple autocomplete to truly agentic models and emphasizes the importance of context engineering and verification. Filev details Zencoder's internal playbook, covering human-in-the-loop strategies and test-driven development. He also explores the balance between human control and model autonomy, predicts self-verification trends, and shares lessons on navigating the challenges of building modern coding systems.
50 snips
Oct 11, 2025 • 1h 12min

From Probabilistic to Trustworthy: Building Orion, an Agentic Analytics Platform

In a fascinating discussion, Lucas Thelosen, CEO of Gravity with experience from Looker and Google, and Drew Gillson, AI expert and co-founder of Gravity, dive into their innovative analytics platform, Orion. They explore the shift from probabilistic to deterministic tools for data accuracy and the importance of user-oriented push-based insights. The duo emphasizes context engineering, organizational impact, and the emerging role of 'AI managers' to drive better data literacy. They also share surprising applications of Orion for qualitative analysis at scale.
46 snips
Oct 7, 2025 • 51min

Building Production-Ready AI Agents with Pydantic AI

Samuel Colvin, the mastermind behind the Pydantic validation library, shares his journey in creating Pydantic AI—a type-safe framework for AI agents in Python. He discusses the importance of stability and observability, comparing single-agent versus multi-agent systems. Samuel explores architectural patterns, emphasizing minimal abstractions and robust engineering practices. He also addresses code safety and the challenge of model-provider churn, while promoting open standards for enhanced observability. Join him as he reveals insights on crafting reliable AI agents!
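To illustrate what a type-safe agent looks like in practice, here is a minimal Pydantic AI sketch; the model string, prompt, and output schema are made up for this example, and the exact keyword names vary by library version (output_type and result.output in recent releases, result_type and result.data in earlier ones).

```python
# Minimal Pydantic AI sketch: the agent's reply is validated against a Pydantic model.
# Model name, prompt, and schema are illustrative; requires provider credentials (e.g. an OpenAI API key).
from pydantic import BaseModel
from pydantic_ai import Agent

class CitySummary(BaseModel):
    city: str
    country: str
    population_millions: float

agent = Agent(
    "openai:gpt-4o",                 # any supported provider:model string
    output_type=CitySummary,         # called result_type in pre-1.0 releases
    system_prompt="Answer with facts about the requested city.",
)

result = agent.run_sync("Tell me about Tokyo.")
print(result.output)  # a validated CitySummary instance (result.data in older versions)
```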
