

Chain of Thought
Conor Bronsdon
AI is reshaping infrastructure, strategy, and entire industries. Host Conor Bronsdon talks to the engineers, founders, and researchers building breakthrough AI systems about what it actually takes to ship AI in production, where the opportunities lie, and how leaders should think about the strategic bets ahead.
Chain of Thought translates technical depth into actionable insights for builders and decision-makers. New episodes bi-weekly.
Conor Bronsdon is an angel investor in AI and dev tools, Head of Technical Ecosystem at Modular, and previously led growth at AI startups Galileo and LinearB.
Episodes

Dec 19, 2025 • 37min
Explaining Eval Engineering | Galileo's Vikram Chatterji
You've heard of evaluations—but eval engineering is the difference between AI that ships and AI that's stuck in prototype.

Most teams still treat evals like unit tests: write them once, check a box, move on. But when you're deploying agents that make real decisions, touch real customers, and cost real money, those one-time tests don't cut it. The companies actually shipping production AI at scale have figured out something different—they've turned evaluations into infrastructure, into IP, into the layer where domain expertise becomes executable governance.

Vikram Chatterji, CEO and Co-founder of Galileo, returns to Chain of Thought to break down eval engineering: what it is, why it's becoming a dedicated discipline, and what it takes to actually make it work. Vikram shares why generic evals are plateauing, how continuous learning loops drive accuracy, and why he predicts "eval engineer" will become as common a role as "prompt engineer" once was.

In this conversation, Conor and Vikram explore:
- Why treating evals as infrastructure—not checkboxes—separates production AI from prototypes
- The plateau problem: why generic LLM-as-a-judge metrics can't break 90% accuracy
- How continuous human feedback loops improve eval precision over time
- The emerging "eval engineer" role and what the job actually looks like
- Why 60-70% of AI engineers' time is already spent on evals
- What multi-agent systems mean for the future of evaluation
- Vikram's framework for baking trust AND control into agentic applications

Plus: Conor shares news about his move to Modular and what it means for Chain of Thought going forward.

Chapters:
00:00 – Introduction: Why Evals Are Becoming IP
01:37 – What Is Eval Engineering?
04:24 – The Eval Engineering Course for Developers
05:24 – Generic Evals Are Plateauing
08:21 – Continuous Learning and Human Feedback
11:01 – Human Feedback Loops and Eval Calibration
13:37 – The Emerging Eval Engineer Role
16:15 – What Production AI Teams Actually Spend Time On
18:52 – Customer Impact and Lessons Learned
24:28 – Multi-Agent Systems and the Future of Evals
30:27 – MCP, A2A Protocols, and Agent Authentication
33:23 – The Eval Engineer Role: Product-Minded + Technical
34:53 – Final Thoughts: Trust, Control, and What's Next

Connect with Conor Bronsdon:
Substack – https://conorbronsdon.substack.com/
LinkedIn – https://www.linkedin.com/in/conorbronsdon/
X (Twitter) – https://x.com/ConorBronsdon

Learn more about Eval Engineering:
https://galileo.ai/evalengineering

Connect with Vikram Chatterji:
LinkedIn – https://www.linkedin.com/in/vikram-chatterji/

Nov 26, 2025 • 59min
Debunking AI's Environmental Panic | Andy Masley
Andy Masley, Director of Effective Altruism DC and a former physics teacher, joins the discussion to debunk common myths surrounding AI's environmental impact. He reveals a staggering 4,500x error in a bestselling book regarding a data center's water usage. They explore how many AI water usage claims are misleading and emphasize that using AI tools has a minimal environmental footprint. Andy argues for focusing on systemic issues like data center efficiency and suggests that AI could ultimately help mitigate climate change.

Nov 19, 2025 • 1h 18min
The Critical Infrastructure Behind the AI Boom | Cisco CPO Jeetu Patel
Jeetu Patel, President and Chief Product Officer at Cisco, shares insights on the critical infrastructure needed for AI's rapid growth. He discusses three major constraints: infrastructure limits, trust issues from non-deterministic models, and a data gap. Jeetu highlights Cisco's approach to building secure AI factories and their collaborations with major partners like NVIDIA. He also emphasizes why enterprises may soon utilize thousands of specialized models and the importance of high-trust teams. Join him for a deep dive into the future of AI infrastructure!

Nov 12, 2025 • 53min
Beyond Transformers: Maxime Labonne on Post-Training, Edge AI, and the Liquid Foundation Model Breakthrough
Maxime Labonne, Head of Post-Training at Liquid AI and creator of a popular LLM course, dives into the future of AI architectures. He reveals how Liquid AI’s hybrid model merges transformers with convolutional layers for efficiency on edge devices. Maxime discusses the pivotal role of post-training in maximizing AI capabilities and the use of synthetic data. He shares insights on small on-device models, creative applications, and the challenges of function calling—making complex AI evolution both relatable and accessible.

Oct 8, 2025 • 53min
Architecting AI Agents: The Shift from Models to Systems | Aishwarya Srinivasan, Fireworks AI Head of AI Developer Relations
Aishwarya Srinivasan, Head of AI Developer Relations at Fireworks AI, dives into the intricate world of building robust AI agents. She advocates for a shift from model-centric thinking to viewing AI as a complete software system. Aish discusses the evolution from prompt to context engineering, emphasizing high-quality data and responsible AI. She also explores the pros and cons of open-source models, the importance of evaluation-driven development, and strategies for managing agent autonomy. Her insights provide a roadmap for navigating the future of AI.

Oct 1, 2025 • 21min
The accidental algorithm: Melisa Russak, AI research scientist at WRITER
Melisa Russak, an AI research scientist at Writer, shares her journey from a math teacher in China to an innovator in machine learning. She recounts accidentally rediscovering core algorithms, emphasizing how fresh perspectives can lead to breakthroughs. Melisa dives into creating a handwritten character classifier and talks about using synthetic data due to data constraints. Her insights on training AI for self-knowledge and the importance of human-centered evaluation reveal the future of enterprise AI.

Sep 24, 2025 • 55min
If Code Generation is Solved What's Next? | Graphite’s Greg Foster
Greg Foster, Co-founder and CTO of Graphite, shares insights on the evolving role of AI in software development. He highlights how code reviews are now the bottleneck as AI automates code generation. Greg introduces three waves of AI technologies transforming coding processes and discusses the importance of context and senior engineers in this new landscape. He explains 'stacking'—breaking down changes for better review efficiency—and emphasizes the hiring gap for experienced engineers who can effectively leverage AI tools. A captivating dive into the future of coding!

Sep 10, 2025 • 54min
Vercel's Playbook for AI Agents: From Vibe Check to Production | Malte Ubl
What’s the first step to building an enterprise-grade AI tool? Malte Ubl, CTO of Vercel, joins us this week to share Vercel’s playbook for agents, explaining how agents are a new type of software for solving flexible tasks. He shares how Vercel's developer-first ecosystem, including tools like the AI SDK and AI Gateway, is designed to help teams move from a quick proof-of-concept to a trusted, production-ready application.

Malte explores the practicalities of production AI, from the importance of eval-driven development to debugging chaotic agents with robust tracing. He offers a critical lesson on security, explaining why prompt injection requires a totally different solution - tool constraint - than traditional threats like SQL injection. This episode is a deep dive into the infrastructure and mindset, from sandboxes to specialized SLMs, required to build the next generation of AI tools.

Follow the hosts:
Follow Atin
Follow Conor
Follow Vikram
Follow Yash

Follow Today's Guest(s):
Connect with Malte on LinkedIn
Follow Malte on X (formerly Twitter)
Learn more about Vercel

Check out Galileo:
Try Galileo
Agent Leaderboard

Aug 27, 2025 • 52min
From Demo to Defensibility: How to Build an AI Business that Lasts | Aurimas Griciūnas
Aurimas Griciūnas, CEO of SwirlAI and AI bootcamp founder, discusses building sustainable AI businesses. He emphasizes that success now relies on speed, financial backing, and exceptional talent, rather than just trendy tools. Aurimas warns about the pitfalls of neglecting fundamental engineering in a crowded market. He also shares insights on the future of AI, foreshadowing a slowdown in LLM advancements and the rise of self-improving systems, making a case for the significance of robust data engineering and automated feedback loops.

Aug 20, 2025 • 42min
Mindset Over Metrics: How to Approach AI Engineering | Hamel Husain
Hamel Husain, an independent AI consultant with a rich history at Airbnb and GitHub, dives into the mindset shift required for successful AI engineering. He critiques the reliance on vanity metrics, arguing they lead to misconceptions about AI performance. Instead, he champions custom evaluations and error analysis as the backbone of robust AI products. The discussion also highlights the importance of domain expertise in refining AI metrics and encourages an experimentation mindset to foster continuous improvement and reliability in AI systems.


