Barrchives

Barr Yaron
Sep 9, 2025 • 53min

How Factory builds agents that help across the entire SDLC with Matan Grinberg, Founder & CEO

Factory co-founder and CEO Matan Grinberg joins Barr Yaron to talk about the future of agent-driven development, why enterprise migrations are the perfect wedge for AI adoption, and how software engineering is moving toward a world where humans orchestrate instead of implement. They dive into Factory’s origin story, the challenges of building AI systems for large organizations, and what the world might look like when millions of “droids” (AI agents) collaborate on software. Along the way, Matan shares surprising use cases, lessons from working with enterprises, and how his personal journey—from physics to burritos to building Factory—has shaped his leadership.

This episode is broken down into the following chapters:
00:00 – Intro and welcome
01:06 – Founding Factory: from ChatGPT experiments to AI engineers in every tab
04:05 – Early vision: autonomy for software engineering
06:14 – Why focus on the enterprise vs. indie developers
08:29 – Behavior change and technical challenges in large orgs
10:25 – Using painful migrations as a wedge for adoption
12:20 – The paradigm shift to agent-driven development
15:59 – Ubiquity: making droids available across IDEs, Slack, Jira, and more
17:16 – Why droids need the same context as human engineers
20:15 – Memory, configurability, and organizational learning
23:05 – How many droids? Specialization vs. general purpose agents
25:34 – Bespoke vs. common workflows across enterprises
27:06 – The hardest droid to build: coding itself
28:26 – Testing, costs, and scaling agentic workflows
30:29 – Why observability is essential for trustworthy agents
31:28 – Surprising use cases: PM adoption and GDPR audits
34:02 – Who Factory is building for: PMs, juniors, seniors, and beyond
36:09 – Systems thinking as the core engineering skill
38:09 – Building for enterprise trust: guardrails and governance
40:35 – What’s missing at the model layer today
42:43 – Migrations as a go-to wedge in go-to-market
43:53 – The thought experiment: what if 1M engineers collaborated?
46:07 – Scaling agent orgs: structure, monitoring, and observability
48:46 – Why everything must be recorded for droids to succeed
50:11 – Recruiting people obsessed with software development
51:37 – Burritos, routines, and how Matan has changed as a leader
53:41 – From coffee to Celsius, and why team culture matters most
54:20 – Closing thoughts: the future when agents are truly ubiquitous

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Sep 2, 2025 • 56min

Datadog’s AI story with Olivier Pomel, Datadog co-founder and CEO

Olivier Pomel, co-founder and CEO of Datadog, shares insights into the future of observability and the transformative role of AI in software development. He discusses how Datadog evolved alongside cloud computing and highlights innovative features like voice interfaces for incident management. The conversation also addresses the complexities of AI in coding, real-time monitoring, and the importance of user feedback in shaping technology. Joined by Sunil Dhaliwal, they also discuss talent acquisition in the competitive AI landscape.
Aug 5, 2025 • 50min

How to Build 10x Cheaper Object Storage, with Simon Eskildsen, Co-founder & CEO at Turbopuffer

In this episode of Barrchives, Barr Yaron sits down with Simon Eskildsen, co-founder and CEO of turbopuffer, to explore how he went from infrastructure challenges at Shopify to launching a vector database company. Simon shares his journey from recognizing the inefficiencies of traditional vector storage solutions to creating turbopuffer, a database built on affordable object storage and designed specifically for AI-driven applications. He details key moments of insight—from working with startups struggling with prohibitive storage costs, to realizing the untapped potential of object storage combined with modern vector indexing techniques.

This episode is broken down into the following chapters:
00:00 – Intro: Simon Eskildsen, Founder of turbopuffer
00:26 – The “aha” moment: Simon’s transition from Shopify and startup consulting to founding turbopuffer
03:13 – Turning “strings into things”: The power of vector search
05:51 – Why vector databases? Economic drivers and technology shifts
07:35 – Building turbopuffer V1: Key architecture choices and early trade-offs
10:44 – Challenges of indexing: Evaluating exhaustive search, HNSW, and clustering
17:23 – Finding product-market fit with Cursor: turbopuffer’s first major customer
20:05 – Defining turbopuffer’s ideal customer profile and market positioning
23:43 – Gaining conviction: When Simon knew turbopuffer would scale
25:39 – turbopuffer V2: Architectural evolution and incremental indexing improvements
32:12 – How AI-native workloads fundamentally change database design
35:41 – Key trade-offs in turbopuffer’s database architecture (accuracy, latency, and cost)
38:07 – Ensuring vector database accuracy: Production vs. academic benchmarks
41:03 – Deciding when turbopuffer was ready for General Availability (GA)
42:27 – The future of vector search and storage needs for AI agents
45:03 – Building customer-centric engineering teams at turbopuffer
47:12 – Common storage hygiene mistakes (or opportunities) in AI companies
49:42 – Simon’s personal growth as a leader since founding turbopuffer
May 13, 2025 • 53min

How Abridge Uses AI to Help Doctors Spend More Time With Patients, with Zachary Lipton

Zachary Lipton, Chief Technology & Science Officer at Abridge and Associate Professor at Carnegie Mellon, dives into the transformative power of AI in healthcare. He discusses how effective AI must start with meaningful conversations rather than mere documentation. Lipton reveals the complexities of customizing AI for medical environments, the negative impact of burnout due to clerical work, and innovations like digital scribes. He emphasizes building trust in AI and the balance between personalizing care and maintaining accuracy in medical documentation.
May 6, 2025 • 52min

How AI21 Labs Builds Frontier Models For The Enterprise, With Ori Goshen, Co-Founder and Co-CEO at AI21 Labs

What if deep learning isn’t the future of AI—but just part of it?

In this episode, Ori Goshen, Co-founder and Co-CEO at AI21 Labs, shares why his team set out to build reliable, deterministic AI systems—long before ChatGPT made language models mainstream.

We explore the launch of Wordtune, the development of Jamba, and the release of Maestro—AI21’s orchestration engine for enterprise agentic workflows. Ori opens up about what it takes to move beyond probabilistic systems, build trust with global enterprises, and balance research and product in one of the most competitive AI markets in the world.

If you want a masterclass in enterprise AI, model training, architecture tradeoffs, and scaling innovation out of Israel—this is it.

🔔 Subscribe for deep dives with the people shaping the future of AI.

This episode is broken down into the following chapters:
00:00 – Intro
00:47 – Why AI21 started with “deep learning is necessary but not sufficient”
02:34 – Building reliable AI systems from day one
03:46 – The risk of neural-symbolic hybrids and early bets on NLP
05:40 – Why Wordtune became the first product
08:14 – From B2C success to a pivot back into enterprise
09:43 – What AI21 learned from Wordtune for enterprise AI
11:15 – Defining “product algo fit”
12:27 – Training models before it was cool: Jurassic, Jamba, and beyond
13:38 – How to hire model-training engineers with no playbook
14:53 – Recruiting systems talent: what to look for
16:29 – How to orient your models around real enterprise needs
17:10 – Why Jamba was designed for long-context enterprise use cases
19:52 – What’s special about the Mamba + Transformer hybrid architecture
22:46 – Experimentation, ablations, and finding the right architecture
25:27 – Bringing Jamba to market: what enterprises actually care about
29:26 – The state of enterprise AI readiness in 2023 → 2025
31:41 – The biggest challenge: evaluation systems
32:10 – What most teams get wrong about evals
33:45 – Architecting reliable, non-deterministic systems
34:53 – What is Maestro and why build it now?
36:02 – Replacing “prompt and pray” with AI for AI systems
38:43 – Building interpretable and explicit agentic systems
41:09 – Balancing control and flexibility in orchestration
43:36 – What enterprise AI might actually look like in 5 years
47:03 – Why Israel is a global powerhouse for AI
49:44 – How Ori has evolved as a leader under extreme volatility
52:26 – Staying true to your mission through chaos
Apr 22, 2025 • 43min

How to Build a Secure Browser for AI, With Ofer Ben Noon, Former Founder and CEO, Talon Security

What does it take to reimagine the browser—one of the most commoditized technologies in the world—for the enterprise?

In this episode, Ofer Ben Noon, founder of Talon and now part of Palo Alto Networks, shares the wild journey from exploring digital health to building the world’s first enterprise-grade secure browser.

We dig into:
Why the browser became the new security perimeter
How Talon raised a $26M seed and scaled fast
What it takes to compile Chromium daily (and why it’s so hard)
Why Precision AI is essential to secure AI usage in the enterprise
And how generative AI, SaaS sprawl, and autonomous agents are reshaping enterprise risk in real time

If you care about AI x cybersecurity, endpoint security, or enterprise infrastructure—this is a deep, real, and tactical look behind the curtain.

This episode is broken down into the following chapters:
00:00 – Intro
01:05 – Why Ofer originally wanted to build in digital health
02:15 – The pandemic shift to SaaS, hybrid work, and browser-first
04:44 – Why Chromium was the perfect technical unlock
05:27 – The insane complexity of compiling Chromium
07:10 – What makes an enterprise browser different from a consumer browser
09:36 – Browser isolation, web security, and file security
10:50 – Why Talon needed a massive seed round from day one
11:53 – What an MVP looked like for Talon
14:08 – Early skepticism from CISOs and how Talon earned trust
16:50 – Discovering new enterprise use cases over time
17:11 – How AI and Precision AI power Talon’s security engine
19:21 – Why Ofer chose to sell to Palo Alto Networks
21:06 – Petabytes of data, 30B+ attacks blocked daily
23:44 – The risks of LLMs and generative AI in the browser
24:24 – What Talon sees when users interact with AI tools
25:05 – The #1 risk: privacy and user error
26:43 – Why AI use must be governed like any other SaaS
27:22 – How Talon built secure enterprise access to ChatGPT
28:05 – Mapping 1,000+ GenAI tools and classifying risk
29:43 – Real-time blocking, DLP, and prompt visibility
31:25 – Why user mistakes are accelerating in the age of agents
32:04 – How autonomous AI agents amplify risk across the enterprise
33:55 – The browser as the new control layer for users and AI
36:57 – What AI is unlocking in cybersecurity orgs
39:36 – Why data volume will determine which security companies win
40:28 – Ofer’s leadership philosophy and staying grounded post-acquisition
42:40 – Closing reflections
Apr 8, 2025 • 42min

How Vanta Helps Customers Build Secure and Compliant AI Products, with Christina Cacioppo, Co-founder and CEO, and Iccha Sethi, VP of Engineering

In this engaging discussion, Christina Cacioppo, Co-founder and CEO of Vanta, and Iccha Sethi, VP of Engineering, share their insights into the intersection of AI with security and compliance. They explore how compliance serves as an ideal landscape for AI innovation and reveal the importance of building reliable, explainable systems. The duo discusses their innovative approach to using large language models for automating security questionnaires and maintaining 'golden datasets,' all while emphasizing the crucial human oversight needed to foster trust in automated processes.
Mar 26, 2025 • 54min

How Cartesia Edges Out The Big Labs With Audio AI Models, with Karan Goel, Founder and CEO at Cartesia

Karan Goel, Co-founder and CEO of Cartesia, dives into the future of voice AI and the groundbreaking use of state space models (SSMs) for audio applications. He details his transition from academia at CMU and Stanford to entrepreneurship, emphasizing the innovative efficiency of SSMs over traditional models. Karan also reveals how Cartesia is developing Sonic, an ultra-low latency text-to-speech model, and elaborates on the importance of rapid execution in voice AI, all while navigating the startup landscape.
Mar 19, 2025 • 41min

How Hightouch Builds RL Agents For Marketing Teams, with Kashish Gupta, Co-CEO at Hightouch

Kashish Gupta, Co-founder and Co-CEO of Hightouch, discusses the revolutionary impact of AI on marketing. He delves into the rise of Composable CDPs, illustrating how they empower marketers with data democratization. Gupta reveals how reinforcement learning agents optimize campaigns through tailored reward systems and highlights the technical challenges faced in building these AI models. He emphasizes the balance between AI efficiency and the creative input needed from marketers to truly engage customers in an increasingly automated landscape.
Feb 26, 2025 • 45min

Why Your Customer Support Tools Won't Cut It in the AI Era with Jesse Zhang, CEO of Decagon

This episode consists of the following chapters:
00:00 - Introduction to Jesse Zhang and Decagon
02:33 - Why customer support emerged as a clear use case for AI
05:00 - The importance of discovery and understanding customer value
08:20 - The Decagon product architecture: core AI agent, routing, and human assistance
11:01 - How enterprise logic is integrated into the AI agent
15:45 - Shared frameworks across different customers and industries
17:12 - How AI agents are changing organizational planning
19:59 - Automatically identifying knowledge gaps to improve resolution rates
22:57 - Handling routing across different modalities (text and voice)
26:09 - The continued importance of humans in customer support
30:17 - The evolving role of human agents: supervising, QA, and logic building
36:57 - Value-based pricing tied to the work AI performs
39:17 - How sophisticated buyers evaluate AI customer support solutions
