
Phillip Carter
Principal PM at Honeycomb, previously worked at Microsoft on the .NET team. Focuses on building AI infrastructure and improving developer workflows.
Top 5 podcasts with Phillip Carter
Ranked by the Snipd community

13 snips
Dec 14, 2024 • 1h 5min
AI IRL at Honeycomb (Ship It! #134)
Phillip Carter, a Principal PM at Honeycomb who previously worked on .NET at Microsoft, shares his insights on building AI infrastructure. He discusses the evolution of cloud development and his transition from a large corporation to a startup. The conversation covers advances in natural language querying and the role of OpenTelemetry in unified observability. Carter also explains how AI is changing data instrumentation and improving developer workflows.
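To give a flavor of the instrumentation and natural-language-querying ideas touched on in this episode, here is a minimal sketch using the OpenTelemetry Python API. The handler name, span name, and attributes are hypothetical illustrations, not Honeycomb's actual code; without an SDK configured, the API falls back to a no-op tracer, so the snippet still runs.

```python
# Illustrative only: wrap a hypothetical natural-language-query handler
# in an OpenTelemetry span so its inputs and outputs become queryable telemetry.
from opentelemetry import trace

tracer = trace.get_tracer("nl-query-demo")  # hypothetical instrumentation scope

def handle_nl_query(user_text: str) -> str:
    # One span per request; attributes record what the handler saw and produced.
    with tracer.start_as_current_span("nl_query") as span:
        span.set_attribute("app.user_text_length", len(user_text))
        # A real implementation would call a model to translate the text
        # into a structured query; this placeholder keeps the sketch runnable.
        generated_query = f"COUNT WHERE message CONTAINS '{user_text}'"
        span.set_attribute("app.generated_query", generated_query)
        return generated_query

if __name__ == "__main__":
    print(handle_nl_query("slow requests in the last hour"))
```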

8 snips
Jun 26, 2025 • 48min
Episode 51: Why We Built an MCP Server and What Broke First
In this discussion, Phillip Carter, Product Management Director at Salesforce and former Principal PM at Honeycomb, shares insights on creating LLM-powered features. He explains the nuances of integrating real production data with these systems. Carter dives into the challenges of tool use, prompt templates, and flaky model behavior. He also discusses building an MCP server that improves observability in AI systems, its role in improving user experience, and the pitfalls of SaaS product development.
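For readers wondering what a bare-bones MCP server looks like, here is a minimal sketch assuming the official MCP Python SDK's FastMCP helper. The server name, tool, and stub logic are hypothetical; this is not Honeycomb's server, just the general shape of one.

```python
# Minimal, generic MCP server sketch (not Honeycomb's implementation).
# Assumes the official MCP Python SDK is installed.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-observability")  # hypothetical server name

@mcp.tool()
def count_events(service: str) -> int:
    """Return a fake event count for a service (placeholder for a real query)."""
    # A real server would query a datastore here; the stub keeps the sketch runnable.
    return len(service) * 10

if __name__ == "__main__":
    # Serves over stdio so an MCP-aware client can discover and call count_events.
    mcp.run()
```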

7 snips
Jun 14, 2023 • 51min
#12 - Phillip Carter (Principal PM @ Honeycomb) - All The Hard Stuff when Building Products with LLMs, Actual Results from Leveraging AI
Phillip Carter is a Principal Product Manager at Honeycomb, which develops a software debugging product for distributed systems. Phillip recently published one of the most interesting blog posts I've read, titled "All the Hard Stuff Nobody Talks About when Building Products with LLMs". The post is excellent and everyone should give it a read. In this episode, we dive deep into what the hard stuff actually is, the pros and cons of Large Language Models (LLMs), and what teams need to think about when using LLMs in their products. We also talk about the real-world results of shipping an LLM-enabled product, including the conversion increases Honeycomb has seen from Query Assistant and how the decision to use LLMs came about. For all the teams wondering how to use LLMs now, this is the episode for you.

Where to Find Phillip:
* LinkedIn: https://www.linkedin.com/in/phillip-carter-4714a135/
* Twitter: https://twitter.com/_cartermp
* Personal Website: https://phillipcarter.dev/
* Blog Post Referred to in Podcast: https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-llm

Where to Find Shomik:
* Twitter: https://twitter.com/shomikghosh21
* LinkedIn: https://www.linkedin.com/in/shomik-ghosh-a5a71319/
* Software Snack Bites Newsletter: https://www.shomik.substack.com
* Software Snack Bites Podcast: Apple Podcasts, Spotify, Google

In this episode, we cover:
(00:49) - Phillip's Background and What Honeycomb Does
(06:37) - How the Idea to Use LLMs for the Query Assistant Product Occurred
(08:45) - Managing Context Windows and Token Counting
(14:35) - Few-Shot Prompting & Schema Context
(16:32) - The Conversion Results from Using LLMs to Power a New UI
(21:36) - Are LLMs Mainly Useful for Search & UI/UX
(28:09) - Should Companies Have an "AI Strategy"
(31:55) - Best-Efforts Results for LLM Output
(36:10) - Managing Direct Database Access for LLMs
(40:51) - How Embeddings Can Help with Context Windows
(44:31) - Guiding UI/UX Principles for LLM Features
(47:52) - The Magic That LLM Features Can Provide from a UI Perspective

How to Subscribe:
Available on Apple Podcasts, Spotify, or Google. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit shomik.substack.com
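As a rough sketch of the context-window and token-counting concerns listed in the chapters above (08:45 and 14:35), the snippet below uses the tiktoken library to count tokens and drop few-shot examples until a prompt fits a budget. The encoding choice, budget, and example strings are assumptions for illustration, not details from the episode.

```python
# Sketch: count tokens with tiktoken and trim few-shot examples until the
# prompt fits an assumed token budget.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # assumed encoding

def num_tokens(text: str) -> int:
    return len(encoding.encode(text))

def build_prompt(schema: str, examples: list[str], question: str, budget: int = 4000) -> str:
    # Always keep the schema and question; drop few-shot examples from the end to fit.
    kept = list(examples)
    while kept and num_tokens("\n".join([schema, *kept, question])) > budget:
        kept.pop()
    return "\n".join([schema, *kept, question])

if __name__ == "__main__":
    prompt = build_prompt(
        schema="columns: duration_ms, service.name, status_code",
        examples=["Q: slow requests -> WHERE duration_ms > 1000"],
        question="Q: errors by service",
    )
    print(num_tokens(prompt), "tokens")
```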

Apr 3, 2024 • 1h 5min
SE Radio 610: Phillip Carter on Observability for Large Language Models
Phillip Carter, Principal Product Manager at Honeycomb, discusses observability for large language models: how it helps with testing, refining functionality, debugging, and incremental development of LLM-based features. Carter offers tips on implementing observability and notes the current limitations of the technology.
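As a concrete flavor of the kind of LLM observability discussed here, the sketch below wraps a stubbed model call in an OpenTelemetry span and records latency and errors as attributes. The attribute names and the stub are assumptions, not any vendor's conventions or the approach described in the episode.

```python
# Sketch: record latency, sizes, and failures for a stubbed LLM call
# so its behavior can be debugged from telemetry. Names are illustrative.
import time
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("llm-observability-demo")

def fake_llm_call(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"echo: {prompt}"

def observed_completion(prompt: str) -> str:
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.prompt_chars", len(prompt))
        start = time.monotonic()
        try:
            response = fake_llm_call(prompt)
            span.set_attribute("llm.response_chars", len(response))
            return response
        except Exception as exc:
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR))
            raise
        finally:
            span.set_attribute("llm.duration_ms", (time.monotonic() - start) * 1000)

if __name__ == "__main__":
    print(observed_completion("summarize the last hour of errors"))
```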

Dec 14, 2024 • 1h 5min
AI IRL at Honeycomb
Phillip Carter, Principal PM at Honeycomb and a former tech lead at Microsoft, shares insights on building AI infrastructure. He discusses the shift from a large corporation to a startup and the freedom and challenges it brings. The conversation covers Honeycomb's approach to AI and the experimental nature of its work. Phillip also touches on the evolution of observability tooling, the role of customer experience in product development, and the path toward unified data systems.