Inference by Turing Post

Turing Post
Sep 6, 2025 • 30min

What Is The Future Of Coding? Warp’s Vision

What comes after the IDE? In this episode of Inference, I sit down with Zach Lloyd, founder of Warp, to talk about a new category he’s coining: the Agentic Development Environment (ADE). We explore why coding is shifting from keystrokes to prompts, how Warp positions itself against tools like Cursor and Claude Code, and what it means for developers when your “junior dev” is an AI agent that can already set up projects, fix bugs, and explain code line by line. We also touch on the risks: vibe coding that ships junk to production, the flood of bad software that might follow, and why developers still need to stay in the loop – not as code typists, but as orchestrators, reviewers, and intent-shapers. This is a conversation about the future of developer workbenches, the end of IDE dominance, and whether ADEs will become the default way we build software. Watch it!

Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!

Guest: Zach Lloyd, founder of Warp
https://www.linkedin.com/in/zachlloyd/
https://x.com/zachlloydtweets
https://x.com/warpdotdev
https://www.warp.dev/

📰 Want the transcript and edited version? Subscribe to Turing Post: https://www.turingpost.com/subscribe

Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Se explores how intelligent systems are built – and how they’re changing how we think, work, and live. Sign up: https://www.turingpost.com

Follow Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase

#Warp #AgenticAI #AgenticDevelopment #AItools #CodingAgents #SoftwareDevelopment #Cursor #ClaudeCode #IDE #ADE #AgenticWorkflows #FutureOfCoding #AIforDevelopers #TuringPost
Sep 6, 2025 • 32min

Machines Don’t Think. Kids Do | The AI Literacy Series (Ep. 2)

Do machines think? Try asking a child. Children are natural philosophers of AI. They draw librarians inside speakers, ask what the robot “wants,” and poke at the cracks in classifiers until bias spills out. They remind us that anthropomorphism is not a mistake – it’s the starting point. In this second episode of the AI Literacy Series, I sit down again with Stefania Druga – researcher, educator, and my co-author on this project – to explore how kids help us see AI more clearly than most experts.

*We’ll explore:*
- Why words like think, know, imagine matter more than we admit.
- How kids move from magical thinking to system thinking.
- The “Big Three” AI families – classifiers, diffusion models, transformers – explained in ways families can test at home + more!
- How transfer learning works for humans, connecting toy models to civic-scale consequences.
- Six advanced family activities that make bias, prediction, and agency visible.

AI literacy is not just technical instruction – it’s cultural negotiation. And sometimes the best teachers are sitting right at the dinner table.

📌 You can find all mentioned resources & activities here: https://www.turingpost.com/p/ailiteracy2
📌 Subscribe to follow the series and join us in building a living playbook for AI literacy.
Sep 5, 2025 • 35min

Stop Teaching Kids About AI. Do This Instead | The AI Literacy Series (Ep. 1)

Would you stop your child from learning how to read and write? AI literacy is no different now. To succeed in life, you have to be fluent in it. Generative AI – and AI more broadly – is no longer something you “learn to use.” It’s the environment we all live in. It’s shaping homework, search, gameplay, even family kitchen conversations. And while adoption has crossed a threshold, our understanding of AI literacy is still catching up.

In this first episode of the AI Literacy Series, I sit down with *Stefania Druga* – researcher, educator, creator of Cognimates, and my co-author on this project – to explore a central question: *what does it mean to raise an AI-literate generation – and actually be cool about it?*

*We’ll explore:*
- Why AI literacy is the new baseline, not an add-on.
- How to help kids move from “using AI” to questioning and shaping it.
- Practical frameworks like Graidients that make AI use visible, intentional, and ethical.
- Seven simple activities you can try at home to build fluency together.

This series isn’t about watering down AI for children. It’s about reimagining how we – as builders, parents, and educators – prepare the next generation to live, think, and create inside an always-on model ecosystem. It doesn’t matter whether you’re an AI expert or a total novice – everyone will find something insightful.

📌 You can find all mentioned resources & activities here: https://www.turingpost.com/p/ailiteracy1
📌 Subscribe to follow the series and join us in building a living playbook for AI literacy.
Aug 23, 2025 • 26min

When Will Inference Feel Like Electricity? Lin Qiao, co-founder & CEO of Fireworks AI

In this engaging conversation, Lin Qiao, co-founder and CEO of Fireworks AI and former head of PyTorch at Meta, shares her insights on the current AI landscape. She discusses the unexpected perils of achieving product-market fit in generative AI and the hidden costs of GPU usage. Lin highlights that 2025 may see the rise of AI agents across many sectors. She also delves into the pros and cons of open versus closed AI models, especially regarding innovations from Chinese labs. Finally, she shares her personal journey of overcoming fears.
Aug 23, 2025 • 25min

How to Make AI Actually Do Things | Alex Hancock, Block, Goose, MCP Steering Committee

In this discussion, Alex Hancock, a Senior Software Engineer at Block and a key player in the development of Goose, shares insights into the emerging Model Context Protocol (MCP). Exploring how MCP transforms AI from mere models into functional agents, he emphasizes the importance of open governance and context management. The conversation dives into challenges in API development and the necessity for intuitive AI interfaces. Alex also reveals his expectations for AGI's incremental arrival and how a long-term mindset shapes his contributions to AI infrastructure.
Aug 23, 2025 • 24min

Beyond the Hype: What Silicon Valley Gets Wrong About RAG. Amr Awadallah, founder & CEO of Vectara

In this episode of Inference, I sit down with Amr Awadallah – founder & CEO of Vectara, founder of Cloudera, ex-Google Cloud, and the original builder of Yahoo’s data platform – to unpack what’s actually happening with retrieval-augmented generation (RAG) in 2025. We get into why RAG is far from dead, how context windows mislead more than they help, and what it really takes to separate reasoning from memory. Amr breaks down the case for retrieval with access control, the rise of hallucination detection models, and why DIY RAG stacks fall apart in production. We also talk about the roots of RAG, Amr’s take on AGI timelines, and what science fiction taught him about the future. If you care about truth in AI, or you're building with (or around) LLMs, this one will reshape how you think about trustworthy systems.

Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!

Guest: Amr Awadallah, Founder and CEO at Vectara
https://www.linkedin.com/in/awadallah/
https://x.com/awadallah
https://www.vectara.com/

📰 Want the transcript and edited version? Subscribe to Turing Post: https://www.turingpost.com/subscribe

Chapters
00:00 – Intro
00:44 – Why RAG isn’t dead (despite big context windows)
01:59 – Memory vs reasoning: the case for retrieval
02:45 – Retrieval + access control = trusted AI
06:51 – Why DIY RAG stacks fail in production
09:46 – Hallucination detection and guardian agents
13:14 – Open-source strategy behind Vectara
16:08 – Who really invented RAG?
17:30 – Can hallucinations ever go away?
20:27 – What AGI means to Amr
22:09 – Books that shaped his thinking

Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Se explores how intelligent systems are built – and how they’re changing how we think, work, and live. Sign up (Jensen Huang is already in): https://www.turingpost.com

Things mentioned during the interview:
Hughes Hallucination Evaluation Model (HHEM) Leaderboard: https://huggingface.co/spaces/vectara/leaderboard
HHEM 2.1: A Better Hallucination Detection Model and a New Leaderboard: https://www.vectara.com/blog/hhem-2-1-a-better-hallucination-detection-model
HCMBench: an evaluation toolkit for hallucination correction models: https://www.vectara.com/blog/hcmbench-an-evaluation-toolkit-for-hallucination-correction-models
Books:
Foundation series by Isaac Asimov: https://en.wikipedia.org/wiki/Foundation_(novel_series)
Sapiens: A Brief History of Humankind by Yuval Noah Harari: https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095
Setting the Record Straight on who invented RAG: https://www.linkedin.com/pulse/setting-record-straight-who-invented-rag-amr-awadallah-8cwvc/

Follow us:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
Aug 23, 2025 • 27min

AI CHANGED THE WEB. Here’s How to Build for It | A conversation with Linda Tong, CEO of Webflow

Linda Tong, CEO of Webflow, is reshaping the web to accommodate the growing influence of bots. She discusses the rise of non-human traffic and the need for 'agent-first' design, emphasizing how websites can cater to both AI agents and human visitors. Linda introduces the concept of agentic engine optimization (AEO) as a new SEO strategy. She also reflects on the importance of dynamic, personalized experiences and shares leadership insights inspired by 'Ender’s Game.' Get ready for a fast-paced, thought-provoking conversation about the future of web design!
Jun 29, 2025 • 19min

When Will We Fully Trust AI to Lead? A conversation with Eric Boyd, CVP of AI Platform at Microsoft

At Microsoft Build, I sat down with Eric Boyd, Corporate Vice President leading engineering for Microsoft’s AI platform, to talk about what it really means to build AI infrastructure that companies can trust – not just to assist, but to act. We get into the messy reality of enterprise adoption, why trust is still the bottleneck, and what it will take to move from copilots to fully autonomous agents.

We cover:
- When we'll trust AI to run businesses
- What Microsoft learned from early agent deployments
- How AI makes life easier
- The architecture behind GitHub agents (and why guardrails matter)
- Why developer interviews should include AI tools
- Agentic Web, NLWeb, and the new AI-native internet
- Teaching kids (and enterprises) how to use powerful AI safely
- Eric’s take on AGI vs “just really useful tools”

If you’re serious about deploying agents in production, this conversation is a blueprint. Eric blends product realism, philosophical clarity, and just enough dad humor. I loved this one.

Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!

Guest: Eric Boyd, CVP of AI Platform at Microsoft
https://www.linkedin.com/in/emboyd/

📰 Want the transcript and edited version? Subscribe to Turing Post: https://www.turingpost.com/subscribe

Chapters
0:00 The big question: When will we trust AI to run our businesses?
1:28 From code completions to autonomous agents – the developer lens
2:15 An agent acts like a real dev and succeeds
3:25 AI taking over tedious work
3:32 Building trustworthy AI vs. convincing stakeholders to trust it
4:46 Copilot in the enterprise: early lessons and the guardrail mindset
6:17 What is the Agentic Web?
7:55 Parenting in the AI age
9:41 What counts as AGI?
11:32 How developer roles are already shifting with AI
12:33 Timeline forecasts for the next 2-5 years
13:33 Opportunities and concerns
15:57 Enterprise hurdles: identity, governance, and data-leak safeguards
16:48 Books that shaped the guest

Turing Post is a newsletter about AI's past, present, and future. We explore how intelligent systems are built – and how they’re changing how we think, work, and live. Sign up (Jensen Huang is already in): https://www.turingpost.com

Follow Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
Jun 19, 2025 • 29min

Why AI Still Needs Us? A conversation with Olga Megorskaya, CEO of Toloka

In this episode, I sit down with Olga Megorskaya, CEO of Toloka, to explore what true human-AI co-agency looks like in practice. We talk about how the role of humans in AI systems has evolved from simple labeling tasks to expert judgment and co-execution with agents – and why this shift changes everything.

We get into:
- Why "humans as callable functions" is the wrong metaphor – and what to use instead
- What co-agency really means
- Why some data tasks now take days, not seconds – and what that says about modern AI
- The biggest bottleneck in human-AI teamwork (and it’s not tech)
- The future of benchmarks, the limits of synthetic data, and why it is important to teach humans to distrust AI
- Why AI agents need humans to teach them when not to trust the plan

If you're building agentic systems or care about scalable human-AI workflows, this conversation is packed with hard-won perspective from someone who’s quietly powering some of the most advanced models in production. Olga brings a systems-level view that few others can – and we even nerd out about Foucault’s Pendulum, the power of text, and the underrated role of human judgment in the age of agents.

Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!

Guest: Olga Megorskaya, CEO of Toloka

📰 Want the transcript and edited version? Subscribe to Turing Post: https://www.turingpost.com/subscribe

Chapters
0:00 – Intro: Humans as Callable Functions?
0:33 – Evolving with ML: From Crowd Labeling to Experts
3:10 – The Rise of Deep Domain Tasks and Foundational Models
5:46 – The Next Phase: Agentic Systems and Complex Human Tasks
7:16 – What Is True Co-Agency?
9:00 – Task Planning: When AI Guides the Human
10:39 – The Critical Skill: Knowing When Not to Trust the Model
13:25 – Engineering Limitations vs. Judgment Gaps
15:19 – What Changed Post-ChatGPT?
18:04 – Role of Synthetic vs. Human Data
21:01 – Is Co-Agency a Path to AGI?
25:08 – How To Ensure Safe AI Deployment
27:04 – Benchmarks: Internal, Leaky, and Community-Led
28:59 – The Power of Text: Umberto Eco and AI

Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Semenova explores how intelligent systems are built – and how they’re changing how we think, work, and live. Sign up: https://www.turingpost.com

If you’d like to keep following Olga and Toloka:
https://www.linkedin.com/in/omegorskaya/
https://x.com/TolokaAI

Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
May 30, 2025 • 28min

When Will We Train Once and Learn Forever? Insights from Dev Rishi, CEO and co-founder @Predibase

In this engaging discussion, Devvret Rishi, CEO and co-founder of Predibase, dives into the future of AI modeling. He explains the revolutionary concept of continuous learning and reinforcement fine-tuning (RFT), which could surpass traditional methods. Dev shares insights on the challenges of inference in production and the significance of specialized models over generalist ones. He addresses the gaps in open-source model evaluation and offers a glimpse into the smarter, more agentic AI workflows on the horizon.
