

Inference by Turing Post
Turing Post
Inference is Turing Post’s way of asking the big questions about AI — and refusing easy answers. Each episode starts with a simple prompt: “When will we…?” – and follows it wherever it leads.
Host Ksenia Se sits down with the people shaping the future firsthand: researchers, founders, engineers, and entrepreneurs. The conversations are candid, sharp, and sometimes surprising – less about polished visions, more about the real work happening behind the scenes.
It’s called Inference for a reason: opinions are great, but we want to connect the dots – between research breakthroughs, business moves, technical hurdles, and shifting ambitions.
If you’re tired of vague futurism and ready for real conversations about what’s coming (and what’s not), this is your feed. Join us – and draw your own inference.
Episodes

Dec 4, 2025 • 33min
What Is AI Missing for Real Reasoning? Axiom Math’s Carina Hong on how to build an AI mathematician
Carina Hong, co-founder and CEO of Axiom Math, is on a mission to enhance AI's reasoning through machine-checkable mathematics. She discusses why current AI models struggle with complex math and presents three pillars essential for an AI mathematician. Carina emphasizes the need for a hybrid approach, combining formal verification and neural networks. She explores the limits of intuition in math, critiques existing benchmarks, and advises on practical paths for using AI in mathematics, all while navigating the intriguing landscape between AGI and superintelligence.

Dec 4, 2025 • 27min
Can We Control AI That Controls Itself? Anneka Gupta from Rubrik on…
Is security still about patching after the crash? Or do we need to rethink everything when AI can cause failures on its own?
Anneka Gupta, Chief Product Officer at Rubrik, argues we're now living in the world before the crash – where autonomous systems can create their own failures.
In this episode of Inference, we explore:
Why AI agents are "the human problem on steroids"
The three pillars of AI resilience: visibility, governance, and reversibility
How to log everything an agent does (and why that's harder than it sounds)
The mental shift from deterministic code to outcome-driven experimentation
Why most large enterprises are stuck in AI prototyping (70-90% never reach production)
The tension between letting agents act and keeping them safe
What an "undo button" for AGI would actually look like
How AGI will accelerate the cat-and-mouse game between attackers and defenders
We also discuss why teleportation beats all other sci-fi tech, why Asimov's philosophical approach to robots shaped her thinking, and how the fastest path to AI intuition is just... using it every day.
This is a conversation about designing for uncertainty, building guardrails without paralyzing innovation, and what security means when the system can outsmart its own rules.
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest: Anneka Gupta, Chief Product Officer at Rubrik https://www.linkedin.com/in/annekagupta/
https://x.com/annekagupta
https://www.rubrik.com/
📰 Want the transcript and edited version?
Subscribe to Turing Post: https://www.turingpost.com/subscribe
Turing Post is a newsletter about AI's past, present, and future. Ksenia Se explores how intelligent systems are built – and how they're changing how we think, work, and live.
Follow us →
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
#AI #AIAgents #Cybersecurity #AIGovernance #EnterpriseAI #AIResilience #Rubrik #FutureOfSecurity

Dec 4, 2025 • 28min
Spencer Huang: NVIDIA’s Big Plan for Physical AI: Simulation, World Models, and the 3 Computers
In a captivating discussion, Spencer Huang, NVIDIA’s product lead for robotics software, dives deep into the future of robotics and simulation. He outlines NVIDIA's innovative three-computer vision—training, simulation, and deployment. Spencer emphasizes the critical role of simulation in ensuring safety and speed in robot deployment. He also explores the fascinating contrast between conventional and neural simulators, tackling data bottlenecks in robotics while advocating for an open-source ecosystem. It's a thoughtful look at how robots learn and interact with the real world!

Dec 4, 2025 • 26min
Why do we need a special Operating System for AI?
When thousands of AI agents begin to act on our behalf, who builds the system they all run on?
Renen Hallak – founder and CEO of VAST Data – believes we’re witnessing the birth of an *AI Operating System*: a foundational layer that connects data, compute, and policy for the agentic era.
In this episode of Inference, we talk about how enterprises are moving from sandboxes and proof-of-concepts to full production agents, why *metadata matters more than “big data,”* and how the next infrastructure revolution will quietly define who controls intelligence at scale.
*We go deep into:*
What “AI OS” really means – and why the old stack can’t handle agentic systems
Why enterprises are underestimating the magnitude (but overestimating the speed) of this shift
The evolving role of data, metadata, and context in intelligent systems
How control, safety, and observability must be baked into infrastructure – not added later
Why Renen says the next 10 years will reshape everything – from jobs to the meaning of money
The ladder of progress: storage → database → data platform → operating system
What first-principles thinking looks like inside a company building for AGI-scale systems
This is a conversation about the architecture of the future – and the fine line between control and creativity when intelligence becomes infrastructure.
Watch the episode!
*Did you like the episode? You know the drill:*
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
*Guest:* Renen Hallak, Founder & CEO, VAST Data
https://www.linkedin.com/in/renenh/
https://www.linkedin.com/company/vast-data/
*📰 Want the transcript and edited version?*
Find it here: https://www.turingpost.com/p/renen
*Turing Post* is a newsletter about AI’s past, present, and future – exploring how intelligent systems are built and how they’re changing how we think, work, and live.
📩 Sign up: https://www.turingpost.com
*Follow us:*
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
#AgenticOS #EnterpriseAI #Metadata #AIOperatingSystem #ExabyteStorage #GPUs #ProductionAI

Dec 4, 2025 • 26min
The Future of Cancer Diagnosis: Digital Pathology and AI
In this conversation, Akash Parvatikar, an AI scientist leading the PathologyMap platform at HistoWiz, dives into the transformative world of digital pathology. He explains how scanning glass slides revolutionizes cancer diagnosis and the potential of AI to enhance diagnostic processes. Akash outlines the exciting future of telepathology, addresses challenges like data bottlenecks, and highlights the importance of explainability in AI tools. He emphasizes that while technology evolves, the role of pathologists remains crucial. A must-listen for anyone interested in the future of medical diagnostics!

Sep 25, 2025 • 28min
What Really Blocks AI Progress? Ulrik Hansen from Encord thinks it’s…
Ulrik Hansen, co-founder of Encord, shares insights on the real roadblocks to AI progress. He argues that data, not models, is the true bottleneck, emphasizing the importance of data orchestration. He contrasts Tesla's live feedback system with Waymo's cautious rollout, discussing the challenges in robotics and edge cases in self-driving tech. Ulrik highlights the shift towards a connection economy and the rising value of trust in brands. He also explores the potential pitfalls of synthetic data and why applied AI is more thrilling than abstract AGI discussions.

Sep 6, 2025 • 30min
What Is The Future Of Coding? Warp’s Vision
What comes after the IDE?
In this episode of Inference, I sit down with Zach Lloyd, founder of Warp, to talk about a new category he’s coining: the Agentic Development Environment (ADE).
We explore why coding is shifting from keystrokes to prompts, how Warp positions itself against tools like Cursor and Claude Code, and what it means for developers when your “junior dev” is an AI agent that can already set up projects, fix bugs, and explain code line by line.
We also touch on the risks: vibe coding that ships junk to production, the flood of bad software that might follow, and why developers still need to stay in the loop — not as code typists, but as orchestrators, reviewers, and intent-shapers.
This is a conversation about the future of developer workbenches, the end of IDE dominance, and whether ADEs will become the default way we build software. Watch it!
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest:
Zach Lloyd, founder of Warp
https://www.linkedin.com/in/zachlloyd/
https://x.com/zachlloydtweets
https://x.com/warpdotdev
https://www.warp.dev/
📰 Want the transcript and edited version?
Subscribe to Turing Post https://www.turingpost.com/subscribe
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Se explores how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up: Turing Post: https://www.turingpost.com
Follow us
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
#Warp #AgenticAI #AgenticDevelopment #AItools #CodingAgents #SoftwareDevelopment #Cursor #ClaudeCode #IDE #ADE #AgenticWorkflows #FutureOfCoding #AIforDevelopers #TuringPost

Aug 23, 2025 • 26min
When Will Inference Feel Like Electricity? Lin Qiao, co-founder & CEO of Fireworks AI
In this engaging conversation, Lin Qiao, co-founder and CEO of Fireworks AI and former head of PyTorch at Meta, shares her insights on the current AI landscape. She discusses the unexpected perils of achieving product-market fit in generative AI and the hidden costs of GPU usage. Lin highlights that 2025 may see the rise of AI agents across many sectors. She also delves into the pros and cons of open versus closed AI models, especially regarding innovations from Chinese labs. Finally, she shares her personal journey of overcoming fears.

Aug 23, 2025 • 25min
How to Make AI Actually Do Things | Alex Hancock, Block, Goose, MCP Steering Committee
In this discussion, Alex Hancock, a Senior Software Engineer at Block and key player in the development of Goose, shares insights into the emerging Model Context Protocol (MCP). Exploring how MCP transforms AI from mere models into functional agents, he emphasizes the importance of open governance and context management. They dive into challenges in API development and the necessity for intuitive AI interfaces. Alex also reveals his expectations for AGI's incremental arrival and how a long-term mindset shapes his contributions to AI infrastructure.

Aug 23, 2025 • 24min
Beyond the Hype: What Silicon Valley Gets Wrong About RAG. Amr Awadallah, founder & CEO of Vectara
Amr Awadallah, founder and CEO of Vectara and a pioneer at Cloudera, dives deep into the world of retrieval-augmented generation (RAG). He argues that RAG isn't dead, despite trends toward larger context windows, emphasizing its role in separating memory from reasoning for accurate AI. Amr discusses the importance of retrieval with access control for trustworthy AI and critiques DIY RAG implementations. He also shares insights on hallucination detection, proposing guardian agents to enhance reliability while reflecting on the historical roots and future of AI.


