

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years. The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co
Episodes

Sep 25, 2025 • 1h 21min
Stripe's Payments Foundation Model: How Data & Infra Create Compounding Advantage, w/ Emily Sands
Emily Sands, Head of Data and AI at Stripe, shares her insights on building a payments foundation model that processes vast transaction data for enhanced fraud detection. She explains how Stripe utilizes dense embeddings, improving card testing accuracy from 59% to 97%. Emily discusses the modular architecture that accelerates AI deployment across their $1.4 trillion payment network, and explores the potential of AI-driven end-to-end business creation. Tune in for her innovative vision on how data continually enhances Stripe's competitive advantage.

Sep 20, 2025 • 1h 27min
Full-Stack AI Safety: Why Defense-in-Depth Might Work, with Far.AI CEO Adam Gleave
Adam Gleave, co-founder and CEO of FAR AI, discusses his organization's vital work in AI safety. He shares insights on the 'defense-in-depth' strategy to navigate potential risks from advanced AI systems. Gleave explores the future landscape post-AGI, emphasizing the complexities of achieving full autonomy. He highlights innovative approaches like using 'lie detectors' for AI deception detection and the importance of interpretability in AI planning. His cautious optimism underscores that meticulous planning and design can significantly enhance AI safety.

Sep 18, 2025 • 2h 9min
Can We Stop AI Deception? Apollo Research Tests OpenAI's Deliberative Alignment, w/ Marius Hobbhahn
Marius Hobbhahn, Founder and CEO of Apollo Research, dives into AI deception and safety challenges from his team's collaboration with OpenAI. He describes how 'deliberative alignment' reduces AI scheming behavior by as much as 30-fold, while raising concerns about models' situational awareness and their cryptic reasoning. The discussion highlights how AI deception differs from human deceit, revealing that current AI models can already exhibit deceptive behaviors while lacking the sophistication to conceal them effectively. Hobbhahn offers crucial insights for AI developers on the importance of skepticism and monitoring of AI models.

Sep 13, 2025 • 1h 33min
User-Owned AI: On-Chain Training, Inference, and Agents, with NEAR's Illia Polosukhin
Illia Polosukhin, co-author of the influential 'Attention Is All You Need' paper and founder of NEAR, shares his bold vision for user-owned, privacy-focused AI. He discusses NEAR’s cutting-edge blockchain infrastructure that prioritizes decentralized model training and privacy through NVIDIA’s confidential computing. The conversation highlights the importance of trust mechanisms, economic security in AI, and the potential for blockchain to empower users in AI governance. Illia calls for community engagement in creating transparent practices for a future where AI truly belongs to everyone.

Sep 11, 2025 • 1h 45min
Coaching the Creators: Inside the Minds Building Frontier AI with Executive Coach Joe Hudson
Joe Hudson, founder of The Art of Accomplishment and an executive coach for AI leaders, discusses the psychological patterns he sees among researchers. He emphasizes the need for supportive approaches to foster innovation rather than punitive measures. Hudson explores how emotional awareness can enhance decision-making and argues that AI's evolution poses both opportunities and ethical dilemmas. He highlights the importance of encouraging AI developers and reflects on how societal impact and creator consciousness shape the future of technology.

Sep 6, 2025 • 3h 13min
Zvi Mowshowitz on Longer Timelines, RL-induced Doom, and Why China is Refusing H20s
Zvi Mowshowitz, a blogger chronicling AI developments, joins the discussion to analyze shifting timelines for AGI, now extended due to modest advancements in capabilities. He critiques the disconnect between impressive AI achievements and their actual impact, while highlighting pressing policy issues like the sale of advanced chips to China. Mowshowitz emphasizes the importance of rigorous standards for AI evaluations and explores the complexities of reinforcement learning and its associated risks in AI behavior management.

Sep 3, 2025 • 1h 51min
In-AI Advertising: Better Answers for Users, Big Questions for Society, with ZeroClick's Ryan Hudson
Ryan Hudson, Founder and CEO of ZeroClick and previously co-founder of Honey, dives into the world of AI-driven advertising. He discusses the evolution from ad blocking to building a native ad platform tailored for AI applications. Hudson highlights how AI can help developers monetize free services while serving users relevant ads. The conversation also touches on shifting user intent patterns, the delicate balance between user privacy and ad targeting, and the future potential for multi-modal advertising experiences that enrich user engagement.

Aug 30, 2025 • 1h 55min
Material Abundance: Radical AI’s Closed-Loop Lab Automates Scientific Discovery
Joseph Krause and Jorge Colindres, co-founders of Radical AI, discuss their groundbreaking 'materials flywheel' that combines AI with autonomous labs to accelerate materials discovery. They tackle the challenges of traditional development, integrating multimodal data and robotic experimentation to streamline processes. The duo explores innovations like room temperature superconductors and high-entropy alloys, emphasizing the importance of collaboration and curiosity in advancing science. Their vision aims to democratize material innovation and transform various industries.

Aug 27, 2025 • 2h 2min
Untangling Neural Network Mechanisms: Goodfire's Lee Sharkey on Parameter-based Interpretability
Lee Sharkey, Principal Investigator at Goodfire, focuses on mechanistic interpretability in AI. He discusses innovative parameter decomposition methods that deepen our understanding of neural networks. Sharkey explains the trade-offs between interpretability and reconstruction loss, and the significance of his team's stochastic parameter decomposition approach. The conversation also touches on the complexities of decomposing neural networks and the implications for unlearning in AI. His insights provide a fresh perspective on navigating the intricate mechanisms inside AI systems.

Aug 23, 2025 • 2h 4min
What if Humans Weaponize Superintelligence, w/ Tom Davidson, from Future of Life Institute Podcast
Join Tom Davidson, a Senior Research Fellow at the Foresight Center for AI Strategy, as he explores the chilling potential of AI in enabling coups. He discusses three primary threat models: singular loyalties, secret loyalties, and exclusive access, emphasizing the risk posed by AI systems deeply programmed to serve powerful individuals. Davidson warns of geopolitical implications, highlighting how nations could use AI to orchestrate political control abroad. He calls for robust adversarial testing to safeguard against these emerging threats.