

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years.The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co
Episodes

52 snips
Jan 29, 2026 • 1h 37min
AI & The Law: Changing Practice, Claude Constitution, & New Rights, w/ Kevin & Alan of Scaling Laws
Alan Rozenshtein, law professor focused on AI governance, and Kevin Frazier, AI and law program director, discuss how AI is reshaping legal practice and policy. They touch on AI-assisted lawyering, threats to entry-level roles, AI-written contracts, Claude’s virtue-ethics constitution, outcome-oriented legislation, new rights like the Right to Compute and Right to Share, and limits on surveillance and governance.

87 snips
Jan 25, 2026 • 2h 11min
The Internet Computer: Caffeine.ai CEO Dominic Williams on Unstoppable, Self-Writing Software
Dominic Williams, architect of the Internet Computer and CEO of Caffeine AI, describes a sovereign, tamper‑proof cloud and self-writing software. He outlines core innovations like the Network Nervous System, Motoko, and orthogonal persistence. The conversation also covers unstoppable apps, AI-driven code generation, decentralization tradeoffs, and real-world case studies such as OpenChat.

275 snips
Jan 22, 2026 • 2h 24min
AMA Part 2: Is Fine-Tuning Dead? How Am I Preparing for AGI? Are We Headed for UBI? & More!
In this engaging AMA session, Nathan dives into whether fine-tuning is in decline and how it relates to emergent misalignment. He discusses his personal preparations for AGI and explores potential job disruptions across various industries. Nathan emphasizes the importance of teaching AI concepts to non-technical audiences and debates the viability of Universal Basic Income amid evolving economic landscapes. With insights on investment strategies and safety approaches, he offers a candid view on the future of AI and its societal implications.

464 snips
Jan 18, 2026 • 2h 28min
Pioneering PAI: How Daniel Miessler's Personal AI Infrastructure Activates Human Agency & Creativity
Daniel Miessler, a cybersecurity veteran and creator of the Personal AI Infrastructure (PAI) framework, dives into a transformative vision for AI. He introduces the TELOS system, which helps individuals define purpose and set goals, facilitating a new relationship with AI agents. Miessler discusses how scaffolding can turn AI models into practical assistants, the potential for AI to reshape labor and ownership, and the looming challenges of cybersecurity in an AI-driven world. He also explores the implications of activating human agency and the need for universal basic income as technology evolves.

237 snips
Jan 14, 2026 • 1h 39min
Snowflake VP of AI Baris Gultekin on Bringing AI to Data, Agent Design, Text-2-SQL, RAG & More
Baris Gultekin, Vice President of AI at Snowflake, leads the charge in integrating AI with enterprise data while ensuring security and governance. He discusses the evolution from structured analytics to unlocking unstructured data, making insights accessible without requiring analysts. Baris highlights advancements in text-to-SQL, emphasizes the importance of embedding quality for RAG, and predicts that agents will enhance productivity in product development. His insights on open standards and model choices illuminate the future of AI in transforming data analytics.

328 snips
Jan 9, 2026 • 1h 55min
AMA Part 1: Is Claude Code AGI? Are we in a bubble? Plus Live Player Analysis
In a heartfelt update, Nathan shares insights about his son Ernie's cancer treatment and how AI models are aiding medical decisions. He probes whether Claude Opus 4.5 signals AGI in coding, sharing fun holiday app projects. Discussion shifts to AI's potential bubble status, emphasizing both its transformative capabilities and the risks of overvaluation. Live player analysis covers the strengths and weaknesses of major players like Google DeepMind, OpenAI, and Anthropic, along with a critique of Chinese AI models and the geopolitical landscape surrounding chip exports.

120 snips
Jan 4, 2026 • 1h 54min
Building & Scaling the AI Safety Research Community, with Ryan Kidd of MATS
Ryan Kidd, Co-Executive Director of MATS, delves into the landscape of AI safety research and the development of talent pipelines. He discusses the urgent need for governance in AI, sharing insights on AGI timelines and the complexities of aligning safety with capabilities. Ryan breaks down MATS' research archetypes and what top organizations seek in candidates. He emphasizes the growing demand for proficiency with AI tools and the challenges facing applicants in this competitive field. Buckle up for a fascinating exploration of AI's future and safety!

187 snips
Jan 1, 2026 • 1h 16min
Confronting the Intelligence Curse, w/ Luke Drago of Workshop Labs, from the FLI Podcast
Join Luke Drago, co-author of The Intelligence Curse and co-founder of Workshop Labs, as he dives into the implications of AI on society and the economy. He discusses the potential risks of AI replacing human jobs, raising concerns about economic inequality and power concentration. Luke emphasizes the importance of open-source AI and protecting users' data while advocating for innovative career paths. He warns against a dystopian future driven by the Intelligence Curse and offers strategies to foster a more equitable technological landscape.

113 snips
Dec 27, 2025 • 1h 16min
Controlling Tools or Aligning Creatures? Emmett Shear (Softmax) & Séb Krier (GDM), from a16z Show
Emmett Shear, Founder of Softmax and former Twitch co-founder, teams up with Séb Krier, a frontier policy expert, to delve into AI alignment. They challenge traditional control methods, proposing that AIs should be seen as beings with their own values. The duo discusses 'organic alignment,' which emphasizes continuous learning and moral development over fixed goals. Emmett highlights the dangers of viewing AIs purely as tools, while Séb brings a pragmatic take on values and governance, exploring the potential for AIs to evolve into caring teammates.

88 snips
Dec 24, 2025 • 1h 39min
The Great Security Update: AI ∧ Formal Methods with Kathleen Fisher of RAND & Byron Cook of AWS
Kathleen Fisher, director at RAND and incoming CEO of ARIA, and Byron Cook, VP at AWS, share their pioneering insights into automated reasoning for cybersecurity. They explore how formal methods can enhance software security against AI-driven cyber threats. The duo discusses the significance of memory safety and policy verification while delving into AWS's innovative approaches to proving key components. They also envision a future where generative AI aids in creating more secure code, sparking a major rewrite of existing systems for better resilience against vulnerabilities.


