

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years. The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co
Episodes

75 snips
Jan 4, 2026 • 1h 54min
Building & Scaling the AI Safety Research Community, with Ryan Kidd of MATS
Ryan Kidd, Co-Executive Director of MATS, delves into the landscape of AI safety research and the development of talent pipelines. He discusses the urgent need for governance in AI, sharing insights on AGI timelines and the complexities of aligning safety with capabilities. Ryan breaks down MATS' research archetypes and what top organizations seek in candidates. He emphasizes the growing demand for proficiency with AI tools and the challenges facing applicants in this competitive field. Buckle up for a fascinating exploration of AI's future and safety!

165 snips
Jan 1, 2026 • 1h 16min
Confronting the Intelligence Curse, w/ Luke Drago of Workshop Labs, from the FLI Podcast
Join Luke Drago, co-author of The Intelligence Curse and co-founder of Workshop Labs, as he dives into the implications of AI on society and the economy. He discusses the potential risks of AI replacing human jobs, raising concerns about economic inequality and power concentration. Luke emphasizes the importance of open-source AI and protecting users' data while advocating for innovative career paths. He warns against a dystopian future driven by the Intelligence Curse and offers strategies to foster a more equitable technological landscape.

104 snips
Dec 27, 2025 • 1h 16min
Controlling Tools or Aligning Creatures? Emmett Shear (Softmax) & Séb Krier (GDM), from a16z Show
Emmett Shear, founder of Softmax and co-founder and former CEO of Twitch, teams up with Séb Krier, who works on AI policy at Google DeepMind, to delve into AI alignment. They challenge traditional control methods, proposing that AIs should be seen as beings with their own values. The duo discusses 'organic alignment,' which emphasizes continuous learning and moral development over fixed goals. Emmett highlights the dangers of viewing AIs purely as tools, while Séb brings a pragmatic take on values and governance, exploring the potential for AIs to evolve into caring teammates.

87 snips
Dec 24, 2025 • 1h 39min
The Great Security Update: AI ∧ Formal Methods with Kathleen Fisher of RAND & Byron Cook of AWS
Kathleen Fisher, director at RAND and incoming CEO of ARIA, and Byron Cook, VP at AWS, share their pioneering insights into automated reasoning for cybersecurity. They explore how formal methods can enhance software security against AI-driven cyber threats. The duo discusses the significance of memory safety and policy verification while delving into AWS's approaches to formally verifying key components. They also envision a future where generative AI aids in creating more secure code, sparking a major rewrite of existing systems for better resilience against vulnerabilities.

136 snips
Dec 19, 2025 • 1h 40min
AI 2025 → 2026 Live Show | Part 2
Join New York Assemblymember Alex Bores, an AI safety policy advocate behind the RAISE Act, and former White House AI advisor Dean Ball as they dive into the complexities of AI regulation and governance. Bores discusses the bill's aim of mitigating catastrophic AI risks and its political implications. Ball outlines emerging coalitions and contrasts AI's rapid development with lessons from social media governance. The conversation also touches on strategies for handling the technology in the context of national security and workforce impacts.

381 snips
Dec 18, 2025 • 1h 55min
AI 2025 → 2026 Live Show | Part 1
Join Zvi Mowshowitz, an AI strategy analyst, as he navigates misinformation and AI's trajectory. Eugenia Kuyda discusses the future of AI companions, stressing metrics for human flourishing over mere engagement. Ali Behrouz shares insights on continual learning and the promise of nested learning for AI. Logan Kilpatrick reveals how Gemini 3 Flash enhances developer experiences, while Jungwon Hwang illuminates the challenges of applying AI in scientific research. This lively discussion sets the stage for 2026's AI landscape!

137 snips
Dec 14, 2025 • 2h 4min
AI's Energy & Water Demands: Sorting Fact from Fiction with Andy Masley
Join Andy Masley, director of Effective Altruism DC and a savvy analyst of AI resource use, as he busts myths surrounding AI's energy and water demands. He shares eye-opening comparisons, like how a single ChatGPT query uses about as much energy as running a microwave for one second. Masley emphasizes that AI's footprint is smaller than many believe, highlighting that it can even reduce overall emissions. He also tackles misconceptions about water use, illustrating that consumption is far lower than commonly claimed, and makes a strong case for AI's potential environmental benefits.
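For readers who want to sanity-check the microwave comparison, here is a minimal back-of-envelope sketch. The figures used below are illustrative assumptions (roughly 0.3 Wh per query and an 1,100 W microwave), not numbers taken from the episode.

```python
# Back-of-envelope check of the "microwave for one second" comparison.
# Assumed figures, not from the episode: ~0.3 Wh per ChatGPT query and a
# ~1,100 W microwave.

QUERY_ENERGY_WH = 0.3      # assumed energy per query, watt-hours
MICROWAVE_POWER_W = 1100   # assumed microwave power draw, watts

# Convert the query's energy to joules (1 Wh = 3600 J), then divide by the
# microwave's power (watts = joules per second) to get equivalent runtime.
query_energy_joules = QUERY_ENERGY_WH * 3600
equivalent_seconds = query_energy_joules / MICROWAVE_POWER_W

print(f"One query ≈ running the microwave for {equivalent_seconds:.1f} s")
# ≈ 1.0 s under these assumptions, consistent with the comparison above.
```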

82 snips
Dec 10, 2025 • 2h 1min
Superintelligence: To Ban or Not to Ban? Max Tegmark & Dean Ball join Liron Shapira on Doom Debates
Join Max Tegmark, an MIT professor and president of the Future of Life Institute, alongside Dean Ball, a senior fellow and former White House AI policy advisor, as they tackle the provocative question of whether to ban superintelligence. They delve into the risks of AI and the need for safety standards versus the challenges of regulatory enforcement. Max advocates for precaution and public buy-in, while Dean emphasizes adaptive approaches and innovation. Their discussion reveals insights on capability risks, political implications, and the future of AI governance.

84 snips
Dec 6, 2025 • 1h 30min
Sovereign AI in Poland: Language Adaptation, Local Control & Cost Advantages with Marek Kozlowski
Marek Kozlowski leads Poland's AI Lab and spearheads Project PLLuM, focusing on localized language models. He discusses the importance of AI sovereignty through small, culturally adapted models to overcome biases in mainstream AI. Marek highlights challenges with existing English-centric models and the necessity of benchmarks that respect Polish language and culture. He elaborates on the principles of transparency and organic data usage in AI, while addressing the legal constraints that shape model behavior in Europe.

119 snips
Dec 3, 2025 • 1h 23min
China's AI Upstarts: How Z.ai Builds, Benchmarks & Ships in Hours, from ChinaTalk
Zixuan Li, Director of Product and Gen AI Strategy at Z.ai, discusses the rapid evolution of AI in China. He shares insights on Z.ai's unique open-weights strategy and the cultural factors influencing AI development. Zixuan highlights their GLM-4.6 model and its role in AI use cases, particularly role-playing. The conversation addresses talent competition, global recognition, and challenges around AI safety and job impacts. He emphasizes Z.ai's swift model release process, often shipping in hours, showcasing their commitment to innovation and transparency.


