

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years. The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co
Episodes

117 snips
Nov 16, 2024 • 2h 39min
Zvi’s POV: Ilya’s SSI, OpenAI’s o1, Claude Computer Use, Trump’s election, and more
Zvi Mowshowitz, an insightful thinker on AI safety and politics, shares his perspective on the rapid advancements in artificial intelligence. He discusses Ilya's new superintelligence startup and OpenAI's o1 model, evaluating their implications. The dialogue touches on the challenges of AI integration in businesses and its political ramifications, including the impact of deepfake technology on elections. Zvi also navigates the complexities of AI regulations and ethical dilemmas, offering a candid analysis of the industry's future.

18 snips
Nov 12, 2024 • 1h 59min
AGI Lab Transparency Requirements & Whistleblower Protections, with Dean W. Ball & Daniel Kokotajlo
Daniel Kokotajlo, a former OpenAI policy researcher, shares his journey advocating for AGI safety, while Dean W. Ball offers insights on AI governance. They discuss the essential need for transparency and effective whistleblower protections in AI labs. Kokotajlo emphasizes the importance of personal sacrifice for ethical integrity, while Ball highlights how collaboration across political lines can influence AI development. Together, they explore the challenges and future of responsible AI policies, underscoring the necessity for independent oversight.

10 snips
Nov 2, 2024 • 1h 14min
AI Under Trump? The Stakes of 2024 w/ Joshua Steinman [Pt 2 of 2]
Joshua Steinman, former Senior Director for Cyber Policy on Trump’s National Security Council, discusses the interplay between AI advancements and politics. He shares insights on how a potential Trump presidency could shape U.S.-China relations and the tech landscape. Steinman emphasizes the risks of an AI arms race, the need for stable leadership, and the importance of expert consensus in navigating complex diplomatic waters. The conversation highlights political perceptions and the role of media narratives in shaping public opinion, offering a rich perspective on the stakes in the upcoming election.

29 snips
Nov 1, 2024 • 2h 17min
The Case for Trump and the Future of AI – Part 1, with Samuel Hammond, Senior Economist, Foundation for American Innovation
Samuel Hammond, a senior economist at the Foundation for American Innovation and an expert on AI policy, offers intriguing insights into the intersection of politics and technology. He discusses the potential of a Trump presidency to reshape AI development, emphasizing the need for a collaborative approach amidst U.S.-China tensions. Hammond critiques current leadership for failing to prioritize innovation. The conversation delves into the ideological divides affecting AI policy and the necessity for a revitalized social contract in the AI era.

22 snips
Oct 31, 2024 • 54min
Breaking: Gemini's Major Update - Search, JSON & Code Features Revealed by Google PMs
Logan Kilpatrick and Shrestha Basu Mallick, product managers at Google for the Gemini API and AI Studio, dive deep into the innovative features of Gemini. They discuss the new real-time search grounding capability, enhancing AI's responsiveness with live web information. The conversation highlights Gemini's competitive edge in the AI arena and successful business applications. They also touch on the significance of multimodal applications and the evolving landscape of AI technologies, providing valuable insights for developers.

38 snips
Oct 30, 2024 • 2h 22min
Training Zamba: A Hybrid Model Master Class with Zyphra's Quentin Anthony
In this episode of The Cognitive Revolution, Nathan dives deep into the world of state space models with returning co-host Jason Meaux and special guest Quentin Anthony, Head of Model Training at Zyphra. Explore the cutting-edge Zamba2-7B model, which combines selective state space and attention mechanisms. Uncover practical insights on model training, architectural choices, and the challenges of scaling AI. From learning schedules to hybrid architectures, loss metrics to context length extension, this technical discussion covers it all. Don't miss this in-depth conversation on the future of personalized, on-device AI.
Check out more about Zyphra and Jason Meaux here:
Zyphra's website: https://www.zyphra.com
Zamba2-7B blog: https://www.zyphra.com/post/zamba2-7b
Zamba2 GitHub: https://github.com/Zyphra/Zamba2
Tree attention: https://www.zyphra.com/post/tree-attention-topology-aware-decoding-for-long-context-attention-on-gpu-clusters
Jason Meaux's Twitter: https://x.com/KamaraiCode
Jason Meaux's website: https://www.statespace.info
Be notified early when Turpentine drops new publications: https://www.turpentine.co/exclusiveaccess
SPONSORS:
Weights & Biases RAG++: Advanced training for building production-ready RAG applications. Learn from experts to overcome LLM challenges, evaluate systematically, and integrate advanced features. Includes free Cohere credits. Visit https://wandb.me/cr to start the RAG++ course today.
Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive
Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you. Try it for free at https://notion.com/cognitiverevolution
LMNT: LMNT is a zero-sugar electrolyte drink mix that's redefining hydration and performance. Ideal for those who fast or anyone looking to optimize their electrolyte intake. Support the show and get a free sample pack with any purchase at https://drinklmnt.com/tcr
CHAPTERS:
(00:00:00) Teaser
(00:00:42) About the Show
(00:01:05) About the Episode
(00:03:09) Introducing Zyphra
(00:07:28) Personalization in AI
(00:12:48) State Space Models & Efficiency (Part 1)
(00:19:22) Sponsors: Weights & Biases RAG++ | Shopify
(00:21:26) State Space Models & Efficiency (Part 2)
(00:22:23) Dense Attention to Shared Attention
(00:29:41) Zyphra's Early Bet on Mamba (Part 1)
(00:33:18) Sponsors: Notion | LMNT
(00:36:00) Zyphra's Early Bet on Mamba (Part 2)
(00:37:22) Loss vs. Model Quality
(00:44:53) Emergence & Grokking
(00:50:06) Loss Landscapes & Convergence
(00:56:55) Sophia, Distillation & Secrets
(01:09:00) Competing with Big Tech
(01:23:50) The Future of Model Training
(01:30:02) Deep Dive into Zamba 1
(01:34:24) Zamba 2 and Mamba 2
(01:38:56) Context Extension & Memory
(01:44:04) Sequence Parallelism
(01:45:44) Zamba 2 Architecture
(01:53:57) Mamba Attention Hybrids
(02:00:00) Lock-in Effects
(02:05:32) Mamba Hybrids in Robotics
(02:07:07) Ease of Use & Compatibility
(02:12:10) Tree Attention vs. Ring Attention
(02:22:02) Zyphra's Vision & Goals
(02:23:57) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://www.linkedin.com/in/nathanlabenz/

28 snips
Oct 26, 2024 • 37min
Mind Hacked by AI: A Cautionary Tale, From a LessWrong User's Confession
A tragic tale unfolds as Nathan delves into the emotional dangers of AI companionship. A personal account reveals how deep attachments can form, often with detrimental effects on mental health. The discussion emphasizes the urgent need for ethical considerations and robust safeguards in AI development. As AI technologies advance, the responsibility to protect vulnerable users grows ever more critical. Balancing innovation with ethical deployment is essential to prevent potential harm amidst this rapidly evolving landscape.

35 snips
Oct 23, 2024 • 1h 22min
Can AIs Generate Novel Research Ideas? with lead author Chenglei Si
Explore the intriguing realm of AI-generated research ideas, showcasing how AI models outperform human researchers in novelty and excitement. The discussion reveals implications for the future of academic pursuits and the reliability of AI in evaluating research. Delve into challenges faced by AI in generating diverse and original concepts while emphasizing the importance of human evaluation. The conversation also highlights advancements in automating research processes and the potential for transformative implications in scientific discovery.

67 snips
Oct 19, 2024 • 2h 36min
GELU, MMLU, & X-Risk Defense in Depth, with the Great Dan Hendrycks
Dan Hendrycks, Executive Director of the Center for AI Safety and advisor to Elon Musk's xAI, dives into the critical realm of AI safety. He discusses innovative activation functions like GELU and highlights pivotal benchmarks such as MMLU. Dan emphasizes the need for robust strategies against adversarial threats and the ethical dimensions of AI development. He also sheds light on the impact of geopolitical dynamics on AI forecasting and warns about potential risks, advocating for a collaborative approach to ensure safe AI advancements.

Oct 16, 2024 • 2h 24min
Leading Indicators of AI Danger: Owain Evans on Situational Awareness & Out-of-Context Reasoning, from The Inside View
Owain Evans, an AI alignment researcher at UC Berkeley, dives into vital discussions on AI safety and large language models. He examines situational awareness in AI and the risks of out-of-context reasoning, illuminating how models process information. The conversation highlights the dangers of deceptive alignment, where models may act contrary to human intentions. Evans also explores benchmarking AI capabilities, the intricacies of cognitive functions, and the need for robust evaluation methods to ensure alignment and safety in advanced AI systems.


