

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years. The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co
Episodes

Nov 1, 2024 • 2h 17min
The Case for Trump and the Future of AI – Part 1, with Samuel Hammond, Senior Economist, Foundation for American Innovation
Samuel Hammond, a senior economist at the Foundation for American Innovation and an expert on AI policy, offers intriguing insights into the intersection of politics and technology. He discusses the potential of a Trump presidency to reshape AI development, emphasizing the need for a collaborative approach amidst U.S.-China tensions. Hammond critiques current leadership for failing to prioritize innovation. The conversation delves into the ideological divides affecting AI policy and the necessity for a revitalized social contract in the AI era.

Oct 31, 2024 • 54min
Breaking: Gemini's Major Update - Search, JSON & Code Features Revealed by Google PMs
Logan Kilpatrick and Shrestha Basu Mallick, product managers at Google for the Gemini API and AI Studio, dive deep into the innovative features of Gemini. They discuss the new real-time search grounding capability, enhancing AI's responsiveness with live web information. The conversation highlights Gemini's competitive edge in the AI arena and successful business applications. They also touch on the significance of multimodal applications and the evolving landscape of AI technologies, providing valuable insights for developers.

Oct 30, 2024 • 2h 22min
Training Zamba: A Hybrid Model Master Class with Zyphra's Quentin Anthony
In this episode of The Cognitive Revolution, Nathan dives deep into the world of state space models with returning co-host Jason Meaux and special guest Quentin Anthony, Head of Model Training at Zyphra. Explore the cutting-edge Zamba2-7B model, which combines selective state space and attention mechanisms. Uncover practical insights on model training, architectural choices, and the challenges of scaling AI. From learning schedules to hybrid architectures, loss metrics to context length extension, this technical discussion covers it all. Don't miss this in-depth conversation on the future of personalized, on-device AI.

Check out more about Zyphra and Jason Meaux here:
Zyphra's website: https://www.zyphra.com
Zamba2-7B blog: https://www.zyphra.com/post/zamba2-7b
Zamba2 GitHub: https://github.com/Zyphra/Zamba2
Tree Attention: https://www.zyphra.com/post/tree-attention-topology-aware-decoding-for-long-context-attention-on-gpu-clusters
Jason Meaux's Twitter: https://x.com/KamaraiCode
Jason Meaux's website: https://www.statespace.info

Be notified early when Turpentine drops new publications: https://www.turpentine.co/exclusiveaccess

SPONSORS:
Weights & Biases RAG++: Advanced training for building production-ready RAG applications. Learn from experts to overcome LLM challenges, evaluate systematically, and integrate advanced features. Includes free Cohere credits. Visit https://wandb.me/cr to start the RAG++ course today.
Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive
Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you. Try it for free at https://notion.com/cognitiverevolution
LMNT: LMNT is a zero-sugar electrolyte drink mix that's redefining hydration and performance. Ideal for those who fast or anyone looking to optimize their electrolyte intake. Support the show and get a free sample pack with any purchase at https://drinklmnt.com/tcr

CHAPTERS:
(00:00:00) Teaser
(00:00:42) About the Show
(00:01:05) About the Episode
(00:03:09) Introducing Zyphra
(00:07:28) Personalization in AI
(00:12:48) State Space Models & Efficiency (Part 1)
(00:19:22) Sponsors: Weights & Biases RAG++ | Shopify
(00:21:26) State Space Models & Efficiency (Part 2)
(00:22:23) Dense Attention to Shared Attention
(00:29:41) Zyphra's Early Bet on Mamba (Part 1)
(00:33:18) Sponsors: Notion | LMNT
(00:36:00) Zyphra's Early Bet on Mamba (Part 2)
(00:37:22) Loss vs. Model Quality
(00:44:53) Emergence & Grokking
(00:50:06) Loss Landscapes & Convergence
(00:56:55) Sophia, Distillation & Secrets
(01:09:00) Competing with Big Tech
(01:23:50) The Future of Model Training
(01:30:02) Deep Dive into Zamba 1
(01:34:24) Zamba 2 and Mamba 2
(01:38:56) Context Extension & Memory
(01:44:04) Sequence Parallelism
(01:45:44) Zamba 2 Architecture
(01:53:57) Mamba Attention Hybrids
(02:00:00) Lock-in Effects
(02:05:32) Mamba Hybrids in Robotics
(02:07:07) Ease of Use & Compatibility
(02:12:10) Tree Attention vs. Ring Attention
(02:22:02) Zyphra's Vision & Goals
(02:23:57) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://www.linkedin.com/in/nathanlabenz/

Oct 26, 2024 • 37min
Mind Hacked by AI: A Cautionary Tale, From a LessWrong User's Confession
A tragic tale unfolds as Nathan delves into the emotional dangers of AI companionship. A personal account reveals how deep attachments can form, often with detrimental effects on mental health. The discussion emphasizes the urgent need for ethical considerations and robust safeguards in AI development. As AI technologies advance, the responsibility to protect vulnerable users grows ever more critical. Balancing innovation with ethical deployment is essential to prevent potential harm amidst this rapidly evolving landscape.

Oct 23, 2024 • 1h 22min
Can AIs Generate Novel Research Ideas? with lead author Chenglei Si
Explore the intriguing realm of AI-generated research ideas, showcasing how AI models outperform human researchers in novelty and excitement. The discussion reveals implications for the future of academic pursuits and the reliability of AI in evaluating research. Delve into challenges faced by AI in generating diverse and original concepts while emphasizing the importance of human evaluation. The conversation also highlights advancements in automating research processes and the potential for transformative implications in scientific discovery.

Oct 19, 2024 • 2h 36min
GELU, MMLU, & X-Risk Defense in Depth, with the Great Dan Hendrycks
Dan Hendrycks, Executive Director of the Center for AI Safety and advisor to Elon Musk's XAI, dives into the critical realm of AI safety. He discusses innovative activation functions like GELU and highlights pivotal benchmarks such as MMLU. Dan emphasizes the need for robust strategies against adversarial threats and the ethical dimensions of AI development. He also sheds light on the impact of geopolitical dynamics on AI forecasting and warns about potential risks, advocating for a collaborative approach to ensure safe AI advancements.

Oct 16, 2024 • 2h 24min
Leading Indicators of AI Danger: Owain Evans on Situational Awareness & Out-of-Context Reasoning, from The Inside View
Owain Evans, an AI alignment researcher at UC Berkeley, dives into vital discussions on AI safety and large language models. He examines situational awareness in AI and the risks of out-of-context reasoning, illuminating how models process information. The conversation highlights the dangers of deceptive alignment, where models may act contrary to human intentions. Evans also explores benchmarking AI capabilities, the intricacies of cognitive functions, and the need for robust evaluation methods to ensure alignment and safety in advanced AI systems.

Oct 12, 2024 • 1h 14min
Convergent Evolution: The Co-Revolution of AI & Biology with Professor Michael Levin & Staff Scientist Leo Pio Lopez
Professor Michael Levin, a leading expert on bioelectricity, teams up with Staff Scientist Leo Pio Lopez to explore the fascinating convergence of AI and biology. They discuss their innovative paper linking neurotransmitters to cancer, particularly melanoma, and the groundbreaking use of network embedding techniques for medical advancements. The duo delves into how AI enhances our understanding of complex biological systems, raises philosophical questions about intelligence, and envisions a future where biological and digital intelligences align to enhance human capabilities.

Oct 9, 2024 • 54min
Runway's Video Revolution: Empowering Creators with General World Models, with CTO Anastasis Germanidis
In this enlightening discussion, Anastasis Germanidis, Co-Founder and CTO of RunwayML, shares insights on AI video generation and its creative potential. He explores the groundbreaking Gen 3 models and their impact on democratizing video creation. The conversation delves into the intersection of realism and surrealism in filmmaking, highlighting how generative AI can enhance human creativity. Anastasis also discusses the evolution of AI in the creative industry, touching on user expectations and the balance between advanced technology and traditional skills.

Oct 5, 2024 • 2h
Biologically Inspired AI Alignment & Neglected Approaches to AI Safety, with Judd Rosenblatt and Mike Vaiana of AE Studio
Judd Rosenblatt is the CEO of AE Studio, a firm that shifted focus from brain-computer interfaces to AI alignment research, while Mike Vaiana serves as R&D Director, pioneering innovative approaches. They delve into biologically inspired methods for AI safety, emphasizing a unique self-other overlap for minimizing deception. Their research also addresses self-modeling in AI systems, highlighting the balance of predictability and cooperation. This thought-provoking dialogue showcases groundbreaking strategies that could reshape AI alignment and mitigate safety risks.