"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Erik Torenberg, Nathan Labenz
395 snips
Jan 25, 2025 • 1h 46min

Emergency Pod: Reinforcement Learning Works! Reflecting on Chinese Reasoning Models DeepSeek-R1 and Kimi k1.5

Discover striking advances toward artificial general intelligence with the release of two Chinese reasoning models, DeepSeek-R1 and Kimi k1.5. The discussion highlights innovative reinforcement learning techniques that improve reasoning skills and drive emergent behaviors. A detailed comparison shows how these models are closing the gap between Chinese and Western AI capabilities. Aspects of global competition and economic implications provide broader context, making this an enlightening exploration of the future of AI.
71 snips
Jan 22, 2025 • 1h 38min

Material Progress: Developing AI's Scientific Intuition, with Orbital Materials' Jonathan Godwin & Tim Duignan

Join Jonathan Godwin, the visionary founder of Orbital Materials, and researcher Tim Duignan as they explore how AI is revolutionizing material science. They dive into breakthroughs in AI-driven simulations for developing new materials, crucial for tackling climate change. The duo discusses their innovative study on potassium ion channels, shedding light on potential medical applications. They also speculate on AI's future role in scientific discovery, blending technology with intuition to push the boundaries of material research.
54 snips
Jan 18, 2025 • 2h 7min

Dodging Latent Space Detectors: Obfuscated Activation Attacks with Luke, Erik, and Scott

Luke Bailey and Erik Jenner, both leading experts on AI safety, dive into their research on obfuscated activation attacks. They dissect methods for bypassing latent-space defenses in AI while examining the vulnerabilities these systems face. The conversation highlights complex topics like backdoor attacks, the importance of diverse datasets, and the ongoing challenge of enhancing model robustness. Their work sheds light on the cat-and-mouse game between attackers and defenders, making it clear that the future of AI safety is as intricate as it is essential.
45 snips
Jan 15, 2025 • 1h 30min

Gene Hunting with o1-pro: Reasoning about Rare Diseases with ChatGPT Pro Grantee Dr. Catherine Brownstein

In this engaging conversation, Dr. Catherine Brownstein, an Assistant Professor at Boston Children's Hospital and Harvard Medical School, discusses her groundbreaking work in identifying genetic causes of rare diseases. She shares how AI, particularly through her ChatGPT Pro grant, is transforming diagnostics and accelerating patient care. The dialogue uncovers the complexities of genetic testing, the emotional impacts on families, and the critical need for collective data sharing to enhance our understanding and treatment of rare diseases. Dr. Brownstein's insights illuminate the promising future of AI in medicine.
121 snips
Jan 8, 2025 • 1h 59min

AI AMA – Part 2: AI Utopia, Consciousness, and the Future of Work

Dive into a mind-bending discussion on the future of AI and its potential to create a utopia. Explore the challenges of consciousness and how humans can adapt as technology evolves. Unpack the intersection of work, leisure, and societal values in an AI-driven world. Delve into the ethical implications of AI, examining the responsibility that comes with advancements. Reflect on the dynamic role of social media in shaping public discourse about artificial intelligence. Engage with diverse perspectives as we navigate this complex technological landscape together.
117 snips
Jan 7, 2025 • 2h 4min

AI AMA – Part 1: OpenAI’s o3, Deliberative Alignment, and AI Surprises of 2024

Adithyan Ilangovan, co-founder of AI Podcasting, dives into the exciting developments in AI, particularly OpenAI's o3 model. He discusses how deliberative alignment could reshape AI safety and governance. The conversation takes a turn towards the implications of AI in education and coding careers, emphasizing the need for adaptability in a rapidly changing job landscape. Adithyan also highlights the role of AI in optimizing communication methods in podcast production, shedding light on both challenges and innovations that lie ahead in the tech world.
163 snips
Jan 3, 2025 • 3h 54min

Teaching AI to See: A Technical Deep-Dive on Vision Language Models with Will Hardman of Veratai

Will Hardman, founder of AI advisory firm Veratai, delves into the intricacies of vision language models (VLMs). He discusses their evolution from traditional techniques to cutting-edge architectures like InternVL and Llama3V. The conversation highlights the importance of multimodality in AI, detailing innovations, architectural choices, and implications for artificial general intelligence. Hardman elaborates on the challenges of image processing, the significance of high-quality datasets, and emerging strategies that enhance VLM performance and reasoning capabilities.
187 snips
Dec 28, 2024 • 1h 55min

roon's Heroic Duty: Will "the Good Guys" Build AGI First? (from Doom Debates)

Guest roon, a notable figure on Twitter and a member of OpenAI's technical staff, shares unique insights into the future of AI development and safety. The conversation dives into the complex challenges of AGI alignment and the responsibilities developers face in navigating existential risks. They discuss the tension between optimism and reality in AI, the nuances of AI corrigibility, and the potential impact of emerging technologies on humanity. He also reflects on personal responsibility and the role of ethical considerations in transformative AI systems.
121 snips
Dec 25, 2024 • 2h 9min

Emad Mostaque on the Intelligent Internet and Universal Basic AI

Emad Mostaque, founder of Stability AI and The Intelligent Internet, dives into the future of artificial intelligence, advocating for universal basic AI. He discusses the balance of AI's benefits and risks, especially regarding ethics and military use. Mostaque shares a vision for an intelligent internet that enhances education and healthcare while emphasizing the need for transparent, open-source AI. He also addresses the complexities of U.S.-China relations in AI and proposes innovative solutions for equitable access across diverse populations.
35 snips
Dec 21, 2024 • 1h 45min

Can AIs do AI R&D? Reviewing REBench Results with Neev Parikh of METR

Neev Parikh, a member of the technical staff at METR, discusses the REBench evaluation framework, designed to assess AI systems' real-world research capabilities. The chat dives into how models like Claude 3.5 and GPT-4 perform on tasks ranging from optimizing GPU kernels to tuning language models. They explore the nuances of AI versus human problem-solving approaches, the challenges of benchmarking, and what current AI performance implies for future research. Insights on AI R&D capabilities and the need for effective evaluation metrics round out the conversation.
