"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Erik Torenberg, Nathan Labenz
23 snips
Oct 12, 2024 • 1h 14min

Convergent Evolution: The Co-Revolution of AI & Biology with Professor Michael Levin & Staff Scientist Leo Pio Lopez

Professor Michael Levin, a leading expert on bioelectricity, teams up with Staff Scientist Leo Pio Lopez to explore the fascinating convergence of AI and biology. They discuss their innovative paper linking neurotransmitters to cancer, particularly melanoma, and the groundbreaking use of network embedding techniques for medical advancements. The duo delves into how AI enhances our understanding of complex biological systems, raises philosophical questions about intelligence, and envisions a future where biological and digital intelligences align to enhance human capabilities.
14 snips
Oct 9, 2024 • 54min

Runway's Video Revolution: Empowering Creators with General World Models, with CTO Anastasis Germanidis

In this enlightening discussion, Anastasis Germanidis, Co-Founder and CTO of RunwayML, shares insights on AI video generation and its creative potential. He explores the groundbreaking Gen 3 models and their impact on democratizing video creation. The conversation delves into the intersection of realism and surrealism in filmmaking, highlighting how generative AI can enhance human creativity. Anastasis also discusses the evolution of AI in the creative industry, touching on user expectations and the balance between advanced technology and traditional skills.
43 snips
Oct 5, 2024 • 2h

Biologically Inspired AI Alignment & Neglected Approaches to AI Safety, with Judd Rosenblatt and Mike Vaiana of AE Studio

Judd Rosenblatt is the CEO of AE Studio, a firm that shifted focus from brain-computer interfaces to AI alignment research, while Mike Vaiana serves as R&D Director, pioneering innovative approaches. They delve into biologically inspired methods for AI safety, emphasizing a unique self-other overlap for minimizing deception. Their research also addresses self-modeling in AI systems, highlighting the balance of predictability and cooperation. This thought-provoking dialogue showcases groundbreaking strategies that could reshape AI alignment and mitigate safety risks.
17 snips
Oct 2, 2024 • 1h 10min

Automating Software Engineering: Genie Tops SWE-Bench, w/ Alistair Pullen, from Latent.Space podcast

Alistair Pullen, Co-founder of Cosine, shares his journey from university to tech entrepreneur while developing AI tools like Genie. He discusses how Cosine achieves remarkable results on the SWE-bench benchmark through innovative automation techniques. The conversation dives into the pivotal role of generative AI in enhancing coding efficiency, challenges developers face, and the importance of historical customer data. Alistair emphasizes the need for adaptability in the ever-evolving landscape of software engineering.
72 snips
Sep 26, 2024 • 55min

Zapier's AI Revolution: From No-Code Pioneer to LLM Knowledge Worker

Wade Foster, co-founder and CEO of Zapier, explores the evolution of their platform from no-code pioneer to an AI-driven powerhouse. He discusses how Zapier is integrating AI for business efficiency and the challenges of implementing automation. Wade shares insights on the effective use of large language models and the importance of clear AI prompting. He also highlights advancements in AI technology and how they enhance workflow automation. Tune in for valuable perspectives on AI's transformative role in modern businesses.
11 snips
Sep 25, 2024 • 2h 39min

Anthropic's Responsible Scaling Policy, with Nick Joseph, from the 80,000 Hours Podcast

Nick Joseph, Head of Training at Anthropic, discusses the pivotal topic of responsible scaling in AI development. He examines Anthropic's proactive safety measures and the importance of transparency in AI risks. Joseph emphasizes the need for public scrutiny and collaboration among tech companies to enhance safety frameworks. Additionally, he shares insights about the career opportunities in AI safety and the evolving landscape of AI technology, advocating for rigorous testing and ethical practices to navigate potential challenges.
6 snips
Sep 20, 2024 • 1h 18min

The Evolution Revolution: Scouting Frontiers in AI for Biology with Brian Hie

Brian Hie, a Stanford assistant professor focused on AI in biology, shares groundbreaking insights into how artificial intelligence is transforming biological research. He discusses innovative AI architectures and the surprising capabilities of language models trained on DNA sequences. The conversation explores the role of AI in drug discovery, the evolution of antibodies, and the ethical considerations of employing AI in biotechnology. With an emphasis on interpretability and collaborative efforts, Hie illustrates a promising future where AI unlocks new possibilities in biology.
57 snips
Sep 18, 2024 • 2h 4min

The Professional Network for AI Agents, with Agent.ai Engineering Lead Andrei Oprisan

Andrei Oprisan, Engineering Lead at Agent.ai, shares his passion for AI agents and their revolutionary impact on the workplace. He dives into best practices for building AI models and emphasizes the importance of fine-tuning and database choices. The conversation touches on AI's evolving role in decision-making and social media management, highlighting the significance of collaboration between humans and AI. Oprisan also discusses the ethical implications of AI in society, advocating for a future where technology enhances human productivity without displacing workers.
22 snips
Sep 14, 2024 • 59min

Red Teaming o1, Part 2/2 – Detecting Deception with Marius Hobbhahn of Apollo Research

Marius Hobbhahn, Founder and CEO of Apollo Research, specializes in AI safety and deception detection. In this discussion, he dives into the implications of OpenAI's o1 and o1-mini models, emphasizing their enhanced reasoning skills and potential risks of deception. The conversation sheds light on new advancements at Apollo Research, the evaluation of AI models under pressure, and the significance of qualitative analysis in understanding AI behavior. Hobbhahn also addresses the ethical concerns surrounding AI autonomy and the need for effective benchmarks.
40 snips
Sep 14, 2024 • 1h 6min

Red Teaming o1, Part 1/2 – Automated Jailbreaking with Haize Labs' Leonard Tang, Aidan Ewart, and Brian Huang

Leonard Tang and Brian Huang from Haize Labs share their insights on AI model vulnerabilities and automated jailbreaking techniques. They discuss the crucial role of the o1 Red Team in testing OpenAI's latest reasoning models, emphasizing the balance between AI's advanced capabilities and potential risks. The conversation delves into automated red teaming strategies, the challenges of evaluating AI safety, and the ongoing battle between model functionality and security measures. Tune in for a deep dive into the future of AI technology and its implications!
