

Interconnects
Nathan Lambert
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Episodes

Oct 31, 2024 • 54min
Interviewing Andrew Carr of Cartwheel on the State of Generative AI
Andrew Carr, co-founder and chief scientist at Cartwheel, is on a mission to create innovative text-to-motion models for creative fields. He dives into how generative AI can enhance creativity through niche applications, like AI-generated poetry. Andrew shares insights from his time at OpenAI and discusses the fascinating interplay between AI and art, emphasizing the need for human oversight. He also explores the evolving AI landscape and the importance of fostering a positive research culture in tech companies to drive impactful innovations.

Oct 30, 2024 • 10min
(Voiceover) Why I build open language models
Explore the compelling motivations behind the creation of open language models, where inclusivity and transparency are key. Discover how open-source systems can challenge corporate dominance while promoting diversity in tech. The urgency of engaging the public in developing these models is highlighted, stressing collaboration as essential for addressing regulatory challenges and ensuring responsible AI research. Tune in for insights on fostering impactful advancements in the realm of artificial intelligence!

Oct 23, 2024 • 11min
(Voiceover) Claude's agentic future and the current state of the frontier models
Explore the exciting frontier of AI as the podcast delves into the latest on Claude 3.5, Anthropic's cutting-edge model. Discover how it stacks up against Google's Gemini and OpenAI's systems. The discussion highlights the strengths, weaknesses, and future potential of these models. Who will dominate the AI landscape? Tune in for insights on the evolution of these powerful technologies and their implications for automation and reasoning.

Oct 17, 2024 • 54min
Interviewing Arvind Narayanan on making sense of AI hype
Arvind Narayanan, a computer science professor at Princeton and director of the Center for Information Technology Policy, delves into the realities of AI amidst the hype. He discusses the pitfalls of AI policy, emphasizing the need for harm-focused research. The conversation covers the risks of open-source foundation models, critiques of traditional AI in risk prediction, and the implications of scaling laws. Narayanan also sheds light on the balance between innovation and societal impact, highlighting the necessary collaboration between researchers and policymakers.

Oct 16, 2024 • 17min
(Voiceover) Building on evaluation quicksand
Explore the complexities of evaluating language models in the fast-evolving AI landscape. Discover the hidden issues behind closed evaluation silos and the hurdles faced by open evaluation tools. Learn about the cutting-edge frontiers in evaluation methods and the emerging risks of synthetic data contamination. The conversation highlights the necessity for standardized practices to ensure transparency and reliability in model assessments. Tune in for insights that could reshape the evaluation process in artificial intelligence!

Oct 10, 2024 • 1h
Interviewing Andrew Trask on how language models should store (and access) information
Andrew Trask, a passionate AI researcher and leader of the OpenMined organization, shares insights on privacy-preserving AI and data access. He discusses the importance of secure enclaves in AI evaluation and the complexities of copyright laws impacting language models. Trask explores the ethical dilemmas of using non-licensed data, federated learning's potential, and challenges startups face in the AI landscape. He emphasizes the need for innovative infrastructures and the synergy between Digital Rights Management and secure computing for better data governance.

Oct 9, 2024 • 12min
How scaling changes model behavior
Delve into how scaling computational resources changes the behavior of language models. Discover the balance between benefits and challenges in striving for artificial general intelligence, with metaphors that shed light on potential outcomes and an assessment of whether short-term scaling efforts remain viable. Tune in for insights on how these dynamics shape the future of AI.

Oct 2, 2024 • 10min
[Article Voiceover] AI Safety's Crux: Culture vs. Capitalism
The podcast dives into the clash between AI safety and the commercialization frenzy sweeping the industry. Discussions highlight the recent internal turmoil at OpenAI and California's SB 1047 as a test for AI regulations. It examines how the pressure to conform to big tech standards can undermine safety protocols. The tension of capitalism driving innovation while risking ethical considerations makes for a thought-provoking analysis of modern AI challenges.

Sep 30, 2024 • 1h 9min
Interviewing Riley Goodside on the science of prompting
Riley Goodside, a staff prompt engineer at Scale AI and former data scientist, delves into the intricacies of prompt engineering. He shares how writing prompts can be likened to coding and the recent advancements spurred by ChatGPT. The discussion covers various AI models, including o1 and Reflection 70B, emphasizing the importance of evaluation methods and user control in AI interactions. Goodside also highlights the evolving community of prompt engineers and the pressing need for education in effectively utilizing AI.

Sep 27, 2024 • 14min
[Article Voiceover] Llama 3.2 Vision and Molmo: Foundations for the multimodal open-source ecosystem
Dive into the fascinating world of open-source AI with a detailed look at Llama 3.2 Vision and Molmo. Explore how multimodal models enhance capabilities by integrating visual inputs with text. Discover the architectural differences and performance comparisons among leading models. The discussion delves into current challenges, the future of generative AI, and what makes the open-source movement vital for developers. Tune in for insights that bridge technology and creativity in the evolving landscape of AI!