

Interconnects
Nathan Lambert
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Episodes

Dec 4, 2024 • 12min
(Voiceover) OpenAI's o1 using "search" was a PSYOP
Delve into the training methodologies behind OpenAI's o1 model, featuring techniques like guess-and-check and process rewards. Discover how managing compute at test time plays a critical role in the model's performance and in future AI developments. The discussion also unpacks the model's relation to search methods and its use of reinforcement learning from human feedback. Speculation about finer control over AI generation and the influence of reward systems adds an intriguing twist.

Nov 26, 2024 • 10min
(Voiceover) OLMo 2 and building effective teams for training language models
Full post: https://www.interconnects.ai/p/olmo-2-and-building-language-model-training
OLMo 2 demo: https://playground.allenai.org/
OLMo 2 artifacts: https://huggingface.co/collections/allenai/olmo-2-674117b93ab84e98afc72edc
Chapters:
00:00 Building AI Teams
06:35 OLMo 2
Figures:
Fig 1, pretrain plot: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/pretrain.webp
Fig 2, pretrain table: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/pretrain-table.webp
Fig 3, post-train table: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/postrain-table.webp
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.interconnects.ai/subscribe

Nov 21, 2024 • 8min
(Voiceover) Tülu 3: The next era in open post-training
Dive into the fascinating evolution of open post-training for language models! Discover how techniques like direct preference optimization are reshaping the landscape post-ChatGPT. The conversation unveils innovative methodologies such as scaling prompts and the role of reinforcement learning with verifiable rewards. Get a sneak peek into future developments aimed at enhancing open-weight models, and see how this competitive drive is pushing the boundaries of what AI can achieve!

Nov 14, 2024 • 4min
(Voiceover) Scaling realities
Dive into the debate surrounding AI scalability versus AGI expectations. Discover the successes and limitations of large AI models, and why specialized models might hold the key to future advancements. Engage with insights on how the landscape of artificial intelligence is evolving amidst varying expectations. This thought-provoking discussion sheds light on the complexities of the AI field and its potential.

Nov 13, 2024 • 11min
(Voiceover) Saving the National AI Research Resource & my AI policy outlook
Explore the vital role of the National AI Research Resource in shaping the future of AI in the U.S. The discussion emphasizes the importance of accountability and transparency in AI policy. Additionally, the potential impact of political changes on AI research and development is examined, providing insights into what the future may hold for the industry.

Nov 7, 2024 • 1h 16min
Interviewing Tim Dettmers on open-source AI: Agents, scaling, quantization and what's next
Join Tim Dettmers, a leading figure in open-source AI development and a future Carnegie Mellon professor, as he shares insights on the transformative potential of open-source AI models. He discusses the challenges of quantization and GPU resource efficiency, emphasizing their role in driving innovation. Tim also explores the evolving landscape of AI technology, comparing its impact to the internet revolution, while addressing the delicate balance between academic research and real-world applications. His passionate perspective offers a fresh take on the future of AI!

Oct 31, 2024 • 54min
Interviewing Andrew Carr of Cartwheel on the State of Generative AI
Andrew Carr, co-founder and chief scientist at Cartwheel, is on a mission to create innovative text-to-motion models for creative fields. He dives into how generative AI can enhance creativity through niche applications, like AI-generated poetry. Andrew shares insights from his time at OpenAI and discusses the fascinating interplay between AI and art, emphasizing the need for human oversight. He also explores the evolving AI landscape and the importance of fostering a positive research culture in tech companies to drive impactful innovations.

Oct 30, 2024 • 10min
(Voiceover) Why I build open language models
Explore the compelling motivations behind the creation of open language models, where inclusivity and transparency are key. Discover how open-source systems can challenge corporate dominance while promoting diversity in tech. The urgency of engaging the public in developing these models is highlighted, stressing collaboration as essential for addressing regulatory challenges and ensuring responsible AI research. Tune in for insights on fostering impactful advancements in the realm of artificial intelligence!

Oct 23, 2024 • 11min
(Voiceover) Claude's agentic future and the current state of the frontier models
Explore the exciting frontier of AI as the podcast delves into the latest on Claude 3.5, Anthropic's cutting-edge model. Discover how it stacks up against Google's Gemini and OpenAI's systems. The discussion highlights the strengths, weaknesses, and future potential of these models. Who will dominate the AI landscape? Tune in for insights on the evolution of these powerful technologies and their implications for automation and reasoning.

Oct 17, 2024 • 54min
Interviewing Arvind Narayanan on making sense of AI hype
Arvind Narayanan, a computer science professor at Princeton and director of the Center for Information Technology Policy, delves into the realities of AI amidst the hype. He discusses the pitfalls of AI policy, emphasizing the need for harm-focused research. The conversation covers the risks of open-source foundation models, critiques of traditional AI in risk prediction, and the implications of scaling laws. Narayanan also sheds light on the balance between innovation and societal impact, highlighting the necessary collaboration between researchers and policymakers.


