

Interconnects
Nathan Lambert
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Episodes

Dec 20, 2024 • 18min
(Voiceover) OpenAI's o3: The grand finale of AI in 2024
Original post: https://www.interconnects.ai/p/openais-o3-the-2024-finale-of-ai

Chapters
00:00 Introduction
02:51 o3 overview
05:57 Solving the Abstraction and Reasoning Corpus (ARC)
10:41 o3’s architecture, cost, and training (hint: still no tree search)
16:36 2024: RL returns

Figures
Fig 1, Frontier Math results
Fig 2, Coding results
Fig 3, ARC AGI results
Fig 4, ARC AGI result details
Fig 5, ARC AGI example 1
Fig 6, ARC AGI example in text
Fig 7, ARC AGI example “easy”

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.interconnects.ai/subscribe

Dec 18, 2024 • 11min
(Voiceover) The AI agent spectrum
Dive into the intriguing world of AI agents and their diverse applications. Explore how the categorization of these agents is evolving, with a focus on their complexities and future potential. Discover the dynamics of feedback in reinforcement learning, and the differences between closed and open-ended agents. The discussion also delves into regulation and societal impact, shedding light on user experiences and expectations for AI. Prepare for a thought-provoking look at the next frontier of artificial intelligence.

Dec 11, 2024 • 13min
(Voiceover) OpenAI's Reinforcement Finetuning and RL for the masses
Original post: https://www.interconnects.ai/p/openais-reinforcement-finetuning

Chapters
00:00 Introduction
04:19 The impact of reinforcement finetuning’s existence
07:29 Hypotheses on reinforcement finetuning’s implementation

Figures
Fig. 1, Yann’s Cake
Fig. 2, Grader config
Fig. 3, RLVR learning curves

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.interconnects.ai/subscribe

Dec 5, 2024 • 1h 9min
Interviewing Finbarr Timbers on the "We are So Back" Era of Reinforcement Learning
Finbarr Timbers, an AI researcher with a background at DeepMind and Midjourney, dives deep into the world of reinforcement learning. He explains the evolution of RL, from fundamental algorithms to its resurgence with breakthroughs like AlphaZero and ChatGPT. Timbers shares stories about teaching AI to tackle Atari games and discusses modern advancements in natural language processing. He highlights the growing importance of data annotation in RL and contrasts the pressure of deadlines in tech with lessons from endurance sports, emphasizing innovation.

Dec 4, 2024 • 12min
(Voiceover) OpenAI's o1 using "search" was a PSYOP
Delve into the innovative training methodologies of OpenAI's o1 model, featuring techniques like guess-and-check and process rewards. Discover how compute management plays a critical role in testing and future AI developments. The discussion also unpacks the model’s relationship to search methods and its use of reinforcement learning from human feedback. Speculation about advancements in AI generation control and the influence of reward systems adds an intriguing twist.
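For readers unfamiliar with the "guess and check" idea referenced in this episode, here is a minimal, hypothetical sketch of sampling candidate answers and keeping only those that pass a verifiable check. The function names and setup are illustrative assumptions, not OpenAI's actual training code.

```python
# Hypothetical sketch: sample candidate answers, keep only verifiably correct ones.
# `generate` is an assumed stand-in for any language-model sampling function.
def verifiable_reward(answer: str, ground_truth: str) -> float:
    """Binary reward: 1.0 if the answer matches the known solution."""
    return 1.0 if answer.strip() == ground_truth.strip() else 0.0

def guess_and_check(generate, prompt: str, ground_truth: str, n_samples: int = 8):
    candidates = [generate(prompt) for _ in range(n_samples)]
    # Keep only samples that the checker verifies; these could then serve
    # as reinforcement or finetuning targets.
    return [c for c in candidates if verifiable_reward(c, ground_truth) > 0.0]
```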

Nov 26, 2024 • 10min
(Voiceover) OLMo 2 and building effective teams for training language models
Full post: https://www.interconnects.ai/p/olmo-2-and-building-language-model-training
OLMo 2 demo: https://playground.allenai.org/
OLMo 2 artifacts: https://huggingface.co/collections/allenai/olmo-2-674117b93ab84e98afc72edc

Chapters
00:00 Building AI Teams
06:35 OLMo 2

Figures
Fig 1, pretrain plot: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/pretrain.webp
Fig 2, pretrain table: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/pretrain-table.webp
Fig 3, post-train table: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/postrain-table.webp

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.interconnects.ai/subscribe

Nov 21, 2024 • 8min
(Voiceover) Tülu 3: The next era in open post-training
Dive into the fascinating evolution of open post-training for language models! Discover how techniques like direct preference optimization are reshaping the landscape post-ChatGPT. The conversation unveils innovative methodologies such as scaling prompts and the role of reinforcement learning with verifiable rewards. Get a sneak peek into future developments aimed at enhancing open weight models, and see how this competitive drive is pushing the boundaries of what AI can achieve!
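As a rough reference for the direct preference optimization technique mentioned in this episode, below is a minimal sketch of the DPO loss over summed response log-probabilities. The tensor names and beta value are illustrative assumptions, not taken from the Tülu 3 training setup.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct preference optimization loss over summed response log-probs."""
    # Implicit rewards: scaled log-ratio of policy to reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the chosen response's implicit reward above the rejected one's.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```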

Nov 14, 2024 • 4min
(Voiceover) Scaling realities
Dive into the debate surrounding AI scalability versus AGI expectations. Discover the successes and limitations of large AI models, and why specialized models might hold the key to future advancements. Engage with insights on how the landscape of artificial intelligence is evolving amidst varying expectations. This thought-provoking discussion sheds light on the complexities of the AI field and its potential.

Nov 13, 2024 • 11min
(Voiceover) Saving the National AI Research Resource & my AI policy outlook
Explore the vital role of the National AI Research Resource in shaping the future of AI in the U.S. The discussion emphasizes the importance of accountability and transparency in AI policy. Additionally, the potential impact of political changes on AI research and development is examined, providing insights into what the future may hold for the industry.

Nov 7, 2024 • 1h 16min
Interviewing Tim Dettmers on open-source AI: Agents, scaling, quantization and what's next
Join Tim Dettmers, a leading figure in open-source AI development and a future Carnegie Mellon professor, as he shares insights on the transformative potential of open-source AI models. He discusses the challenges of quantization and GPU resource efficiency, emphasizing their role in driving innovation. Tim also explores the evolving landscape of AI technology, comparing its impact to the internet revolution, while addressing the delicate balance between academic research and real-world applications. His passionate perspective offers a fresh take on the future of AI!
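As a rough illustration of the quantization ideas discussed in this episode, here is a minimal sketch of symmetric (absmax) 8-bit weight quantization. It is a simplified toy under assumed names, not the bitsandbytes or LLM.int8() implementation.

```python
import torch

def absmax_quantize(weights: torch.Tensor):
    """Symmetric 8-bit quantization using the tensor's absolute maximum."""
    scale = 127.0 / weights.abs().max()
    q = torch.clamp((weights * scale).round(), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.to(torch.float32) / scale

# Toy usage: quantize random weights and inspect the reconstruction error.
w = torch.randn(4, 4)
q, s = absmax_quantize(w)
print((w - dequantize(q, s)).abs().max())
```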