

Interconnects
Nathan Lambert
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Episodes

Oct 16, 2024 • 17min
(Voiceover) Building on evaluation quicksand
Explore the complexities of evaluating language models in the fast-evolving AI landscape. Discover the hidden issues behind closed evaluation silos and the hurdles faced by open evaluation tools. Learn about the frontiers of evaluation methods and the emerging risks of synthetic data contamination. The conversation highlights the need for standardized practices to ensure transparency and reliability in model assessments. Tune in for insights that could reshape how language models are evaluated!

Oct 10, 2024 • 1h
Interviewing Andrew Trask on how language models should store (and access) information
Andrew Trask, a passionate AI researcher and leader of the OpenMined organization, shares insights on privacy-preserving AI and data access. He discusses the importance of secure enclaves in AI evaluation and the complexities of copyright laws impacting language models. Trask explores the ethical dilemmas of using non-licensed data, federated learning's potential, and challenges startups face in the AI landscape. He emphasizes the need for innovative infrastructures and the synergy between Digital Rights Management and secure computing for better data governance.

Oct 9, 2024 • 12min
How scaling changes model behavior
Delve into how scaling computational resources changes the behavior of language models. Discover the balance between the benefits and challenges of striving for artificial general intelligence. Metaphors shed light on what scaling can and cannot deliver, and short-term scaling efforts are assessed for viability. Tune in for insights on how these dynamics shape the future of AI.

Oct 2, 2024 • 10min
[Article Voiceover] AI Safety's Crux: Culture vs. Capitalism
The podcast dives into the clash between AI safety and the commercialization frenzy sweeping the industry. Discussions highlight the recent internal turmoil at OpenAI and California's SB 1047 as a test for AI regulations. It examines how the pressure to conform to big tech standards can undermine safety protocols. The tension of capitalism driving innovation while risking ethical considerations makes for a thought-provoking analysis of modern AI challenges.

Sep 30, 2024 • 1h 9min
Interviewing Riley Goodside on the science of prompting
Riley Goodside, a staff prompt engineer at Scale AI and former data scientist, delves into the intricacies of prompt engineering. He shares how writing prompts can be likened to coding and the recent advancements spurred by ChatGPT. The discussion covers various AI models, including o1 and Reflection 70B, emphasizing the importance of evaluation methods and user control in AI interactions. Goodside also highlights the evolving community of prompt engineers and the pressing need for education in effectively utilizing AI.

Sep 27, 2024 • 14min
[Article Voiceover] Llama 3.2 Vision and Molmo: Foundations for the multimodal open-source ecosystem
Dive into the fascinating world of open-source AI with a detailed look at Llama 3.2 Vision and Molmo. Explore how multimodal models enhance capabilities by integrating visual inputs with text. Discover the architectural differences and performance comparisons among leading models. The discussion delves into current challenges, the future of generative AI, and what makes the open-source movement vital for developers. Tune in for insights that bridge technology and creativity in the evolving landscape of AI!

Sep 17, 2024 • 19min
[Article Voiceover] Reverse engineering OpenAI's o1
Dive into the future of AI with OpenAI's o1 reasoning system. Explore its novel training methods and real-time inference capabilities. Delve into the challenges of scaling reinforcement learning models and the complexities of balancing human preferences with computational needs. Discover the evolution of language models and their potential to become more deeply integrated tools in our lives. It's an exciting look at cutting-edge developments in artificial intelligence!

Sep 11, 2024 • 12min
Futures of the data foundry business model
The discussion dives into the competitive dynamics of data foundries, contrasting synthetic and human-annotated data for AI training. It explores the implications of advancing reinforcement learning with human feedback. A key focus is the future of data foundries as AI dependence escalates, highlighting potential growth vectors and the associated risks. The conversation also touches on how companies like Nvidia could dominate profits in the evolving data market. Expect insights that provoke thought about the future of AI and data sourcing!

Sep 10, 2024 • 6min
A post-training approach to AI regulation with Model Specs
Discover the pivotal role of model specifications in AI regulation. The discussion delves into current regulatory trends, emphasizing transparency and responsible AI use. It highlights the importance of documenting intentions behind computational models, fostering connections among stakeholders. The hosts explore how clear specifications can mitigate risks and anticipate future developments, paving the way for ongoing dialogue in the rapidly evolving AI landscape.

Sep 5, 2024 • 11min
OpenAI's Strawberry, LM self-talk, inference scaling laws, and spending more on inference
Discover the fascinating advancements in AI with OpenAI's Strawberry method, designed to enhance reasoning in language models. The discussion reveals the importance of inference spending and structural changes shaping future AI products. Dive into the complexities of scaling inference, where reinforcement learning and reward models play a pivotal role. Understand why optimizing inference time is crucial and explore promising avenues for further research in this rapidly evolving field.


