
Interconnects
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Latest episodes

Nov 14, 2024 • 4min
(Voiceover) Scaling realities
Dive into the debate surrounding AI scalability versus AGI expectations. Discover the successes and limitations of large AI models, and why specialized models might hold the key to future advancements. Engage with insights on how the landscape of artificial intelligence is evolving amidst varying expectations. This thought-provoking discussion sheds light on the complexities of the AI field and its potential.

Nov 13, 2024 • 11min
(Voiceover) Saving the National AI Research Resource & my AI policy outlook
Explore the vital role of the National AI Research Resource in shaping the future of AI in the U.S. The discussion emphasizes the importance of accountability and transparency in AI policy. Additionally, the potential impact of political changes on AI research and development is examined, providing insights into what the future may hold for the industry.

Nov 7, 2024 • 1h 16min
Interviewing Tim Dettmers on open-source AI: Agents, scaling, quantization and what's next
Join Tim Dettmers, a leading figure in open-source AI development and a future Carnegie Mellon professor, as he shares insights on the transformative potential of open-source AI models. He discusses the challenges of quantization and GPU resource efficiency, emphasizing their role in driving innovation. Tim also explores the evolving landscape of AI technology, comparing its impact to the internet revolution, while addressing the delicate balance between academic research and real-world applications. His passionate perspective offers a fresh take on the future of AI!

Oct 31, 2024 • 54min
Interviewing Andrew Carr of Cartwheel on the State of Generative AI
Andrew Carr, co-founder and chief scientist at Cartwheel, is on a mission to create innovative text-to-motion models for creative fields. He dives into how generative AI can enhance creativity through niche applications, like AI-generated poetry. Andrew shares insights from his time at OpenAI and discusses the fascinating interplay between AI and art, emphasizing the need for human oversight. He also explores the evolving AI landscape and the importance of fostering a positive research culture in tech companies to drive impactful innovations.

Oct 30, 2024 • 10min
(Voiceover) Why I build open language models
Explore the compelling motivations behind the creation of open language models, where inclusivity and transparency are key. Discover how open-source systems can challenge corporate dominance while promoting diversity in tech. The urgency of engaging the public in developing these models is highlighted, stressing collaboration as essential for addressing regulatory challenges and ensuring responsible AI research. Tune in for insights on fostering impactful advancements in the realm of artificial intelligence!

Oct 23, 2024 • 11min
(Voiceover) Claude's agentic future and the current state of the frontier models
Explore the exciting frontier of AI as the podcast delves into the latest on Claude 3.5, Anthropic's cutting-edge model. Discover how it stacks up against Google's Gemini and OpenAI's systems. The discussion highlights the strengths, weaknesses, and future potential of these models. Who will dominate the AI landscape? Tune in for insights on the evolution of these powerful technologies and their implications for automation and reasoning.

Oct 17, 2024 • 54min
Interviewing Arvind Narayanan on making sense of AI hype
Arvind Narayanan, a computer science professor at Princeton and director of the Center for Information Technology Policy, delves into the realities of AI amidst the hype. He discusses the pitfalls of AI policy, emphasizing the need for harm-focused research. The conversation covers the risks of open-source foundation models, critiques of traditional AI in risk prediction, and the implications of scaling laws. Narayanan also sheds light on the balance between innovation and societal impact, highlighting the necessary collaboration between researchers and policymakers.

Oct 16, 2024 • 17min
(Voiceover) Building on evaluation quicksand
Explore the complexities of evaluating language models in the fast-evolving AI landscape. Discover the hidden issues behind closed evaluation silos and the hurdles faced by open evaluation tools. Learn about the cutting-edge frontiers in evaluation methods and the emerging risks of synthetic data contamination. The conversation highlights the necessity for standardized practices to ensure transparency and reliability in model assessments. Tune in for insights that could reshape the evaluation process in artificial intelligence!

Oct 10, 2024 • 1h
Interviewing Andrew Trask on how language models should store (and access) information
Andrew Trask, a passionate AI researcher and leader of the OpenMined organization, shares insights on privacy-preserving AI and data access. He discusses the importance of secure enclaves in AI evaluation and the complexities of copyright laws impacting language models. Trask explores the ethical dilemmas of using non-licensed data, federated learning's potential, and challenges startups face in the AI landscape. He emphasizes the need for innovative infrastructures and the synergy between Digital Rights Management and secure computing for better data governance.

Oct 9, 2024 • 12min
How scaling changes model behavior
Delve into how scaling computational resources changes the behavior of language models. Discover the balance between the benefits and challenges of striving for artificial general intelligence. Metaphors shed light on potential solutions, while the viability of near-term scaling efforts is assessed. Tune in for insights on how these dynamics shape the future of AI.