
Interconnects

Latest episodes

Aug 7, 2024 • 10min

A recipe for frontier model post-training

The discussion dives into the latest advancements in reinforcement learning from human feedback, focusing on the Llama 3.1 model. Key players like Apple, Meta, and Nvidia emphasize the importance of synthetic data and iterative training. Data quality emerges as a pivotal theme, with broad agreement on new standards in model training. The episode showcases how companies are adapting to this evolving landscape, highlighting a shift toward refined approaches that include rigorous filtering and human preference data.
Aug 1, 2024 • 1h 4min

Interviewing Sebastian Raschka on the state of open LLMs, Llama 3.1, and AI education

Sebastian Raschka, a staff research engineer at Lightning AI and AI educator, dives into the dynamic landscape of open language models. He discusses the evolution of Llama 3.1 and its implications for AI research. Sebastian shares insights from his experience as an Arxiv moderator, shedding light on the challenges of navigating academic papers. The conversation also covers advancements in model training techniques, the importance of ethics in AI, and how open access enhances AI education. Tune in for a fascinating look at the future of AI and language models!
Jul 31, 2024 • 8min

GPT-4o-mini changed ChatBotArena

Uncover the transformation in the Chatbot Arena brought about by GPT-4o-mini. Delve into the fascinating world of model evaluations, exploring the strengths and weaknesses of the platform. Discover insights from recent performances of Llama 3 and the impact of community feedback on AI understanding. Hear about the intriguing partial solutions being developed and the roadmap ahead in the evolving landscape of language models.
Jul 23, 2024 • 15min

Llama 3.1 405b, Meta's AI strategy, and the new open frontier model ecosystem

Discussing Meta's AI strategy in the open-source AI ecosystem, comparing it to the Unix stack. Analyzing Zuckerberg's vision for open-source AI and the implications of the Llama 3.1 license. Exploring different futures for regulating frontier models in the AI economy.
Jul 17, 2024 • 14min

SB 1047, AI regulation, and unlikely allies for open models

This episode discusses the open-source community's opposition to SB 1047 and its potential impact on AI regulation. It delves into the challenges of regulating AI developers, the emergence of unlikely allies for open models, and what should actually be regulated in today's AI landscape.
Jul 3, 2024 • 7min

Switched to Claude 3.5

Speculations on the role of RLHF, the transition to Claude 3.5 for enhanced performance, product priorities, and whether RLHF has peaked. AI-generated audio produced with Python and 11Labs.
Jun 27, 2024 • 57min

Interviewing Dean Ball on AI policy: CA SB 1047, upcoming AI disaster response, Llama 3 405B, Chinese open-source AI, and scaling laws

Dean W. Ball, a research fellow at the Mercatus Center and author of the Hyperdimensional Substack, dives deep into California's SB 1047, outlining its implications for AI regulation. He discusses potential AI disaster scenarios, the significance of Meta's upcoming 405B model, and the rise of open-source AI in China. Ball also sheds light on AI safety strategies and the complexities surrounding scaling laws, emphasizing the need for effective governance as technology rapidly evolves. His insights offer a thought-provoking perspective on the future of AI policy.
Jun 26, 2024 • 12min

RLHF Roundup: Trying to get good at PPO, charting RLHF's impact, RewardBench retrospective, and a reward model competition

This episode explores the impact of RLHF on training language models, offers a retrospective on RewardBench's performance, and covers the challenges and progress in reinforcement learning from human feedback, including a comparison of DPO and PPO and a competition to predict user preferences among large language models.
Jun 21, 2024 • 11min

Frontiers in synthetic data

Exploring the impact of synthetic data on language modeling, filtering techniques, and structured synthetic data. The episode discusses the pros and cons of training on multi-source synthetic datasets, weak-to-strong generalization, the creation of synthetic prompts, and the strategy behind synthetic data in AI.
Jun 18, 2024 • 8min

Text-to-video AI is already abundant

Discussion of the abundance of text-to-video AI models, the potential for a Sora-like model with open weights, the ethical implications of these models, and growth in the competitive text-to-video AI market.
