Interconnects

Nathan Lambert
Jun 27, 2024 • 57min

Interviewing Dean Ball on AI policy: CA SB 1047, upcoming AI disaster response, Llama 3 405B, Chinese open-source AI, and scaling laws

Dean W. Ball, a research fellow at the Mercatus Center and author of the Hyperdimensional Substack, dives deep into California's SB 1047, outlining its implications for AI regulation. He discusses potential AI disaster scenarios, the significance of Meta's upcoming 405B model, and the rise of open-source AI in China. Ball also sheds light on AI safety strategies and the complexities surrounding scaling laws, emphasizing the need for effective governance as technology rapidly evolves. His insights offer a thought-provoking perspective on the future of AI policy.
Jun 26, 2024 • 12min

RLHF Roundup: Trying to get good at PPO, charting RLHF's impact, RewardBench retrospective, and a reward model competition

This episode explores the impact of RLHF on training language models, offers a retrospective on RewardBench's performance, and previews a reward-modeling competition. It also covers challenges and progress in reinforcement learning from human feedback, compares DPO and PPO models, and discusses a competition for predicting user preferences among large language models.
Jun 21, 2024 • 11min

Frontiers in synthetic data

Exploring the impact of synthetic data on language modeling, filtering techniques, and structured synthetic data. The episode discusses the pros and cons of training on synthetic datasets drawn from multiple output sources, weak-to-strong generalization, creating synthetic prompts, and the strategy behind synthetic data in AI.
Jun 18, 2024 • 8min

Text-to-video AI is already abundant

Discussion on the abundance of text-to-video AI models, potential for a Sora-like model with open-weights, ethical implications of these models, and growth in the competitive landscape of text-to-video AI market.
Jun 12, 2024 • 13min

AI for the rest of us

The podcast delves into Apple's innovative AI system, discussing core models, alignment strategies, and on-device magic. They explore how Apple optimizes machine learning models on their devices, using adapters and pioneering AppIntents for standardized app functionality.
Jun 5, 2024 • 8min

A realistic path to robotic foundation models

Sergey Levine and Chelsea Finn from Physical Intelligence discuss a realistic path to robotic foundation models, key factors for the future of robotics, and the transformerification of robotics. They explore the shift towards horizontal robotics companies and the importance of building general robotics models for various tasks.
May 29, 2024 • 8min

We aren't running out of training data, we are running out of open training data

Exploring the scarcity of open training data, data licensing deals, scaling language models, and the shift toward synthetic and multimodal data for training. Topics include synthetic data generation at a rate of one trillion tokens per day, the high costs of data licensing deals, and the search for better tokens and new frontiers in AI development.
May 22, 2024 • 9min

Name, image, and AI's likeness

Exploring AI's impact on personal branding and the controversy with OpenAI's latest model. Legal and ethical dimensions of AI on name, image, and likeness. Influence of AI on media, culture, and creativity, and the challenges faced by public figures in protecting their features from unauthorized use.
May 16, 2024 • 12min

OpenAI chases Her

This episode covers GPT-4o's advancements and Google's mirrored announcement with Gemini. It explores the push to create AI companions like Samantha from 'Her' and the competitive drive for product innovation in AI.
May 13, 2024 • 14min

OpenAI's Model (behavior) Spec, RLHF transparency, and personalization questions

Exploring OpenAI's Model Spec as a step toward transparency in steering AI models: the importance of detailing intended behaviors, the complexities of system prompts, the balance between flexibility and control in model behavior, and the role of community norms alongside the spec in everyday AI usage.