The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
12 snips
Aug 27, 2024 • 46min

The EU AI Act and Mitigating Bias in Automated Decisioning with Peter van der Putten - #699

In this engaging discussion, Peter van der Putten, director of the AI Lab at Pega and an assistant professor at Leiden University, dives deep into the implications of the newly adopted EU AI Act. He explains the ethical principles that motivate the regulation and the complexities of applying fairness metrics in real-world AI applications. The conversation highlights the challenges of mitigating bias, the significance of transparency, and how the Act could shape global AI practices, much as GDPR did for data privacy.
135 snips
Aug 19, 2024 • 59min

The Building Blocks of Agentic Systems with Harrison Chase - #698

Harrison Chase, co-founder and CEO of LangChain, shares insights from his extensive background in machine learning and MLOps. He discusses the evolution of agentic systems, emphasizing their real-world applications and communication needs. Harrison delves into Retrieval-Augmented Generation (RAG) and the importance of observability tools for enhancing agent development. He also highlights the challenges of transitioning prototypes to production and offers his hot takes on prompting and multi-modal models, providing a glimpse into the future of LLM applications.
Aug 12, 2024 • 47min

Simplifying On-Device AI for Developers with Siddhika Nevrekar - #697

Siddhika Nevrekar, Head of AI Hub at Qualcomm Technologies, discusses simplifying on-device AI for developers. She highlights the shift from cloud to local device processing, emphasizing privacy and offline access. The conversation covers the challenges of optimizing AI across varied hardware and the collaboration needed between AI frameworks and manufacturers. Siddhika also introduces Qualcomm's AI Hub, aimed at streamlining model testing and fostering innovation across IoT, autonomous vehicles, and AI-integrated user experiences.
5 snips
Aug 5, 2024 • 47min

Genie: Generative Interactive Environments with Ashley Edwards - #696

In this conversation, Ashley Edwards, a member of the technical staff at Runway with past affiliations at Google DeepMind and Uber, reveals the innovative Genie project. They discuss Genie’s ability to create interactive video environments for training reinforcement learning agents without supervision. Topics include the mechanics of latent action models, video tokenization, and dynamics modeling for frame prediction. Ashley highlights the practical implications of Genie and compares it to other models like Sora, mapping out future directions in video generation.
12 snips
Jul 30, 2024 • 57min

Bridging the Sim2real Gap in Robotics with Marius Memmel - #695

Marius Memmel, a PhD student at the University of Washington, dives into the fascinating world of sim-to-real transfer in robotics. He discusses the complexities of training robots in cluttered environments and how his ASID framework helps improve simulation models. They explore Fisher information's role in optimizing robot learning and the importance of balancing exploration and exploitation. The conversation also highlights his URDFormer model for realistic scene reconstruction, showcasing innovative methods to enhance robotic interactions with their surroundings.
149 snips
Jul 23, 2024 • 1h 20min

Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain - #694

In this discussion, Hamel Husain, founder of Parlance Labs, dives into the practicalities of leveraging large language models (LLMs) for real-world applications. Husain shares insights on fine-tuning techniques, including tools like Axolotl and the advantages of LoRA for efficient model adjustments. He emphasizes the importance of thoughtful user interface design and systematic evaluation strategies to enhance AI's effectiveness. The conversation also highlights challenges in data curation and the need for accurate metrics in domain-specific projects, ensuring robust AI development.
20 snips
Jul 17, 2024 • 58min

Mamba, Mamba-2 and Post-Transformer Architectures for Generative AI with Albert Gu - #693

In this discussion, Albert Gu, an assistant professor at Carnegie Mellon University, dives into his research on post-transformer architectures. He explains the efficiency and challenges of the attention mechanism, particularly in managing high-resolution data. The conversation highlights the significance of tokenization in enhancing model effectiveness. Gu also explores hybrid models that blend attention with state-space elements and emphasizes the groundbreaking advancements brought by his Mamba and Mamba-2 architectures. His vision for the future of multi-modal foundation models is both insightful and inspiring.
Jul 9, 2024 • 43min

Decoding Animal Behavior to Train Robots with EgoPet with Amir Bar - #692

Join Amir Bar, a PhD candidate at Tel Aviv University and UC Berkeley, as he unpacks his groundbreaking research on visual-based learning and self-supervised object detection. He introduces ‘EgoPet,’ a unique dataset that captures animals' behavior from their own perspective, aiming to bridge the gap between AI and nature. The discussion dives into the challenges of current classification methods, the significance of ego-centric data in robotic training, and the potential to enhance robotic navigation by mimicking animal locomotion. Exploring these topics reveals fascinating insights into future AI advancements.
9 snips
Jul 1, 2024 • 57min

How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird - #691

Join Sarah Bird, Chief Product Officer of Responsible AI at Microsoft, as she dives into the essentials of generative AI testing and safety. Explore the challenges of AI hallucinations and the importance of balancing fairness with security. Hear how Microsoft's past failures, like Tay and Bing Chat, underscore the need for adaptive testing and human oversight. Sarah also discusses innovative methods like automated safety testing and red teaming, emphasizing a robust governance framework for evolving AI technologies.
9 snips
Jun 25, 2024 • 46min

Long Context Language Models and their Biological Applications with Eric Nguyen - #690

Eric Nguyen, a PhD student at Stanford, dives deep into his research on long context foundation models, specifically Hyena and its applications in biology. He explains the limitations of traditional transformers in processing lengthy sequences and how convolutional models provide innovative solutions. Nguyen introduces Hyena DNA, designed for long-range DNA dependencies, and discusses Evo, a hybrid model with massive parameters for DNA generation. The podcast touches on exciting applications in CRISPR gene editing and the implications of using AI in biological research.
