

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. It is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
Episodes

May 12, 2022 • 42min
Data Rights, Quantification and Governance for Ethical AI with Margaret Mitchell - #572
Meg Mitchell, Chief Ethics Scientist at Hugging Face, dives into the crucial interplay between ethical AI and data governance. She discusses her transition from big tech to prioritizing coding in her current role, emphasizing the importance of diverse data representation. Meg highlights evolving data curation practices, ethical documentation through Model Cards, and the pressing need for transparency to mitigate biases in AI. The conversation also touches on challenges in distinguishing AI-generated content from human-written material, raising concerns about misinformation.

May 9, 2022 • 53min
Studying Machine Intelligence with Been Kim - #571
Been Kim, a staff research scientist at Google Brain and ICLR 2022 speaker, dives into the fascinating world of AI interpretability. She discusses the current state of interpretability techniques, exploring how Gestalt principles can enhance our understanding of neural networks. Been proposes a novel language for human-AI communication, aimed at improving collaboration and transparency. The conversation also touches on the evolution of AI tools, the unique insights from AlphaZero in chess, and the implications of model fingerprints for data privacy.

May 2, 2022 • 38min
Advances in Neural Compression with Auke Wiggers - #570
Auke Wiggers, an AI research scientist at Qualcomm, dives into the exciting realm of neural data compression. He discusses how generative models and transformer architectures are revolutionizing image and video coding. The conversation highlights the shift from traditional techniques to neural codecs that learn from examples, and the impressive real-time performance on mobile devices. Auke also touches on innovations like transformer-based transform coding and shares insights from recent ICLR papers, showcasing the future of efficient data compression.

Apr 25, 2022 • 46min
Mixture-of-Experts and Trends in Large-Scale Language Modeling with Irwan Bello - #569
Irwan Bello, a research scientist formerly with Google Brain and now part of a stealth AI startup, dives into the world of sparse expert models. He discusses his recent work on designing effective architectures that improve performance while managing computational costs. The conversation uncovers how the mixture-of-experts technique can extend beyond NLP to various tasks, including vision. Bello also shares insights on enhancing alignment in language models through instruction tuning and the challenges of optimizing these large-scale systems.

Apr 18, 2022 • 52min
Daring to DAIR: Distributed AI Research with Timnit Gebru - #568
Timnit Gebru, founder of the Distributed AI Research Institute, joins the conversation to share her journey after her controversial departure from Google. She discusses the challenges of establishing independent research structures and the need for ethical AI practices. The importance of fairness beyond technical terms is highlighted, along with tackling systemic issues. Timnit also explores innovative projects, like examining spatial apartheid using AI. Throughout, she emphasizes the value of diverse voices and community engagement in reshaping AI research.

Apr 11, 2022 • 50min
Hierarchical and Continual RL with Doina Precup - #567
In this engaging conversation, Doina Precup, a research team lead at DeepMind Montreal and a professor at McGill University, dives into her research on hierarchical and continual reinforcement learning. She discusses how agents can learn abstract representations and the critical role of reward specifications in shaping intelligent behaviors. Doina draws intriguing parallels between hierarchical RL and CNNs while exploring the challenges and future of reinforcement learning in dynamic environments, all while emphasizing the importance of adaptability and multi-level reasoning.

Apr 4, 2022 • 30min
Open-Source Drug Discovery with DeepChem with Bharath Ramsundar - #566
Bharath Ramsundar, founder and CEO of Deep Forest Sciences, shares his expertise in AI-driven drug discovery and molecular design. He delves into the challenges biotech firms face in integrating AI, highlighting the need for collaboration and a solid infrastructure. The discussion includes the innovative DeepChem library and its datasets like MoleculeNet, which aim to enhance drug development processes. Bharath also emphasizes the importance of chemistry-aware validation methods for better model generalization and the evolving partnership between AI and traditional sciences.

Mar 28, 2022 • 41min
Advancing Hands-On Machine Learning Education with Sebastian Raschka - #565
Sebastian Raschka, an assistant professor at the University of Wisconsin-Madison and lead AI educator at Grid.ai, discusses his hands-on approach to AI education. He shares insights from his book, stressing the importance of practical applications for beginners. The conversation also covers PyTorch Lightning's role in streamlining deep learning and explores ordinal regression's significance in real-world scenarios. Raschka emphasizes creating accessible resources and the innovative course design that enhances learning experiences in machine learning.

Mar 21, 2022 • 47min
Big Science and Embodied Learning at Hugging Face 🤗 with Thomas Wolf - #564
Thomas Wolf, co-founder and chief science officer at Hugging Face, shares his fascinating journey from quantum physics and patent law to machine learning. He discusses the BigScience project, which unites over 1,000 researchers to create a vast multilingual dataset, emphasizing the importance of diverse data and ethical AI. The conversation dives into the innovations of transformers in NLP, multimodality, and the implications for the metaverse. Thomas also touches on his new book and the evolving landscape of AI, advocating for collaborative and responsible advancements.

Mar 14, 2022 • 44min
Full-Stack AI Systems Development with Murali Akula - #563
Murali Akula, Sr. Director of Software Engineering at Qualcomm, leads innovations in AI for Snapdragon chips. He discusses the full-stack approach to AI development, emphasizing collaboration between research and deployment teams. The conversation uncovers challenges of deploying machine learning on mobile devices, including optimizing for power and memory constraints. Murali also highlights advancements like the X-Distill algorithm for depth estimation and the shift to localized AI training, showcasing how these breakthroughs are revolutionizing AI applications.