The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Aug 16, 2021 • 42min

Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández Lobato - #510

José Miguel Hernández Lobato, a machine learning lecturer at the University of Cambridge, shares insights on the fusion of Bayesian learning and deep learning in molecular design. He discusses innovative methods for predicting chemical reactions and explores the challenges of sample efficiency in reinforcement learning. José elaborates on deep generative models, their role in molecular property prediction, and strategies for enhancing the robustness of machine learning through invariant risk minimization. His research reveals exciting pathways in optimizing molecule discovery.
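To give a concrete feel for the kind of latent-space optimization discussed in the episode, here is a minimal sketch of Bayesian optimization over the latent space of a generative model. The `score_property` function is a hypothetical stand-in for a real decoder plus property oracle; this is an illustration of the general loop, not José's actual pipeline.

```python
# Sketch: Bayesian optimization over the latent space of a generative model,
# the general pattern behind latent-space molecule optimization.
# `score_property` is a hypothetical placeholder for decoding a latent point
# into a molecule and scoring a property of interest.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
LATENT_DIM = 8

def score_property(z: np.ndarray) -> float:
    # Placeholder objective; in practice: decode z, then score the molecule.
    return -float(np.sum((z - 0.5) ** 2))

# Initial random design in latent space
Z = rng.uniform(-1, 1, size=(10, LATENT_DIM))
y = np.array([score_property(z) for z in Z])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(Z, y)
    # Expected improvement over a batch of random candidate latents
    cand = rng.uniform(-1, 1, size=(512, LATENT_DIM))
    mu, sigma = gp.predict(cand, return_std=True)
    imp = mu - y.max()
    ei = imp * norm.cdf(imp / (sigma + 1e-9)) + sigma * norm.pdf(imp / (sigma + 1e-9))
    z_next = cand[np.argmax(ei)]
    Z = np.vstack([Z, z_next])
    y = np.append(y, score_property(z_next))

print("best latent point:", Z[np.argmax(y)], "score:", y.max())
```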
Aug 12, 2021 • 47min

Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509

Greg Brockman, co-founder and CTO of OpenAI, dives into the innovative Codex API, which extends the capabilities of GPT-3 for coding tasks. He discusses the key differences in performance between Codex and GPT-3, emphasizing Codex's reliability with programming instructions. The potential of Codex as an educational tool is highlighted, alongside its implications for job automation and fairness in AI. Brockman also details the Copilot collaboration with GitHub and the exciting rollout strategies for engaging users with this groundbreaking technology.
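As a rough illustration of how an application might call a Codex-style completion endpoint, here is a sketch using the pre-1.0 `openai` Python client. The engine name and parameter choices are assumptions based on the 2021 beta and may differ for your account; treat this as illustrative rather than official usage.

```python
# Sketch: requesting code completion from a Codex-style engine with the
# pre-1.0 openai Python client. Engine name is an assumption from the 2021 beta.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = '"""\nWrite a Python function that returns the n-th Fibonacci number.\n"""\n'

response = openai.Completion.create(
    engine="davinci-codex",   # assumed Codex engine name; may differ per account
    prompt=prompt,
    max_tokens=128,
    temperature=0,            # low temperature for more deterministic code
    stop=['"""'],
)
print(response["choices"][0]["text"])
```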
Aug 9, 2021 • 32min

Spatiotemporal Data Analysis with Rose Yu - #508

In this engaging discussion, Rose Yu, an assistant professor at UC San Diego, delves into her groundbreaking work on machine learning for spatiotemporal data. She explains how integrating physical principles and symmetry enhances neural network architectures. The conversation covers innovative approaches in climate modeling, including turbulence prediction and the application of Physics Guided AI. Rose also addresses uncertainty quantification in models, crucial for applications like COVID-19 forecasting, showcasing the importance of confidence in predictions.
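A toy sketch of the symmetry idea: any model can be made equivariant to 90° rotations by averaging over the rotation group. This is the simplest version of the structural constraints discussed in the episode; the actual equivariant architectures are considerably more refined.

```python
# Sketch: enforcing rotation symmetry by averaging over the C4 group.
# `base_model` is an arbitrary (non-equivariant) stand-in for a learned map.
import numpy as np

def base_model(x: np.ndarray) -> np.ndarray:
    return x ** 2 - 0.1 * np.roll(x, 1, axis=0)

def equivariant_model(x: np.ndarray) -> np.ndarray:
    # Rotate input, apply the model, rotate back, then average over rotations.
    outs = [np.rot90(base_model(np.rot90(x, k)), -k) for k in range(4)]
    return np.mean(outs, axis=0)

x = np.random.default_rng(0).normal(size=(16, 16))
lhs = equivariant_model(np.rot90(x))   # rotate, then apply model
rhs = np.rot90(equivariant_model(x))   # apply model, then rotate
print(np.allclose(lhs, rhs))           # True: the model commutes with rotation
```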
Aug 5, 2021 • 51min

Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507

In this engaging discussion, Bryan Catanzaro, VP of Applied Deep Learning Research at NVIDIA, delves into high-performance computing's intersection with AI. He reveals insights about the Megatron framework for training large language models and the three parallelism types that enhance model efficiency. Bryan also highlights the challenges in supercomputing, the pioneering Deep Learning Super Sampling technology for gaming graphics, and innovative methods for generating high-resolution synthetic data to improve image quality in AI applications.
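One of the parallelism types discussed, tensor (model) parallelism, amounts to splitting a layer's weight matrix across devices so each computes only a slice of the output. A numpy sketch of the column-parallel case, simulated on a single machine purely for illustration:

```python
# Sketch: Megatron-style column-parallel linear layer, simulated with numpy.
# Each "device" holds a column shard of W, computes its slice independently,
# and the slices are concatenated (the all-gather step in a real setup).
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out, n_devices = 4, 8, 12, 2

x = rng.normal(size=(batch, d_in))
W = rng.normal(size=(d_in, d_out))

full = x @ W                                    # reference: single-device matmul

shards = np.split(W, n_devices, axis=1)         # column shards of W
partials = [x @ w_shard for w_shard in shards]  # computed independently per device
combined = np.concatenate(partials, axis=1)     # gather activations

print(np.allclose(full, combined))  # True: same result, work split across devices
```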
Aug 2, 2021 • 54min

Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506

Join Lina Montoya, a postdoctoral researcher at UNC Chapel Hill focused on causal inference in precision medicine. She dives into her innovative work on optimal dynamic treatment rules, particularly in the U.S. criminal justice system. Lina discusses the critical role of neglected assumptions in causal inference, the super learner algorithm's impact on predicting treatment effectiveness, and future research directions aimed at optimizing therapy delivery in resource-constrained settings like rural Kenya. This engaging discussion highlights the intersection of AI, healthcare, and justice.
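The super learner mentioned above is, at heart, a cross-validated stacked ensemble. Here is a rough analogue using scikit-learn's StackingRegressor on synthetic data; it is not the causal-inference workflow from the episode, just the ensembling idea.

```python
# Sketch: a super learner is essentially a cross-validated stacked ensemble;
# scikit-learn's StackingRegressor is a close (though not identical) analogue.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

super_learner = StackingRegressor(
    estimators=[
        ("ols", LinearRegression()),
        ("lasso", LassoCV()),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ],
    final_estimator=LinearRegression(),  # meta-learner combines CV predictions
    cv=5,
)
print(cross_val_score(super_learner, X, y, cv=3).mean())
```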
Jul 29, 2021 • 51min

Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505

Gustavo Malkomes, a research engineer at Intel with expertise in active learning and multi-objective optimization, dives into an innovative algorithm for multi-objective experimental design. He discusses how his work allows teams to explore multiple metrics simultaneously and efficiently, enhancing human-in-the-loop optimization. The conversation covers the balance between competing goals, the significance of stable solutions, and the fascinating applications of his research in real-world scenarios, such as optimization and drug discovery.
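To give a flavor of constraint active search as opposed to single-objective optimization: the goal is a diverse set of configurations whose metrics all clear user-set thresholds. The toy loop below is illustrative only and is not Gustavo's algorithm; the `evaluate` function is a made-up stand-in for two competing experiment metrics.

```python
# Sketch: collect many diverse configurations satisfying metric thresholds,
# rather than chasing a single optimum. Toy illustration only.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(x):
    # Hypothetical stand-in for two competing experiment metrics.
    return np.array([np.sin(3 * x[0]) + x[1], 1.0 - (x[0] - 0.5) ** 2 - 0.2 * x[1]])

thresholds = np.array([0.8, 0.6])   # both metrics must exceed these
found = []

for _ in range(200):
    cand = rng.uniform(0, 1, size=(64, 2))
    if found:
        # Prefer candidates far from already-found satisfying points (diversity).
        d = np.min(np.linalg.norm(cand[:, None] - np.array(found)[None], axis=-1), axis=1)
        x = cand[np.argmax(d)]
    else:
        x = cand[0]
    if np.all(evaluate(x) >= thresholds):
        found.append(x)

print(f"{len(found)} diverse satisfying configurations found")
```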
Jul 26, 2021 • 37min

Fairness and Robustness in Federated Learning with Virginia Smith - #504

Virginia Smith, an assistant professor at Carnegie Mellon University, delves into her innovative work on federated learning. She discusses her research on fairness and robustness, highlighting the challenges of maintaining model performance across diverse data inputs. The conversation touches on her findings from the paper 'Ditto', exploring the trade-offs between fairness and robustness in personalized models. Additionally, she shares insights on leveraging data heterogeneity in federated clustering to enhance model effectiveness and the balance between privacy and robust learning.
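The Ditto idea can be summarized by its personalization objective: each client fits a local model regularized toward the shared global model, min_v f_k(v) + (λ/2)·||v − w*||². A toy numpy sketch, with a hypothetical global model `w_star` and synthetic client data:

```python
# Sketch: Ditto-style personalization. Each client solves a local problem
# regularized toward the global model w_star. Toy linear-regression version.
import numpy as np

rng = np.random.default_rng(0)
d, lam, lr = 5, 0.5, 0.05

# Hypothetical global model, e.g. obtained from standard federated averaging.
w_star = rng.normal(size=d)

def personalize(X, y, w_star, steps=200):
    v = w_star.copy()
    for _ in range(steps):
        grad = X.T @ (X @ v - y) / len(y) + lam * (v - w_star)  # local loss + proximal term
        v -= lr * grad
    return v

# One client's heterogeneous local data.
X_k = rng.normal(size=(50, d))
y_k = X_k @ (w_star + rng.normal(scale=0.5, size=d)) + rng.normal(scale=0.1, size=50)

v_k = personalize(X_k, y_k, w_star)
print("distance from global model:", np.linalg.norm(v_k - w_star))
```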
Jul 22, 2021 • 41min

Scaling AI at H&M Group with Errol Koolmeister - #503

Errol Koolmeister, head of AI Foundation at H&M Group, shares insights on the fashion retail giant's transformative AI journey. He discusses implementing AI for fashion forecasting and pricing, emphasizing the significance of data accessibility and stakeholder engagement. Highlighting the challenges of scaling AI, Errol explains the importance of balancing simplicity with complexity in modeling. He also addresses managing AI initiatives within a large organization, focusing on building a robust infrastructure and fostering an 'AI-first' culture.
Jul 19, 2021 • 49min

Evolving AI Systems Gracefully with Stefano Soatto - #502

Stefano Soatto, VP of AI Application Science at AWS and a professor at UCLA, dives into the fascinating world of Graceful AI. He discusses the challenges of evolving AI in real-world applications while avoiding the pitfalls of constant retraining. Topics include the critical timing of regularization in deep learning, the parallels between model compression and materials science, and the intricacies of model reliability. Stefano also unpacks innovations like focal distillation and their potential to enhance lifelong learning in AI systems.
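Focal distillation, mentioned above, broadly means up-weighting the distillation term on samples the old model handled correctly, so a new model avoids regressing on cases users already rely on. A pure-numpy sketch of a loss with that shape, using made-up logits and not the exact formulation from the episode:

```python
# Sketch: task loss plus a distillation term that is up-weighted on samples
# the old model classified correctly (the "focus set"). Illustrative only.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def focal_distillation_loss(new_logits, old_logits, labels, alpha=1.0, beta=5.0):
    p_new, p_old = softmax(new_logits), softmax(old_logits)
    n = len(labels)
    ce = -np.log(p_new[np.arange(n), labels] + 1e-12)              # task loss
    kl = np.sum(p_old * (np.log(p_old + 1e-12) - np.log(p_new + 1e-12)), axis=1)
    old_correct = old_logits.argmax(axis=1) == labels               # focus set
    weight = alpha + beta * old_correct                             # up-weight kept wins
    return np.mean(ce + weight * kl)

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=8)
print(focal_distillation_loss(rng.normal(size=(8, 3)), rng.normal(size=(8, 3)), labels))
```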
Jul 15, 2021 • 45min

ML Innovation in Healthcare with Suchi Saria - #501

In this engaging discussion, Suchi Saria, Founder and CEO of Bayesian Health and an esteemed professor at Johns Hopkins University, shares her journey at the intersection of machine learning and healthcare. She highlights the slow acceptance of AI in medical practice and discusses pockets of success in the field. Saria elaborates on groundbreaking advancements in sepsis detection and the challenges of integrating ML tools into clinical workflows. Finally, she envisions a future where improved data accessibility drives better patient outcomes in healthcare.
