
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Latest episodes

Aug 19, 2021 • 44min

A Social Scientist’s Perspective on AI with Eric Rice - #511

Today we’re joined by Eric Rice, associate professor at USC and co-director of the USC Center for Artificial Intelligence in Society. Eric is a sociologist by trade, and in our conversation, we explore how he has made extensive inroads within the machine learning community through collaborations with ML academics and researchers. We discuss some of the most important lessons Eric has learned while doing interdisciplinary projects, and how the social scientist’s approach to assessment and measurement differs from a computer scientist’s approach to assessing the algorithmic performance of a model. We specifically explore a few projects he’s worked on, including HIV prevention amongst the homeless youth population in LA, a project he spearheaded with former guest Milind Tambe, as well as a project focused on using ML techniques to assist in the identification of people in need of housing resources, and ensuring that they get the best interventions possible. If you enjoyed this conversation, I encourage you to check out our conversation with Milind Tambe from last year’s TWIMLfest on Why AI Innovation and Social Impact Go Hand in Hand. The complete show notes for this episode can be found at https://twimlai.com/go/511.
Aug 16, 2021 • 42min

Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández Lobato - #510

Today we’re joined by José Miguel Hernández-Lobato, a university lecturer in machine learning at the University of Cambridge. In our conversation with Miguel, we explore his work at the intersection of Bayesian learning and deep learning. We discuss how he’s been applying this to the field of molecular design and discovery via two different methods, with one paper searching for possible chemical reactions, and the other doing the same, but in 3D space. We also discuss the challenges of sample efficiency, creating objective functions, and how those manifest themselves in these experiments, and how he has integrated the Bayesian approach into RL problems. We also talk through a handful of other papers that Miguel has presented at recent conferences, which are all linked at twimlai.com/go/510.
Aug 12, 2021 • 47min

Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509

Today we’re joined by return guest Greg Brockman, co-founder and CTO of OpenAI. We had the pleasure of reconnecting with Greg on the heels of the announcement of Codex, OpenAI’s most recent release. Codex is a direct descendant of GPT-3 that allows users to do autocomplete tasks based on all of the publicly available text and code on the internet. In our conversation with Greg, we explore the distinct results Codex sees in comparison to GPT-3, relative to the prompts it’s being given, how it could evolve given different types of training data, and how users and practitioners should think about interacting with the API to get the most out of it. We also discuss Copilot, their recent collaboration with GitHub that is built on Codex, as well as the implications of Codex on coding education, explainability, and broader societal issues like fairness and bias, copyright, and jobs. The complete show notes for this episode can be found at twimlai.com/go/509.
Aug 9, 2021 • 32min

Spatiotemporal Data Analysis with Rose Yu - #508

Today we’re joined by Rose Yu, an assistant professor at the Jacobs School of Engineering at UC San Diego. Rose’s research focuses on advancing machine learning algorithms and methods for analyzing large-scale time-series and spatiotemporal data, then applying those developments to climate, transportation, and other physical sciences. We discuss how Rose incorporates physical knowledge and partial differential equations in these use cases and how symmetries are being exploited. We also explore her novel neural network design that is focused on non-traditional convolution operators and allows for general symmetry, how we get from these representations to the network architectures that she has developed, and another recent paper on deep spatiotemporal models. The complete show notes for this episode can be found at twimlai.com/go/508.
Aug 5, 2021 • 51min

Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507

Today we’re joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure. We also discuss the three kinds of parallelism that Megatron provides when training models: tensor parallelism, pipeline parallelism, and data parallelism. Finally, Bryan shares his work on the Deep Learning Super Sampling project and the role it’s playing in the present and future of game development via ray tracing. The complete show notes for this episode can be found at twimlai.com/go/507.
Aug 2, 2021 • 54min

Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506

Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill. In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explore her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of neglected assumptions and how it connects to ODT rule estimation, as well as a breakdown of the causal roadmap, coined by researchers at UC Berkeley. Finally, Lina talks us through applying the roadmap to the ODT rule problem, how she’s applied a “superlearner” algorithm to this problem, how it was trained, and what the future of this research looks like. The complete show notes for this episode can be found at twimlai.com/go/506.
Jul 29, 2021 • 51min

Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505

Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt. In our conversation with Gustavo, we explore his paper Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios. The complete show notes for this episode can be found at twimlai.com/go/505.
Jul 26, 2021 • 37min

Fairness and Robustness in Federated Learning with Virginia Smith - #504

Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. In our conversation with Virginia, we explore her work on cross-device federated learning applications, including where the distributed learning aspects of FL are relative to the privacy techniques. We dig into her paper from ICML, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means in contrast to AI ethics, the particulars of the failure modes, the relationship between models and the things being optimized across devices, and the tradeoffs between fairness and robustness. We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, how the proposed method makes heterogeneity beneficial in data, how the heterogeneity of data is classified, and some applications of FL in an unsupervised setting. The complete show notes for this episode can be found at twimlai.com/go/504.
Jul 22, 2021 • 41min

Scaling AI at H&M Group with Errol Koolmeister - #503

Today we’re joined by Errol Koolmeister, the head of AI foundation at H&M Group. In our conversation with Errol, we explore H&M’s AI journey, including its wide adoption across the company in 2016, and the various use cases in which it’s deployed, like fashion forecasting and pricing algorithms. We discuss Errol’s first steps in taking on the challenge of scaling AI broadly at the company, the value-added learning from proof of concepts, and how to align in a sustainable, long-term way. Of course, we dig into the infrastructure and models being used, the biggest challenges faced, and the importance of managing the project portfolio, while Errol shares their approach to building infra for a specific product with many products in mind.
Jul 19, 2021 • 49min

Evolving AI Systems Gracefully with Stefano Soatto - #502

Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA. Our conversation with Stefano centers on recent research of his called Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how they’ve solved problems of regression and reprocessing by utilizing existing models, and much more. The complete show notes for this episode can be found at twimlai.com/go/502.
