The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Aug 26, 2021 • 36min

Using Brain Imaging to Improve Neural Networks with Alona Fyshe - #513

Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta. We caught up with Alona on the heels of an interesting panel discussion that she participated in, centered around improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images that are used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands the language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation. The complete show notes for this episode can be found at twimlai.com/go/513.
Aug 23, 2021 • 50min

Adaptivity in Machine Learning with Samory Kpotufe - #512

In this engaging conversation, Samory Kpotufe, an associate professor at Columbia University, delves into the complexities of adaptive algorithms in machine learning. He highlights the importance of self-tuning algorithms that can adjust to varying data. The discussion covers transfer learning, emphasizing practical applications and challenges. Samory also touches on innovative methods in unsupervised learning and anomaly detection, especially within resource-constrained devices. His insights into the intersection of fractals and high-dimensional data add a fascinating layer to the conversation.
Aug 19, 2021 • 44min

A Social Scientist’s Perspective on AI with Eric Rice - #511

Eric Rice, an associate professor at USC and co-director of the USC Center for Artificial Intelligence in Society, sheds light on the intersection of AI and social science. He shares his experiences working on projects like HIV prevention for homeless youth and using machine learning to aid in housing resource allocation. Eric emphasizes the need for interdisciplinary collaboration and discusses how social scientists approach assessment differently than computer scientists, focusing on real-world impacts of AI solutions.
Aug 16, 2021 • 42min

Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández Lobato - #510

José Miguel Hernández Lobato, a machine learning lecturer at the University of Cambridge, shares insights on the fusion of Bayesian learning and deep learning in molecular design. He discusses innovative methods for predicting chemical reactions and explores the challenges of sample efficiency in reinforcement learning. José elaborates on deep generative models, their role in molecular property prediction, and strategies for enhancing the robustness of machine learning through invariant risk minimization. His research reveals exciting pathways in optimizing molecule discovery.
Aug 12, 2021 • 47min

Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509

Greg Brockman, co-founder and CTO of OpenAI, dives into the innovative Codex API, which extends the capabilities of GPT-3 for coding tasks. He discusses the key differences in performance between Codex and GPT-3, emphasizing Codex's reliability with programming instructions. The potential of Codex as an educational tool is highlighted, alongside its implications for job automation and fairness in AI. Brockman also details the Copilot collaboration with GitHub and the exciting rollout strategies for engaging users with this groundbreaking technology.
Aug 9, 2021 • 32min

Spatiotemporal Data Analysis with Rose Yu - #508

In this engaging discussion, Rose Yu, an assistant professor at UC San Diego, delves into her groundbreaking work on machine learning for spatiotemporal data. She explains how integrating physical principles and symmetry enhances neural network architectures. The conversation covers innovative approaches in climate modeling, including turbulent prediction and the application of Physics Guided AI. Rose also addresses uncertainty quantification in models, crucial for applications like COVID-19 forecasting, showcasing the importance of confidence in predictions.
Aug 5, 2021 • 51min

Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507

In this engaging discussion, Bryan Catanzaro, VP of Applied Deep Learning Research at NVIDIA, delves into high-performance computing's intersection with AI. He reveals insights about the Megatron framework for training large language models and the three parallelism types that enhance model efficiency. Bryan also highlights the challenges in supercomputing, the pioneering Deep Learning Super Sampling technology for gaming graphics, and innovative methods for generating high-resolution synthetic data to improve image quality in AI applications.
Aug 2, 2021 • 54min

Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506

Join Lina Montoya, a postdoctoral researcher at UNC Chapel Hill focused on causal inference in precision medicine. She dives into her innovative work on Optimal Dynamic Treatment rules, particularly in the U.S. criminal justice system. Lina discusses the critical role of neglected assumptions in causal inference, the super learner algorithm's impact on predicting treatment effectiveness, and future research directions aimed at optimizing therapy delivery in resource-constrained settings like rural Kenya. This engaging discussion highlights the intersection of AI, healthcare, and justice.
Jul 29, 2021 • 51min

Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505

Gustavo Malkomes, a research engineer at Intel with expertise in active learning and multi-objective optimization, dives into an innovative algorithm for multiobjective experimental design. He discusses how his work allows teams to explore multiple metrics simultaneously and efficiently, enhancing human-in-the-loop optimization. The conversation covers the balance between competing goals, the significance of stable solutions, and the fascinating applications of his research in real-world scenarios, such as optimization and drug discovery.
Jul 26, 2021 • 37min

Fairness and Robustness in Federated Learning with Virginia Smith - #504

Virginia Smith, an assistant professor at Carnegie Mellon University, delves into her innovative work on federated learning. She discusses her research on fairness and robustness, highlighting the challenges of maintaining model performance across diverse data inputs. The conversation touches on her findings from the paper 'Ditto', exploring the trade-offs in AI ethics. Additionally, she shares insights on leveraging data heterogeneity in federated clustering to enhance model effectiveness and the balance between privacy and robust learning.