
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Latest episodes

Feb 1, 2021 • 39min

Expressive Deep Learning with Magenta DDSP w/ Jesse Engel - #452

Today we’re joined by Jesse Engel, Staff Research Scientist at Google, working on the Magenta Project. In our conversation with Jesse, we explore the current landscape of creative AI, and the role Magenta plays in helping express creativity through ML and deep learning. We dig deep into their Differentiable Digital Signal Processing (DDSP) library, which “lets you combine the interpretable structure of classical DSP elements (such as filters, oscillators, reverberation, etc.) with the expressivity of deep learning.” Finally, Jesse walks us through some of the other projects that the Magenta team undertakes, including NLP and language modeling, and what he wants to see come out of the work that he and others are doing in creative AI research. The complete show notes for this episode can be found at twimlai.com/go/452.
Jan 29, 2021 • 55min

Semantic Folding for Natural Language Understanding with Francisco Webber - #451

Today we’re joined by return guest Francisco Webber, CEO & Co-founder of Cortical.io. Francisco was originally a guest over four years and 400 episodes ago, when we discussed his company Cortical.io and its unique approach to natural language processing. In this conversation, Francisco gives us an update on Cortical.io, covering their applications and toolkit, including semantic extraction, classification, and search use cases. We also discuss GPT-3 and how it compares to semantic folding, the unreasonable amount of data needed to train these models, and the difference between the GPT approach and semantic modeling for language understanding. The complete show notes for this episode can be found at twimlai.com/go/451.
Jan 25, 2021 • 53min

The Future of Autonomous Systems with Gurdeep Pall - #450

Today we’re joined by Gurdeep Pall, Corporate Vice President at Microsoft. Gurdeep, who we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in quite a few influential projects, including Skype for Business (and Teams), and was part of the first team to ship Wi-Fi as a part of a general-purpose operating system. In our conversation with Gurdeep, we discuss Microsoft’s acquisition of Bonsai, how it fits into the toolchain for creating brains for autonomous systems with “machine teaching,” and other practical applications of machine teaching in autonomous systems. We also explore the challenges of simulation, and how simulators have evolved to make the problems the physical world brings more tractable. Finally, Gurdeep shares concrete use cases for autonomous systems, how to get the best ROI on those investments, and, of course, what’s next in the very broad space of autonomous systems. The complete show notes for this episode can be found at twimlai.com/go/450.
Jan 21, 2021 • 36min

AI for Ecology and Ecosystem Preservation with Bryan Carstens - #449

Today we’re joined by Bryan Carstens, a professor in the Department of Evolution, Ecology, and Organismal Biology & Head of the Tetrapod Division in the Museum of Biological Diversity at The Ohio State University. In our conversation with Bryan, who comes from a traditional biology background, we cover a ton of ground, including a foundational layer of understanding for the vast known unknowns in species and biodiversity, and how he came to apply machine learning to his lab’s research. We explore a few of his lab’s projects, including applying ML to genetic data to understand the geographic and environmental structure of DNA, what factors keep machine learning from being used more frequently in biology, and what’s next for his group. The complete show notes for this episode can be found at twimlai.com/go/449.
Jan 18, 2021 • 1h 2min

Off-Line, Off-Policy RL for Real-World Decision Making at Facebook - #448

Today we’re joined by Jason Gauci, a Software Engineering Manager at Facebook AI. In our conversation with Jason, we explore their Reinforcement Learning platform, ReAgent (formerly Horizon). We discuss the role of decision making and game theory in the platform and the types of decisions they’re using ReAgent to make, from ranking and recommendations to their eCommerce marketplace. Jason also walks us through the differences between online/offline and on/off-policy model training, and where ReAgent sits in this spectrum. Finally, we discuss the concept of counterfactual causality, and how they ensure safety in the results of their models. The complete show notes for this episode can be found at twimlai.com/go/448.
Jan 14, 2021 • 38min

A Future of Work for the Invisible Workers in A.I. with Saiph Savage - #447

Today we’re joined by Saiph Savage, a Visiting Professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM. We caught up with Saiph during NeurIPS, where she delivered an insightful invited talk, “A Future of Work for the Invisible Workers in A.I.” In our conversation with Saiph, we gain a better understanding of the “invisible workers,” the people doing the labeling work behind machine learning and AI systems, and some of the issues that arise with these jobs, including lack of economic empowerment and emotional trauma. We discuss ways that we can empower these workers, and push the companies that employ them to do the same. Finally, we discuss Saiph’s participatory design work with rural workers in the global south. The complete show notes for this episode can be found at twimlai.com/go/447.
Jan 11, 2021 • 1h 14min

Trends in Graph Machine Learning with Michael Bronstein - #446

Today we’re back with the final episode of AI Rewind, joined by Michael Bronstein, a professor at Imperial College London and the Head of Graph Machine Learning at Twitter. In our conversation with Michael, we touch on his thoughts about the year in Machine Learning overall, including GPT-3 and Implicit Neural Representations, but spend a major chunk of time on the sub-field of Graph Machine Learning. We talk through the application of Graph ML across domains like physics and bioinformatics, and the tools to look out for. Finally, we discuss what Michael thinks is in store for 2021, including Graph ML applied to molecule discovery and non-human communication translation.
Jan 7, 2021 • 1h 22min

Trends in Natural Language Processing with Sameer Singh - #445

Today we continue the 2020 AI Rewind series, joined by friend of the show Sameer Singh, an Assistant Professor in the Department of Computer Science at UC Irvine. We last spoke with Sameer at our Natural Language Processing office hours back at TWIMLfest, and he was the perfect person to help us break down 2020 in NLP. Sameer tackles the review in four main categories: Massive Language Modeling, Fundamental Problems with Language Models, Practical Vulnerabilities with Language Models, and Evaluation. We also explore the impact of GPT-3 and Transformer models, the intersection of vision and language models, the injection of causal thinking and modeling into language models, and much more. The complete show notes for this episode can be found at twimlai.com/go/445.
Jan 4, 2021 • 1h 9min

Trends in Computer Vision with Pavan Turaga - #444

AI Rewind continues today as we’re joined by Pavan Turaga, Associate Professor in both the Department of Arts, Media, and Engineering and the Department of Electrical Engineering, and the Interim Director of the School of Arts, Media, and Engineering at Arizona State University. Pavan, who joined us back in June to talk through his CVPR ‘20 work, Invariance, Geometry and Deep Neural Networks, is back to walk us through the trends he saw in Computer Vision over the last year. We explore the revival of physics-based thinking about scenes, differentiable rendering, the best papers, and where the field is going in the near future. We want to hear from you! Send your thoughts on the year that was 2020 below in the comments, or via Twitter at @samcharrington or @twimlai. The complete show notes for this episode can be found at twimlai.com/go/444.
Dec 30, 2020 • 1h 27min

Trends in Reinforcement Learning with Pablo Samuel Castro - #443

Guest Pablo Samuel Castro, Staff Research Software Developer at Google Brain, discusses the latest advancements in Reinforcement Learning, including metrics and representations, understanding and evaluating deep RL, and RL in the real world. They also explore the differences between deep RL and other types of deep networks, the role of transitions and contrastive loss, the value of small and mid-scale environments, the challenges of running benchmarks, analogies between RL and language models, and exciting tools and environments in RL.
