
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Latest episodes

Dec 2, 2021 • 46min

Multi-modal Deep Learning for Complex Document Understanding with Doug Burdick - #541

Today we’re joined by Doug Burdick, a principal research staff member at IBM Research. In a recent interview, Doug’s colleague Yunyao Li joined us to talk through some of the broader enterprise NLP problems she’s working on. One of those problems is making documents machine consumable, especially the traditionally archival file type, the PDF. That’s where Doug and his team come in. In our conversation, we discuss the multimodal approach they’ve taken to identify, interpret, contextualize, and extract things like tables from a document, the challenges they’ve faced when dealing with tables, and how they evaluate the performance of models on them. We also explore how he’s handled generalizing across different formats, how much fine-tuning is needed in order to be effective, the problems that appear on the NLP side of things, and how deep learning models are being leveraged within the group. The complete show notes for this episode can be found at twimlai.com/go/541.
Nov 29, 2021 • 49min

Predictive Maintenance Using Deep Learning and Reliability Engineering with Shayan Mortazavi - #540

Today we’re joined by Shayan Mortazavi, a data science manager at Accenture. In our conversation with Shayan, we discuss his talk from the recent SigOpt HPC & AI Summit, titled A Novel Framework for Predictive Maintenance Using Deep Learning and Reliability Engineering. In the talk, Shayan proposes a novel deep learning-based approach for prognosis prediction of oil and gas plant equipment in an effort to prevent critical damage or failure. We explore the evolution of reliability engineering, the decision to use a residual-based approach rather than traditional anomaly detection to determine when an anomaly was happening, the challenges of using LSTMs when building these models, the amount of human labeling required to build the models, and much more! The complete show notes for this episode can be found at twimlai.com/go/540.
Nov 24, 2021 • 51min

Building a Deep Tech Startup in NLP with Nasrin Mostafazadeh - #539

Today we’re joined by friend-of-the-show Nasrin Mostafazadeh, co-founder of Verneek. Though Verneek is still in stealth, Nasrin was gracious enough to share a bit about the company, including their goal of enabling anyone to make data-informed decisions without the need for a technical background, through the use of innovative human-machine interfaces. In our conversation, we explore the state of AI research in the domains relevant to the problem they’re trying to solve and how they use those insights to inform and prioritize their research agenda. We also discuss what advice Nasrin would give to someone thinking about starting a deep tech startup or going from research to product development. The complete show notes for today’s show can be found at twimlai.com/go/539.
Nov 22, 2021 • 42min

Models for Human-Robot Collaboration with Julie Shah - #538

Today we’re joined by Julie Shah, a professor at the Massachusetts Institute of Technology (MIT). Julie’s work lies at the intersection of aeronautics, astronautics, and robotics, with a specific focus on collaborative and interactive robotics. In our conversation, we explore how robots can achieve the ability to predict what their human collaborators are thinking, what the process of building knowledge into these systems looks like, and her big-picture idea of developing a field robot that doesn’t “require a human to be a robot” to work with it. We also discuss work Julie has done on cross-training between humans and robots, with a focus on getting them to co-learn how to work together, as well as future projects that she’s excited about. The complete show notes for this episode can be found at twimlai.com/go/538.
Nov 18, 2021 • 58min

Four Key Tools for Robust Enterprise NLP with Yunyao Li - #537

Today we’re joined by Yunyao Li, a senior research manager at IBM Research. Yunyao is in a somewhat unique position at IBM, addressing the challenges of enterprise NLP in a traditional research environment while also having customer engagement responsibilities. In our conversation with Yunyao, we explore the challenges associated with productizing NLP in the enterprise, and whether she focuses on solving these problems independently of one another or through a more unified approach. We then ground the conversation with real-world examples of these enterprise challenges, including enabling document discovery at scale using combinations of techniques like deep neural networks and supervised and/or unsupervised learning, and using entity extraction and semantic parsing to identify text. Finally, we talk through data augmentation in the context of NLP, and how they enable humans-in-the-loop to generate high-quality data. The complete show notes for this episode can be found at twimlai.com/go/537.
Nov 15, 2021 • 1h 1min

Machine Learning at GSK with Kim Branson - #536

Today we’re joined by Kim Branson, the SVP and global head of artificial intelligence and machine learning at GSK. We cover a lot of ground in our conversation, starting with a breakdown of GSK’s core pharmaceutical business and how ML/AI fits into that equation, as well as the use cases that emerge from using genetics data as a data source, including sequential learning for drug discovery. We also explore the 500-billion-node knowledge graph Kim’s team built to mine scientific literature, and their “AI Hub,” the ML/AI infrastructure team that handles all tooling and engineering problems within their organization. Finally, we explore their recent cancer research collaboration with King’s College, which is tasked with understanding the individualized needs of high- and low-risk cancer patients using ML/AI amongst other technologies. The complete show notes for this episode can be found at twimlai.com/go/536.
Nov 11, 2021 • 59min

The Benefit of Bottlenecks in Evolving Artificial Intelligence with David Ha - #535

Today we’re joined by David Ha, a research scientist at Google. In nature, there are many examples of “bottlenecks,” or constraints, that have shaped our development as a species. Building upon this idea, David posits that these same evolutionary bottlenecks could work when training neural network models as well. In our conversation with David, we cover a TON of ground, starting with the aforementioned biological inspiration for his work, then digging deeper into the different types of constraints he’s applied to ML systems. We explore abstract generative models and how advanced training agents inside of generative models has become, and quite a few papers, including Neuroevolution of Self-Interpretable Agents, World Models and Attention for Reinforcement Learning, and The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning. This interview is Nerd Alert certified, so get your notes ready! PS. David is one of our favorite follows on Twitter (@hardmaru), so check him out and share your thoughts on this interview and his work! The complete show notes for this episode can be found at twimlai.com/go/535.
Nov 8, 2021 • 42min

Facebook Abandons Facial Recognition. Should Everyone Else Follow Suit? With Luke Stark - #534

Today we’re joined by Luke Stark, an assistant professor at Western University in London, Ontario. In our conversation with Luke, we explore the existence and use of facial recognition technology, something Luke has been critical of in his work over the past few years, comparing it to plutonium. We discuss Luke’s recent paper, “Physiognomic Artificial Intelligence,” in which he critiques studies that attempt to use faces, facial expressions, and features to make determinations about people, a practice fundamental to facial recognition and one that Luke believes is inherently racist at its core. Finally, we briefly discuss the recent wave of hires at the FTC, and the news that broke (mid-recording) announcing that Facebook will be shutting down their facial recognition system, and why it's not necessarily the game-changing announcement it seemed on its… face. The complete show notes for this episode can be found at twimlai.com/go/534.
Nov 4, 2021 • 43min

Building Blocks of Machine Learning at LEGO with Francesc Joan Riera - #533

Today we’re joined by Francesc Joan Riera, an applied machine learning engineer at The LEGO Group. In our conversation, we explore the ML infrastructure at LEGO, specifically around two use cases, content moderation and user engagement. While content moderation is not a new or novel task, because their apps and products are marketed towards children, the need for heightened levels of moderation makes it very interesting. We discuss whether the moderation system is built specifically to weed out bad actors or passive behaviors, whether their system has a human-in-the-loop component, why they built a feature store as opposed to a traditional database, and the challenges they faced along that journey. We also talk through the range of skill sets on their team, the use of MLflow for experimentation, the adoption of AWS for serverless, and so much more! The complete show notes for this episode can be found at twimlai.com/go/533.
Nov 1, 2021 • 40min

Exploring the FastAI Tooling Ecosystem with Hamel Husain - #532

Today we’re joined by Hamel Husain, Staff Machine Learning Engineer at GitHub. Over the last few years, Hamel has had the opportunity to work on some of the most popular open source projects in the ML world, including fast.ai, nbdev, fastpages, and fastcore, just to name a few. In our conversation with Hamel, we discuss his journey into Silicon Valley, how he discovered that the ML tooling and infrastructure weren’t quite as advanced as he’d assumed, and how that led him to help build some of the foundational pieces of Airbnb’s Bighead platform. We also spend time exploring Hamel’s time working with Jeremy Howard and the team creating fast.ai, how nbdev came about, and how it plans to change the way practitioners interact with traditional Jupyter notebooks. Finally, we talk through a few more tools in the fast.ai ecosystem, fastpages and fastcore, how these tools interact with GitHub Actions, and the up-and-coming ML tools that Hamel is excited about. The complete show notes for this episode can be found at twimlai.com/go/532.
