The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Jun 7, 2021 • 40min

Data Science on AWS with Chris Fregly and Antje Barth - #490

Joining the conversation are Chris Fregly, Principal Developer Advocate at AWS, and Antje Barth, Senior Developer Advocate at AWS, co-authors of 'Data Science on AWS.' They dive into their new book, revealing strategies for reducing costs and improving performance in data science projects. The duo also discusses their Practical Data Science Specialization on Coursera, community-building initiatives, and innovative approaches like multi-armed bandit strategies for model optimization. Insights from the recent ML Summit showcase advancements in making AI more accessible.
Jun 3, 2021 • 40min

Accelerating Distributed AI Applications at Qualcomm with Ziad Asghar - #489

Ziad Asghar, Vice President of Product Management at Qualcomm, discusses the dynamic interplay of AI and 5G, showcasing how Qualcomm's Snapdragon platform fuels mobile AI advancements. He dives into the evolution of the Cloud AI 100 and its impact on smart city technologies. Ziad also highlights the significance of federated learning and the importance of privacy in data security. Additionally, he explores AI innovations in automotive technology, including collaborations with NASA to enhance vehicle autonomy and safety.
May 31, 2021 • 43min

Buy AND Build for Production Machine Learning with Nir Bar-Lev - #488

Nir Bar-Lev, co-founder and CEO of ClearML, shares insights from his extensive tech background including time at Google. He discusses the evolving landscape of machine learning platforms and the critical decision between building versus buying solutions. Nir emphasizes the importance of effective experiment management, the risks of relying solely on cloud vendors, and the balance needed to combat overfitting. He also touches on advancements in federated learning and how ClearML integrates innovative techniques to empower businesses in their AI journeys.
May 27, 2021 • 56min

Applied AI Research at AWS with Alex Smola - #487

In this engaging discussion, Alex Smola, Vice President and Distinguished Scientist at AWS AI, explores cutting-edge AI research. He delves into deep learning on graphs, highlighting its role in enhancing data interpretation and applications like fraud detection. Alex also discusses the significance of AutoML, designed to make machine learning more accessible. He introduces Granger causality in causal modeling and shares insights about the growing AWS ML Summit, showcasing speaker highlights and exciting trends in AI.
May 24, 2021 • 40min

Causal Models in Practice at Lyft with Sean Taylor - #486

Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs, shares his journey from lab director to hands-on innovator. He dives into the moonshot approaches his team takes towards marketplace experimentation and forecasting. The conversation highlights the significance of causality in their modeling efforts and the challenges of balancing supply and demand. Moreover, he discusses the application of neural networks for decision-making, emphasizing collaboration and the transformation of traditional statistical methods to drive business insights.
May 20, 2021 • 42min

Using AI to Map the Human Immune System w/ Jabran Zahid - #485

Jabran Zahid, a Senior Researcher at Microsoft Research, dives into the fascinating world of mapping the human immune system using AI. With a unique background in astrophysics, he discusses the Antigen Map Project and its adaptation during the COVID-19 pandemic. Jabran shares insights into the complexities of T cell development and the challenges faced in using machine learning for immunology. He highlights the importance of model interpretability and the progress made towards developing FDA-approved diagnostic tools for enhanced health understanding.
May 17, 2021 • 38min

Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484

In this discussion, Konstantin Rusch, a PhD student at ETH Zurich, dives into innovative recurrent neural networks (RNNs) aimed at tackling long-time dependencies. He shares insights from his papers on coRNN and uniCORNN, inspired by neuroscience, and how these architectures compare to traditional models like LSTMs. Konstantin also reveals challenges in ensuring gradient stability and innovative techniques that enhance RNNs' expressive power. Plus, he discusses his ambitions for future advancements in memory efficiency and performance.
May 13, 2021 • 38min

What the Human Brain Can Tell Us About NLP Models with Allyson Ettinger - #483

In this engaging discussion, Allyson Ettinger, an Assistant Professor at the University of Chicago, dives into the intriguing intersection of machine learning and neuroscience. She shares insights on how brain research can enhance AI, particularly in natural language processing (NLP). The conversation highlights the importance of controlled evaluation methods and the challenges AI faces in truly understanding language. Ettinger also touches on the predictive abilities of NLP models and how they compare to human cognitive processing, revealing the ongoing quest to mimic brain functionality in AI.
May 10, 2021 • 41min

Probabilistic Numeric CNNs with Roberto Bondesan - #482

Roberto Bondesan, an AI researcher at Qualcomm, shares his groundbreaking work on probabilistic numeric CNNs, which leverage Gaussian processes for enhanced error correction. He delves into innovative adaptive neural compression techniques that optimize data transmission efficiency. The conversation also touches on the exciting intersection of quantum computing and AI, where Bondesan discusses the future potential of combinatorial optimization in revolutionizing logistics and design. His insights bridge physics and advanced AI applications, highlighting a promising frontier in technology.
May 6, 2021 • 35min

Building a Unified NLP Framework at LinkedIn with Huiji Gao - #481

Huiji Gao, Senior Engineering Manager at LinkedIn, shares his passion for building sophisticated NLP tools, like the open-source DeText framework. He discusses how DeText revolutionized LinkedIn’s approach to model training and its broad applications across the company. The conversation highlights the synergy between DeText and LiBERT, optimized for LinkedIn's data. They delve into the challenges of model evaluation, the importance of user interaction in enhancing performance, and techniques for document ranking optimization.