The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Jun 17, 2021 • 54min

AI and Society: Past, Present and Future with Eric Horvitz - #493

Eric Horvitz, Chief Scientific Officer at Microsoft, delves into the future of AI and its ethical implications. He shares insights from his time as AAAI president and discusses the transformation of AI since 2009. The conversation highlights the critical role of responsible AI development, particularly through the Aether Committee. Horvitz also addresses the National Security Commission on AI's comprehensive report covering AI R&D, trustworthy systems, and the ethical ramifications of technologies like facial recognition in law enforcement.
Jun 14, 2021 • 44min

Agile Applied AI Research with Parvez Ahammad - #492

Parvez Ahammad, Head of Data Science Applied Research at LinkedIn, shares his insights on organizing data science teams for success. He discusses balancing long-term project investments with the challenges of experimentation. Parvez also delves into the impact of differential privacy on member data and the launch of the Greykite forecasting library. The conversation highlights the dynamic relationship between applied research and engineering in AI, emphasizing the strategic alignment and effective team dynamics needed to drive innovation.
Jun 10, 2021 • 38min

Haptic Intelligence with Katherine J. Kuchenbecker - #491

Join Katherine J. Kuchenbecker, director at the Max Planck Institute for Intelligent Systems, as she dives into the fascinating world of haptic intelligence. Discover how she merges haptics with machine learning to enhance human-robot interactions, including a robotic finger that feels and sees! Hear about her HuggieBot, designed for personalized hugs. Katherine also shares her passion for mentoring and the crucial need for diversity in robotics, revealing inspiring insights from her journey in this evolving field.
Jun 7, 2021 • 40min

Data Science on AWS with Chris Fregly and Antje Barth - #490

Joining the conversation are Chris Fregly, Principal Developer Advocate at AWS and co-author of 'Data Science on AWS,' and Antje Barth, Senior Developer Advocate at AWS and co-author of the same book. They dive into their new book, revealing strategies for reducing costs and improving performance in data science projects. The duo also discusses their Practical Data Science Specialization on Coursera, community-building initiatives, and innovative approaches like multi-armed bandit strategies for model optimization. Insights from the recent AWS ML Summit showcase advancements in making AI more accessible.
Jun 3, 2021 • 40min

Accelerating Distributed AI Applications at Qualcomm with Ziad Asghar - #489

Ziad Asghar, Vice President of Product Management at Qualcomm, discusses the dynamic interplay of AI and 5G, showcasing how Qualcomm's Snapdragon platform fuels mobile AI advancements. He dives into the evolution of the Cloud AI 100 and its impact on smart city technologies. Ziad also highlights the significance of federated learning and the importance of privacy in data security. Additionally, he explores AI innovations in automotive technology, including collaborations with NASA to enhance vehicle autonomy and safety.
May 31, 2021 • 43min

Buy AND Build for Production Machine Learning with Nir Bar-Lev - #488

Nir Bar-Lev, co-founder and CEO of ClearML, shares insights from his extensive tech background, including time at Google. He discusses the evolving landscape of machine learning platforms and the critical decision between building versus buying solutions. Nir emphasizes the importance of effective experiment management, the risks of relying solely on cloud vendors, and the balance needed to combat overfitting. He also touches on advancements in federated learning and how ClearML integrates innovative techniques to empower businesses in their AI journeys.
May 27, 2021 • 56min

Applied AI Research at AWS with Alex Smola - #487

In this engaging discussion, Alex Smola, Vice President and Distinguished Scientist at AWS AI, explores cutting-edge AI research. He delves into deep learning on graphs, highlighting its role in enhancing data interpretation and applications like fraud detection. Alex also discusses the significance of AutoML, designed to make machine learning more accessible. He introduces Granger causality in causal modeling and shares insights about the growing AWS ML Summit, showcasing speaker highlights and exciting trends in AI.
May 24, 2021 • 40min

Causal Models in Practice at Lyft with Sean Taylor - #486

Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs, shares his journey from lab director to hands-on innovator. He dives into the moonshot approaches his team takes towards marketplace experimentation and forecasting. The conversation highlights the significance of causality in their modeling efforts and the challenges of balancing supply and demand. Moreover, he discusses the application of neural networks for decision-making, emphasizing collaboration and the transformation of traditional statistical methods to drive business insights.
May 20, 2021 • 42min

Using AI to Map the Human Immune System w/ Jabran Zahid - #485

Jabran Zahid, a Senior Researcher at Microsoft Research, dives into the fascinating world of mapping the human immune system using AI. With a unique background in astrophysics, he discusses the Antigen Map Project and its adaptation during the COVID-19 pandemic. Jabran shares insights into the complexities of T cell development and the challenges faced in using machine learning for immunology. He highlights the importance of model interpretability and the progress made towards developing FDA-approved diagnostic tools for enhanced health understanding.
May 17, 2021 • 38min

Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484

In this discussion, Konstantin Rusch, a PhD student at ETH Zurich, dives into innovative recurrent neural networks (RNNs) aimed at tackling long-time dependencies. He shares insights from his papers on coRNN and uniCORNN, inspired by neuroscience, and how these architectures compare to traditional models like LSTMs. Konstantin also reveals challenges in ensuring gradient stability and innovative techniques that enhance RNNs' expressive power. Plus, he discusses his ambitions for future advancements in memory efficiency and performance.
