The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Oct 18, 2021 • 37min

Learning to Ponder: Memory in Deep Neural Networks with Andrea Banino - #528

Andrea Banino, a research scientist at DeepMind, dives into the fascinating world of artificial general intelligence and episodic memory. He discusses how past experiences shape intelligent behavior and the challenges of integrating memory into neural networks. The conversation highlights his innovative work on PonderNet, a model that optimizes computational resources based on problem complexity. Banino also touches on the importance of grid cells in navigation and the synergy between transformers and reinforcement learning for enhanced performance.
Oct 14, 2021 • 43min

Advancing Deep Reinforcement Learning with NetHack, w/ Tim Rocktäschel - #527

Tim Rocktäschel, a research scientist at Facebook AI Research and associate professor at University College London, dives into the intricate world of training reinforcement learning agents using the complex game NetHack. He discusses the challenges of generalization in simulated environments and the innovative MiniHack framework. The conversation highlights the significance of procedural generation, the intricacies of creating effective scoring systems, and the computational demands for training these advanced AI models. Tim's insights illuminate the future of AI in dynamic settings.
Oct 11, 2021 • 41min

Building Technical Communities at Stack Overflow with Prashanth Chandrasekar - #526

In this engaging conversation, Prashanth Chandrasekar, CEO of Stack Overflow, shares insights into the platform's evolution and impact since 2008. He discusses the significant surge in user engagement during the pandemic, emphasizing the need for adaptability in a hybrid workforce. Prashanth also explores how Stack Overflow leverages AI to enhance community interactions and user experience. Additionally, he highlights strategic partnerships and the platform's role in developer education, offering a glimpse into the future of coding collaboration.
Oct 7, 2021 • 40min

Deep Learning is Eating 5G. Here’s How, w/ Joseph Soriaga - #525

Joseph Soriaga, Senior Director of Technology at Qualcomm, explores the exciting intersection of deep learning and 5G technology. He discusses groundbreaking research on augmenting Kalman filters to enhance model efficiency and interpretability. Moreover, he unveils WiCluster, a method for passive indoor positioning using WiFi, shedding light on how AI can optimize 5G networks. Soriaga also highlights the transformative potential of machine learning in delivering connected services, paving the way for a more efficient and interconnected future.
Oct 4, 2021 • 47min

Modeling Human Cognition with RNNs and Curriculum Learning, w/ Kanaka Rajan - #524

Kanaka Rajan, an assistant professor at the Icahn School of Medicine, specializes in merging biology with AI. She discusses her innovative "Lego models" of the brain designed to emulate cognitive functions and memory processes. The conversation dives into the potential of recurrent neural networks (RNNs) in simulating complex learning. Rajan also explains curriculum learning, where tasks gradually increase in complexity, and reflects on the relationship between biological cognition and AI, touching on the challenges of understanding memory and its implications for mental health.
Sep 30, 2021 • 41min

Do You Dare Run Your ML Experiments in Production? with Ville Tuulos - #523

Ville Tuulos, CEO and co-founder of Outerbounds, shares his journey from building Metaflow at Netflix to writing his upcoming book on effective data science infrastructure. He discusses the unique challenges of ML production, emphasizing the need for innovative solutions and robust experimentation techniques. Ville highlights Metaflow's role in enhancing productivity and its integration with platforms like Kubernetes. The conversation also delves into the evolution of MLOps, especially in a post-pandemic world, and the growing community around these tools.
Sep 27, 2021 • 49min

Delivering Neural Speech Services at Scale with Li Jiang - #522

Li Jiang, a distinguished engineer at Microsoft with 27 years of experience in speech technologies, dives into the rapid advancements in speech recognition. He discusses the trade-offs between hybrid and end-to-end models and their implications for accuracy and service quality. Jiang also highlights the importance of customizing voice solutions for different industries and emphasizes the ethical considerations surrounding text-to-speech technologies. With a forward-looking perspective, he envisions the future of speech services, focusing on achieving human-like communication.
Sep 23, 2021 • 49min

AI’s Legal and Ethical Implications with Sandra Wachter - #521

Sandra Wachter, an associate professor and senior research fellow at the University of Oxford, dives deep into the intersection of law and AI. She unpacks algorithmic accountability, focusing on issues like explainability, data protection, and biases in machine learning. Wachter discusses the challenge of black box algorithms and introduces counterfactual explanations to enhance transparency. She also highlights her conditional demographic disparity test, recently adopted by Amazon, aimed at combating bias in models and improving compliance with European non-discrimination laws.
Sep 20, 2021 • 41min

Compositional ML and the Future of Software Development with Dillon Erb - #520

Dillon Erb, CEO of Paperspace, dives into the world of compositional machine learning and its potential to revolutionize software development. He discusses the evolution from experimental ML to a disciplined engineering framework and the challenges of integrating cloud technology. The conversation highlights the debate around the role of notebooks versus traditional coding practices in data science. Also on the table is Paperspace’s innovative new product, Workflows, which aims to streamline ML application development into a more cohesive system.
Sep 16, 2021 • 38min

Generating SQL Database Queries from Natural Language with Yanshuai Cao - #519

Yanshuai Cao, a Senior Research Team Lead at Borealis AI, discusses his groundbreaking work on Turing, an engine transforming natural language into SQL queries. He compares it with OpenAI's Codex, highlighting the unique challenges of SQL generation. The conversation reveals insights into the crucial role of reasoning and common sense in accurate query creation. They also tackle complexities in multilingual datasets, data augmentation, and the ongoing quest for model explainability, shedding light on fascinating advancements in AI technology.
