Machine Learning Street Talk (MLST)

Jun 4, 2021 • 2h 24min

#54 Gary Marcus and Luis Lamb - Neurosymbolic models

In this engaging discussion, Gary Marcus, a renowned cognitive scientist and AI entrepreneur, and Luis Lamb, Secretary of Innovation for Science and Technology in Brazil, dive into the future of artificial intelligence. They challenge the limitations of deep learning and advocate for a hybrid neurosymbolic approach to enhance AI understanding and reasoning. Topics include the importance of integrating symbolic reasoning, the complexities of abstraction, and the role of intention in knowledge acquisition. Their insights illuminate the path towards more sophisticated AI systems that can genuinely understand and reason like humans.
May 19, 2021 • 2h 18min

#53 Quantum Natural Language Processing - Prof. Bob Coecke (Oxford)

In this engaging conversation, Bob Coecke, a prominent physicist and professor of quantum foundations at Oxford, dives into the fascinating interplay between quantum mechanics and natural language processing. He shares his ideas on how quantum principles can redefine word meanings and critiques traditional linguistics. Bob also discusses the ZX-calculus he co-created, a graphical language for reasoning about quantum circuits. Additionally, he explores the evolving culture in academia, emphasizing the need for genuine research over management-driven strategies.
May 1, 2021 • 1h 48min

#52 - Unadversarial Examples (Hadi Salman, MIT)

Hadi Salman, a PhD student at MIT with experience at Uber and Microsoft Research, dives into the intriguing world of adversarial and unadversarial examples. He discusses how slight image alterations can mislead classifiers and explores innovative ways to flip this problem on its head. By designing unadversarial examples, Hadi aims to create more robust models. The conversation also touches on the balance between accuracy and robustness, as well as the potential of adversarial training to enhance transfer learning outcomes.
Apr 16, 2021 • 2h 2min

#51 Francois Chollet - Intelligence and Generalisation

Francois Chollet, creator of Keras and author of 'Deep Learning with Python,' shares his insights on intelligence as generalisation. He examines the limitations of neural networks, arguing they struggle with reasoning and planning. The discussion explores the future of AI, emphasizing the need for program synthesis and the integration of discrete methods. Chollet dives into the nuances of generalisation and abstraction, highlighting how these concepts can shape a new era in AI innovation. Expect a fascinating journey through the complexities of intelligence!
Apr 4, 2021 • 1h 33min

#50 Christian Szegedy - Formal Reasoning, Program Synthesis

Dr. Christian Szegedy, a deep learning pioneer at Google, dives into the potential of automating mathematical reasoning and program synthesis. He discusses autoformalisation, envisioning a super-human mathematician that comprehends natural language. Szegedy shares insights on the evolution of machine learning, particularly with transformers, and their impact on formal proofs and reasoning. The conversation also highlights challenges in research and the path toward human-level AGI, questioning traditional programming methods while exploring the nature of mathematical creativity.
Mar 23, 2021 • 1h 25min

#49 - Meta-Gradients in RL - Dr. Tom Zahavy (DeepMind)

In this conversation, Dr. Tom Zahavy, a Research Scientist at DeepMind specializing in reinforcement learning, discusses his journey into AI and the potential of reinforcement learning for achieving artificial general intelligence. Alongside Robert Lange, a PhD candidate and insightful blogger, they delve into the concept of meta-gradients, exploring their role in optimizing learning dynamics and hyperparameter tuning. The duo also tackles the challenges of balancing exploration and exploitation, and the significance of recognizing patterns in developing intelligent systems.
Mar 16, 2021 • 37min

#48 Machine Learning Security - Andy Smith

Andy Smith, a cybersecurity expert and YouTube content creator, dives into the often-overlooked realm of security in ML DevOps. He highlights the importance of threat modeling and the complexities posed by adversarial examples. The conversation sheds light on trust boundaries in machine learning systems and the need for collaboration between ML and security teams. Andy also discusses the unpredictability of large state spaces and the essential role of human oversight, advocating for a pragmatic focus on risk management to protect data integrity.
Mar 14, 2021 • 1h 40min

#047 Interpretable Machine Learning - Christoph Molnar

Christoph Molnar, an expert in interpretable machine learning and author of a notable book on the subject, dives deep into the complexities of model transparency. He discusses the crucial role of interpretability in enhancing trust and societal acceptance. The conversation critiques common methods like saliency maps and highlights pitfalls of reliance on complex models. Molnar also emphasizes the importance of simplicity and statistical rigor in model predictions, advocating for strategies that improve understanding while addressing ethical considerations in machine learning.
Mar 6, 2021 • 1h 40min

#046 The Great ML Stagnation (Mark Saroufim and Dr. Mathew Salvaris)

Mark Saroufim, author of "Machine Learning: The Great Stagnation," joins Mathew Salvaris, a lead ML scientist at iRobot, to dissect the stagnation in machine learning. They discuss how academia’s incentive structures stifle innovation and the implications of 'state-of-the-art' chasing. They highlight the rise of the 'gentleman scientist,' the complexities of achieving measurable success, and the need for a user-focused approach in research. The duo emphasizes collaboration and the importance of embracing failures as part of the learning process.
Feb 28, 2021 • 2h 30min

#045 Microsoft's Platform for Reinforcement Learning (Bonsai)

Scott Stanfield and Megan Bloemsma from Microsoft's Autonomous Systems team dive into the ambitious Project Bonsai. They discuss its goal to simplify reinforcement learning, making it accessible for developers without PhDs. The conversation highlights the role of machine teaching in enhancing AI training, using real-world applications like balancing robots. They emphasize the need for expert guidance and domain knowledge in overcoming traditional challenges in the field. Innovations in simulation and collaboration are also spotlighted, showcasing a future where complex tasks become manageable.
