
Machine Learning Street Talk (MLST)

Latest episodes

Aug 11, 2021 • 2h 28min

#58 Dr. Ben Goertzel - Artificial General Intelligence

Ben Goertzel, a leading AI researcher and CEO of SingularityNET, dives into the ambitious quest for Artificial General Intelligence (AGI). He critiques current deep learning approaches, advocating for architectures inspired by human cognition rather than mere brain modeling. Discussing the potential of SingularityNET, Goertzel highlights the synergy of cognitive methods and knowledge representation. He also explores the importance of integrating neuroscience insights to enhance AI development, raising thought-provoking questions about creativity, consciousness, and the future of intelligence.
Jul 25, 2021 • 2h 31min

#57 - Prof. Melanie Mitchell - Why AI is harder than we think

In this engaging discussion, Professor Melanie Mitchell, a leading expert in complexity and AI, teams up with Letitia Parcalabescu, an AI researcher and YouTuber. They tackle the contrasting cycles of optimism and disappointment in AI development. Topics include the challenges of achieving common-sense reasoning and effective analogy-making in machine learning. They delve into the philosophical underpinnings of intelligence, the nuances of creativity in AI, and the limitations of current neural networks, all while advocating for a deeper understanding of both human and artificial cognition.
Jul 8, 2021 • 1h 11min

#56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

In this engaging discussion, guests Walid Saba, Gadi Singer, and J. Mark Bishop explore the future of AI beyond deep learning. Saba critiques the limitations of current statistical methods in conversational agents, while Singer emphasizes the need for hybrid models that blend reasoning with data. Bishop dives into the philosophical boundaries of computational cognition, challenging the notion that AI can replicate human understanding. Together, they advocate for a cognitive approach to align AI closer to human values and reasoning.
Jun 21, 2021 • 1h 36min

#55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR)

Dr. Ishan Misra, a prolific Research Scientist at Facebook AI Research, dives into the world of self-supervised vision models. He discusses groundbreaking papers like DINO and Barlow Twins, addressing how these innovative approaches reduce the need for human supervision in visual learning. Ishan explores the nuances of neural networks, object recognition challenges, and the philosophical implications of AI's common sense knowledge. Plus, he compares self-supervised models with semi-supervised techniques, showcasing the advancements in harnessing human knowledge for enhanced machine learning.
Jun 4, 2021 • 2h 24min

#54 Gary Marcus and Luis Lamb - Neurosymbolic models

In this engaging discussion, Gary Marcus, a renowned scientist and AI entrepreneur, and Luis Lamb, Secretary of Innovation for Science and Technology of the Brazilian state of Rio Grande do Sul, dive into the future of artificial intelligence. They challenge the limitations of deep learning and advocate for a hybrid neurosymbolic approach to enhance AI understanding and reasoning. Topics include the integration of symbolic reasoning, the complexities of abstraction, and the role of intention in knowledge acquisition. Their insights illuminate the path towards more sophisticated AI systems that can genuinely understand and reason like humans.
May 19, 2021 • 2h 18min

#53 Quantum Natural Language Processing - Prof. Bob Coecke (Oxford)

In this engaging conversation, Bob Coecke, a prominent physicist and quantum foundations professor at Oxford, dives into the fascinating interplay between quantum mechanics and natural language processing. He shares his ideas on how quantum structure can model the composition of word meanings and critiques traditional linguistics. Bob also discusses the ZX-calculus, a graphical language he co-created for reasoning about quantum circuits. Additionally, he explores the evolving culture in academia, emphasizing the need for genuine research over management-driven strategies.
May 1, 2021 • 1h 48min

#52 - Unadversarial Examples (Hadi Salman, MIT)

Hadi Salman, a PhD student at MIT with experience at Uber and Microsoft Research, dives into the intriguing world of adversarial and unadversarial examples. He discusses how slight image alterations can mislead classifiers and explores innovative ways to flip this problem on its head. By designing unadversarial examples, Hadi aims to create more robust models. The conversation also touches on the balance between accuracy and robustness, as well as the potential of adversarial training to enhance transfer learning outcomes.
Apr 16, 2021 • 2h 2min

#51 Francois Chollet - Intelligence and Generalisation

Francois Chollet, creator of Keras and author of 'Deep Learning with Python,' shares his insights on intelligence as generalisation. He challenges the limitations of neural networks, arguing that they struggle with reasoning and planning. The discussion explores the future of AI, emphasizing the need for program synthesis and the integration of discrete methods. Chollet dives into the nuances of generalisation and abstraction, highlighting how these concepts can shape a new era in AI innovation. Expect a fascinating journey through the complexities of intelligence!
Apr 4, 2021 • 1h 33min

#50 Christian Szegedy - Formal Reasoning, Program Synthesis

Dr. Christian Szegedy, a deep learning pioneer at Google, dives into the potential of automating mathematical reasoning and program synthesis. He discusses autoformalisation, envisioning a super-human mathematician that comprehends natural language. Szegedy shares insights on the evolution of machine learning, particularly with transformers, and their impact on formal proofs and reasoning. The conversation also highlights challenges in research and the path toward human-level AGI, questioning traditional programming methods while exploring the nature of mathematical creativity.
Mar 23, 2021 • 1h 25min

#49 - Meta-Gradients in RL - Dr. Tom Zahavy (DeepMind)

In this conversation, Dr. Tom Zahavy, a Research Scientist at DeepMind specializing in reinforcement learning, discusses his journey into AI and the potential of reinforcement learning for achieving artificial general intelligence. Alongside Robert Lange, a PhD candidate and insightful blogger, they delve into the concept of meta-gradients, exploring their role in optimizing learning dynamics and hyperparameter tuning. The duo also tackles the challenges of balancing exploration and exploitation, and the significance of recognizing patterns in developing intelligent systems.
