Machine Learning Street Talk (MLST)

Feb 24, 2022 • 52min

#64 Prof. Gary Marcus 3.0

In this engaging conversation, cognitive scientist Gary Marcus, founder of Robust AI, tackles profound questions about AI and consciousness. He expresses skepticism about claims that AI systems are self-aware and weighs in on the philosophical debates surrounding consciousness. The discussion dives into the challenges of abstract models and the importance of stable symbolic representations, particularly in self-driving technology. Marcus also shares insights on extrapolation in high dimensions and scaling laws, emphasizing the complexities of true understanding in neural networks.
Feb 22, 2022 • 1h 33min

#063 - Prof. YOSHUA BENGIO - GFlowNets, Consciousness & Causality

Yoshua Bengio, a Turing Award recipient and a leader in AI, dives into the fascinating world of GFlowNets, which he believes can revolutionize machine learning by generating diverse training data. The discussion covers the balance between exploration and exploitation in decision-making, particularly in drug discovery and gaming. Bengio also addresses the philosophical implications of consciousness in AI, urging a cautious perspective on claims of AI sentience. His reflections on the evolution of thought in neural networks reveal a journey shaped by key insights into causal representation learning.
Feb 3, 2022 • 1h 30min

#062 - Dr. Guy Emerson - Linguistics, Distributional Semantics

Dr. Guy Emerson, a computational linguist at Cambridge, shares insights into distributional semantics and truth-conditional semantics. The conversation delves into the challenges of representing meaning in machine learning, the importance of grounding language in real-world contexts, and the interplay between cognition and linguistics. Emerson critiques traditional linguistic models, emphasizing the need for flexible frameworks. The discussion also touches on Bayesian inference in language, examining how context influences meaning and the complexities of vague words like 'heap' and 'tall'.
Jan 4, 2022 • 3h 20min

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

Yann LeCun, Meta's Chief AI Scientist and Turing Award winner, joins Randall Balestriero, a researcher at Meta AI, to dive into the complexities of interpolation and extrapolation in neural networks. They discuss how high-dimensional data challenges traditional views, presenting their paper on extrapolation in high dimensions. Yann critiques the notion of interpolation in deep learning, while Randall emphasizes the geometric principles that can redefine our understanding of neural network behavior. Expect eye-opening insights into AI's evolving landscape!
Sep 19, 2021 • 3h 33min

#60 Geometric Deep Learning Blueprint (Special Edition)

Joining the discussion are Petar Veličković from DeepMind, renowned for his work on graph neural networks, Taco Cohen from Qualcomm AI Research, specializing in geometric deep learning, and Joan Bruna, an influential figure in data science at NYU. They delve into geometric deep learning, exploring its foundations in symmetry and invariance. The conversation highlights innovative mathematical frameworks, the unification of geometries, and their implications in AI. Insights on dimensionality, algorithmic reasoning, and historical perspectives on geometry further enrich this engaging dialogue.
Sep 3, 2021 • 2h 35min

#59 - Jeff Hawkins (Thousand Brains Theory)

In this engaging discussion, neuroscientist and entrepreneur Jeff Hawkins, known for his Thousand Brains Theory, joins Connor Leahy to unravel how our brains construct reality through a multitude of models. They dive into the role of the neocortex in intelligence and sensory perception, explore Sparse Distributed Representations and their applications in AI, and highlight the key differences between traditional neural networks and Hawkins' innovative ideas. The conversation also touches on the ethical integration of AI with human values and the philosophical implications of emerging technologies.
Aug 11, 2021 • 2h 28min

#58 Dr. Ben Goertzel - Artificial General Intelligence

Ben Goertzel, a leading AI researcher and CEO of SingularityNET, dives into the ambitious quest for Artificial General Intelligence (AGI). He critiques current deep learning approaches, advocating for architectures inspired by human cognition rather than mere brain modeling. Discussing the potential of SingularityNET, Goertzel highlights the synergy of cognitive methods and knowledge representation. He also explores the importance of integrating neuroscience insights to enhance AI development, raising thought-provoking questions about creativity, consciousness, and the future of intelligence.
Jul 25, 2021 • 2h 31min

#57 - Prof. Melanie Mitchell - Why AI is harder than we think

In this engaging discussion, Professor Melanie Mitchell, a leading expert in complexity and AI, teams up with Letitia Parcalabescu, an AI researcher and YouTuber. They tackle the contrasting cycles of optimism and disappointment in AI development. Topics include the challenges of achieving common-sense reasoning and effective analogy-making in machine learning. They delve into the philosophical underpinnings of intelligence, the nuances of creativity in AI, and the limitations of current neural networks, all while advocating for a deeper understanding of both human and artificial cognition.
Jul 8, 2021 • 1h 11min

#56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

In this engaging discussion, guests Walid Saba, Gadi Singer, and J. Mark Bishop explore the future of AI beyond deep learning. Saba critiques the limitations of current statistical methods in conversational agents, while Singer emphasizes the need for hybrid models that blend reasoning with data. Bishop dives into the philosophical boundaries of computational cognition, challenging the notion that AI can replicate human understanding. Together, they advocate for a cognitive approach to align AI closer to human values and reasoning.
Jun 21, 2021 • 1h 36min

#55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR).

Dr. Ishan Misra, a prolific Research Scientist at Facebook AI Research, dives into the world of self-supervised vision models. He discusses groundbreaking papers like DINO and Barlow Twins, addressing how these innovative approaches reduce the need for human supervision in visual learning. Ishan explores the nuances of neural networks, object recognition challenges, and the philosophical implications of AI's common sense knowledge. Plus, he compares self-supervised models with semi-supervised techniques, showcasing the advancements in harnessing human knowledge for enhanced machine learning.
