Machine Learning Street Talk (MLST)

Apr 14, 2022 • 1h 6min

#74 Dr. ANDREW LAMPINEN - Symbolic behaviour in AI [UNPLUGGED]

Dr. Andrew Lampinen, a Senior Research Scientist at DeepMind with a PhD from Stanford, dives into the nuanced world of symbolic behavior in AI. He discusses how machines struggle to replicate human symbol use and emphasizes that meanings are shaped by user agreements rather than symbol content. Lampinen critiques traditional intelligence notions, advocating for a meaning-first approach in AI. The conversation also touches on the complexity of subjectivity, the limits of formal logic, and the ethical challenges in aligning AI with human values.
Apr 7, 2022 • 56min

#73 - YASAMAN RAZEGHI & Prof. SAMEER SINGH - NLP benchmarks

Yasaman Razeghi, a PhD student at UC Irvine, discusses her groundbreaking research showing that large language models excel at reasoning tasks primarily due to dataset memorization. Prof. Sameer Singh, an expert in machine learning interpretability, shares insights on the perils of metric obsession in evaluating AI. They delve into the importance of understanding human-like reasoning in AI and advocate for nuanced metrics that truly assess model capabilities. Their engaging conversation shines a light on the future of model testing and explainability.
Mar 29, 2022 • 1h 25min

#72 Prof. KEN STANLEY 2.0 - On Art and Subjectivity [UNPLUGGED]

Prof. Ken Stanley, a pioneer in the study of open-endedness and author of 'Why Greatness Cannot Be Planned,' challenges conventional goal-setting. He argues that rigid objectives often stifle creativity, advocating for a focus on subjectivity and serendipity. The discussion spans artificial intelligence's link to art, the deceptive nature of ambitious objectives, and the unexpected paths innovation can take. By embracing uncertainty, Stanley posits that we can unlock profound creative insights and truly understand the interplay between intelligence and artistic expression.
Mar 25, 2022 • 1h 3min

#71 - ZAK JOST (Graph Neural Networks + Geometric DL) [UNPLUGGED]

Zak Jost, an applied scientist at AWS and YouTuber from The Welcome AI Overlords channel, dives deep into the world of graph neural networks and geometric deep learning. He discusses the intricacies of message passing and the balance between top-down and bottom-up approaches. Zak highlights the importance of equivariant subgraph aggregation networks and addresses the challenges of over-smoothing in GNNs. He also introduces his upcoming GNN course, emphasizing community engagement and collaborative learning.
Mar 19, 2022 • 1h 19min

#70 - LETITIA PARCALABESCU - Symbolics, Linguistics [UNPLUGGED]

Letitia Parcalabescu, a PhD student at Heidelberg University focused on computational linguistics, shares her insights and experiences as the creator of the AI Coffee Break YouTube channel. She discusses the intricate relationship between symbolic AI and deep learning, emphasizing the need for a hybrid approach. Letitia reflects on her journey from physics to AI and the challenges of multimodal research. The conversation also touches on the importance of embracing imperfection in content creation while pursuing passion and innovation.
Mar 12, 2022 • 51min

#69 DR. THOMAS LUX - Interpolation of Sparse High-Dimensional Data

Dr. Thomas Lux, a research scientist at Meta in Silicon Valley, dives deep into the geometry behind machine learning. He discusses the unique advantages of neural networks over classical methods for high-dimensional data interpolation. Lux explains how neural networks excel at tasks like image recognition by effectively reducing dimensions and ignoring irrelevant data. He explores the challenges of placing basis functions and the importance of data density. Neural networks' ability to focus on the most relevant regions of the input space, he argues, is why they outperform traditional approximation algorithms.
Mar 7, 2022 • 1h 42min

#68 DR. WALID SABA 2.0 - Natural Language Understanding [UNPLUGGED]

Dr. Walid Saba, a Senior Scientist at Sorcero, critiques deep learning's approach to natural language understanding. He argues that reliance on statistical learning leads to failure, akin to memorizing infinity. Saba emphasizes the importance of symbolic logic and human cognitive processes in AI development. He explores the complexities of memory in neural networks, the distinctions between top-down and bottom-up problem-solving, and the need for hybrid models that integrate logic and prior knowledge. His insights challenge conventional methods and advocate for a deeper understanding of cognition in AI.
Mar 2, 2022 • 1h 42min

#67 Prof. KARL FRISTON 2.0

In this enlightening discussion, Karl Friston, a leading British neuroscientist at University College London, delves into the free energy principle and its implications for cognition and consciousness. He and the hosts explore the challenge of communicating complex scientific concepts simply and the balance of order and chaos in existence. Friston also interrogates the essence of consciousness, questioning whether it can be recreated in silico, while tackling the complexities of free will in a deterministic universe. A thought-provoking conversation that intertwines science, philosophy, and self-discovery.
Feb 28, 2022 • 51min

#66 ALEXANDER MATTICK - [Unplugged / Community Edition]

Join Alexander Mattick, a prominent voice in Yannic's Discord community and an AI aficionado, as he dives deep into the intricacies of neural networks. He reveals fascinating insights on spline theory and the complexities of abstraction in machine learning. The discussion also touches on the balance between exploration and control in knowledge acquisition, alongside the philosophical implications of causality and discrete versus continuous modeling. Alex champions the value of a broad knowledge base, illustrating how diverse insights can enhance problem-solving.
Feb 26, 2022 • 1h 28min

#65 Prof. PEDRO DOMINGOS [Unplugged]

Pedro Domingos, a renowned professor of computer science and author of "The Master Algorithm," dives deep into the fundamentals of machine learning. He emphasizes the need for a solid understanding of AI for both professionals and the public, likening it to learning to drive. Domingos discusses the evolution of generative and discriminative models, critiques existing algorithms, and explores the interplay of entropy and reality. He also proposes a unifying 'master algorithm' while questioning the complexities of causality in learning systems, ultimately advocating for a broader understanding of AI.
