
Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour – we believe in intellectual diversity in AI, and we cover all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, Ph.D. (MIT) (https://www.linkedin.com/in/dr-keith-duggar/).
Latest episodes

Jul 8, 2022 • 3h 37min
MLST #78 - Prof. NOAM CHOMSKY (Special Edition)
In this captivating discussion, Prof. Noam Chomsky, the father of modern linguistics and a towering intellectual, shares insights on the evolution of language and cognition. He critiques misconceptions about his work while exploring the boundaries between AI and human understanding. The conversation delves into the significance of probabilistic methods in neural networks and the innate aspects of language acquisition. Chomsky also reflects on the philosophical challenges surrounding determinism and free will, emphasizing the complexities of thought and communication.

Jun 16, 2022 • 1h 8min
#77 - Vitaliy Chiley (Cerebras)
Vitaliy Chiley, a Machine Learning Research Engineer at Cerebras Systems, dives into the revolutionary hardware that accelerates deep learning workloads. He discusses the efficiency of Cerebras' architecture compared to traditional GPUs and the importance of memory management. Chiley explores the impact of sparsity in neural networks, debating the trade-offs between weight and activation sparsity. With insights on optimizing deep learning models, he also touches on why starting with dense networks can be beneficial before moving towards sparsity.

Jun 9, 2022 • 58min
#76 - LUKAS BIEWALD (Weights and Biases CEO)
Lukas Biewald, the CEO of Weights and Biases, shares his insights as a successful entrepreneur in the AI space. He discusses the recent $15 million funding round for his company and the challenges of improving training data quality. The conversation touches on the balance between generalization and specialization in machine learning, alongside the critical need for explainability in AI tools. Biewald also highlights innovative community engagement strategies through YouTube sponsorships, emphasizing the evolving nature of machine learning and its entrepreneurial dynamics.

Apr 29, 2022 • 1h 55min
#75 - Emergence [Special Edition] with Dr. DANIELE GRATTAROLA
Dr. Daniele Grattarola, a postdoctoral researcher at EPFL, specializes in graph neural networks and protein design. He dives deep into the captivating concept of emergence, comparing weak and strong emergence in complex systems. The discussion touches on how simple rules in cellular automata can lead to complex, intelligent behaviors. They also explore the philosophical implications of emergence, the predictability of complex systems, and how these ideas relate to advanced applications in artificial intelligence and protein folding.

Apr 14, 2022 • 1h 6min
#74 Dr. ANDREW LAMPINEN - Symbolic behaviour in AI [UNPLUGGED]
Dr. Andrew Lampinen, a Senior Research Scientist at DeepMind with a PhD from Stanford, dives into the nuanced world of symbolic behavior in AI. He discusses how machines struggle to replicate human symbol use and emphasizes that meanings are shaped by user agreements rather than symbol content. Lampinen critiques traditional intelligence notions, advocating for a meaning-first approach in AI. The conversation also touches on the complexity of subjectivity, the limits of formal logic, and the ethical challenges in aligning AI with human values.

Apr 7, 2022 • 56min
#73 - YASAMAN RAZEGHI & Prof. SAMEER SINGH - NLP benchmarks
Yasaman Razeghi, a PhD student at UC Irvine, discusses her research showing that large language models' apparent success on reasoning tasks is largely driven by dataset memorization. Prof. Sameer Singh, an expert in machine learning interpretability, shares insights on the perils of metric obsession in evaluating AI. They delve into the importance of understanding human-like reasoning in AI and advocate for nuanced metrics that truly assess model capabilities. Their engaging conversation shines a light on the future of model testing and explainability.

Mar 29, 2022 • 1h 25min
#72 Prof. KEN STANLEY 2.0 - On Art and Subjectivity [UNPLUGGED]
Prof. Ken Stanley, a pioneer in the study of open-endedness and author of 'Why Greatness Cannot Be Planned,' challenges conventional goal-setting. He argues that rigid objectives often stifle creativity, advocating for a focus on subjectivity and serendipity. The discussion spans artificial intelligence's link to art, the deceptive nature of ambitious objectives, and the unexpected paths innovation can take. By embracing uncertainty, Stanley posits that we can unlock profound creative insights and truly understand the interplay between intelligence and artistic expression.

Mar 25, 2022 • 1h 3min
#71 - ZAK JOST (Graph Neural Networks + Geometric DL) [UNPLUGGED]
Zak Jost, an applied scientist at AWS and YouTuber from The Welcome AI Overlords channel, dives deep into the world of graph neural networks and geometric deep learning. He discusses the intricacies of message passing and the balance between top-down and bottom-up approaches. Zak highlights the importance of equivariant subgraph aggregation networks and addresses the challenges of over-smoothing in GNNs. He also introduces his upcoming GNN course, emphasizing community engagement and collaborative learning.

Mar 19, 2022 • 1h 19min
#70 - LETITIA PARCALABESCU - Symbolics, Linguistics [UNPLUGGED]
Letitia Parcalabescu, a PhD student at Heidelberg University focused on computational linguistics, shares her insights and experiences as the creator of the AI Coffee Break YouTube channel. She discusses the intricate relationship between symbolic AI and deep learning, emphasizing the need for a hybrid approach. Letitia reflects on her journey from physics to AI and the challenges of multimodal research. The conversation also touches on the importance of embracing imperfection in content creation while pursuing passion and innovation.

Mar 12, 2022 • 51min
#69 DR. THOMAS LUX - Interpolation of Sparse High-Dimensional Data
Dr. Thomas Lux, a research scientist at Meta in Silicon Valley, dives deep into the geometry behind machine learning. He discusses the unique advantages of neural networks over classical methods for high-dimensional data interpolation. Lux explains how neural networks excel at tasks like image recognition by effectively reducing dimensions and ignoring irrelevant data. He also explores the challenges of placing basis functions and the importance of data density, arguing that neural networks' ability to concentrate on the crucial regions of the input space is what lets them outperform traditional algorithms.