

Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we cover all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, who holds a doctorate from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Episodes

Aug 22, 2024 • 1h 28min
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)
Andrew Ilyas, a PhD student at MIT and soon to be a professor at CMU, dives deep into the fascinating world of machine learning. He explains how training data shapes model predictions and why adversarial examples arise from predictive, non-robust features in the data rather than being mere bugs. The discussion spans the complexities of robustness, black-box attacks, and biases in data collection, especially in the ImageNet dataset. Ilyas also shares innovative approaches to correcting self-selection bias and his ambitious plans for future research in the field.
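For readers new to the topic, the sketch below shows the classic fast gradient sign method (FGSM) of crafting an adversarial example. This is a textbook illustration, not the specific method from the episode; the model, input tensors, and epsilon value are all assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast gradient sign method: nudge the input in the direction
    that most increases the loss, within an L-infinity ball of
    radius epsilon, so the perturbation stays imperceptible."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # one signed gradient step
    return x_adv.clamp(0, 1).detach()    # keep pixels in valid range
```

The "features, not bugs" argument is that such perturbations exploit genuinely predictive patterns the model learned from its data, not arbitrary noise.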

Aug 21, 2024 • 57min
Joscha Bach - AGI24 Keynote (Cyberanimism)
Dr. Joscha Bach, an AI researcher renowned for his insights into consciousness and AGI, introduces the intriguing concept of "cyber animism." He explores the idea that nature might host self-organizing software agents, similar to ancient spirits. Bach combines philosophy, history, and cutting-edge science, suggesting consciousness could be more widespread than we think. He encourages a rethinking of the distinctions between human, artificial, and natural intelligence, probing the connections between consciousness, self-awareness, and even plant signaling.

Aug 17, 2024 • 1h 12min
Gary Marcus' keynote at AGI-24
In this discussion, Gary Marcus, a renowned professor and AI expert, critiques the current state of large language models and generative AI, highlighting their unreliability and tendency to hallucinate. He argues that merely scaling data won't lead us to AGI and proposes a hybrid AI approach that integrates deep learning with symbolic reasoning. Marcus voices concerns about the ethical implications of AI deployment and predicts a potential 'AI winter' due to overhyped technologies and inadequate regulation, emphasizing the necessity for deeper conceptual understanding in AI.

Aug 15, 2024 • 33min
Is ChatGPT an N-gram model on steroids?
In this discussion, Timothy Nguyen, a DeepMind Research Scientist and MIT scholar, shares insights from his innovative research on transformers and n-gram statistics. He reveals a method to analyze transformer predictions without tapping into internal mechanisms. The conversation covers how transformers evolve during training, particularly in curriculum learning, and how to detect overfitting without traditional holdout methods. Nguyen also dives into philosophical questions about AI understanding, highlighting the complexities of interpreting neural network behavior.
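As a rough illustration of the idea (a toy sketch, not Nguyen's actual method), one can estimate an empirical n-gram next-token distribution from a training corpus and compare it with a transformer's predicted distribution; where the two agree, the model's prediction is describable by surface statistics alone.

```python
from collections import Counter

def ngram_next_token_dist(corpus, context, n=3):
    """Empirical distribution over the next token, given the last
    n-1 tokens of context, by counting matches in the corpus."""
    ctx = tuple(context[-(n - 1):])
    counts = Counter(
        corpus[i + n - 1]
        for i in range(len(corpus) - n + 1)
        if tuple(corpus[i:i + n - 1]) == ctx
    )
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}
```

Comparing this distribution with the model's softmax output (e.g., via total variation distance) gives a per-prediction measure of how "n-gram-like" the transformer is behaving.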

Aug 11, 2024 • 57min
Jay Alammar on LLMs, RAG, and AI Engineering
Jay Alammar, a prominent AI educator and researcher at Cohere, dives into the latest on large language models (LLMs) and retrieval-augmented generation (RAG). He explores how RAG enhances data interactions, helping reduce hallucination in AI outputs. Jay also addresses the challenges of implementing AI in enterprises, emphasizing the importance of education for developers. The conversation highlights semantic search innovations and the future of AI architectures, offering insights on effective deployment strategies and the need for continuous learning in this rapidly evolving field.
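A minimal sketch of the RAG pattern discussed here, assuming embeddings are already computed; the retrieval and prompt-building steps are placeholders for whichever embedding model and LLM provider you use, not a specific Cohere API.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=3):
    """Return the k documents whose embeddings have the highest
    cosine similarity with the query embedding."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [docs[i] for i in np.argsort(-sims)[:k]]

def rag_prompt(question, passages):
    """Ground the generation step in retrieved passages to reduce
    hallucination: the model answers from supplied context."""
    context = "\n\n".join(passages)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The design point is that retrieval moves factual grounding out of the model's weights and into an updatable document store.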

Aug 8, 2024 • 2h 14min
Can AI therapy be more effective than drugs?
Daniel Cahn, co-founder of Slingshot AI, discusses the revolutionary potential of AI in therapy. He examines the rising rates of anxiety and depression, challenging the notion of mental health categories as societal constructs. The conversation delves into the ethical implications of AI in therapeutic settings, including the effectiveness of AI chatbots as support tools. Cahn also explores the impact of technology on human agency, emotional connections, and how it might reshape our understanding of mental health interventions.

Jul 29, 2024 • 1h 42min
Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)
In this engaging discussion, Subbarao Kambhampati, a Professor at Arizona State University specializing in AI, tackles the limitations of large language models. He argues that these models primarily memorize rather than reason, raising questions about their reliability. Kambhampati explores the need for hybrid approaches that combine LLMs with external verification systems to ensure accuracy. He also delves into the distinctions between human reasoning and LLM capabilities, emphasizing the importance of critical skepticism in AI research.

Jul 28, 2024 • 50min
Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)
Sayash Kapoor, a Ph.D. candidate at Princeton, dives deep into the complexities of assessing existential risks from AI. He argues that seemingly precise probability estimates of existential risk can mislead policymakers, drawing parallels to risk assessment in other fields. The discussion critiques utilitarian approaches to decision-making and the challenges posed by cognitive biases. Kapoor also highlights concerns around AI's rapid growth and its pressures on education and workplace dynamics, emphasizing the need for informed policies that balance technological advancement with societal impact.

Jul 18, 2024 • 1h 6min
Sara Hooker - Why US AI Act Compute Thresholds Are Misguided
Sara Hooker, VP of Research at Cohere and a leading voice in AI efficiency, shares insights on AI governance and the pitfalls of using compute thresholds, like FLOPs, as risk metrics. She critiques current US and EU policies for oversimplifying AI capabilities and emphasizes the need for a holistic view that includes data diversity. Hooker also discusses her research on 'The AI Language Gap,' revealing the complexities of creating inclusive AI that serves multilingual populations, highlighting ethical concerns and the societal implications of underrepresentation in AI development.
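To make the critique concrete: compute thresholds count training FLOPs, usually estimated with the rule-of-thumb of roughly 6 FLOPs per parameter per training token. A quick back-of-envelope sketch (the model size and token count are illustrative; 1e25 FLOPs is the EU AI Act's systemic-risk threshold as commonly reported):

```python
def training_flops(params: float, tokens: float) -> float:
    """Standard estimate: ~6 FLOPs per parameter per training token
    (covering the forward and backward passes)."""
    return 6 * params * tokens

# Illustrative example: a 70B-parameter model trained on 2T tokens.
flops = training_flops(70e9, 2e12)
print(f"{flops:.1e}")  # 8.4e+23
print(flops > 1e25)    # False: falls below a 1e25 FLOP threshold
```

Hooker's point is that a single scalar like this says little about a model's actual capabilities or risks.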

Jul 14, 2024 • 2h 15min
Prof. Murray Shanahan - Machines Don't Think Like Us
Murray Shanahan, a Professor of Cognitive Robotics at Imperial College London and a senior research scientist at DeepMind, dives deep into AI consciousness and the perils of anthropomorphizing machines. He discusses the limitations of current language in describing AI and stresses the need for nuanced vocabulary. Shanahan explores Reinforcement Learning and the 'Waluigi Effect,' as well as the complexities of agency in AI. He also touches on consciousness in relation to non-human entities, emphasizing how our perceptions shape understanding and the philosophical implications behind it.


