

Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a doctorate from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Episodes

Dec 26, 2022 • 39min
#95 - Prof. IRINA RISH - AGI, Complex Systems, Transhumanism
Irina Rish, a leading AI researcher and professor at the University of Montreal, dives deep into the future of artificial intelligence. She advocates for viewing AI as a tool to enhance human abilities rather than a competitor. The conversation highlights the philosophical implications of transhumanism and the potential for hybrid intelligence, blending human creativity with machine efficiency. Irina also explores the moral quandaries in AI development and the complexities of decision-making in deep learning, emphasizing the need for ethical frameworks in this evolving field.

Dec 26, 2022 • 14min
#94 - ALAN CHAN - AI Alignment and Governance #NEURIPS
In this talk, Alan Chan, a PhD student at Mila with a focus on AI alignment and governance, shares insights from his research on aligning AI with human values. He discusses the skepticism around AI alignment and the complexities of defining intelligence in AI systems. The conversation touches on the moral implications of AI decision-making, the risks of oversimplifying reward mechanisms in reinforcement learning, and the urgent need for collaborative safety measures in AI development. Chan's enthusiasm for the ethical aspects of AI governance is truly infectious.

Dec 24, 2022 • 1h 20min
#93 Prof. MURRAY SHANAHAN - Consciousness, Embodiment, Language Models
Murray Shanahan, a Professor of cognitive robotics at Imperial College London and senior research scientist at DeepMind, dives into the intricate tapestry of consciousness and AI. He critiques naive notions of subjective experience and argues that the space of possible minds extends far beyond those found in nature. Shanahan discusses the capabilities and limitations of large language models, cautioning against anthropomorphism. Emphasizing the importance of embodiment, he highlights that true consciousness requires real-world interaction, a quality current AI systems lack.

Dec 23, 2022 • 52min
#92 - SARA HOOKER - Fairness, Interpretability, Language Models
Sara Hooker, founder of Cohere For AI and a leader in machine learning research, discusses pivotal topics in the field. She explores the 'hardware lottery' concept, emphasizing how hardware compatibility affects ideas' success. The conversation delves into fairness, highlighting challenges like annotator bias and the need for fairness objectives in model training. Hooker also tackles model efficiency versus size, self-supervised learning's capabilities, and the nuances of prompting in language models, offering insights into making machine learning more accessible and trustworthy.

Dec 20, 2022 • 21min
#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS
In an engaging conversation, Hattie Zhou, a PhD student at Université de Montréal and Mila, discusses her groundbreaking work with Google Brain on teaching algorithmic reasoning to large language models. She outlines the four essential stages for this task, including how to combine algorithms and use them as tools. Hattie also shares innovative strategies for enhancing the reasoning capabilities of these models, the computational limits they face, and the exciting prospects for their application to mathematical conjecturing. A toy illustration of in-context algorithmic prompting follows below.
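To make the idea concrete, here is a minimal sketch of what "teaching an algorithm in context" can look like: two worked examples of digit-by-digit addition are spelled out in the prompt, and the model is asked to carry out the same procedure on a new input. The prompt format is a simplification, not the exact recipe from the work discussed in the episode, and the query_llm call mentioned in the comment is a hypothetical stand-in for whatever LLM client you use.

def build_addition_prompt(a: int, b: int) -> str:
    """Spell out the addition algorithm on two worked examples, then ask the
    model to apply the same step-by-step procedure to a new pair of numbers."""
    worked_examples = (
        "Add 57 and 86 digit by digit, right to left:\n"
        "  units: 7 + 6 = 13 -> write 3, carry 1\n"
        "  tens:  5 + 8 + 1 = 14 -> write 4, carry 1\n"
        "  result: 143\n\n"
        "Add 29 and 35 digit by digit, right to left:\n"
        "  units: 9 + 5 = 14 -> write 4, carry 1\n"
        "  tens:  2 + 3 + 1 = 6 -> write 6, carry 0\n"
        "  result: 64\n\n"
    )
    return worked_examples + f"Add {a} and {b} digit by digit, right to left:\n"

prompt = build_addition_prompt(48, 77)
print(prompt)  # pass this to your LLM client of choice, e.g. answer = query_llm(prompt)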

Dec 19, 2022 • 54min
(Music Removed) #90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]
David Chalmers, a leading philosopher and cognitive scientist, explores the thrilling yet complex intersection of consciousness and AI. He tackles the hard problem of consciousness, questioning whether machines could ever have subjective experience. Chalmers dives into the concept of philosophical zombies and their ethical implications for artificial general intelligence. He even compares human consciousness with that of insects, igniting a debate on the moral responsibilities we hold toward advanced AI systems. Expect thought-provoking insights and challenging questions!

Dec 19, 2022 • 54min
#90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]
Delve into the intriguing debate on whether language models can possess consciousness. Explore the ties between consciousness and artificial general intelligence, questioning the necessity of consciousness in machines. Consider the philosophical zombie argument and its implications for understanding intelligence. The discussion also touches on the complexities of insect consciousness and its ethical ramifications concerning AI. Finally, discover how explainability in AI affects both performance and human reasoning, raising vital questions about our responsibilities in creating conscious entities.

Dec 16, 2022 • 1h 22min
#88 Dr. WALID SABA - Why machines will never rule the world [UNPLUGGED]
Dr. Walid Saba, an AI researcher and computational linguist, shares his contrarian views on the prospect of machines ever ruling the world. He argues against the feasibility of strong AI while acknowledging the impressive achievements of large language models in handling language. The discussion covers the challenges of semantics and symbol grounding, highlighting that current models struggle with true comprehension. Saba nonetheless concedes the striking language competence deep learning has demonstrated, while emphasizing the ongoing quest to advance AI capabilities.

Dec 11, 2022 • 30min
#86 - Prof. YANN LECUN and Dr. RANDALL BALESTRIERO - SSL, Data Augmentation, Reward isn't enough [NEURIPS2022]
Yann LeCun, a pioneer in deep learning and Chief AI Scientist at Meta, joins researcher Randall Balestriero, an expert in learnable signal processing. They dive into advances in self-supervised learning and the role of data augmentation in improving model efficiency. Topics include techniques for enhancing representations, the challenge of defining intelligence in learning, and the potential of newer methodologies such as NNCLR. Their insights from NeurIPS capture the cutting edge of AI research and its applications, including Marsquake detection. A simplified sketch of a self-supervised objective in this family follows below.
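As a rough illustration of the kind of self-supervised objectives this line of work builds on, below is a simplified NumPy sketch of a VICReg-style loss (a method from LeCun's group) computed on embeddings of two augmented views of the same inputs. Whether this particular method comes up in the episode is an assumption on our part; the coefficients and details are illustrative, not a reference implementation.

import numpy as np

def vicreg_style_loss(z1, z2, eps=1e-4):
    """z1, z2: (batch, dim) embeddings of two augmentations of the same inputs."""
    invariance = np.mean((z1 - z2) ** 2)                  # the two views should agree

    def variance_term(z):                                 # keep each dimension's std near 1
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))

    def covariance_term(z):                               # decorrelate embedding dimensions
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (len(z) - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / z.shape[1]

    return (25.0 * invariance
            + 25.0 * (variance_term(z1) + variance_term(z2))
            + covariance_term(z1) + covariance_term(z2))

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(vicreg_style_loss(z1, z2))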

Dec 8, 2022 • 37min
#85 Dr. Petar Veličković (Deepmind) - Categories, Graphs, Reasoning [NEURIPS22 UNPLUGGED]
Dr. Petar Veličković, a Staff Research Scientist at DeepMind known for his work on Graph Attention Networks, discusses fascinating advancements in deep learning. He explores how category theory informs geometric deep learning and innovations in graph neural networks. The conversation dives into algorithmic reasoning and the shift from manual feature engineering to automated, learned pipelines. Petar also addresses the challenge neural networks face with extrapolation versus interpolation and shares insights on using expander graphs to overcome obstacles in information propagation. A minimal sketch of the graph attention mechanism follows below.
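For readers unfamiliar with the mechanism behind Graph Attention Networks, here is a tiny single-head NumPy sketch: node features are projected, pairwise attention logits are built from source and destination terms, masked to the graph's edges, and softmax-normalised before aggregating neighbour features. This is a didactic simplification under our own assumptions, not the paper's or DeepMind's reference implementation.

import numpy as np

def gat_layer(x, adj, W, a_src, a_dst):
    """x: (N, F) node features; adj: (N, N) adjacency with self-loops;
    W: (F, F') projection; a_src, a_dst: (F',) attention parameters."""
    h = x @ W                                    # project node features
    s_src, s_dst = h @ a_src, h @ a_dst          # per-node attention terms
    scores = s_src[:, None] + s_dst[None, :]     # (N, N) pairwise logits
    scores = np.where(scores > 0, scores, 0.2 * scores)   # LeakyReLU
    scores = np.where(adj > 0, scores, -1e9)     # attend only along edges
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)       # softmax over neighbours
    return alpha @ h                             # attention-weighted aggregation

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                      # 3 nodes, 4 features each
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
out = gat_layer(x, adj, rng.normal(size=(4, 8)), rng.normal(size=8), rng.normal(size=8))
print(out.shape)                                 # (3, 8)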


