
Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).
Latest episodes

Feb 3, 2023 • 1h 6min
[NO MUSIC] #98 - Prof. LUCIANO FLORIDI - ChatGPT, Singularitarians, Ethics, Philosophy of Information
Professor Luciano Floridi, a noted philosopher and expert in digital ethics from the University of Oxford, discusses the implications of living in an information-driven society. He highlights how the overwhelming data we create is eroding human agency and muddying the infosphere. Professor Floridi emphasizes the need for a robust philosophy of information to address ethical concerns, particularly regarding misinformation and AI's impact on reality. He also advocates for responsible AI governance to ensure technology serves humanity equitably.

Feb 3, 2023 • 1h 7min
#98 - Prof. LUCIANO FLORIDI - ChatGPT, Superintelligence, Ethics, Philosophy of Information
Professor Luciano Floridi, a leading thinker in digital ethics from the University of Oxford, delves into the implications of the Information Revolution. He discusses the overwhelming data generation and the erosion of human agency. Floridi critiques the imbalance between tech growth and our understanding, emphasizing the need for ethical governance in AI. He also explores issues like misinformation and the transformation of societal engagement, advocating for collective responsibility and an information-centric worldview to navigate the complexities of our digital age.

Jan 28, 2023 • 25min
#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language
Sreejan Kumar, a fourth-year PhD student at Princeton Neuroscience Institute, dives into the fascinating world of human inductive biases in machines. He discusses his award-winning research on how humans learn and generalize quickly, and how to instill these biases in AI systems. The conversation explores the importance of using human language influences to enhance AI's understanding and capabilities. Sreejan emphasizes the potential of combining neural networks with program induction for a well-rounded intelligence, allowing for better collaboration between humans and machines.

Dec 30, 2022 • 2h 49min
#96 Prof. PEDRO DOMINGOS - There are no infinities, utility functions, neurosymbolic
Pedro Domingos, Professor Emeritus at the University of Washington and author of "The Master Algorithm," dives deep into the intricate world of machine learning. He explores the concept of a master algorithm and debates its existence. The conversation branches into how utility functions shape AI behavior and the risks of misrepresenting truth in narratives. Domingos also discusses the relationship between human creativity and AI, emphasizing the importance of integrating different approaches, like neurosymbolic AI, to better understand intelligence.

Dec 26, 2022 • 39min
#95 - Prof. IRINA RISH - AGI, Complex Systems, Transhumanism
Irina Rish, a leading AI researcher and professor at the University of Montreal, dives deep into the future of artificial intelligence. She advocates for viewing AI as a tool to enhance human abilities rather than a competitor. The conversation highlights the philosophical implications of transhumanism and the potential for hybrid intelligence, blending human creativity with machine efficiency. Irina also explores the moral quandaries in AI development and the complexities of decision-making in deep learning, emphasizing the need for ethical frameworks in this evolving field.

Dec 26, 2022 • 14min
#94 - ALAN CHAN - AI Alignment and Governance #NEURIPS
In this talk, Alan Chan, a PhD student at Mila with a focus on AI alignment and governance, shares insights from his research on aligning AI with human values. He discusses the skepticism around AI alignment and the complexities of defining intelligence in AI systems. The conversation touches on the moral implications of AI decision-making, the risks of oversimplifying reward mechanisms in reinforcement learning, and the urgent need for collaborative safety measures in AI development. Chan's enthusiasm for the ethical aspects of AI governance is truly infectious.

Dec 24, 2022 • 1h 20min
#93 Prof. MURRAY SHANAHAN - Consciousness, Embodiment, Language Models
Murray Shanahan, Professor of Cognitive Robotics at Imperial College London and senior research scientist at DeepMind, dives into the intricate tapestry of consciousness and AI. He critiques simplistic notions of subjective experience, arguing that the space of possible minds extends well beyond the forms found in nature. Shanahan discusses the capabilities and limitations of large language models, cautioning against anthropomorphism. Emphasizing the importance of embodiment, he argues that genuine consciousness requires real-world interaction, a quality current AI systems lack.

Dec 23, 2022 • 52min
#92 - SARA HOOKER - Fairness, Interpretability, Language Models
Sara Hooker, founder of Cohere For AI and a leader in machine learning research, discusses pivotal topics in the field. She explores the 'hardware lottery' concept, emphasizing how hardware compatibility affects ideas' success. The conversation delves into fairness, highlighting challenges like annotator bias and the need for fairness objectives in model training. Hooker also tackles model efficiency versus size, self-supervised learning's capabilities, and the nuances of prompting in language models, offering insights into making machine learning more accessible and trustworthy.

Dec 20, 2022 • 21min
#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS
In an engaging conversation, Hattie Zhou, a PhD student at Université de Montréal and Mila, discusses her work at Google Brain on teaching algorithmic reasoning to large language models. She outlines the four essential stages of this task, including how to combine algorithms and use them as tools. Hattie also shares strategies for enhancing the reasoning capabilities of these models, the computational limits they face, and the exciting prospects for their application to mathematical conjecturing.

Dec 19, 2022 • 54min
(Music Removed) #90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]
David Chalmers, a leading philosopher and cognitive scientist, explores the thrilling yet complex intersection of consciousness and AI. He tackles the hard problem of consciousness, questioning whether machines could ever have subjective experience. Chalmers dives into the concept of philosophical zombies and its implications for the ethics of artificial general intelligence. He even compares human consciousness with that of insects, igniting a debate on the moral responsibilities we hold toward advanced AI systems. Expect thought-provoking insights and challenging questions!