Machine Learning Street Talk (MLST)

Feb 10, 2023 • 26min

#100 Dr. PATRICK LEWIS (co:here) - Retrieval Augmented Generation

Dr. Patrick Lewis, an AI and NLP Research Scientist at co:here, delves into the cutting-edge world of Retrieval-Augmented Language Models. He discusses the limitations of existing transformer models in handling large inputs, revealing the need for better techniques. The conversation highlights the importance of enhancing verifiability in language models by integrating credible sources. Patrick also explores the complexities of information retrieval in improving contextual relevance, using the innovative Atlas project as a prime example.
Feb 5, 2023 • 1h 40min

#99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism

Carla Cremer, a doctoral student at Oxford, and Igor Krawczuk, a researcher at EPFL, dive into the intricate world of AI risk and governance. They argue that AI risks are deeply rooted in traditional political issues, advocating for democratic approaches in risk assessment. Their discussion tackles the Effective Altruism movement's paradoxes, highlighting the need for institutional accountability. They emphasize the importance of transparency in AI tools and call for diverse perspectives in decision-making to navigate the complexities of governance and societal impact.
Feb 3, 2023 • 1h 6min

[NO MUSIC] #98 - Prof. LUCIANO FLORIDI - ChatGPT, Singularitarians, Ethics, Philosophy of Information

Professor Luciano Floridi, a noted philosopher and expert in digital ethics from the University of Oxford, discusses the implications of living in an information-driven society. He highlights how the overwhelming data we create is eroding human agency and muddying the infosphere. Professor Floridi emphasizes the need for a robust philosophy of information to address ethical concerns, particularly regarding misinformation and AI's impact on reality. He also advocates for responsible AI governance to ensure technology serves humanity equitably.
Feb 3, 2023 • 1h 7min

#98 - Prof. LUCIANO FLORIDI - ChatGPT, Superintelligence, Ethics, Philosophy of Information

Professor Luciano Floridi, a leading thinker in digital ethics from the University of Oxford, delves into the implications of the Information Revolution. He discusses the overwhelming data generation and the erosion of human agency. Floridi critiques the imbalance between tech growth and our understanding, emphasizing the need for ethical governance in AI. He also explores issues like misinformation and the transformation of societal engagement, advocating for collective responsibility and an information-centric worldview to navigate the complexities of our digital age.
Jan 28, 2023 • 25min

#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

Sreejan Kumar, a fourth-year PhD student at Princeton Neuroscience Institute, dives into the fascinating world of human inductive biases in machines. He discusses his award-winning research on how humans learn and generalize quickly, and how to instill these biases in AI systems. The conversation explores the importance of using human language influences to enhance AI's understanding and capabilities. Sreejan emphasizes the potential of combining neural networks with program induction for a well-rounded intelligence, allowing for better collaboration between humans and machines.
Dec 30, 2022 • 2h 49min

#96 Prof. PEDRO DOMINGOS - There are no infinities, utility functions, neurosymbolic

Pedro Domingos, Professor Emeritus at the University of Washington and author of "The Master Algorithm," dives deep into the intricate world of machine learning. He explores the concept of a master algorithm and debates its existence. The conversation branches into how utility functions shape AI behavior and the risks of misrepresenting truth in narratives. Domingos also discusses the relationship between human creativity and AI, emphasizing the importance of integrating different approaches, like neurosymbolic AI, to better understand intelligence.
Dec 26, 2022 • 39min

#95 - Prof. IRINA RISH - AGI, Complex Systems, Transhumanism

Irina Rish, a leading AI researcher and professor at the University of Montreal, dives deep into the future of artificial intelligence. She advocates for viewing AI as a tool to enhance human abilities rather than a competitor. The conversation highlights the philosophical implications of transhumanism and the potential for hybrid intelligence, blending human creativity with machine efficiency. Irina also explores the moral quandaries in AI development and the complexities of decision-making in deep learning, emphasizing the need for ethical frameworks in this evolving field.
Dec 26, 2022 • 14min

#94 - ALAN CHAN - AI Alignment and Governance #NEURIPS

In this talk, Alan Chan, a PhD student at Mila with a focus on AI alignment and governance, shares insights from his research on aligning AI with human values. He discusses the skepticism around AI alignment and the complexities of defining intelligence in AI systems. The conversation touches on the moral implications of AI decision-making, the risks of oversimplifying reward mechanisms in reinforcement learning, and the urgent need for collaborative safety measures in AI development. Chan's enthusiasm for the ethical aspects of AI governance is truly infectious.
Dec 24, 2022 • 1h 20min

#93 Prof. MURRAY SHANAHAN - Consciousness, Embodiment, Language Models

Murray Shanahan, Professor of Cognitive Robotics at Imperial College London and senior research scientist at DeepMind, dives into the intricate tapestry of consciousness and AI. He critiques anthropocentric notions of subjective experience, arguing that the space of possible minds extends well beyond the forms found in nature. Shanahan discusses the capabilities and limitations of large language models, cautioning against anthropomorphism. Emphasizing the importance of embodiment, he argues that genuine consciousness requires real-world interaction, a quality current AI systems lack.
Dec 23, 2022 • 52min

#92 - SARA HOOKER - Fairness, Interpretability, Language Models

Sara Hooker, founder of Cohere For AI and a leader in machine learning research, discusses pivotal topics in the field. She explores the 'hardware lottery' concept, emphasizing how hardware compatibility affects ideas' success. The conversation delves into fairness, highlighting challenges like annotator bias and the need for fairness objectives in model training. Hooker also tackles model efficiency versus size, self-supervised learning's capabilities, and the nuances of prompting in language models, offering insights into making machine learning more accessible and trustworthy.