Machine Learning Street Talk (MLST)

Dec 16, 2022 • 1h 22min

#88 Dr. WALID SABA - Why machines will never rule the world [UNPLUGGED]

Dr. Walid Saba, an AI researcher and computational linguist, shares his contrarian views on the potential of machines to rule the world. He critiques the limitations of strong AI while acknowledging the impressive achievements of large language models in handling language. The discussion covers the challenges of semantics and symbol grounding, highlighting that current models struggle with true comprehension. Saba argues that deep learning has demonstrated a language competence beyond what humans could replicate by hand, while emphasizing the ongoing quest to advance AI capabilities.

Dec 11, 2022 • 30min

#86 - Prof. YANN LECUN and Dr. RANDALL BALESTRIERO - SSL, Data Augmentation, Reward isn't enough [NEURIPS2022]

Yann LeCun, a pioneer in deep learning and Chief AI Scientist at Meta, joins researcher Randall Balestriero, an expert in learnable signal processing. They dive into advances in self-supervised learning and the role of data augmentation in improving model efficiency. Topics include techniques for enhancing learned representations, the challenges of defining intelligence in learning systems, and the potential of methods such as NNCLR. Their insights from NeurIPS capture the cutting edge of AI research and its applications, including Marsquake detection.

Dec 8, 2022 • 37min

#85 Dr. Petar Veličković (Deepmind) - Categories, Graphs, Reasoning [NEURIPS22 UNPLUGGED]

Dr. Petar Veličković, a Staff Research Scientist at DeepMind known for his work on Graph Attention Networks, discusses recent advances in deep learning. He explores how category theory informs geometric deep learning and graph neural networks. The conversation turns to algorithmic reasoning and the shift from manual feature engineering to learned algorithmic processes. Petar also addresses whether neural networks extrapolate or merely interpolate, and shares insights on how expander graphs can relieve bottlenecks in information propagation.

Dec 6, 2022 • 28min

#84 LAURA RUIS - Large language models are not zero-shot communicators [NEURIPS UNPLUGGED]

In this insightful discussion, Laura Ruis, a researcher focused on pragmatic inferences in conversational AI, delves into the limitations of large language models. She reveals how these models struggle with context and implicature, causing misunderstandings in communication. Ruis also examines zero-shot learning capabilities, showcasing disparities in performance across different models. Additionally, she highlights the importance of human feedback in refining these AI systems, aiming for a future where they can more effectively interpret and engage in nuanced conversations.

Dec 4, 2022 • 21min

#83 Dr. ANDREW LAMPINEN (Deepmind) - Natural Language, Symbols and Grounding [NEURIPS2022 UNPLUGGED]

Dr. Andrew Lampinen, a DeepMind researcher specializing in natural language understanding and reinforcement learning, dives deep into the complexities of AI language models. He explores the grounding problem and critiques the distinctions between AI and human cognitive abilities. The discussion covers philosophical debates on human agency, the nuances of syntax versus semantics, and the shifting perspectives on deep learning's role in language comprehension. Lampinen also highlights the intricacies of compositionality and the significance of embodied learning in AI.

Nov 27, 2022 • 1h 15min

#82 - Dr. JOSCHA BACH - Digital Physics, DL and Consciousness [UNPLUGGED]

Dr. Joscha Bach, a German AI researcher and cognitive scientist, dives into the profound relationship between computation and consciousness. He discusses the limitations of deep learning and the challenges posed by Gödel's incompleteness theorems. The conversation highlights the importance of mental models in behavior and decision-making. Bach also explores the artistic potential of AI, suggesting that human creativity plays a crucial role. Finally, he examines how consciousness relates to predictive coding, urging a broader understanding of agency in both biological and artificial systems.

Nov 20, 2022 • 1h 10min

#81 JULIAN TOGELIUS, Prof. KEN STANLEY - AGI, Games, Diversity & Creativity [UNPLUGGED]

Julian Togelius, an NYU Associate Professor and co-founder of model.ai, teams up with Ken Stanley from OpenAI, who leads research on open-endedness. The duo dives into the intersection of AI and games, emphasizing how games serve as a testing ground for AGI. They discuss the need for diversity in AI, tackling challenges in integrating different learning approaches. The conversation also touches on the balance between creativity and technology, and the philosophical debates shaping AI's future.

Nov 15, 2022 • 52min

#80 AIDAN GOMEZ [CEO Cohere] - Language as Software

Aidan Gomez, Co-founder and CEO of Cohere, shares insights from his journey in AI and language technology. He discusses how language might revolutionize software development, making it accessible to more people. Aidan reflects on the evolution of transformer models and the challenges they face, emphasizing a future where language-driven interfaces can transform applications. He also explores the potential implications of large language models in enhancing user interactions and fostering innovation in the tech landscape.

Nov 8, 2022 • 2h 10min

#79 Consciousness and the Chinese Room [Special Edition] (CHOLLET, BISHOP, CHALMERS, BACH)

Francois Chollet, an AI researcher at Google and creator of Keras, joins a panel with philosopher David Chalmers, cognitive computing professor Mark Bishop, and cognitive scientist Joscha Bach to examine the Chinese Room argument. They explore whether machines can genuinely understand language or only simulate understanding. The discussion challenges conventional views on consciousness, arguing that true understanding stems from complex interactions rather than mere rule-following. Insights into syntax versus semantics reveal the deeper philosophical implications for AI and the nature of consciousness.

Jul 8, 2022 • 3h 37min

MLST #78 - Prof. NOAM CHOMSKY (Special Edition)

In this captivating discussion, Prof. Noam Chomsky, the father of modern linguistics and a towering intellectual, shares insights on the evolution of language and cognition. He critiques misconceptions about his work while exploring the boundaries between AI and human understanding. The conversation delves into the significance of probabilistic methods in neural networks and the innate aspects of language acquisition. Chomsky also reflects on the philosophical challenges surrounding determinism and free will, emphasizing the complexities of thought and communication.
