Machine Learning Street Talk (MLST)

Dec 6, 2022 • 28min

#84 LAURA RUIS - Large language models are not zero-shot communicators [NEURIPS UNPLUGGED]

In this insightful discussion, Laura Ruis, a researcher focused on pragmatic inferences in conversational AI, delves into the limitations of large language models. She reveals how these models struggle with context and implicature, causing misunderstandings in communication. Ruis also examines zero-shot learning capabilities, showcasing disparities in performance across different models. Additionally, she highlights the importance of human feedback in refining these AI systems, aiming for a future where they can more effectively interpret and engage in nuanced conversations.
Dec 4, 2022 • 21min

#83 Dr. ANDREW LAMPINEN (Deepmind) - Natural Language, Symbols and Grounding [NEURIPS2022 UNPLUGGED]

Dr. Andrew Lampinen, a DeepMind researcher specializing in natural language understanding and reinforcement learning, dives deep into the complexities of AI language models. He explores the grounding problem and critiques the distinctions between AI and human cognitive abilities. The discussion covers philosophical debates on human agency, the nuances of syntax versus semantics, and the shifting perspectives on deep learning's role in language comprehension. Lampinen also highlights the intricacies of compositionality and the significance of embodied learning in AI.
Nov 27, 2022 • 1h 15min

#82 - Dr. JOSCHA BACH - Digital Physics, DL and Consciousness [UNPLUGGED]

Dr. Joscha Bach, a German AI researcher and cognitive scientist, dives into the profound relationship between computation and consciousness. He discusses the limitations of deep learning and the challenges posed by Gödel's incompleteness theorems. The conversation highlights the importance of mental models in behavior and decision-making. Bach also explores the artistic potential of AI, suggesting that human creativity plays a crucial role. Finally, he examines how consciousness relates to predictive coding, urging a broader understanding of agency in both biological and artificial systems.
Nov 20, 2022 • 1h 10min

#81 JULIAN TOGELIUS, Prof. KEN STANLEY - AGI, Games, Diversity & Creativity [UNPLUGGED]

Julian Togelius, an NYU Associate Professor and co-founder of modl.ai, teams up with Ken Stanley from OpenAI, who leads research on open-endedness. The duo dives into the intersection of AI and games, emphasizing how games serve as a testing ground for AGI. They discuss the need for diversity in AI, tackling challenges in integrating different learning approaches. The conversation also touches on the balance between creativity and technology, and the philosophical debates shaping AI's future.
Nov 15, 2022 • 52min

#80 AIDAN GOMEZ [CEO Cohere] - Language as Software

Aidan Gomez, Co-founder and CEO of Cohere, shares insights from his journey in AI and language technology. He discusses how language might revolutionize software development, making it accessible to more people. Aidan reflects on the evolution of transformer models and the challenges they face, emphasizing a future where language-driven interfaces can transform applications. He also explores the potential implications of large language models in enhancing user interactions and fostering innovation in the tech landscape.
Nov 8, 2022 • 2h 10min

#79 Consciousness and the Chinese Room [Special Edition] (CHOLLET, BISHOP, CHALMERS, BACH)

Francois Chollet, an AI researcher at Google Brain and creator of Keras, joins a panel featuring philosopher David Chalmers, Prof. Mark Bishop, and cognitive scientist Joscha Bach to delve into the Chinese Room argument. They explore whether machines can genuinely understand language or only simulate it. The discussion challenges conventional views on consciousness, emphasizing that true understanding stems from complex interactions rather than mere rule-following. Insights into syntax versus semantics reveal the deeper philosophical implications of AI and the nature of consciousness.
Jul 8, 2022 • 3h 37min

MLST #78 - Prof. NOAM CHOMSKY (Special Edition)

In this captivating discussion, Prof. Noam Chomsky, the father of modern linguistics and a towering intellectual, shares insights on the evolution of language and cognition. He critiques misconceptions about his work while exploring the boundaries between AI and human understanding. The conversation delves into the significance of probabilistic methods in neural networks and the innate aspects of language acquisition. Chomsky also reflects on the philosophical challenges surrounding determinism and free will, emphasizing the complexities of thought and communication.
Jun 16, 2022 • 1h 8min

#77 - Vitaliy Chiley (Cerebras)

Vitaliy Chiley, a Machine Learning Research Engineer at Cerebras Systems, dives into the revolutionary hardware that accelerates deep learning workloads. He discusses the efficiency of Cerebras' architecture compared to traditional GPUs and the importance of memory management. Chiley explores the impact of sparsity in neural networks, debating the trade-offs between weight and activation sparsity. With insights on optimizing deep learning models, he also touches on why starting with dense networks can be beneficial before moving towards sparsity.
Jun 9, 2022 • 58min

#76 - LUKAS BIEWALD (Weights and Biases CEO)

Lukas Biewald, the CEO of Weights and Biases, shares his insights as a successful entrepreneur in the AI space. He discusses the recent $15 million funding round for his company and the challenges of improving training data quality. The conversation touches on the balance between generalization and specialization in machine learning, alongside the critical need for explainability in AI tools. Biewald also highlights innovative community engagement strategies through YouTube sponsorships, emphasizing the evolving nature of machine learning and its entrepreneurial dynamics.
Apr 29, 2022 • 1h 55min

#75 - Emergence [Special Edition] with Dr. DANIELE GRATTAROLA

Dr. Daniele Grattarola, a postdoctoral researcher at EPFL, specializes in graph neural networks and protein design. He dives deep into the captivating concept of emergence, comparing weak and strong emergence in complex systems. The discussion touches on how simple rules in cellular automata can lead to complex, intelligent behaviors. They also explore the philosophical implications of emergence, the predictability of complex systems, and how these ideas relate to advanced applications in artificial intelligence and protein folding.
