
Machine Learning Street Talk (MLST)

Latest episodes

Sep 5, 2024 • 46min

The Fabric of Knowledge - David Spivak

David Spivak, a mathematician renowned for his expertise in category theory, dives into fascinating discussions on intelligence and creativity. He simplifies category theory, demonstrating its power in understanding complex systems. Spivak explores how embodiment influences knowledge acquisition and shares insights on collective intelligence. He tackles AI's profound impact on human thinking and the evolution of intelligence. The conversation also emphasizes the critical role of language in shaping our understanding and the interplay between creativity and societal influences.
Aug 28, 2024 • 1h 40min

Jürgen Schmidhuber - Neural and Non-Neural AI, Reasoning, Transformers, and LSTMs

Jürgen Schmidhuber, known as the father of generative AI, dives into the evolution of artificial intelligence and shares his groundbreaking insights. He discusses his innovative contributions, including LSTM networks and the significance of neural networks versus symbolic methods. The conversation also touches on misconceptions about AI capabilities, the future of intelligent machines, and the potential impacts on humanity. Schmidhuber offers a visionary perspective on the exponential growth of technology and its role in reshaping our understanding of intelligence.
Aug 25, 2024 • 2h 12min

"AI should NOT be regulated at all!" - Prof. Pedro Domingos

Prof. Pedro Domingos, an influential AI researcher and computer science professor, shares his critical views on the current push for AI regulations, arguing they could hinder innovation. He discusses the limitations of existing AI technologies and emphasizes the need for new innovations, including his work on tensor logic that seeks to unify AI approaches. Domingos also offers insights into his satirical book, "2040," which humorously critiques tech culture and its impact on society, raising pressing questions about the future of AI and democratic governance.
Aug 22, 2024 • 1h 28min

Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

Andrew Ilyas, a PhD student at MIT who will soon join CMU as a professor, dives into the fascinating world of data modeling and its influence on model predictions. He explains the mechanisms behind adversarial examples in machine learning and their implications for model robustness. Ilyas discusses biases in data collection, particularly in ImageNet, and presents solutions for self-selection bias. The conversation also covers black-box attacks on machine learning systems, illuminating the complexities of maintaining accuracy in challenging scenarios.
Aug 21, 2024 • 57min

Joscha Bach - AGI24 Keynote (Cyberanimism)

Dr. Joscha Bach, a thought leader in artificial intelligence, challenges us with his concept of 'cyber animism,' suggesting that nature may be inhabited by self-organizing software agents, reminiscent of ancient spiritual beliefs. He explores the nature of consciousness, arguing it could be a sophisticated program not just in humans, but also in plants and ecosystems. By delving into history, philosophy, and cutting-edge AI, he invites listeners to reconsider the connections between human, artificial, and natural intelligence.
Aug 17, 2024 • 1h 12min

Gary Marcus' keynote at AGI-24

Gary Marcus, a prominent AI professor and thought leader, returns to critique the limitations of current large language models. He points out their unreliability and the diminishing returns of merely scaling data and compute. Advocating for a hybrid AI approach that integrates deep learning with symbolic reasoning, he emphasizes the need for systems to truly understand concepts like causality. Marcus also raises ethical concerns about unregulated AI deployment and the possibility of an impending 'AI winter' due to overhyped expectations and lack of accountability.
Aug 15, 2024 • 33min

Is ChatGPT an N-gram model on steroids?

Dr. Timothy Nguyen, a DeepMind research scientist and MIT scholar, dives deep into transformer models and n-gram statistics. He presents a fascinating method for predicting language through template matching, revealing a 78% correlation with transformer outputs. The discussion highlights crucial insights into overfitting detection, curriculum learning, and the impact of model size. Nguyen also explores the philosophical implications of AI behavior and suggests exciting future research directions in understanding neural network abstractions.
Aug 11, 2024 • 57min

Jay Alammar on LLMs, RAG, and AI Engineering

Jay Alammar, renowned AI educator at Cohere, dives into the world of large language models (LLMs) and retrieval augmented generation (RAG). He explains how RAG enhances data interactions and factual accuracy in AI. Jay discusses challenges in implementing AI in industry and shares expert advice for newcomers. He emphasizes the evolution from deep learning to LLMs, the power of semantic search, and strategies to keep pace with rapid advancements. Lastly, he reflects on his journey in making complex AI concepts accessible through visual learning.
Aug 8, 2024 • 2h 14min

Can AI therapy be more effective than drugs?

Daniel Cahn, co-founder of Slingshot AI, dives into the transformative potential of AI in therapy. He discusses the effectiveness of AI versus traditional drugs in addressing anxiety and depression. The conversation explores the ethical implications of AI's emotional influences and the importance of personal connections in therapeutic environments. Additionally, they examine how AI can reshape perceptions of mental health and enhance accountability in treatment, while questioning the balance between fostering genuine human interactions and maintaining agency.
Jul 29, 2024 • 1h 42min

Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Subbarao Kambhampati, an AI expert, discusses the inherent limitations of large language models (LLMs) in reasoning and logical tasks. He argues that while LLMs excel in creative applications, they often confuse fluency with content comprehension. Kambhampati emphasizes the necessity for hybrid models that pair LLMs with external verification to improve accuracy. He also critiques how publication pressures affect research integrity, calling for a more skeptical evaluation of LLM capabilities and the role of human collaboration in enhancing their outputs.

