
The Gradient: Perspectives on AI

Latest episodes

Mar 21, 2024 • 42min

Kate Park: Data Engines for Vision and Language

Kate Park, Director of Product at Scale AI, discusses the importance of data in AI systems, focusing on self-driving vehicles and NLP applications. The podcast explores challenges in model evaluation, expert AI trainers, and the role of humans in labeling tasks.
Mar 14, 2024 • 1h 8min

Ben Wellington: ML for Finance and Storytelling through Data

Ben Wellington of Two Sigma discusses applying ML techniques to quantitative finance, building predictive features, and the balance between human insight and algorithmic performance. The conversation covers the challenges of black-box models in finance, the importance of accurate timestamp data, and compounding small wins in trading models. It also delves into improv comedy techniques for enhancing science communication and storytelling.
Mar 7, 2024 • 2h 19min

Venkatesh Rao: Protocols, Intelligence, and Scaling

Venkatesh Rao explores the nuanced relationship between AI and society, discussing voice and time in AI systems, abstract planning, and reasoning. The conversation examines the intersection of data, intelligence, and reasoning in AI, comparing how AI systems learn to how humans do. It also covers the importance of protocols in AI scaling, privacy challenges, the evolution of mathematical tools in AI, real-world AI applications, coordination problems in ML, and the need for real-time commitments.
Feb 29, 2024 • 54min

Sasha Rush: Building Better NLP Systems

Professor Sasha Rush discusses the roles of learning and inference in AI, state-space models as an alternative to Transformers, efficiency gains in NLP systems through techniques like sequence-level knowledge distillation, and how his research perspective has shifted toward empirical approaches to NLP.
Feb 22, 2024 • 1h 59min

Cameron Jones & Sean Trott: Understanding, Grounding, and Reference in LLMs

Researchers Cameron Jones and Sean Trott discuss the unexpected capabilities of language models, challenges in interpreting results of Turing tests, and the tension in lexical ambiguity. They explore the efficiency of language, internal mechanisms of language models, and the balance of meanings across wordforms. The conversation also delves into physical plausibility in language comprehension, theory of mind abilities in language models, and critiques of evaluating language models like GPT.
Feb 15, 2024 • 60min

Nicholas Thompson: AI and Journalism

Nicholas Thompson, CEO of The Atlantic, discusses his journey into journalism, perspectives from the industry, examples of good journalism, the role of an editor, and the benefits and limitations of AI in journalism. He also explores topics such as mortality through running and the broken state of online conversations.
Feb 8, 2024 • 1h 59min

Subbarao Kambhampati: Planning, Reasoning, and Interpretability in the Age of LLMs

Subbarao Kambhampati, Professor of computer science at Arizona State University, discusses planning, reasoning, and interpretability in the age of LLMs. Topics include explanation in AI, thinking and language, scalability in planning, computational complexity in LLMs, and concerns about misinformation generated by LLMs.
Feb 1, 2024 • 56min

Russ Maschmeyer: Spatial Commerce and AI in Retail

Russ Maschmeyer, Product Lead for AI and Spatial Commerce at Shopify, previously led design for multiple services at Facebook and co-founded Primer. The episode covers AI in retail, personalized shopping experiences, challenges in creating immersive web experiences, and the future of spatial commerce.
Jan 25, 2024 • 1h 8min

Benjamin Breen: The Intersecting Histories of Psychedelics and AI Research

Professor Benjamin Breen discusses the intersecting histories of psychedelics and AI research. The conversation explores end-of-history narratives, transformative technological change, techno-utopianism, and the importance of historical context for understanding the AI landscape.
Jan 18, 2024 • 2h 13min

Ted Gibson: The Structure and Purpose of Language

Ted Gibson, Professor of Computational Linguistics, discusses the purpose and structure of language. Topics include dependency distances in language processing, sentence parsing and memory costs, the concept of utility in language, the relationship between language, thought, and communication, studying the language system in the brain, and dependency grammar versus the Chomskyan approach.
