

Gary Marcus
Cognitive scientist and longtime AI skeptic. Author of the book Rebooting AI and of the Substack newsletter Marcus on AI.
Top 10 podcasts with Gary Marcus
Ranked by the Snipd community

333 snips
Mar 7, 2023 • 1h 27min
#312 — The Trouble with AI
Stuart Russell, a UC Berkeley professor and author of 'Human Compatible,' and Gary Marcus, a renowned scientist and author, delve into the complexities of artificial intelligence. They explore the limitations of current AI technologies, especially ChatGPT, and the ethical dilemmas surrounding artificial general intelligence. The duo discusses the risks of misinformation, the need for human values in AI systems, and the urgent call for regulations to protect democracy and public safety amid evolving tech. They reveal how business models can exacerbate misinformation crises.

163 snips
May 7, 2025 • 54min
Is AI Scaling Dead? — With Gary Marcus
Cognitive scientist and AI skeptic Gary Marcus joins the discussion to explore whether large language model scaling has hit a ceiling. He shares insights on diminishing returns in AI effectiveness and critiques the reliance on GPU scaling. The conversation touches on data privacy issues, ethical considerations in AI development, and the risks associated with both open-source and proprietary models. Marcus emphasizes the need for transparency and a more nuanced understanding of AI's future trajectory.

138 snips
Jul 9, 2024 • 39min
Is AI just all hype? w/ Gary Marcus
AI skeptic Gary Marcus discusses the dangers of being distracted by generative AI hype, emphasizing the need for a more thoughtful approach to AI development. The episode explores the gap between what AI is claimed to do and what it can actually do, the risks that accompany AI progress, and the challenges of deploying AI in industries like medicine and robotics.

47 snips
Jul 12, 2023 • 57min
The ACTUAL Danger of A.I. with Gary Marcus
Whether we like it or not, artificial intelligence is increasingly empowered to take control of various aspects of our lives. While some tech companies advocate for self-regulation regarding long-term risks, they conveniently overlook critical current concerns like the rampant spread of misinformation, biases in A.I. algorithms, and even A.I.-driven scams. In this episode, Adam is joined by cognitive scientist and esteemed A.I. expert Gary Marcus to enumerate the short and long-term risks posed by artificial intelligence.

39 snips
May 15, 2025 • 2h 4min
Gary Marcus vs. Liron Shapira — AI Doom Debate
Gary Marcus, a leading scientist and author on AI, debates Liron Shapira on the existential risks of artificial intelligence, including whether the probability of catastrophe is closer to 50 percent or below 1 percent. The conversation covers common misconceptions about generative AI, the timeline for achieving AGI, and the challenges of aligning AI with human values. Marcus also weighs humanity's resilience against potential 'superintelligent' dangers while stressing the urgent need for regulatory frameworks to keep the technology safe.

38 snips
May 25, 2023 • 56min
AI: Can the Machines Really Think?
Gary Marcus and John Lanchester join David to discuss all things AI, from ChatGPT to the Turing test. Why is the Turing test such a bad judge of machine intelligence? If these machines aren’t thinking, what is it they are doing? And what are we doing giving them so much power to shape our lives? Plus we discuss self-driving cars, the coming jobs apocalypse, how children learn, and what it is that makes us truly human. Gary’s new podcast is Humans vs. Machines.

31 snips
Sep 7, 2022 • 1h 5min
Is AI Dangerously Overhyped? — With Gary Marcus
Gary Marcus, author of 'Rebooting AI' and an AI entrepreneur, critiques the inflated expectations surrounding artificial intelligence. He discusses the controversy over AI-generated art in competitions, questioning creativity and authenticity. Marcus delves into the limits of AI sentience, emphasizing that current systems lack true understanding. He warns of the risks posed by seemingly sentient AI and advocates for regulatory measures. The conversation concludes with a reflection on the overhyping of AI technologies and the importance of sustainable advances.

25 snips
Feb 24, 2023 • 53min
Will ChatGPT Do More Harm Than Good?
It’s poised to “change our world.” That’s according to Bill Gates, referencing an advanced AI chatbot called ChatGPT, which seems to be all the rage. The tool, developed by OpenAI and backed by the company Gates co-founded, Microsoft, takes questions from users and produces human-like responses. The "GPT" stands for "Generative Pre-trained Transformer," which denotes the design and nature of the artificial intelligence's training. And yet despite the chatbot’s swelling popularity, it is not without controversy. Everything from privacy and ethical questions to growing concerns about the data it uses has some worried about the effects it will ultimately have on society. Its detractors fear job loss, a rise in disinformation, and even long-term harm to humans’ capacity for reasoning and writing. Its advocates tout the advantages ChatGPT will inevitably lend organizations, its versatility and iterative ability, and the depth and diversity of the data from which it draws. Against this backdrop, we debate the following question: Will ChatGPT do more harm than good? Arguing "Yes" is Gary Marcus, author of "Rebooting AI: Building Artificial Intelligence We Can Trust" and Professor Emeritus of Psychology and Neural Science at New York University. Arguing "No" is Keith Teare, entrepreneur, author, and CEO and founder of SignalRank Corporation. Emmy award-winning journalist John Donvan moderates.

23 snips
Nov 20, 2024 • 1h 26min
241. Gary F. Marcus with Ted Chiang: How to Make AI Work for Us (And Not the Other Way Around)
Gary F. Marcus, a best-selling author and AI expert, teams up with acclaimed sci-fi writer Ted Chiang to explore the complex landscape of artificial intelligence. They delve into the ethical implications of AI, advocating for technology aligned with human rights. The discussion reveals the quirks and limitations of AI language models, the risks of generative AI, and the need for robust policies. They emphasize the importance of causal reasoning in AI and the challenges of integrating personal AI assistants into our lives, urging a balance between innovation and accountability.

22 snips
May 16, 2023 • 50min
AI Senate Hearing - Executive Summary (Sam Altman, Gary Marcus)
In a compelling Senate hearing, Sam Altman, CEO of OpenAI, along with AI expert Gary Marcus and IBM's Christina Montgomery, tackled the pressing issues of AI regulation. They underscored the urgent need for responsible oversight and transparency in AI technologies. The discussion revolved around the EU's proposed AI Act and its potential implications for American companies. With a focus on balancing innovation and safety, they advocated for a collaborative approach between industry and government to navigate the transformative landscape of AI.