
Gary Marcus

Leading AI skeptic, scientist, best-selling author, and founder and former CEO of Geometric Intelligence. Professor Emeritus at NYU.

Top 10 podcasts with Gary Marcus

Ranked by the Snipd community
327 snips
Mar 7, 2023 • 1h 27min

#312 — The Trouble with AI

Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence and the long-term risks of producing artificial general intelligence (AGI). They discuss the limitations of deep learning, the surprising power of narrow AI, ChatGPT, a possible misinformation apocalypse, the problem of instantiating human values, the business model of the Internet, the metaverse, digital provenance, using AI to control AI, the control problem, emergent goals, locking down core values, programming uncertainty about human values into AGI, the prospects of slowing or stopping AI progress, and other topics.
136 snips
Jul 9, 2024 • 39min

Is AI just all hype? w/ Gary Marcus

AI skeptic Gary Marcus discusses the dangers of being distracted by generative AI hype, emphasizing the need for a thoughtful approach to AI development. The conversation explores the gap between AI's claimed capabilities and its actual limitations, the risks that come with AI progress, and the challenges it faces in fields like medicine and robotics.
90 snips
Sep 26, 2024 • 42min

‘We Have to Get It Right’: Gary Marcus On Untamed AI

Gary Marcus, a cognitive psychologist and computer scientist with a successful AI startup, shares his critical views on generative AI. He argues that society is unprepared for the risks associated with current AI technologies. The conversation emphasizes the need for stringent regulations, drawing parallels to aviation safety. Marcus discusses the dangers of misinformation and deepfakes, advocating for an independent agency to oversee AI developments. He calls for public engagement in shaping ethical AI practices, ensuring technology works for humanity.
47 snips
Jul 12, 2023 • 57min

The ACTUAL Danger of A.I. with Gary Marcus

Whether we like it or not, artificial intelligence is increasingly empowered to take control of various aspects of our lives. While some tech companies advocate for self-regulation around long-term risks, they conveniently overlook pressing current concerns like the rampant spread of misinformation, biases in AI algorithms, and even AI-driven scams. In this episode, Adam is joined by cognitive scientist and esteemed AI expert Gary Marcus to enumerate the short- and long-term risks posed by artificial intelligence.
38 snips
May 25, 2023 • 56min

AI: Can the Machines Really Think?

Gary Marcus and John Lanchester join David to discuss all things AI, from ChatGPT to the Turing test. Why is the Turing test such a bad judge of machine intelligence? If these machines aren’t thinking, what is it they are doing? And what are we doing giving them so much power to shape our lives? Plus we discuss self-driving cars, the coming jobs apocalypse, how children learn, and what it is that makes us truly human. Gary’s new podcast is Humans vs. Machines.
30 snips
May 30, 2023 • 55min

How worried—or excited—should we be about AI? Recode Media with Peter Kafka

AI is amazing… or terrifying, depending on who you ask. This is a technology that elicits strong, almost existential reactions. So, as a Memorial Day special, we're running an episode of Recode Media with Peter Kafka that digs into the giant ambitions and enormous concerns people have about the very same tech. First up: Joshua Browder (@jbrowder1), a Stanford computer science dropout who tried to get an AI lawyer into court. Then: Microsoft's CTO Kevin Scott (@kevin_scott) pitches a bright AI future. Plus: hype-deflator, cognitive scientist, and author Gary Marcus (@GaryMarcus) believes in AI, but he thinks the giants of Silicon Valley are scaling flawed technology now, with potentially dangerous consequences.
27 snips
Sep 24, 2024 • 1h 57min

Taming Silicon Valley - Prof. Gary Marcus

In this discussion, AI expert Prof. Gary Marcus critiques the current state of artificial intelligence, spotlighting its limitations and potential dangers. He expresses concerns about the profit-driven motives of major tech companies, warning that technology could exacerbate issues like fake news and privacy violations. Marcus emphasizes the need for responsible AI development and regulation to protect society from misinformation and erosion of trust. He urges the public to advocate for better AI standards before it’s too late.
25 snips
Feb 24, 2023 • 53min

Will ChatGPT Do More Harm Than Good?

It’s poised to “change our world.” That’s according to Bill Gates, referencing an advanced AI chatbot called ChatGPT, which seems to be all the rage. The tool, which was developed by OpenAI and backed by Microsoft, the company Gates founded, takes questions from users and produces human-like responses. "GPT" stands for "Generative Pre-trained Transformer," which denotes the design and nature of the artificial intelligence training. And yet despite the chatbot’s swelling popularity, it’s not without controversy. Everything from privacy and ethical questions to growing concerns about the data it utilizes has some worried about the effects it will ultimately have on society. Its detractors fear job loss, a rise in disinformation, and even compromising long-term effects on humans’ capacity for reason and writing. Its advocates tout the advantages ChatGPT will inevitably lend organizations, its versatility and iterative ability, and the depth and diversity of the data from which it pulls. Against this backdrop, we debate the following question: Will ChatGPT do more harm than good? Arguing "Yes" is Gary Marcus (author of "Rebooting AI: Building Artificial Intelligence We Can Trust" and Professor Emeritus of Psychology and Neural Science at New York University). Arguing "No" is Keith Teare (entrepreneur, author, and CEO & Founder at SignalRank Corporation). Emmy award-winning journalist John Donvan moderates.
22 snips
May 16, 2023 • 50min

AI Senate Hearing - Executive Summary (Sam Altman, Gary Marcus)

In a historic and candid Senate hearing, OpenAI CEO Sam Altman, Professor Gary Marcus, and IBM's Christina Montgomery discussed the regulatory landscape of AI in the US. The discussion was particularly timely, as it followed the recent release of the EU's proposed AI Act, which could potentially ban American companies like OpenAI and Google from providing API access to generative AI models and impose massive fines for non-compliance. The speakers openly addressed the potential risks of AI technology and emphasized the need for precision regulation. This was a notable departure from the norm, as US companies have historically tried their hardest to avoid regulation. The hearing not only showcased the willingness of industry leaders to engage in discussions on regulation but also demonstrated the need for a balanced approach that avoids stifling innovation. The EU AI Act, scheduled to come into force in 2026, is still just a proposal, but it has already raised concerns about its impact on the American tech ecosystem and potential conflicts between US and EU laws. With extraterritorial jurisdiction and provisions targeting open-source developers and software distributors like GitHub, the Act could create more problems than it solves by encouraging unsafe AI practices and limiting access to advanced AI technologies. One core issue with the Act is the designation of foundation models in the highest risk category, primarily due to their open-ended nature. A significant risk theme revolves around users creating harmful content and determining who should be held accountable: the users or the platforms. The Senate hearing served as an essential platform to discuss these pressing concerns and work towards a regulatory framework that promotes both safety and innovation in AI.
00:00 Show 01:35 Legals 03:44 Intro 10:33 Altman intro 14:16 Christina Montgomery 18:20 Gary Marcus 23:15 Jobs 26:01 Scorecards 28:08 Harmful content 29:47 Startups 31:35 What meets the definition of harmful? 32:08 Moratorium 36:11 Social Media 46:17 Gary's take on BingGPT and pivot into policy 48:05 Democratisation
21 snips
Oct 3, 2019 • 1h 25min

Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI

Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges the AI community must solve in order to achieve artificial general intelligence. This conversation is part of the Artificial Intelligence podcast. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 – Introduction 01:37 – Singularity 05:48 – Physical and psychological knowledge 10:52 – Chess 14:32 – Language vs physical world 17:37 – What does AI look like 100 years from now 21:28 – Flaws of the human mind 25:27 – General intelligence 28:25 – Limits of deep learning 44:41 – Expert systems and symbol manipulation 48:37 – Knowledge representation 52:52 – Increasing compute power 56:27 – How human children learn 57:23 – Innate knowledge and learned knowledge 1:06:43 – Good test of intelligence 1:12:32 – Deep learning and symbol manipulation 1:23:35 – Guitar