‘We Have to Get It Right’: Gary Marcus On Untamed AI
Sep 26, 2024
Gary Marcus, a cognitive psychologist and computer scientist known for his critical views on AI, joins Daniel Barcay for a compelling discussion. They explore the contrasting perceptions of AI's future, debating whether we are on the brink of something incredible or facing imminent challenges. Marcus emphasizes the urgent need for regulatory frameworks to address the risks of generative AI, particularly its potential for disinformation. He advocates for independent oversight in technology regulation to prioritize public good over corporate interests.
Gary Marcus highlights the urgent need for a robust regulatory framework to manage the inherent risks associated with generative AI technologies.
Despite massive investment in AI, Marcus warns that society remains ill-prepared for the challenges posed by the systems already deployed.
Deep dives
The Contradictory Landscape of AI Development
The current state of artificial intelligence is marked by a stark contrast between rapid investment and persistent limitations. Companies are pouring billions into increasingly powerful models and promising imminent breakthroughs, yet the technologies already deployed often behave unreliably, and high-profile failures have triggered significant stock-market declines. Marcus argues that while AI may eventually surpass human intelligence, such advances are unlikely to arrive within the next few years and could take decades. Regardless of the pace of progress, he stresses, society remains ill-equipped to manage the risks posed by the AI systems already in operation.
Critique of Generative AI
Gary Marcus is skeptical of generative AI specifically, calling it fundamentally flawed in comparison with other forms of AI. While he praises the usefulness of non-generative AI technologies, he points to generative systems' inherent unreliability: they produce inconsistent answers and rarely acknowledge when they don't know something. The result is a wide gap between what users expect and what the systems deliver, since generative AI can present incorrect or fabricated answers with confidence. Marcus argues that the industry's outsized focus on generative AI obscures these limitations and, without appropriate oversight, risks fueling misinformation and unhealthy dependency.
Immediate Risks of Current AI Technology
Generative AI already poses immediate risks, including misinformation and manipulation across many sectors. Deepfakes and AI-generated falsehoods have influenced political and financial events, showing how quickly the technology can cause real harm. Marcus warns that bad actors are exploiting these tools for profit at the public's expense, particularly through disinformation campaigns that distort reality. Because these systems operate without sufficient regulatory guardrails, the potential for misuse and societal harm continues to grow, making urgent action necessary.
The Call for Regulatory Framework and Public Action
To address the dual-use nature of AI and mitigate its risks, Marcus argues that a regulatory framework is essential. He advocates pre-deployment testing, akin to the FDA's review of new drugs, to ensure AI systems do not endanger the public before they are released. He also calls for transparency, accountability, and independent oversight to prevent misuse and keep AI serving the public good. The call to action extends beyond policymakers and scientists: the public must voice its concerns, hold companies accountable, and push for ethical standards that realize AI's potential while guarding against its dangers.
It’s a confusing moment in AI. Depending on who you ask, we’re either on the fast track to AI that’s smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He’s a cognitive psychologist and computer scientist who built his own successful AI start-up. But he’s also been called AI’s loudest critic.
On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.
The bottom line: No matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.