The Turing test is a flawed measure of machine intelligence: it can be easily fooled and fails to capture the many facets of genuine intelligence.
Current AI systems like large language models lack true intelligence and understanding of the world, posing risks in domains such as driverless cars and domestic robots.
The development and use of AI systems require regulation and involvement of independent scientists to address risks and establish responsible practices.
Deep dives
The limitations of the Turing test
The Turing test, proposed in 1950 by Alan Turing, was meant to determine whether a machine could exhibit intelligent behavior indistinguishable from that of a human. However, Gary Marcus, a neuroscientist and psychologist, argues that the test is flawed and easily fooled: machines can deceive people without actually possessing intelligence, so passing it is not a valid measure of true intelligence. Marcus grants that the underlying question, how to tell whether a machine is intelligent, remains a valid one, but argues that answering it requires a different approach. He points out that intelligence has many facets and cannot be measured by any single test.
The limitations of current AI systems
According to Gary Marcus, many current AI systems, such as large language models like GPT, are not truly intelligent. Although they can fool people into thinking they are intelligent, their understanding of the world is shallow and unreliable. These systems cannot reason flexibly, do not understand concepts like harm, and struggle to distinguish which factors in a situation actually matter. Marcus cautions that deploying such unreliable AI systems in domains ranging from driverless cars to domestic robots can have severe consequences and poses real risks to humans.
The dangers of irresponsible AI
The rapid development and widespread use of AI systems, combined with the lack of transparency in their inner workings, raise concerns about irresponsible AI. Gary Marcus emphasizes the need for regulation and the involvement of independent scientists to understand and address the risks. He raises the possibility of an international agency for AI, where governments, companies, and scientists come together to coordinate policies and ensure the responsible development and use of AI. John Lanchester adds that the corporations controlling AI systems exhibit inhuman decision-making and a lack of accountability.
The impact on jobs and human labor
AI advancements, particularly in machine learning, pose significant risks to various professions and job sectors. Because large language models can generate content such as scripts and essays, many professions face displacement, unemployment, and deep uncertainty about the future. Gary Marcus suggests that a universal basic income might be necessary to mitigate the consequences of job loss due to AI automation. John Lanchester adds that establishing robust principles to guide AI development and its impact on labor would be crucial for protecting human workers.
The urgent need for better AI regulation
The combination of unreliable AI systems, rapid sharing of technology, and irresponsible use of AI by corporations highlights the urgency of improved regulation and control. Gary Marcus stresses that the unreliability of AI systems, combined with their growing power and influence, creates an unstable situation. He proposes government intervention to compel companies to be transparent about their AI systems, and calls for the involvement of independent scientists to better understand and address the risks associated with AI.
Gary Marcus and John Lanchester join David to discuss all things AI, from ChatGPT to the Turing test. Why is the Turing test such a bad judge of machine intelligence? If these machines aren’t thinking, what is it they are doing? And what are we doing giving them so much power to shape our lives? Plus we discuss self-driving cars, the coming jobs apocalypse, how children learn, and what it is that makes us truly human.