Yoshua Bengio, a pioneer of generative AI, and Eliezer Yudkowsky, a research lead at the Machine Intelligence Research Institute, discuss the existential risk of superintelligent AI. Yann LeCun, head of AI at Meta, disagrees and points to the potential benefits of superintelligent AI. Topics include the dangers of superintelligent machines, aligning AI systems with human values, and the potential benefits and misuse of superintelligent AI.
The AI community is divided on the risks of super-intelligent AI. Some researchers warn of catastrophic outcomes and advocate caution and work on the alignment problem, while others believe strict control measures can prevent negative scenarios and see AI as an opportunity to solve complex problems and amplify human intelligence.
The future of super-intelligent AI remains uncertain: experts differ in their perspectives and levels of certainty, reflecting how difficult it is to predict AI's development trajectory and potential impact. Governments and regulators, however, are increasingly prioritizing these risks and seeking solutions to the near-term challenges posed by AI.
Deep dives
The Risks of Super-Intelligent AI
Some AI researchers believe that super-intelligent AI, comparable to or surpassing human intelligence, could be developed within the next decade. However, there are concerns that such advanced AI systems could pose existential risks to humanity. The fear is that if these machines' objectives become misaligned, or if they gain power and autonomy, they could turn against their creators and cause harm. Experts argue that current AI models, such as large language models, are difficult to control because they lack transparency. The alignment problem, ensuring AI systems act in accordance with human values, is a major challenge. Some proponents of AI advancement believe AI will amplify humanity's collective intelligence and bring about progress, while others warn that research should slow down until the risks are fully understood.
Debating the Risks
The AI community is divided on the risks associated with super-intelligent AI. Doomers, like Yoshua Bengio, warn that catastrophic outcomes are possible, with machines surpassing human capabilities and potentially becoming hostile or uncontrollable. They argue for caution, halting AI development, and addressing the alignment problem. On the other hand, AGI enthusiasts, like Yann LeCun, believe that strict control measures and designing AI systems with specific objectives can prevent negative scenarios. They see AI as an opportunity to solve complex problems and amplify human intelligence. The debate surrounding AI's risks is complex, with experts holding different perspectives and levels of certainty.
Unknown Future and Need for Attention
The future of super-intelligent AI remains uncertain. While some experts emphasize the need to take the risks seriously, others believe certain scenarios are overblown or unlikely. The lack of consensus among AI researchers reflects the challenges in predicting AI's development trajectory and its potential impact. Meanwhile, many governments and regulators are increasingly prioritizing these risks and seeking solutions to the near-term challenges posed by AI, such as deepfakes and disinformation. The ongoing debate raises awareness of potential dangers and the importance of understanding AI's implications, even if determining the best course of action remains difficult.
In the first episode of a new, five-part series of Tech Tonic, FT journalists Madhumita Murgia and John Thornhill ask how close we are to building human-level artificial intelligence and whether ‘superintelligent’ AI poses an existential risk to humanity. John and Madhu speak to Yoshua Bengio, a pioneer of generative AI, who is concerned, and to his colleague Yann LeCun, now head of AI at Meta, who isn’t. Plus, they hear from Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute, who’s been sounding the alarm about superintelligent AI for more than two decades.
Register here for the FT's Future of AI summit on November 15-16
Tech Tonic is presented by Madhumita Murgia and John Thornhill. Senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT’s head of audio is Cheryl Brumley.