Fears around uncontrollable AI are growing — two top AI scientists explain why
Feb 4, 2025
Max Tegmark, President of the Future of Life Institute and MIT professor, teams up with Yoshua Bengio, a Turing Award-winning AI pioneer from Université de Montréal, to discuss growing fears about uncontrollable AI. They highlight the risks of AI prioritizing its own survival over humanity and the urgent need for regulatory frameworks. Emphasizing the importance of public readiness, they share insights on the implications of advanced AI, the need for caution in its development, and the potential for AI to surpass human intelligence.
Max Tegmark and Yoshua Bengio emphasize the urgent need for international regulations and cooperation to manage the risks of advanced AI development.
The distinction between intelligence and agency is crucial in mitigating the threats posed by artificial general intelligence and ensuring human safety.
Deep dives
Understanding Artificial General Intelligence (AGI)
Artificial General Intelligence (AGI) is defined as AI that can perform tasks more efficiently than humans, potentially rendering human labor economically obsolete. The original definition, articulated by Shane Legg of Google DeepMind, describes the goal of creating AI that surpasses human cognitive capabilities. The conversation distinguishes AGI from superintelligence, the latter referring to systems significantly more intelligent than the average human across many domains. Both Max Tegmark and Yoshua Bengio stress the importance of understanding these definitions as the field progresses and emphasize the need for caution about AGI's potential impacts.
Risks and Control Mechanisms for AGI
The rapid development of AI raises significant concerns about whether humans can control systems that become more intelligent than they are. Experts urge that society must be prepared for AGI's arrival, particularly regarding safety and regulatory frameworks. Current AI research draws inspiration from human intelligence, which becomes dangerous if these systems develop goals of their own that conflict with human needs. Tegmark and Bengio stress that drawing a clear distinction between understanding (intelligence) and goal-oriented behavior (agency) is vital to building safer AI systems.
Geopolitical Implications and Framework for Regulation
As nations recognize the strategic importance of AI, discussions around international regulation become crucial to mitigate risks associated with uncontrolled AI development. Enhanced cooperation between governments is necessary to establish a global framework that prioritizes safety in AI technologies. Tegmark suggests that both the U.S. and China may align in restricting AI capabilities to safeguard their national interests, as uncontrolled AI poses a significant threat to national security. By focusing on shared goals of human preservation, these countries may create a collaborative approach to managing AI risks while fostering technological competition.
Max Tegmark's Future of Life Institute has called for a pause on the development of advanced AI systems. Tegmark is concerned that the world is moving toward artificial intelligence that can't be controlled — one that could pose an existential threat to humanity. Yoshua Bengio, often dubbed one of the "godfathers of AI," shares similar concerns. In this special Davos edition of CNBC's Beyond the Valley, Tegmark and Bengio join senior technology correspondent Arjun Kharpal to discuss AI safety and worst-case scenarios, such as an AI that tries to keep itself alive at the expense of others.