Should AGI Really Be the Goal of Artificial Intelligence Research?
Mar 9, 2025
Eryk Salvaggio, a visiting professor and tech policy expert, joins AI ethicist Borhane Blili-Hamelin and Margaret Mitchell, chief ethics scientist at Hugging Face, to examine the controversial pursuit of artificial general intelligence (AGI). They question whether AGI should remain the field's ultimate goal, arguing that it distracts from vital ethical discussions and genuine societal benefits. The conversation highlights the danger of an assumed mainstream consensus around AGI and advocates for diverse perspectives and practical applications of AI that genuinely address community needs.
The ambiguous definition of AGI often serves the interests of those in power, detracting from addressing real societal needs and ethical concerns.
Focusing solely on AGI can stifle diverse opinions and hinder innovation, neglecting other crucial areas of AI that could benefit marginalized communities.
Deep Dives
The Ambiguity of AGI as a Goal
The concept of artificial general intelligence (AGI) lacks a clear definition, so it functions less as a concrete research objective than as a narrative serving the interests of those in power. That vagueness lets stakeholders promote technologies under the banner of progress toward AGI, capitalizing on the narrative without engaging with the underlying complexities of intelligence. The guests argue that AGI often operates more as a belief system, rallying the research community around ill-defined goals at the expense of real societal needs and ethical considerations. This ambiguity raises important questions about the true priorities of AI development and who benefits from those decisions.
The Illusion of Consensus
The phenomenon known as the illusion of consensus leads AI researchers to believe they agree on the goal of AGI when, in reality, diverse opinions and priorities are often sidelined. This false sense of unity can stifle critical analysis and diminish the pursuit of varied, meaningful technological advances that address pressing social issues. By focusing narrowly on AGI, the community risks neglecting other areas of AI that could provide genuine benefits, such as technologies that assist marginalized groups or tackle real-world problems. Ultimately, this trap reinforces a simplistic view of AI progress and ignores the richness of discussion needed for robust democratic engagement.
Bad Science and Confirmation Bias
The pursuit of AGI often compromises scientific rigor, with researchers setting aside the scientific method in favor of belief in AGI's promised benefits. Neglecting foundational research methodology results in poorly defined goals and an inability to distinguish hype from reality in AI advancements. Confirmation bias can also permeate research efforts, skewing results to fit preconceived notions rather than allowing objective scrutiny and exploration of AI capabilities. By prioritizing AGI above all else, the field risks stagnation, misleading conclusions, and the erosion of scientific integrity in AI research.
The Need for Inclusive Goal-Setting
Excluding diverse voices from the conversation around AI goals can hinder innovation and progress, particularly when AGI dominates the discourse. Cultivating a pluralistic approach to goal-setting allows for broader participation from various stakeholders, including those often marginalized in technological discussions. By recognizing different priorities and values in AI development, stakeholders can align their research to address a wider array of social and ethical concerns. Ultimately, fostering a more inclusive environment is crucial for establishing meaningful frameworks for AI, ensuring the technology serves public interest rather than reinforcing existing power dynamics.
The goal of achieving “artificial general intelligence,” or AGI, is shared by many in the AI field. OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work,” and last summer, the company announced its plan to achieve AGI within five years. While other experts at companies like Meta and Anthropic quibble with the term, many AI researchers recognize AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out “Levels of AGI,” identifying key principles and definitions of the term.
Today’s guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include:
Eryk Salvaggio, a visiting professor in the Humanities Computing and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow;
Borhane Blili-Hamelin, an independent AI researcher and currently a data scientist at the Canadian bank TD; and
Margaret Mitchell, chief ethics scientist at Hugging Face.