AGI, part two: how to make artificial intelligence more like the human kind
Sep 11, 2024
In this discussion, The Economist's Abby Bertics explores the quest for artificial general intelligence. The conversation examines the limitations of current large language models and proposes approaches that integrate reasoning and knowledge. Bertics highlights the need for AI to truly grasp concepts rather than merely pattern-match. The ethical implications of creating superintelligent machines are also examined, raising questions about how to balance potential benefits against the inherent risks of this fast-advancing field.
AGI remains a hard concept to pin down: defining it requires clarity on what intelligence and consciousness are, qualities that are difficult to articulate even in humans.
Current LLMs fall short of true general intelligence because they rely on statistical patterns; innovative approaches, such as multimodal models, may be needed to achieve better reasoning.
Deep dives
The Challenge of Defining AGI
The concept of artificial general intelligence (AGI) is notoriously hard to define, even as it is portrayed as a key goal for AI researchers. Defining AGI raises complex questions: does it refer to an AI that matches or exceeds human intelligence, or to one that possesses consciousness akin to humans? The ambiguity is compounded by the fact that even cognitive scientists struggle to articulate what intelligence and consciousness are in humans. Ultimately, AGI serves as a placeholder for a future technology capable of performing a wide variety of tasks at a human level, yet the road to achieving it remains unclear.
Limitations of Current AI Models
Current large language models (LLMs), impressive as they are, fall short of true general intelligence because they rely on statistical patterns derived from vast amounts of text. These models display intelligent-seeming behaviors but often produce inaccurate outputs and lack a consistent grasp of tasks requiring logical reasoning, such as basic arithmetic. The episode highlights that LLMs often hallucinate, producing plausible but incorrect responses. This limitation stems from their inability to form a factual model of reality: they treat knowledge as correlations between tokens rather than as concrete facts.
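To make the pattern-matching point concrete, here is a toy sketch (entirely invented for illustration; real LLMs are transformer networks with billions of parameters, not bigram tables): a model that learns only which word tends to follow which, and can therefore emit fluent but false statements.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it learns which token tends to follow
# which, with no notion of facts or logic. Real LLMs work at vastly
# greater scale, but the objective is analogous: predict the next
# token from statistical patterns in the training text.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is known for the eiffel tower ."
).split()

# Count how often each token follows each other token.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    token, out = start, [start]
    for _ in range(length):
        followers = counts[token]
        if not followers:
            break
        # Sample the next token in proportion to how often it followed
        # the current one: correlation, not comprehension.
        token = random.choices(list(followers),
                               weights=list(followers.values()))[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))
# Possible output: "the capital of spain is paris . the"
# Fluent and statistically plausible, yet factually wrong:
# a miniature hallucination.
```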
Innovative Approaches Toward AGI
Developing AGI may require innovations beyond current LLMs, including multimodal models that integrate visual and textual data. By associating language with images, such models could ground their understanding of the world, moving beyond the disjointed token relationships in current systems. Researchers also emphasize incorporating knowledge graphs and databases to improve models' ability to reason and produce factual outputs. Efforts to blend deep learning with symbolic AI aim to make reasoning processes explicit, which could lead to more reliable and interpretable AI systems.
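As a rough sketch of the knowledge-graph idea (hypothetical code, not any contributor's actual system), the snippet below answers factual queries from an explicit store of subject-relation-object triples, so every answer is traceable to a stored fact and the system admits ignorance rather than guessing:

```python
# A hand-built knowledge graph mapping (subject, relation) to an object.
# In a real hybrid system this store would be far larger and combined
# with a neural model; this is only an illustration.
KNOWLEDGE_GRAPH = {
    ("france", "capital"): "paris",
    ("spain", "capital"): "madrid",
    ("paris", "located_in"): "france",
}

def answer(subject: str, relation: str) -> str:
    fact = KNOWLEDGE_GRAPH.get((subject, relation))
    if fact is not None:
        # Grounded path: the answer cites an inspectable stored fact,
        # so it cannot be hallucinated.
        return f"{subject} {relation}: {fact} (from knowledge graph)"
    # Ungrounded path: a hybrid system might fall back to an LLM here;
    # flagging the gap keeps the output honest and interpretable.
    return f"{subject} {relation}: unknown (no stored fact)"

print(answer("france", "capital"))   # grounded, verifiable answer
print(answer("germany", "capital"))  # honest "unknown", not a guess
```

The design choice worth noting is the explicit "unknown" branch: separating what the system knows from what it would otherwise invent is precisely the kind of reliability and interpretability that hybrid, neurosymbolic approaches aim for.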
The Future and Risks of AGI
The advance toward AGI carries significant risks alongside its potential benefits, particularly concerning how AI systems' goals are set. Experts warn that an AGI with high-level capabilities pursuing misaligned or malicious goals could lead to dangerous consequences, including a loss of human control. Rigorous research in AI safety is therefore paramount, to address these issues pre-emptively and ensure beneficial outcomes. As the technology evolves, a balance must be struck between harnessing AI for progress and making prudent judgments about its deployment and implications.
Scientists and tech companies are on a quest to build AI with something closer to the general intelligence of humans. Large language models (LLMs), which power the likes of ChatGPT, can seem human-like, but they work in very different ways to the beings that created them. If truly superintelligent machines are ever to be built, how can modern AI models be improved to make them better at reasoning and at understanding the world? Are LLMs the right technology to pursue, or do scientists need to get more creative?
This is the final episode in our two-part series on artificial general intelligence. Last week, we sought to define what is a slippery concept. This week: the technological and ethical challenges that need to be solved to build truly human-like AI models.
Host: Alok Jha, The Economist’s science and technology editor. Contributors: Steven Pinker of Harvard University; Gary Marcus, professor emeritus at New York University; Yoshua Bengio of the University of Montréal; and The Economist’s Abby Bertics.