Pedro Domingos, a machine-learning pioneer and author, joins Martin Casado for an intriguing discussion on the quest for Artificial General Intelligence (AGI). They delve into the costs associated with scaling AI and the feasibility of reaching human-level intelligence without astronomical investments. The conversation also touches on the importance of embracing non-traditional perspectives to spark innovation, and Domingos shares insights from his satirical novel that critiques the intersection of AI and society, blending humor with deep philosophical questions.
The journey toward AGI calls for genuinely new ideas, not just more scaling, since enlarging current models may run into diminishing returns.
The evolving role of large language models highlights both their potential and their limitations, suggesting that substantial advances in genuine understanding are still needed.
Deep dives
The Long Journey to AGI
The development of Artificial General Intelligence (AGI) is likened to Einstein's decade-long effort to formulate general relativity, highlighting the complexity and time required for groundbreaking advancements. Current AI research is characterized by rapid progress towards local optima, yet there is a growing belief that such advances will not lead to human-level intelligence without truly innovative ideas. While improvements in models, like making Transformers more efficient, can yield significant immediate benefits, they will ultimately not reach the full potential of AGI. Achieving just 10% progress toward AGI could still have a transformative impact on society, underscoring the importance of a robust research direction.
Questioning Scaling Laws in AI
The conversation critically examines scaling laws in AI, the observation that adding data and compute yields predictable improvements in performance. Some researchers argue that these laws may be deceptive and not universally applicable, eventually running into diminishing returns. The emphasis is on the need for fundamentally new ideas rather than merely scaling current models: while scaling has delivered real gains, past experience in AI suggests that simply enlarging models, without deeper innovations, will not be enough to reach true intelligence.
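As a rough illustration of why diminishing returns follow from the power-law form in which scaling laws are usually stated (loss falling as a power of compute), here is a minimal sketch. The constants `a` and `alpha` below are illustrative assumptions, not figures from the episode or from any published scaling study:

```python
# Minimal sketch of a power-law scaling curve, L(C) = a * C**(-alpha).
# The constants a and alpha are made-up illustrative values, not measured ones.

def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical test loss as a power law in training compute."""
    return a * compute ** -alpha

# Each 10x increase in compute buys a smaller absolute improvement in loss,
# even though the cost of each step grows tenfold.
for exp in range(1, 7):
    c = 10 ** exp
    gain = loss(c / 10) - loss(c)
    print(f"compute=1e{exp}: loss={loss(c):.3f}, gain from last 10x={gain:.3f}")
```

Running this shows the loss still falling at every step but by less each time, which is the shape of the "scaling works, but with diminishing returns" argument in the episode.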
The Role of LLMs in AI Development
Large language models (LLMs) were originally seen as tools for specific tasks, such as translation, but now play a broader role in shaping AI's interaction with the real world. While LLMs can generate human-like responses, there is skepticism about whether they truly understand or reason, since they often rely on vast datasets without grasping the underlying concepts. This raises concerns about their long-term viability as foundational elements for AI, suggesting that LLMs might need to evolve significantly to contribute meaningfully to AGI. Their real capabilities and limits remain an open question that the field is still exploring.
Economic Dynamics of AI Progress
The podcast addresses the economic implications of AI advancements, highlighting a growing dichotomy between creative applications and scientific pursuits. There's recognition that the lucrative potential of generative AI drives much of the current focus, while foundational scientific work may not yield immediate financial returns yet is crucial for long-term progress. The conversation suggests that these paths may ultimately coexist rather than diverge completely, since both creativity and rigorous scientific inquiry are needed to drive broad progress. The complexities of the AI sector indicate that future developments will likely require a balance of creative output and scientific advancement.
Longtime machine-learning researcher and University of Washington Professor Emeritus Pedro Domingos joins a16z General Partner Martin Casado to discuss the state of artificial intelligence, whether we're really on a path toward AGI, and the value of expressing unpopular opinions. It's an insightful discussion as we head into an era of mainstream AI adoption and ask big questions about how to ramp up progress and diversify research directions.
Here's an excerpt of Pedro sharing his thoughts on the increasing cost of frontier models and whether that's the right direction:
"if you believe the scaling laws hold and the scaling laws will take us to human-level intelligence, then, hey, it's worth a lot of investment. That's one part, but that may be wrong. The other part, however, is that to do that, we need exploding amounts of compute.
"If if I had to predict what's going to happen, it's that we do not need a trillion dollars to reach AGI at all. So if you spend a trillion dollars reaching AGI, this is a very bad investment."