The Myth of Artificial Intelligence with Erik J. Larson
Jul 22, 2023
Author Erik J. Larson discusses the unpredictability of invention, the limits of deep learning progress, abduction in hypothesis generation, creativity and serendipity, the myth of self-improving machines, and the prospect of combining deep learning with symbolic approaches in AI.
Predictions of future technological advancements are unreliable because they ignore practical limitations.
The concept of self-improving machines in artificial intelligence lacks a solid foundation.
Deep dives
The Limitations of Predicting Technological Advancements
In a highly technocratic society, there is a common illusion that people can accurately predict future technological advancements. However, the idea of extrapolating progress is flawed, as it fails to take into account practical limitations. For example, Sam Altman, the head of OpenAI, warned about the practical limits on training advanced language models like GPT-4 imposed by the availability of data and computing resources. This highlights the need for caution when making predictions based solely on extrapolating past technological advancements.
The Complex Nature of Artificial Intelligence
Artificial intelligence (AI) is a multifaceted field that has evolved over time. The guest, Erik J. Larson, who has worked in the field since 2000, explains that the term 'AI' has become somewhat meaningless, encompassing various aspects of machine learning and language processing. He emphasizes the importance of recognizing the different components of AI and how each contributes to advancements in the field. Larson's experience showcases the evolving nature of AI and its wide range of applications.
The Challenge of Predicting Technological Inventions
Larson delves into the concept of abduction, which involves seeking plausible causes for unique events or observations. He draws a parallel with the difficulty of predicting inventions before they happen. Using the example of the wheel, he notes that an invention cannot be accurately predicted before it is conceptualized. Similarly, in AI, predicting future breakthroughs is challenging because it requires a deep understanding of the technology, and no clear blueprint for the future exists.
The Limits of Self-Improving Machines
The idea of self-improving machines, or superintelligence, is explored. Larson refers to an argument of John von Neumann, the renowned mathematician, who observed that purely random machines could potentially improve themselves, but that planned machines would need to contain the blueprint of the improved machine within themselves. Larson argues that the concept of self-improving machines is fundamentally flawed and lacks a solid foundation. He questions the feasibility of machines becoming truly intelligent and emphasizes the complexity of the concept.