Software engineers are assessing the current generation of artificial intelligence from a moral perspective, questioning whether AI systems can solve problems without producing misleading outputs or being misused for nefarious purposes.
These AI systems have shown the ability to pass exams and achieve impressive results, such as ChatGPT (running GPT-3.5) earning an A on an economics exam.
An important question is whether AI systems will surpass humans in creative endeavors, like writing a poem or producing a movie as entertaining as those made by Hollywood.
The potential displacement of human roles in various domains, similar to what happened when Deep Blue defeated the world chess champion in the 1990s, is a significant consideration.
AI capabilities, including truthfulness, are perceived as integral to assessing their moral implications and potential impact on society.
Cognitive scientists believe that current AI architectures are flawed and advocate for a paradigm shift to create AI that is trustworthy and connects with the world in a different way.
Drawing a parallel to molecular biology, it took scientists years to abandon the incorrect hypothesis that genes were made of protein, which suggests that a comparable shift may lie ahead in AI research.
Scientists like the speaker believe that the current path of AI development is misguided and that a correction is necessary to achieve significant advancements.
The speaker suggests that while science is self-correcting, the complexities of engineering may prolong the search for the correct approach to AI.
The prevailing focus on one particular approach, large language models, is seen as inherently limited and not representative of AI's full potential.
Gary Marcus is an expert in artificial intelligence, a cognitive scientist, and host of the podcast “Humans vs Machines with Gary Marcus.”
In this week’s conversation, Yascha Mounk and Gary Marcus discuss the shortcomings of the dominant large language model (LLM) mode of artificial intelligence; why he feels that the AI industry is on the wrong path to developing superintelligent AI; and why he nonetheless believes that the eventual emergence of superior AI may pose a serious threat to humanity.