Gary Marcus, cognitive scientist and AI researcher, discusses the recent advances and risks of generative AI. He explains the limitations of large language models like ChatGPT, their difficulty with truth, and the potential impact on society. Marcus explores future advancements, challenges in technology, and the essential role of humans in AI. He emphasizes the need for effective governance and regulation to ensure transparency and safety in AI systems.
Generative AI models like GPT and DALL-E are powerful but unreliable tools.
Developing reliable large language models will likely require combining traditional AI techniques, such as symbolic AI, with neural networks, alongside effective governance and regulation.
Deep dives
Large language models are versatile yet unreliable
Large language models such as GPT, along with generative image models like DALL-E, are powerful AI tools with wide-ranging capabilities. However, they are also among the least reliable AI techniques to have gained mainstream popularity. Unlike voice assistants such as Siri, which are carefully engineered for specific tasks, large language models try to do everything and often fall short. They can generate impressive text and images, but they cannot truly understand concepts or accurately analyze information. Despite their versatility, these models remain limited in their reliability.
Large language models analyze relationships between words, not concepts
Large language models work by analyzing statistical relationships between words, not by understanding the relationships between concepts or ideas. They predict the most likely word or phrase to follow a given sequence of words, based on patterns learned from massive amounts of internet data. This approach can lead to unreliable results: because the models lack true comprehension or contextual understanding, they have a tendency to produce false information. For example, they may erroneously report the death of a well-known figure like Elon Musk.
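To make the "autocomplete on steroids" idea concrete, here is a minimal, hypothetical sketch in Python: a bigram model that picks the next word purely from how often words follow each other in a tiny corpus. This is not the architecture behind GPT, which uses neural networks trained on vastly more data over much longer contexts; it only illustrates the shared objective of producing a statistically likely continuation without any grasp of the underlying concepts.

```python
from collections import Counter, defaultdict

# Toy illustration (not Marcus's or OpenAI's code): predict the next word
# purely from word-to-word co-occurrence counts, with no notion of meaning.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word; knows nothing about truth."""
    counts = next_word_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (most frequent follower in this tiny corpus)
print(predict_next("sat"))  # -> "on"
```

Scaled up enormously, that same objective, producing plausible continuations rather than verified facts, is what leaves room for hallucinations like a false death report.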
Challenges in creating reliable AI and moving towards effective governance
The path to developing reliable large language models is challenging. Newer iterations may offer some improvements, but they will still be prone to generating misleading or fictitious information. Achieving AI systems we can confidently rely on will likely require combining traditional AI techniques, such as symbolic AI, with neural networks. Progress in this direction has been hindered by the difficulty of modeling the complexity of the human brain and by the limits of current understanding in neuroscience. Additionally, effective governance and regulation of AI should involve establishing dedicated AI agencies or positions in every nation, fostering international coordination, and implementing safety assessments, similar to the FDA's approval process, before AI systems are widely deployed.
Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence? On the GZERO World with Ian Bremmer podcast, cognitive scientist, author, and AI researcher Gary Marcus breaks down the recent advances––and inherent risks––of generative AI. AI-powered, large language model tools like the text-to-text generator ChatGPT or the text-to-image generator Midjourney can do magical things like write college papers or create Picasso-style paintings out of thin air. But there’s still a lot they can’t do: namely, they have a pretty hard time with the concept of truth. According to Marcus, they’re like “autocomplete on steroids.” As generative AI tools become more widespread, they will undoubtedly change the way we live, in both good ways and bad. Marcus sits down with Ian Bremmer to talk about the latest advances in generative artificial intelligence, the underlying technology, AI’s hallucination problem, and what effective, global AI regulation might look like.