Artificial intelligence has become ambient in our daily lives, scooting us from place to place with turn-by-turn navigation, assisting us with reminders and alarms, and helping professionals from lawyers to doctors reach the best possible decisions with the data they have on hand. Domain-specific AI has also mastered everything from games like chess and Go to the complicated science of protein folding.
Since OpenAI's debut of ChatGPT in November, however, we have seen volcanic interest in what generative AI can do across text, audio and video. Within roughly two months, ChatGPT reached an estimated 100 million users, arguably the fastest adoption of any consumer product in history. What are its capabilities, and, perhaps most importantly given the feverish excitement around this new technology, what are its limitations? We turn to a stalwart of AI criticism, Gary Marcus, to explore further.
Marcus is professor emeritus of psychology and neural science at New York University and the founder of machine learning startup Geometric Intelligence, which sold to Uber in 2016. He has been a fervent contrarian on many aspects of our current AI craze, the topic at the heart of his most recent book, Rebooting AI. Unlike most modern AI specialists, he is less enthusiastic about the statistical methods that underlie approaches like deep learning and is instead a forceful advocate for returning — at least partially — to the symbolic methods that the AI field has traditionally explored.
In today’s episode of “Securities”, we’re going to talk about the challenges of truth and veracity in the context of fake content produced by tools like Meta’s Galactica; pose the first ChatGPT-written question to Marcus; consider how much we can rely on AI-generated answers; discuss the future of artificial general intelligence; and finally, understand why Marcus thinks AI is not going to be a universal solvent for all human problems.