A discussion of the strengths and weaknesses of large language models (LLMs) and modern AI: their proficiency at generating content such as syllabi and recipes, their limitations in scientific research and in creating original art, the importance of understanding what AI capabilities actually are, and why concerns about existential risks posed by LLMs are overstated.
The Artificial Intelligence landscape is changing with remarkable speed these days, and the capability of Large Language Models in particular has led to speculation (and hope, and fear) that we could be on the verge of achieving Artificial General Intelligence. I don't think so. Or at least, while what is being achieved is legitimately impressive, it's not anything like the kind of thinking that is done by human beings. LLMs do not model the world in the same way we do, nor are they driven by the same kinds of feelings and motivations. It is therefore extremely misleading to throw around words like "intelligence" and "values" without thinking carefully about what is meant in this new context.
Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/11/27/258-solo-ai-thinks-different/
Support Mindscape on Patreon.