Today's AI models have ingested a vast amount of text, including every digitized book available on the internet, without seeking permission. They then output new sentences that humans evaluate for quality. Unlike symbolic AI, which aimed to represent the world directly, these models do not possess a comprehensive model of the world.
The Artificial Intelligence landscape is changing with remarkable speed these days, and the capability of Large Language Models in particular has led to speculation (and hope, and fear) that we could be on the verge of achieving Artificial General Intelligence. I don't think so. Or at least, while what is being achieved is legitimately impressive, it's not anything like the kind of thinking that is done by human beings. LLMs do not model the world in the same way we do, nor are they driven by the same kinds of feelings and motivations. It is therefore extremely misleading to throw around words like "intelligence" and "values" without thinking carefully about what is meant in this new context.
Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/11/27/258-solo-ai-thinks-different/
Support Mindscape on Patreon.