I think that's an interesting and intellectual question. We don't have any vivid examples today of my vacuum cleaner wanting to be a driverless car, an example I've used before. It doesn't aspire. Now, we might see some aspiration, or at least perceived aspiration, in ChatGPT at some point. But part of the problem in getting people convinced about its dangers is that that leap, the sentience leap, the consciousness leap, which is where goals come in, doesn't seem credible, at least today.
The future of AI keeps Zvi Mowshowitz up at night. He also wonders why so many smart people seem to think that AI is more likely to save humanity than destroy it. Listen as Mowshowitz talks with EconTalk's Russ Roberts about the current state of AI, the pace of its development, and where--unless we take serious action--the technology is likely to end up (and that end is not pretty). They also discuss Mowshowitz's theory that the shallowness of the AI extinction-risk discourse stems from the assumption that you must be either for technological progress or against it.