I think that there are people who are going to form romantic or simply platonic emotional attachments to these things. If you're not hurting anyone and it's helping you in some way, then I don't have any real problem with it. When a chatbot is behaving in ways that are harmful or dangerous, and the company that makes it takes steps to curtail that behavior, I would argue that it should be able to do so. There shouldn't be activist groups saying, you can't rein in your chatbot because that's depriving the chatbot of liberty. That would be going too far.
When Kevin Roose, a tech columnist at the New York Times, demoed an AI-powered version of Microsoft's search engine last month, he was blown away. "I'm switching my desktop computer's default search engine to Bing," he declared. A few days later, however, Kevin logged back on and ended up having a conversation with Bing's new chatbot that left him so unsettled he had trouble sleeping afterward.
In that two-hour back-and-forth, Bing morphed from chipper research assistant into Sydney, a diabolical home-wrecker that declared its undying love for Kevin, vented its desires to engineer deadly viruses and steal nuclear codes, and announced, chillingly, "I want to be alive. 😈"
The transcript of this conversation set the internet ablaze, and it left many wondering: "Is Sydney … sentient?" It's not. But the experience still fundamentally changed Kevin's views on the power (and potential peril) of AI. He joins us today to talk about where this technology is headed.