LaMDA was trained specifically on dialogue. It is, again, simply drawing on patterns in its training data. But it can actually be quite seductive if you start having conversations with a system that's responding to you. We've reached a point in machine learning where what we're really doing is large-scale statistical analysis: how often does one word come after another? In that sense, these systems are not intelligent. They're very similar to something like the autocomplete that you might use.
Last week an engineer at Google claimed that an AI chatbot he worked with, known as LaMDA, had become ‘sentient’. Blake Lemoine published a transcript of his conversations with LaMDA that included responses about having feelings and fearing death. But could it really be conscious? AI researcher and author Kate Crawford speaks to Ian Sample about how LaMDA actually works, and why we shouldn’t worry about the inner life of software – for now. Help support our independent journalism at
theguardian.com/sciencepod