Kate Crawford: The real problem here is when these systems tend to produce these incredibly hateful forms of racist or misogynistic speech. It does make you think: instead of worrying about the supposed feelings of a computer programme, shouldn't we be focusing on the feelings of the real people who are using them? Kate Crawford: We have to ask questions about whether we really need these systems, and what their true costs are.
Last week an engineer at Google claimed that an AI chatbot he worked with, known as LaMDA, had become ‘sentient’. Blake Lemoine published a transcript of his conversations with LaMDA that included responses about having feelings and fearing death. But could it really be conscious? AI researcher and author Kate Crawford speaks to Ian Sample about how LaMDA actually works, and why we shouldn’t worry about the inner life of software – for now. Help support our independent journalism at
theguardian.com/sciencepod