Is There a Mind in Language Models?
We are very good at anthropomorphising. Imagine what we're doing when we see something that looks like actual language. There's great potential for harm. A startup in France ran a study asking whether GPT-3 could be used for medical applications, and one of them was a mental health chatbot. You wouldn't want even an untrained person handling sensitive discussions with somebody who's in distress, let alone a machine that's just going to randomly say things based on previous patterns. It comes out with suggesting self-harm, right? And my reaction to reading this blog post was: what gave anybody the idea that this might be a reasonable application of this?