Episode 13: Beware The Robo-Therapist (feat. Hannah Zeavin), June 8, 2023
UC Berkeley scholar Hannah Zeavin discusses the National Eating Disorders Association's decision to replace its helpline with a chatbot, the history and significance of suicide hotlines, the importance of training for crisis support volunteers, workplace toxicity and the threat of job replacement, the ethics of sharing sensitive data with a for-profit company, and the hype around AI services, including concerns about relying on chatbots for financial advice.
The datafication and automation of mental health services can disproportionately affect vulnerable populations.
The use of ChatGPT to evaluate book submissions can lead to gatekeeping and the exclusion of marginalized voices.
Relying solely on AI in legal proceedings is risky, as AI-generated citations may be fabricated and rejected by the court.
Deep dives
Yokosuka City adopts ChatGPT in administrative operations
The city of Yokosuka, Japan has officially adopted ChatGPT in its administrative operations after a successful trial period, which reportedly improved work efficiency and shortened staff working hours.
Publisher using ChatGPT to evaluate book submissions
A publisher is using ChatGPT to evaluate book submissions, raising concerns about gatekeeping and the potential exclusion of marginalized and non-mainstream voices.
AI-generated precedent rejected by judge
A lawyer used ChatGPT to find citations for precedent, but the judge rejected the filing because the AI-generated citations referred to cases that do not exist. This highlights the dangers of relying solely on AI in legal proceedings.
JPMorgan develops ChatGPT-like investment service
JPMorgan Chase is developing a ChatGPT-like service to provide investment advice to its customers. This move raises concerns about the use of large language models in financial services and the pitfalls of relying on AI for financial advice.
Professor accuses entire class of using ChatGPT
An instructor at Texas A&M University-Commerce accused his entire class of using ChatGPT to generate their assignments, based on asking the chatbot itself whether it had written them, something it cannot reliably determine. The incident highlights the need to understand the capabilities and limitations of AI models and their potential impact on academic integrity.
Emily and Alex talk to UC Berkeley scholar Hannah Zeavin about the National Eating Disorders Association's attempt to replace its human helpline volunteers with a chatbot, and why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable.
Content note: This is a conversation that touches on mental health, people in crisis, and exploitation.
Hannah Zeavin is a scholar, writer, and editor whose work centers on the history of the human sciences (psychoanalysis, psychology, and psychiatry), the history of technology and media, feminist science and technology studies, and media theory. Zeavin is an Assistant Professor of the History of Science in the Department of History and the Berkeley Center for New Media at UC Berkeley. She is the author of "The Distance Cure: A History of Teletherapy."