Daniel Oberhaus, "The Silicon Shrink: How Artificial Intelligence Made the World an Asylum" (MIT Press, 2025)
Feb 4, 2025
Daniel Oberhaus, a science and technology journalist, discusses his book, "The Silicon Shrink," which addresses the unsettling intersection of AI and psychiatry. He reveals how AI promises superhuman accuracy in detecting mental disorders but lacks evidence of improving outcomes. Oberhaus shares personal insights after losing his sister to suicide, highlighting the risks of psychiatric surveillance and manipulation through algorithms. He introduces the concept of "swipe psychology," warning about the ethical dilemmas posed by AI in mental health care.
Oberhaus's personal tragedy fueled his exploration of AI's potential in psychiatry and his search for digital indicators of mental health struggles.
The episode emphasizes the ambiguity of psychiatric evaluation, distinguishing between mental disorders, illnesses, and diseases and raising doubts about AI's diagnostic reliability.
Ethical concerns regarding patient data privacy and surveillance highlight the urgent need for transparency in the deployment of AI technologies within mental health care.
Deep dives
The Personal Catalyst for Exploration
The speaker begins by sharing the deeply personal story that motivated his exploration of AI in psychiatry: the loss of his sister to suicide. This experience led him to investigate whether technology could have identified her struggles earlier through her extensive digital footprint, including her social media activity. The pursuit initially aimed to uncover potential early indicators of mental health crises, given the vast amount of data that may exist in people's online interactions. As the exploration continued, however, it shifted toward recognizing the significant risks and implications of deploying AI in mental health care without sufficient understanding.
Defining Mental Disorders in Context
A crucial discussion centers on clarifying definitions, differentiating between mental illness, mental disorder, and mental disease. Mental diseases have identifiable biological origins, while mental illnesses are socially and subjectively judged. The potential for misdiagnosis highlights the ambiguity of psychiatric evaluations, which often rely on symptoms rather than biological markers. This lack of diagnostic precision raises questions about the reliability of AI applications built on these flawed categories.
The Historical Interplay of AI and Psychiatry
The historical context reveals that the intersection of AI and psychiatry is not a novel development, tracing back to the mid-20th century. Early efforts in AI sought to model human intelligence, and early projects like ELIZA served as rudimentary forms of computer-aided therapy. The development of psychiatry, however, has often intersected with controversy, particularly regarding ethical concerns and the effectiveness of AI-driven methodologies. The evolution of AI from a theoretical framework to practical applications raises questions about whether these technologies can genuinely contribute to mental health care.
Contemporary Challenges in Psychiatry
Modern psychiatry confronts persistent problems of unreliable diagnosis and ineffective treatment. Despite advances in pharmaceutical interventions, mental health outcomes have deteriorated over the years, evidenced by rising suicide rates and increasing diagnoses of anxiety and depression. AI is presented as a potential solution to improve diagnostic accuracy and patient care, yet evidence supporting its efficacy remains sparse. Consequently, there is widespread skepticism about whether AI can deliver tangible benefits to mental health care without replicating past mistakes.
The Ethical Implications of AI in Mental Health
The ethical considerations surrounding AI implementation in psychiatry underscore critical societal risks, particularly regarding patient data privacy and potential biases. AI systems depend on a plethora of personal data that may lead to intrusive monitoring in everyday life, prompting concerns about autonomy and consent. The invisible nature of these monitoring systems creates an environment where individuals may unknowingly partake in surveillance under the guise of mental health support. Accordingly, there is an urgent need for transparency and comprehensive evaluation of AI tools in psychiatric contexts to safeguard patients' rights.
AI psychiatrists promise to detect mental disorders with superhuman accuracy, provide affordable therapy for those who can't afford or can't access treatment, and even invent new psychiatric drugs. But the hype obscures an unnerving reality. In The Silicon Shrink: How Artificial Intelligence Made the World an Asylum (MIT Press, 2025), Daniel Oberhaus tells the inside story of how the quest to use AI in psychiatry has created the conditions to turn the world into an asylum. Most of these systems, he writes, have vanishingly little evidence that they improve patient outcomes, but the risks they pose have less to do with technological shortcomings than with the application of deeply flawed psychiatric models of mental disorder at unprecedented scale.
Oberhaus became interested in the subject of mental health after tragically losing his sister to suicide. In The Silicon Shrink, he argues that these new, ostensibly therapeutic technologies already pose significant risks to vulnerable people, and they won't stop there. These new breeds of AI systems are creating a psychiatric surveillance economy in which the emotions, behavior, and cognition of everyday people are subtly manipulated by psychologically savvy algorithms that have escaped the clinic. Oberhaus also introduces readers to the concept of “swipe psychology,” which is quickly establishing itself as the dominant mode of diagnosing and treating mental disorders.
It is not too late to change course, but to do so means we must reckon with the nature of mental illness, the limits of technology, and what it means to be human.
This interview was conducted by Dr. Miranda Melcher, whose new book focuses on post-conflict military integration, understanding treaty negotiation and implementation in civil war contexts through qualitative analysis of the Angolan and Mozambican civil wars.