

Professor Magda Osman on Psychological Harm
May 24, 2025
01:02:09
What is psychological harm, and can we really regulate it? Should an AI-companion app be allowed to dump the person who is using it?
📝 Episode Summary
On this episode, I’m joined once again by Professor Magda Osman, a returning guest who always has something compelling to say.
This time, we're talking about psychological harm, a term you’ve probably heard, but which remains vague, slippery, and surprisingly unhelpful when it comes to actually protecting people.
Together, we explore what psychological harm really means, why defining it matters, and why regulating it, especially in digital contexts, is so tricky.
We draw comparisons to physical harm, ask whether some emotional distress might be necessary, and consider what kinds of harm are moral rather than measurable.
The conversation touches on loneliness, AI companions, consent, and even chainsaws!
👤 Guest Biography
Magda is a Principal Research Associate at the Judge Business School, University of Cambridge, and holds a Professorial position at Leeds Business School, University of Leeds, where she supports policy impact.
She describes herself as a psychologist by training, with specific interests in decision-making under risk and uncertainty, folk beliefs about the unconscious, and the effectiveness of behaviour-change interventions.
Magda works at the intersection of behavioural science, regulation, and public policy, offering practical insights that challenge assumptions and bring clarity to complex issues.
⏱️ AI-Generated Timestamped Summary
[00:00:00] Introduction and framing of psychological harm
[00:02:00] The conceptual problems with defining psychological harm
[00:05:00] Psychological harm and the precautionary principle in digital regulation
[00:08:00] Social context, platform functions, and why generalisations don’t work
[00:12:00] The idea of rites of passage and unavoidable suffering
[00:15:00] AI companion apps and emotional dependency
[00:17:00] Exploitation, data harvesting, and moral transparency
[00:22:00] Frustration as normal vs. actual psychological damage
[00:26:00] The danger of regulating the trivial and the need for precision
[00:29:00] Why causal links are necessary for meaningful intervention
[00:33:00] Legal obligations and holding tech companies to account
[00:38:00] What users actually care about: privacy, data, trust
[00:42:00] Society’s negotiation of what counts as tolerable harm
[00:45:00] Why this isn’t an unprecedented problem — and how we’ve faced it before
[00:50:00] The risk of bad definitions leading to bad regulation
[00:54:00] Two contrasting examples of online services and their impacts
[00:57:00] What kind of regulation might we actually need?
[00:59:00] The case for rethinking how regulation itself is structured
[01:01:00] Where to find Magda’s work and final reflections
🔗 Links
Magda's LinkedIn profile: https://www.linkedin.com/in/magda-osman-11165138/
Her website: https://www.magdaosman.com/
Magda’s previous appearances on the show exploring:
Behavioural Interventions that fail
https://www.humanriskpodcast.com/dr-magda-osman-on-behavioural/
Unconscious Bias: what is it, and can we train people not to show it?
https://www.humanriskpodcast.com/dr-magda-osman-on-unconscious/
Compliance, Coercion & Competence
https://www.humanriskpodcast.com/professor-magda-osman-on-compliance-coercion-competence/
Misinformation
https://www.humanriskpodcast.com/professor-magda-osman-on-misinformation/
Risk Prioritisation
https://www.humanriskpodcast.com/professor-magda-osman-on-risk-prioritisation/