

Algocracy and Transhumanism Podcast
John Danaher
Interviews with experts and occasional audio essays about the philosophy of the future.
Episodes

Aug 8, 2018
Episode #43 – Elder on Friendship, Robots and Social Media
In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy – primarily Chinese and Greek – in order to think about current problems. She is the author of a number of articles on the philosophy of friendship, and her book, Friendship, Robots, and Social Media: False Friends and Second Selves, came out in January 2018. We talk about all things to do with friendship, social media and social robots.
You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).
Show Notes
0:00 – Introduction
1:37 – Aristotle’s theory of friendship
5:00 – The idea of virtue/character friendship
10:14 – The enduring appeal of Aristotle’s account of friendship
12:30 – Does social media corrode friendship?
16:35 – The Publicity Objection to online friendships
20:40 – The Superficiality Objection to online friendships
25:23 – The Commercialisation/Contamination Objection to online friendships
30:34 – Deception in online friendships
35:18 – Must we physically interact with our friends?
39:25 – Social robots as friends (with a specific focus on elderly populations and those on the autism spectrum)
46:50 – Can you be friends with a robot? The counterfeit currency analogy
50:55 – Does the analogy hold up?
56:13 – Why are robotic friends assumed to be fake?
1:03:50 – Does the ‘falseness’ of robotic friends depend on the type of friendship we are interested in?
1:06:38 – What about companion animals?
1:08:35 – Where is this debate going?
Relevant Links
Alexis Elder’s webpage
‘Excellent Online Friendships: An Aristotelian Defence of Social Media’ by Alexis
‘False Friends and False Coinage: a tool for navigating the ethics of sociable robots’ by Alexis
Friendship, Robots and Social Media by Alexis
‘Can you be friends with a robot? Aristotelian Friendship and Robotics’ by John Danaher

Jul 25, 2018
Episode #42 – Earp on Psychedelics and Moral Enhancement
In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about moral enhancement and the potential use of psychedelics as a form of moral enhancement.
You can download the episode here or listen below. You can also subscribe to the podcast on iTunes and Stitcher (the RSS feed is here).
Show Notes
0:00 – Introduction
1:53 – Why psychedelics and moral enhancement?
5:07 – What is moral enhancement anyway? Why are people excited about it?
7:12 – What are the methods of moral enhancement?
10:18 – Why is Brian sceptical about the possibility of moral enhancement?
14:16 – So is it an empty idea?
17:58 – What if we adopt an ‘extended’ concept of enhancement, i.e. beyond the biomedical?
26:12 – Can we use psychedelics to overcome the dilemma facing the proponent of moral enhancement?
29:07 – What are psychedelic drugs? How do they work on the brain?
34:26 – Are your experiences whilst on psychedelic drugs conditional on your cultural background?
37:39 – Dissolving the ego and the feeling of oneness
41:36 – Are psychedelics the new productivity hack?
43:48 – How can psychedelics enhance moral behaviour?
47:36 – How can a moral philosopher make sense of these effects?
51:12 – The MDMA case study
58:38 – How about MDMA-assisted political negotiations?
1:02:11 – Could we achieve the same outcomes without drugs?
1:06:52 – Where should the research go from here?
Relevant Links
Brian’s academia.edu page
Brian’s researchgate page
Brian as Rob Walker (and his theatre reel)
‘Psychedelic moral enhancement’ by Brian Earp
‘Moral Neuroenhancement’ by Earp, Douglas and Savulescu
How to Change Your Mind by Michael Pollan
Interview with Ole Martin Moen on the ethics of psychedelics
The Doors of Perception by Aldous Huxley
Roland Griffiths Laboratory at Johns Hopkins

Jul 12, 2018
Episode #41 – Binns on Fairness in Algorithmic Decision-Making
In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher in the Department of Computer Science at the University of Oxford. His research focuses on the technical, ethical, and legal aspects of privacy, machine learning, and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism.
You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).
Show notes
0:00 – Introduction
1:46 – What is algorithmic decision-making?
4:20 – Isn’t all decision-making algorithmic?
6:10 – Examples of unfairness in algorithmic decision-making: The COMPAS debate
12:02 – Limitations of the COMPAS debate
15:22 – Other examples of unfairness in algorithmic decision-making
17:00 – What is discrimination in decision-making?
19:45 – The mental state theory of discrimination
25:20 – Statistical discrimination and the problem of generalisation
29:10 – Defending algorithmic decision-making from the charge of statistical discrimination
34:40 – Algorithmic typecasting: Could we all end up like William Shatner?
39:02 – Egalitarianism and algorithmic decision-making
43:07 – The role that luck and desert play in our understanding of fairness
49:38 – Deontic justice and historical discrimination in algorithmic decision-making
53:36 – Fair distribution vs Fair recognition
59:03 – Should we be enthusiastic about the fairness of future algorithmic decision-making?
Relevant Links
Reuben’s homepage
Reuben’s institutional page
‘Fairness in Machine Learning: Lessons from Political Philosophy’ by Reuben Binns
‘Algorithmic Accountability and Public Reason’ by Reuben Binns
‘It’s Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making’ by Binns et al
‘Machine Bias’ – the ProPublica story on unfairness in the COMPAS recidivism algorithm
‘Inherent Tradeoffs in the Fair Determination of Risk Scores’ by Kleinberg et al – an impossibility proof showing that no risk score can simultaneously be well-calibrated for two groups and assign the same average score to the positive class (and to the negative class) in each group, except in the special cases where the two groups have the same base rate or prediction is perfect (see the sketch below)
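To make the flavour of this result concrete, here is a minimal numeric sketch. It is my own illustration rather than code from the paper, and it uses Chouldechova's closely related identity (linking a group's base rate, the classifier's positive predictive value, and its error rates) instead of Kleinberg et al's exact formulation; all the numbers are hypothetical.

```python
# Chouldechova's identity: for each group,
#     FPR = (r / (1 - r)) * ((1 - PPV) / PPV) * (1 - FNR)
# where r is the group's base rate. If PPV (calibration at the decision
# threshold) and FNR are equalised across groups, the FPRs are fixed by
# the base rates -- and must differ whenever the base rates differ.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate forced on a group by its base rate, PPV and FNR."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hypothetical groups with equal PPV (0.7) and equal FNR (0.2) but
# different base rates:
for group, base_rate in [("A", 0.30), ("B", 0.50)]:
    print(f"Group {group}: implied FPR = {implied_fpr(base_rate, 0.7, 0.2):.3f}")

# Group A: implied FPR = 0.147
# Group B: implied FPR = 0.343
# Equalising calibration and miss rates forces the false positive rates apart.
```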

Jun 29, 2018
Episode #40 – Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars
In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and on who should be held responsible if something goes wrong. We chat about these issues and more.
You can download the podcast here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).
Show Notes:
0:00 – Introduction
1:22 – What is a self-driving car?
3:00 – Fatal crashes involving self-driving cars
5:10 – Could self-driving cars ever be completely safe?
8:14 – Limitations of the Trolley Problem
11:22 – What kinds of accident scenarios do we need to plan for?
17:18 – Who should decide which ethical rules a self-driving car follows?
23:47 – Why not randomise the ethical rules?
25:18 – Experimental findings on people’s preferences with self-driving cars
29:16 – Is this just another typical applied ethical debate?
31:27 – What would a utilitarian self-driving car do?
36:30 – What would a Kantian self-driving car do?
39:33 – A contractualist approach to the ethics of self-driving cars
43:54 – The responsibility gap problem
46:12 – Scepticism of the responsibility gap: can self-driving cars be agents?
53:17 – A collaborative agency approach to self-driving cars
58:18 – So who should we blame if something goes wrong?
1:03:40 – Is there a duty to hand over driving to machines?
1:07:30 – Must self-driving cars be programmed to kill?
Relevant Links
Sven’s faculty webpage
‘The Ethics of Crashes with Self-Driving Cars, A Roadmap I’ by Sven
‘The Ethics of Crashes with Self-Driving Cars, A Roadmap II’ by Sven
‘Attributing Responsibility to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility Loci’ by Sven
‘The Ethics of Accident Algorithms for Self-Driving Cars: An Applied Trolley Problem’ by Nyholm and Smids
‘Automated Cars meet Human Drivers: responsible human-robot coordination and the ethics of mixed traffic’ by Nyholm and Smids
Episode #3 with Sven on Love Drugs, DBS and Self-Driving Cars
Episode #23 with Liu on Responsibility and Discrimination in Self-Driving Cars

Jun 4, 2018
Episode #39 – Re-engineering Humanity with Frischmann and Selinger
In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is Professor of Philosophy at the Rochester Institute of Technology. Their book looks at how modern techno-social engineering is affecting humanity. We have a wide-ranging conversation about the main arguments and ideas from the book. The book features lots of interesting thought experiments and provocative claims. I recommend checking it out. A highlight of this conversation for me was our discussion of the ‘Free Will Wager’ and how it pertains to debates about technology and social engineering.
You can listen to the episode below or download it here. You can also subscribe on Stitcher and iTunes (the RSS feed is here).
Show Notes
0:00 – Introduction
1:33 – What is techno-social engineering?
7:55 – Is techno-social engineering turning us into simple machines?
14:11 – Digital contracting as an example of techno-social engineering
22:17 – The three important ingredients of modern techno-social engineering
29:17 – The Digital Tragedy of the Commons
34:09 – Must we wait for a Leviathan to save us?
44:03 – The Free Will Wager
55:00 – The problem of Engineered Determinism
1:00:03 – What does it mean to be self-determined?
1:12:03 – Solving the problem? The freedom to be off
Relevant Links
Evan Selinger’s homepage
Brett Frischmann’s homepage
Re-engineering Humanity – website
‘Reverse Turing Tests: Are humans becoming more machine-like?’ by me
Episode 4 with Evan Selinger on Privacy and Algorithmic Outsourcing
Episode 7 with Brett Frischmann on Human-Focused Turing Tests
Gregg Caruso on ‘Free Will Skepticism and Its Implications: An Argument for Optimism’
Derk Pereboom on Relationships and Free Will

Mar 27, 2018
Episode #38 – Schwartz on the Ethics of Space Exploration
In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University. His primary area of research is the philosophy and ethics of space exploration, where he defends the view that space exploration derives its value primarily from the scientific study of the Solar System. He is editor (with Tony Milligan) of The Ethics of Space Exploration (Springer 2016) and his publications have appeared in Advances in Space Research, Space Policy, Acta Astronautica, Astropolitics, Environmental Ethics, Ethics & the Environment, and Philosophia Mathematica. He has also contributed chapters to The Meaning of Liberty Beyond Earth, Human Governance Beyond Earth, and Dissent, Revolution and Liberty Beyond Earth (each edited by Charles Cockell), and to Yearbook on Space Policy 2015. He is currently working on a book project, The Value of Space Science. We talk about all things space-related, including the scientific case for space exploration and the myths that befuddle space advocacy.
You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).
Show Notes
0:00 – Introduction
1:40 – Why did James get interested in the philosophy of space?
3:17 – Is interest in the philosophy and ethics of space exploration on the rise?
6:05 – Do space ethicists always say “no”?
8:20 – Do we have a duty to explore space? If so, what kind of duty is this?
10:30 – Space exploration and the duty to ensure species survival
16:16 – The link between space ethics and environmental ethics: between misanthropy and anthropocentrism
19:33 – How would space exploration help human survival?
23:20 – The scientific value of space exploration: manned or unmanned?
28:30 – Why does the scientific case for space exploration take priority?
35:40 – Is it our destiny to explore space?
38:46 – Thoughts on Elon Musk and the Colonisation Project
44:34 – The Myths of Space Advocacy
51:40 – From space philosophy to space policy: getting rid of the myths
58:55 – The future of space philosophy
Relevant Links
Dr Schwartz’s website – The Space Philosopher (with links to papers and works in progress)
‘Space Settlement: What’s the rush?’ – by James Schwartz
Myth-Free Space Advocacy Part I, Part II, Part III, Part IV – by James Schwartz
Video of James’s lecture on Worldship Ethics
‘Prioritizing Scientific Exploration: A Comparison of Ethical Justifications for Space Development and Space Science’ – by James Schwartz
Episode 37 with Christopher Yorke (middle section deals with the prospects for a utopia in space).

Mar 3, 2018
Episode #37 – Yorke on the Philosophy of Utopianism
In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a ‘utopia’ is, why space exploration is associated with utopian thinking, and whether Bernard Suits is correct to say that games are the highest ideal of human existence.
You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).
Show Notes
0:00 – Introduction
2:00 – Why did Christopher choose to study utopianism?
6:44 – What is a ‘utopia’? Defining the ideal society
14:00 – Is utopia practically achievable?
19:34 – Why are dystopias easier to imagine than utopias?
23:00 – Blueprints vs Horizons – different understandings of the utopian project
26:40 – What do philosophers bring to the study of utopia?
30:40 – Why is space exploration associated with utopianism?
39:20 – Kant’s Perpetual Peace vs the Final Frontier
47:09 – Suits’s Utopia of Games: What is a game?
53:16 – Is game-playing the highest ideal of human existence?
1:01:15 – What kinds of games will Suits’s utopians play?
1:14:41 – Is a post-instrumentalist society really intelligible?
Relevant Links
Christopher Yorke’s Academia.edu page
‘Prospects for Utopia in Space’ by Christopher Yorke
‘Endless Summer: What kinds of games will Suits’s Utopians Play?’ by Christopher Yorke
‘The Final Frontier: Space Exploration as Utopia Project’ by John Danaher
‘The Utopia of Games: Intelligible or Unintelligible’ by John Danaher
Other posts on utopianism and the good life
The Grasshopper by Bernard Suits

Jan 27, 2018
Episode #36 – Wachter on Algorithms, Explanations and the GDPR
In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra’s research focuses on the legal and ethical implications of Big Data, AI, and robotics, as well as governmental surveillance, predictive policing, and human rights online. Her current work deals with the ethical design of algorithms, including the development of standards and methods to ensure fairness, accountability, transparency, interpretability, and group privacy in complex algorithmic systems.
You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (the RSS feed is here).
Show Notes
0:00 – Introduction
2:05 – The rise of algorithmic/automated decision-making
3:40 – Why are algorithmic decisions so opaque? Why is this such a concern?
5:25 – What are the benefits of algorithmic decisions?
7:43 – Why might we want a ‘right to explanation’ of algorithmic decisions?
11:05 – Explaining specific decisions vs. explaining decision-making systems
15:48 – Introducing the GDPR – What is it and why does it matter?
19:29 – Is there a right to explanation embedded in Article 22 of the GDPR?
23:30 – The limitations of Article 22
27:40 – When do algorithmic decisions have ‘significant effects’?
29:30 – Is there a right to explanation in Articles 13 and 14 of the GDPR (the ‘notification duties’ provisions)?
33:33 – Is there a right to explanation in Article 15 (the access right provision)?
37:45 – Is there any hope that a right to explanation might be interpreted into the GDPR?
43:04 – How could we explain algorithmic decisions? Introducing counterfactual explanations
47:55 – Clarifying the concept of a counterfactual explanation
51:00 – Criticisms and limitations of counterfactual explanations
Relevant Links
Sandra’s profile page at the Oxford Internet Institute
Sandra’s academia.edu page
‘Why a right to explanation does not exist in the General Data Protection Regulation’ by Wachter, Mittelstadt and Floridi
‘Counterfactual explanations without opening the black box: Automated decisions and the GDPR’ by Wachter, Mittelstadt and Russell (a toy code sketch follows at the end of this entry)
The General Data Protection Regulation
Article 29 working party guidance on the GDPR
Do judges make stricter sentencing decisions when they are hungry? and a Reply
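As a toy illustration of the counterfactual explanations discussed from 43:04 onwards: the idea is to report the smallest change to a person's input features that would have flipped the automated decision. The sketch below is my own, not code from the Wachter, Mittelstadt and Russell paper; the loan model, its weights, and the feature names are all hypothetical, and a brute-force grid search stands in for the paper's optimisation-based formulation.

```python
import itertools

def approve(income: float, debt: float) -> bool:
    """Hypothetical loan model: approve when a linear score crosses a threshold."""
    return 2.0 * income - 1.5 * debt >= 10.0

def counterfactual(income: float, debt: float,
                   step: float = 0.5, max_delta: float = 10.0):
    """Find the closest (L1 distance) feature change that flips the decision."""
    original = approve(income, debt)
    pos = [i * step for i in range(int(max_delta / step) + 1)]
    deltas = sorted(set(pos + [-d for d in pos]))
    best, best_dist = None, float("inf")
    for d_income, d_debt in itertools.product(deltas, deltas):
        if approve(income + d_income, debt + d_debt) != original:
            dist = abs(d_income) + abs(d_debt)
            if dist < best_dist:
                best, best_dist = (income + d_income, debt + d_debt), dist
    return best

# A denied applicant (income 5.0, debt 2.0) receives the explanation:
# 'you would have been approved had your income been 6.5' -- an account
# of the decision that never opens the black box.
print(counterfactual(income=5.0, debt=2.0))  # -> (6.5, 2.0)
```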

Jan 15, 2018
Episode #35 – Brundage on the Case for Conditional Optimism about AI
In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford’s Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI). His research focuses on the societal implications of artificial intelligence. We discuss the case for conditional optimism about AI.
You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).
Show Notes
0:00 – Introduction
1:00 – Why did Miles write the conditional case for AI optimism?
5:07 – What is AI anyway?
8:26 – The difference between broad and narrow forms of AI
12:00 – Is the current excitement around AI hype or reality?
16:13 – What is the conditional case for AI conditional upon?
22:00 – The First Argument: The Value of Task Expedition
29:30 – The downsides of task expedition and the problem of speed mismatches
33:28 – How AI changes our cognitive ecology
36:00 – The Second Argument: The Value of Improved Coordination
40:50 – Wouldn’t AI be used for malicious purposes too?
45:00 – Can we create safe AI in the absence of global coordination?
48:03 – The Third Argument: The Value of a Leisure Society
52:30 – Would a leisure society really be utopian?
56:24 – How were Miles’s arguments received when presented at the EU parliament?
Relevant Links
Miles’s Homepage
Miles’s past publications
Miles at the Future of Humanity Institute
Video of Miles’s presentation to the EU Parliament (starts at approx 10:05:19 or 1 hour and 1 minute into the video)
Olle Haggstrom’s write-up about the EU parliament event
‘Cognitive Scarcity and Artificial Intelligence‘ by Miles Brundage and John Danaher