
Algocracy and Transhumanism Podcast
Interviews with experts and occasional audio essays about the philosophy of the future.
Latest episodes

Aug 28, 2019
#63 – Reagle on the Ethics of Life Hacking
In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the social implications of digital technology. Our conversation focuses on his most recent book: Hacking Life: Systematized Living and Its Discontents (MIT Press 2019).
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).
Show Notes
0:00 – Introduction
1:52 – What is life-hacking? The four features of life-hacking
4:20 – Life Hacking as Self Help for the 21st Century
7:00 – How does technology facilitate life hacking?
12:12 – How can we hack time?
20:00 – How can we hack motivation?
27:00 – How can we hack our relationships?
31:00 – The Problem with Pick-Up Artists
34:10 – Hacking Health and Meaning
39:12 – The epistemic problems of self-experimentation
49:05 – The dangers of metric fixation
54:20 – The social impact of life-hacking
57:35 – Is life hacking too individualistic? Should we focus more on systemic problems?
1:03:15 – Does life hacking encourage a less intuitive and less authentic mode of living?
1:08:40 – Conclusion (with some further thoughts on inequality)
Relevant Links
Joseph’s Homepage
Joseph’s Blog
Hacking Life: Systematized Living and Its Discontents (including open access HTML version)
The Lifehacker Website
The Quantified Self Website
Seth Roberts’ first and final column: Butter Makes me Smarter
The Couple that Pays Each Other to Put the Kids to Bed (story about the founders of the Beeminder App)
‘The Quantified Relationship‘ by Danaher, Nyholm and Earp
Episode 6 – The Quantified Self with Deborah Lupton

Jul 3, 2019
#62 – Häggström on AI Motivations and Risk Denialism
In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle’s main research is in probability theory and statistical mechanics, but in recent years he has broadened his research interests to focus on applied statistics, philosophy, climate science, artificial intelligence and the social consequences of future technologies. He is the author of Here Be Dragons: Science, Technology and the Future of Humanity (OUP 2016). We talk about AI motivations, specifically the Omohundro-Bostrom theory of AI motivation and its weaknesses. We also discuss AI risk denialism.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).
Show Notes
0:00 – Introduction
2:02 – Do we need to define AI?
4:15 – The Omohundro-Bostrom theory of AI motivation
7:46 – Key concepts in the Omohundro-Bostrom Theory: Final Goals vs Instrumental Goals
10:50 – The Orthogonality Thesis
14:47 – The Instrumental Convergence Thesis
20:16 – Resource Acquisition as an Instrumental Goal
22:02 – The importance of goal-content integrity
25:42 – Deception as an Instrumental Goal
29:17 – How the doomsaying argument works
31:46 – Critiquing the theory: the problem of self-referential final goals
36:20 – The problem of incoherent goals
42:44 – Does the truth of moral realism undermine the orthogonality thesis?
50:50 – Problems with the distinction between instrumental goals and final goals
57:52 – Why do some people deny the problem of AI risk?
1:04:10 – Strong versus Weak AI Scepticism
1:09:00 – Is it difficult to be taken seriously on this topic?
Relevant Links
Olle’s Blog
Olle’s webpage at Chalmers University
‘Challenges to the Omohundro-Bostrom framework for AI Motivations‘ by Olle (highly recommended)
‘The Superintelligent Will‘ by Nick Bostrom
‘The Basic AI Drives’ by Stephen Omohundro
Olle Häggström: Science, Technology, and the Future of Humanity (video)
Olle Häggström and Thore Husveldt debate AI Risk (video)
Summary of Bostrom’s theory (by me)
‘Why AI doomsayers are like sceptical theists and why it matters‘ by me

Jun 20, 2019
#61 – Yampolskiy on Machine Consciousness and AI Welfare
In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books and papers on AI security and ethics, including Artificial Superintelligence: A Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare.
You can listen below or download here. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).
Show Notes
0:00 – Introduction
2:30 – Artificial minds versus Artificial Intelligence
6:35 – Why talk about machine consciousness now when it seems far-fetched?
8:55 – What is phenomenal consciousness?
11:04 – Illusions as an insight into phenomenal consciousness
18:22 – How to create an illusion-based test for machine consciousness
23:58 – Challenges with operationalising the test
31:42 – Does AI already have a minimal form of consciousness?
34:08 – Objections to the proposed test and next steps
37:12 – Towards a science of AI welfare
40:30 – How do we currently test for animal and human welfare?
44:10 – Dealing with the problem of deception
47:00 – How could we test for welfare in AI?
52:39 – If an AI can suffer, do we have a duty not to create it?
56:48 – Do people take these ideas seriously in computer science?
58:08 – What next?
Relevant Links
Roman’s homepage
‘Detecting Qualia in Natural and Artificial Agents‘ by Roman
‘Towards AI Welfare Science and Policies‘ by Soenke Ziesche and Roman Yampolskiy
The Hard Problem of Consciousness
25 famous optical illusions
Could AI get depressed and have hallucinations?

May 20, 2019
#60 – Véliz on How to Improve Online Speech with Pseudonymity
In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research programme ‘Data, Privacy, and the Individual’ at IE’s Center for the Governance of Change. We talk about the problems with online speech and how to use pseudonymity to address them.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, and a variety of other podcasting services (the RSS feed is here).
Show Notes
0:00 – Introduction
1:25 – The problems with online speech
4:55 – Anonymity vs Identifiability
9:10 – The benefits of anonymous speech
16:12 – The costs of anonymous speech – The online Ring of Gyges
23:20 – How digital platforms mediate speech and make things worse
28:00 – Is speech more trustworthy when the speaker is identifiable?
30:50 – Solutions that don’t work
35:46 – How pseudonymity could address the problems with online speech
41:15 – Three forms of pseudonymity and how they should be used
44:00 – Do we need an organisation to manage online pseudonyms?
49:00 – Thoughts on the Journal of Controversial Ideas
54:00 – Will people use pseudonyms to deceive us?
57:30 – How pseudonyms could address the issues with un-PC speech
1:02:04 – Should we be optimistic or pessimistic about the future of online speech?
Relevant Links
Carissa’s Webpage
“Online Masquerade: Redesigning the Internet for Free Speech Through the Use of Pseudonyms” by Carissa
“Why you might want to think twice about surrendering online privacy for the sake of convenience” by Carissa
“What If Banks Were the Main Protectors of Customers’ Private Data?” by Carissa
The Secret Barrister
Delete: The Virtue of Forgetting in the Digital Age by Viktor Mayer-Schönberger
Mill’s Argument for Free Speech: A Guide
‘Here Comes the Journal of Controversial Ideas. Cue the Outcry‘ by Bartlett

May 9, 2019
#59 – Torres on Existential Risk, Omnicidal Agents and Superintelligence
In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin of the Atomic Scientists, Futures, Erkenntnis, Metaphilosophy, Foresight, Journal of Future Studies, and the Journal of Evolution and Technology. He is the author of several books, including most recently Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. We talk about the problem of apocalyptic terrorists, the proliferation of dual-use technology and the governance problem that arises as a result. This is both a fascinating and potentially terrifying discussion.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).
Show Notes
0:00 – Introduction
3:14 – What is existential risk? Why should we care?
8:34 – The four types of agential risk/omnicidal terrorists
17:51 – Are there really omnicidal terror agents?
20:45 – How dual-use technology gives apocalyptic terror agents the means to their desired ends
27:54 – How technological civilisation is uniquely vulnerable to omnicidal agents
32:00 – Why not just stop creating dangerous technologies?
36:47 – Making the case for mass surveillance
41:08 – Why mass surveillance must be asymmetrical
45:02 – Mass surveillance, the problem of false positives and dystopian governance
56:25 – Making the case for benevolent superintelligent governance
1:02:51 – Why advocate for something so fantastical?
1:06:42 – Is an anti-tech solution any more fantastical than a benevolent AI solution?
1:10:20 – Does it all just come down to values: are you a techno-optimist or a techno-pessimist?
Relevant Links
Phil’s webpage
‘Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History’ by Phil
Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks by Phil
‘The Vulnerable World Hypothesis’ by Nick Bostrom
Phil’s comparison of his paper with Bostrom’s paper
The Guardian orders the smallpox genome
Slaughterbots
The Future of Violence by Ben Wittes and Gabriela Blum
Future Crimes by Marc Goodman
The Dyn Cyberattack
Autonomous Technology by Langdon Winner
‘Biotechnology and the Lifetime of Technological Civilisations’ by JG Sotos
The God Machine Thought Experiment (Persson and Savulescu)

Apr 25, 2019
#58 – Neely on Augmented Reality, Ethics and Property Rights
In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and augmented reality. We chat about the ethics of augmented reality, with a particular focus on property rights and the problems that arise when we blend virtual and physical reality together in augmented reality platforms.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other services (the RSS feed is here).
Show Notes
0:00 – Introduction
1:00 – What is augmented reality (AR)?
5:55 – Is augmented reality overhyped?
10:36 – What are property rights?
14:22 – Justice and autonomy in the protection of property rights
16:47 – Are we comfortable with property rights over virtual spaces/objects?
22:30 – The blending problem: why augmented reality poses a unique problem for the protection of property rights
27:00 – The different modalities of augmented reality: single-sphere or multi-sphere?
30:45 – Scenario 1: Single-sphere AR with private property
34:28 – Scenario 2: Multi-sphere AR with private property
37:30 – Other ethical problems in scenario 2
43:25 – Augmented reality vs imagination
47:15 – Public property as contested space
49:38 – Scenario 3: Multi-sphere AR with public property
54:30 – Scenario 4: Single-sphere AR with public property
1:00:28 – Must the owner of the single-sphere AR platform be regulated as a public utility/entity?
1:02:25 – Other important ethical issues that arise from the use of AR
Relevant Links
Erica’s Homepage
‘Augmented Reality, Augmented Ethics: Who Has the Right to Augment a Particular Physical Space?‘ by Erica
‘The Ethics of Choice in Single Player Video Games‘ by Erica
‘The Risks of Revolution: Ethical Dilemmas in 3D Printing from a US Perspective‘ by Erica
‘Machines and the Moral Community‘ by Erica
IKEA Place augmented reality app
L’Oreal’s use of augmented reality make-up apps
Holocaust Museum Bans Pokemon Go

Apr 10, 2019
#57 – Sorgner on Nietzschean Transhumanism
In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow at the Ethics Centre of the Friedrich-Schiller-University in Jena. His main fields of research are Nietzsche, the philosophy of music, bioethics and meta-, post- and transhumanism. We talk about his case for a Nietzschean form of transhumanism.
You can download the episode here or listen below. You can also subscribe to the podcast on iTunes, Stitcher and a variety of other podcasting apps (the RSS feed is here).
Show Notes
0:00 – Introduction
2:12 – Recent commentary on Stefan’s book Ubermensch
3:41 – Understanding transhumanism – getting away from the “humanism on steroids” ideal
10:33 – Transhumanism as an attitude of experimentation and not a destination?
13:34 – Have we always been transhumanists?
16:51 – Understanding Nietzsche
22:30 – The Will to Power in Nietzschean philosophy
26:41 – How to understand “power” in Nietzschean terms
30:40 – The importance of perspectivalism and the abandonment of universal truth
36:40 – Is it possible for a Nietzschean to consistently deny absolute truth?
39:55 – The idea of the Ubermensch (Overhuman)
45:48 – Making the case for a Nietzschean form of transhumanism
51:00 – What about the negative associations of Nietzsche?
1:02:17 – The problem of moral relativism for transhumanists
Relevant Links
Stefan’s homepage
The Ubermensch: A Plea for a Nietzschean Transhumanism – Stefan’s new book (in German)
Posthumanism and Transhumanism: An Introduction – edited by Stefan and Robert Ranisch
“Nietzsche, the Overhuman and Transhumanism” by Stefan (open access)
“Beyond Humanism: Reflections on Trans and Post-humanism” by Stefan (a response to critics of the previous article)
Nietzsche at the Stanford Encyclopedia of Philosophy

Mar 30, 2019
#56 – Turner on Rules for Robots
In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI.
You can download the episode here or listen below. You can also subscribe to the show on iTunes, Stitcher and a variety of other services (the RSS feed is here).
Show Notes
0:00 – Introduction
1:33 – Why did Jacob write Robot Rules?
2:47 – Do we need special legal rules for AI?
6:34 – The responsibility ‘gap’ problem
11:50 – Private law vs criminal law: why it’s important to remember the distinction
14:08 – Is it easy to plug the responsibility gap in private law?
23:07 – Do we need to think about the criminal law responsibility gap?
26:14 – Is it absurd to hold AI criminally responsible?
30:24 – The problem with holding proximate humans responsible
36:40 – The positive side of responsibility: lessons from the Monkey selfie case
41:50 – What is legal personhood and what would it mean to grant it to an AI?
48:57 – Pragmatic reasons for granting an AI legal personhood
51:48 – Is this a slippery slope?
56:00 – Explainability and AI: Why is this important?
1:02:38 – Is there a right to explanation under EU law?
1:06:16 – Is explainability something that requires a technical solution not a legal solution?
1:08:32 – The danger of fetishising explainability
Relevant Links
Robot Rules: Regulating Artificial Intelligence
Website for the book
Jacob on Twitter
Jacob giving a lecture about the book at the University of Law
“Robots, Law and the Retribution Gap” by John Danaher
The Darknet Shopper Case
The Monkey Selfie Case
Algorithmic Entities by Lynn LoPucki (discussing Shawn Bayern’s argument)
Matthew Scherer’s critique of Bayern’s claim that AIs can already acquire legal personhood

Mar 13, 2019
#55 – Baum on the Long-Term Future of Human Civilisation
In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global Catastrophic Risk Institute. He is also a Research Affiliate of the University of Cambridge Centre for the Study of Existential Risk. We talk about the importance of studying the long-term future of human civilisation, and map out four possible trajectories for the long-term future.
You can download the episode here or listen below. You can also subscribe on a variety of different platforms, including iTunes, Stitcher, Overcast, Podbay, Player FM and more. The RSS feed is available here.
Show Notes
0:00 – Introduction
1:39 – Why did Seth write about the long-term future of human civilisation?
5:15 – Why should we care about the long-term future? What is the long-term future?
13:12 – How can we scientifically and ethically study the long-term future?
16:04 – Is it all too speculative?
20:48 – Four possible futures, briefly sketched: (i) status quo; (ii) catastrophe; (iii) technological transformation; and (iv) astronomical
23:08 – The Status Quo Trajectory – Keeping things as they are
28:45 – Should we want to maintain the status quo?
33:50 – The Catastrophe Trajectory – Awaiting the likely collapse of civilisation
38:58 – How could we restore civilisation post-collapse? Should we be working on this now?
44:00 – Are we under-investing in research into post-collapse restoration?
49:00 – The Technological Transformation Trajectory – Radical change through technology
52:35 – How desirable is radical technological change?
56:00 – The Astronomical Trajectory – Colonising the solar system and beyond
58:40 – Is the colonisation of space the best hope for humankind?
1:07:22 – How should the study of the long-term future proceed from here?
Relevant Links
Seth’s homepage
The Global Catastrophic Risk Institute
“Long-Term Trajectories for Human Civilisation” by Baum et al
“The Perils of Short-Termism: Civilisation’s Greatest Threat” by Fisher, BBC News
The Knowledge by Lewis Dartnell
“Space Colonization and the Meaning of Life” by Baum, Nautilus
“Astronomical Waste: The Opportunity Cost of Delayed Technological Development” by Nick Bostrom
“Superintelligence as a Cause or Cure for Risks of Astronomical Suffering” by Kaj Sotala and Lucas Gloor
“Space Colonization and Suffering Risks” by Phil Torres
“Thomas Hobbes in Space: The Problem of Intergalactic War” by John Danaher

Feb 28, 2019
#54 – Sebo on the Moral Problem of Other Minds
In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University. Jeff’s research focuses on bioethics, animal ethics, and environmental ethics. He has co-authored two books: Chimpanzee Rights and Food, Animals, and the Environment. We talk about something Jeff calls the ‘moral problem of other minds’, which is roughly the problem of what we should do if we aren’t sure whether another being is sentient or not.
You can download the episode here or listen below. You can also subscribe to the show on iTunes and Stitcher (the RSS feed is here).
Show Notes
0:00 – Introduction
1:38 – What inspired Jeff to think about the moral problem of other minds?
7:55 – The importance of sentience and our uncertainty about it
12:32 – The three possible responses to the moral problem of other minds: (i) the incautionary principle; (ii) the precautionary principle and (iii) the expected value principle
15:26 – Understanding the Incautionary Principle
20:09 – Problems with the Incautionary Principle
23:14 – Understanding the Precautionary Principle: More plausible than the incautionary principle?
29:20 – Is morality a zero-sum game? Is there a limit to how much we can care about other beings?
35:02 – The problem of demandingness in moral theory
37:06 – Other problems with the precautionary principle
41:41 – The Utilitarian Version of the Expected Value Principle
47:36 – The problem of anthropocentrism in moral reasoning
53:22 – The Kantian Version of the Expected Value Principle
59:08 – Problems with the Kantian principle
1:03:54 – How does the moral problem of other minds transfer over to other cases, e.g. abortion and uncertainty about the moral status of the foetus?
Relevant Links
Jeff’s Homepage
‘The Moral Problem of Other Minds’ by Jeff
Chimpanzee Rights by Jeff and others
Food, Animals and the Environment by Jeff and Christopher Schlottman
‘Consider the Lobster‘ by David Foster Wallace
‘Ethical Behaviourism in the Age of the Robot’ by John Danaher
Episode 48 with David Gunkel on Robot Rights