On the Ethics of AI
Artificial Intelligence (AI) is the Big Tech buzzword of the day. Every company that wants investment (public or private) is scrambling to have an “AI story”, adding chatbots and ‘agentic’ features to their products wherever possible. The AI companies themselves are constantly expanding their models, ingesting as much data (including highly personal information) as possible. In this AI gold rush, companies are shipping flawed and often harmful products. They’re firing workers and trying to replace them with AI bots. And all of it forces us to question what’s real, what has actual value, and what the impacts could and should be on society as a whole. Discussing deep questions like these is the purview of philosophers, and today I’ll be welcoming back someone uniquely and supremely qualified to address them: Carissa Véliz.
Interview Notes
- Carissa Véliz: https://www.carissaveliz.com/
- Privacy is Power: https://www.carissaveliz.com/books
- Carissa’s research: https://www.carissaveliz.com/research
- Moral Zombies: https://link.springer.com/article/10.1007/s00146-021-01189-x
- ChatGPT suicide: https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
- TESCREAL: https://en.wikipedia.org/wiki/TESCREAL
- John Oliver on AI Slop: https://www.youtube.com/watch?v=TWpg1RmzAbc
- Proton Lumo: https://proton.me/blog/lumo-ai
- EU’s “public good” LLM: https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html
Further Info
- My book: https://fdsd.me/book
- My newsletter: https://fdsd.me/newsletter
- Support the mission: https://fdsd.me/support
- Give the gift of privacy and security: https://fdsd.me/coupons
- Get your Firewalls Don’t Stop Dragons Merch! https://fdsd.me/merch
Table of Contents
- 0:00:00: Intro
- 0:05:09: What does “artificial intelligence” really mean?
- 0:13:21: Should STEM degrees require ethics training?
- 0:17:20: Does anthropomorphizing AI undermine our discourse?
- 0:22:35: What is the TESCREAL view of AI?
- 0:28:09: Can we infuse AI tools with human morality?
- 0:34:31: What are the dangers of training AI on copyrighted works?
- 0:42:16: What happens when AI starts ingesting its own output?
- 0:44:27: Can we make AI systems that are truly private?
- 0:48:08: How should we assign liability for AI harms?
- 0:51:06: Is AI eroding our ability to trust anything?
- 0:54:06: What happens when AI obviates the need to work at all?
- 1:00:00: How do we maximize the benefits and minimize the harms of AI?
- 1:03:20: Interview wrap-up
- 1:06:06: Patron podcast preview
- 1:07:08: Looking ahead