

Ethical Machines
Reid Blackman
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
Episodes
Dec 11, 2025 • 46min
AI is Not a Normal Technology
When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Finn Morehouse, a research fellow at Forethought, a nonprofit research organization, thinks that neither of these extremes is the most likely outcome. In fact, he thinks that one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.
Dec 4, 2025 • 59min
We Are All Responsible for AI, Part 2
In the last episode, Brian Wong argued that there’s a “gap” between the harms that developing and using AI causes, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In part 2, Brian explains how he thinks about what responsibility is and how it has implications for our social responsibilities.
Nov 20, 2025 • 1h 4min
We Are All Responsible for AI, Part 1
We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geopolitical risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits”: cases where it looks like no one is accountable for the harms and no one compensates the people who are harmed. There’s a lot here, so buckle up!
Nov 13, 2025 • 44min
Orchestrating Ethics
In a thought-provoking conversation, David Danks, a Professor of Philosophy and Data Science known for his work on AI ethics, explores the crucial concept of ethical interoperability. He discusses the risks of differing ethical standards when companies integrate AI models from multiple sources. Danks emphasizes the need for case-by-case ethical alignment and the challenges of accountability in AI deployment. He also delves into how transparency and operational clarity can enhance ethical assessments, urging firms and governments to recognize mismatched ethical practices.
Nov 6, 2025 • 46min
The Military is the Safest Place to Test AI
How can one of the highest-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and notes that all of this happens against a backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger political issues, including China’s use of military AI.
Oct 30, 2025 • 46min
Should We Make Digital Copies of People?
Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions about what can be done with your digital identity in your will? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise longer-standing philosophical issues: can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better, thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.
Oct 23, 2025 • 40min
How Society Bears AI’s Costs
In this engaging discussion, Karen Yeung, an interdisciplinary fellow in law, ethics, and informatics, dives deep into the social and political costs of AI. She emphasizes how human greed and capitalist incentives, rather than technology itself, consolidate wealth and power. Yeung highlights the dangers of misinformation and the erosion of public discourse, as well as the responsibility of Big Tech as unaccountable gatekeepers. The conversation also addresses the need for regulation over unbridled innovation, urging collective action to shape a fair future with AI.
Oct 16, 2025 • 56min
How Should We Teach Ethics to Computer Science Majors?
In this engaging discussion, Steven Kelts, a lecturer at Princeton's School of Public and International Affairs and the Department of Computer Science, shares insights on teaching ethics to future tech leaders. He argues for more than just philosophical approaches, advocating for practical training that includes moral awareness and ethical decision-making. Kelts emphasizes the importance of recognizing subtle ethical red flags and fostering systemic solutions over individual heroism. He also explores innovative teaching methods, including role plays and integrating LLMs in ethics education.
Oct 9, 2025 • 51min
In Defense of Killer Robots
Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed and/or blown up. Except…maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case that we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
Oct 1, 2025 • 39min
Live Recording: Is AI Creating a Sadder Future?
In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, how the tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.