

Ethical Machines
Reid Blackman
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
Episodes
Nov 13, 2025 • 44min
Orchestrating Ethics
One company builds the LLM. Another company uses that model for their own purposes. How do we know that the ethical standards of the first one match the ethical standards of the second one? How does the second company know they are using a technology consistent with their own ethical standards? This is a conversation I had with David Danks, Professor of Philosophy and Data Science at UCSD, almost 3 years ago. But the conversation is just as pressing now as it was then. In fact, given the widespread adoption of AI that’s built by a handful of companies, it’s even more important now that we get this right.
Nov 6, 2025 • 46min
The Military is the Safest Place to Test AI
How can one of the most high-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and notes that all of this happens against a backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger, political issues, including China’s use of military AI.
Oct 30, 2025 • 46min
Should We Make Digital Copies of People?
Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions in your will about what can be done with your digital identity? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise other longer-standing philosophical issues: Can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better, thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.
Oct 23, 2025 • 40min
How Society Bears AI’s Costs
In this engaging discussion, Karen Yeung, an interdisciplinary fellow in law, ethics, and informatics, dives deep into the social and political costs of AI. She emphasizes how human greed and capitalist incentives, rather than technology itself, consolidate wealth and power. Yeung highlights the dangers of misinformation and the erosion of public discourse, as well as the responsibility of Big Tech as unaccountable gatekeepers. The conversation also addresses the need for regulation over unbridled innovation, urging collective action to shape a fair future with AI.
Oct 16, 2025 • 56min
How Should We Teach Ethics to Computer Science Majors?
In this engaging discussion, Steven Kelts, a lecturer at Princeton's School of Public and International Affairs and the Department of Computer Science, shares insights on teaching ethics to future tech leaders. He argues for more than just philosophical approaches, advocating for practical training that includes moral awareness and ethical decision-making. Kelts emphasizes the importance of recognizing subtle ethical red flags and fostering systemic solutions over individual heroism. He also explores innovative teaching methods, including role plays and integrating LLMs in ethics education.
Oct 9, 2025 • 51min
In Defense of Killer Robots
Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed and/or blown up. Except…maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
Oct 1, 2025 • 39min
Live Recording: Is AI Creating a Sadder Future?
In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, how the tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.
Jul 31, 2025 • 42min
Season finale: A New Ethics for AI Ethics?
Wendell Wallach, a prominent scholar at Yale's Bioethics Center, shares his extensive insights on AI ethics. He critiques the prevalent concept of 'value alignment' and traditional moral theories, arguing they fall short in the AI domain. Wallach introduces fresh ethical concepts like trade-off ethics and silent ethics, advocating for a universal moral language. He emphasizes the critical role of human responsibility in AI decision-making, especially regarding lethal technologies, making a strong case for a human-centric approach in our technological future.
Jul 17, 2025 • 47min
Is AI a Person or a Thing… or Neither?
It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for doing so. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person Thing Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.
Jul 10, 2025 • 52min
How Do You Control Unpredictable AI?
Walter Haydock, a former national security policy advisor and founder of StackAware, dives into the intriguing complexities of unpredictable AI. He discusses the dual nature of large language models—capable of both creativity and chaos. Haydock emphasizes the critical need for structured risk assessments to navigate the pitfalls of integrating agentic AI into organizations. He highlights dangers like data poisoning and calls for stricter testing and monitoring to ensure responsible AI deployment while fostering innovation.


