
Ethical Machines
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
Latest episodes

Jun 5, 2025 • 54min
How Do We Construct Intelligence?
The Silicon Valley titans talk a lot about the intelligence and superintelligence of AI…but what is intelligence, anyway? My guest Philip Walsh, a former philosophy professor and now a Director at Gartner, argues the SV folks are fundamentally confused about what intelligence is. It’s not, he argues, like horsepower, which can be objectively measured. Instead, whether we ascribe intelligence to something is a matter of what we communally agree to ascribe it to. More specifically, we have to collectively agree on the criteria for intelligence, and only then does it make sense to say “yeah, this thing is intelligent.” But we don’t really have a settled collective agreement, and that’s why we want to say “this is not intelligence” at the same time we say, “How is this thing so smart?!” I think this is a crucial discussion for anyone who wants to think deeply about what to make of our new quasi/proto/faux intelligent companions.
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

May 29, 2025 • 34min
AI Needs Historians
How can we solve AI’s problems if we don’t understand where they came from? Originally aired in season one.

May 22, 2025 • 53min
We’re Not Ready for Agentic AI
In this discussion, Avijit Ghosh, an Applied Policy Researcher at Hugging Face focused on AI safety, reveals the perils of deploying agentic AI without proper safeguards. He highlights the gaps in current AI ethical practices and the challenges in managing autonomy within these systems. Ghosh emphasizes the importance of human oversight in communication protocols between AI agents. Their conversation dives into the necessity for robust cybersecurity measures and the ethical implications of AI in critical fields like healthcare.

May 15, 2025 • 48min
How Algorithms Manipulate Us
In this discussion, Michael Klenk, an assistant professor of practical philosophy at Delft University, dives into the complexities of algorithmic manipulation on social media. He unpacks the nuanced distinctions between manipulation and persuasion, emphasizing that not all influence is harmful. Klenk also critiques the ethical implications of algorithms, questioning whether they can guide behavior without crossing moral lines. The conversation explores manipulation in both digital media and personal relationships, highlighting the challenges of honesty amid emotional appeals.

May 8, 2025 • 48min
What Should We Do When AI Knows More Than Us?
We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with theirs. But what if an AI “knows” more than us — if it’s an authority in the field in which we’re questioning it? Should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at the Ludwig Maximilians University of Munich.

May 1, 2025 • 42min
Should We Ignore Claims about AI’s Existential Threat?
Are claims about AI destroying humanity just more AI hype we should ignore? My guests today, Risto Uuk and Torben Swoboda, assess three popular arguments for why we should dismiss those claims and focus solely on the AI risks that are here today. They find each argument flawed, concluding that, unless some fourth, more powerful argument comes along, we should devote resources to identifying and avoiding the potential existential risks AI poses to humanity.

Apr 24, 2025 • 47min
AI is Not Intelligent
I have to admit, AI can do some amazing things. More specifically, it looks like it can perform some impressive intellectual feats. But is it actually intelligent? Does it understand? Or is it just really good at statistics? This and more in my conversation with Lisa Titus, former professor of philosophy at the University of Denver and now AI Policy Manager at Meta. Originally aired in season one.

Apr 17, 2025 • 27min
A Crash Course on the AI Ethics Landscape
By the end of this crash course, you’ll understand a lot about the AI ethics landscape. Not only will it give you your bearings, but it will also enable you to identify what parts of the landscape you find interesting so you can do a deeper dive.

Apr 10, 2025 • 43min
Does AI Ethics in Business Make Sense?
People want AI developed ethically, but is there actually a business case for it? The answer had better be yes since, after all, it’s businesses that are developing AI in the first place. Today I talk with Dennis Hirsch, Professor of Law and Computer Science at Ohio State University, who is conducting empirical research on this topic. He argues that AI ethics - or, as he prefers to call it, Responsible AI - delivers a lot of bottom-line business value. In fact, his research revealed something about its value that he didn’t even expect to see. We’re in the early days of businesses taking AI ethics seriously, but if he’s right, we’ll see a lot more of it. Fingers crossed.

Apr 3, 2025 • 46min
Does AI Undermine Scientific Discovery?
Automation is great, right? It speeds up what needs to get done. But is that always a good thing? What about in the process of scientific discovery? Yes, AI can automate a lot of science by running thousands of virtual experiments and generating results - but is something lost in the process? My guest, Ramón Alvarado, a professor of philosophy and a member of the Philosophy and Data Science Initiative at the University of Oregon, thinks something crucial is missing: serendipity. Many significant scientific discoveries occurred by happenstance. Penicillin, for instance, was discovered by Alexander Fleming, who accidentally left a petri dish on a bench before going off on vacation. Exactly what is the scientific value of serendipity, how important is it, and how does AI potentially impinge on it? That’s today’s conversation.