

Ethical Machines
Reid Blackman
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
Episodes

May 15, 2025 • 48min
How Algorithms Manipulate Us
In this discussion, Michael Klenk, an assistant professor of practical philosophy at Delft University, dives into the complexities of algorithmic manipulation on social media. He unpacks the nuanced distinctions between manipulation and persuasion, emphasizing that not all influence is harmful. Klenk also critiques the ethical implications of algorithms, questioning whether they can guide behavior without crossing moral lines. The conversation explores manipulation in both digital media and personal relationships, highlighting the challenges of honesty amid emotional appeals.

May 8, 2025 • 48min
What Should We Do When AI Knows More Than Us?
We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with theirs. But what if an AI “knows” more than we do? Suppose it’s an authority in the field in which we’re questioning it. Should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at Ludwig Maximilian University of Munich.
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

May 1, 2025 • 42min
Should We Ignore Claims about AI’s Existential Threat?
Are claims about AI destroying humanity just more AI hype we should ignore? My guests today, Risto Uuk and Torben Swoboda, assess three popular arguments for why we should dismiss them and focus solely on the AI risks that are here today. But they find each argument flawed, arguing that, unless some fourth, more powerful argument comes along, we should devote resources to identifying and avoiding the potential existential risks AI poses to humanity.

Apr 24, 2025 • 47min
AI is Not Intelligent
I have to admit, AI can do some amazing things. More specifically, it looks like it can perform some impressive intellectual feats. But is it actually intelligent? Does it understand? Or is it just really good at statistics? This and more in my conversation with Lisa Titus, former professor of philosophy at the University of Denver and now AI Policy Manager at Meta. Originally aired in season one.

Apr 17, 2025 • 27min
A Crash Course on the AI Ethics Landscape
Dive into the intricate world of AI ethics, exploring the vital distinctions between 'AI for good' and 'AI for not bad.' Discover the challenges of bias and discrimination arising from flawed training data, and understand the implications of privacy concerns in AI. The conversation sheds light on managing ethical risks and the importance of balanced training to prevent biased outcomes. Plus, gain insights on automation bias and the necessity of ethical practices for beneficial AI technology.

Apr 10, 2025 • 43min
Does AI Ethics in Business Make Sense?
People want AI developed ethically, but is there actually a business case for it? The answer had better be yes since, after all, it’s businesses that are developing AI in the first place. Today I talk with Dennis Hirsch, Professor of Law and Computer Science at Ohio State University, who is conducting empirical research on this topic. He argues that AI ethics, or as he prefers to call it, Responsible AI, delivers a lot of bottom-line business value. In fact, his research revealed something about its value that he didn’t even expect to see. We’re in the early days of businesses taking AI ethics seriously, but if he’s right, we’ll see a lot more of it. Fingers crossed.

Apr 3, 2025 • 46min
Does AI Undermine Scientific Discovery?
Automation is great, right? It speeds up what needs to get done. But is that always a good thing? What about in the process of scientific discovery? Yes, AI can automate a lot of science by running thousands of virtual experiments and generating results, but is something lost in the process? My guest, Ramón Alvarado, a professor of philosophy and a member of the Philosophy and Data Science Initiative at the University of Oregon, thinks something crucial is missing: serendipity. Many significant scientific discoveries occurred by happenstance. Penicillin, for instance, was discovered by Alexander Fleming, who accidentally left a petri dish on a bench before going on vacation. Exactly what is the scientific value of serendipity, how important is it, and how does AI potentially impinge on it? That’s today’s conversation.

Mar 27, 2025 • 48min
The Power of Technologists
Behind all those algorithms are the people who create them and embed them into our lives. How did they get that power? What should they do with it? What are their responsibilities? This and more with my guest Chris Wiggins, Chief Data Scientist at the New York Times, Associate Professor of Applied Mathematics at Columbia University, and author of the book “How Data Happened: A History from the Age of Reason to the Age of Algorithms”. Originally aired in season one.

Mar 20, 2025 • 49min
AI is Uncontrollable
People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individual person. This is a fascinating discussion with Marcus Arvan, professor of philosophy at The University of Tampa and author of three books on ethics and political theory. You might just leave this conversation wondering if giving more and more capabilities to AI is a huge, potentially life-threatening mistake.

Mar 13, 2025 • 48min
A Culture of Online Manipulation
Developers are constantly testing how online users react to their designs. Will they stay longer on the site because of this shade of blue? Will they get depressed if we show them depressing social media posts? What happens if we intentionally mismatch people on our dating website? When it comes to shades of blue, perhaps that’s not a big deal. But when it comes to mental health and deceiving people? Now we’re in ethically choppy waters. My discussion today is with Cennydd Bowles, Managing Director of NowNext, where he helps organizations develop ethically sound products. He’s also the author of a book called “Future Ethics.” He argues that A/B testing on people is often ethically wrong and creates a culture among developers of a willingness to manipulate people. Great conversation ranging from the ethics of experimentation to marketing and even to capitalism.