Ethical Machines

Reid Blackman
Jan 22, 2026 • 42min

AI is Culturally Ignorant

AI is deployed across the globe. But how sensitive is it to the cultural contexts - ethics, norms, laws, and regulations - in which it finds itself? My guest today, Rocky Clancy of Virginia Tech, argues that AI is too Western-focused. We need to engage in empirical research so that AI is developed in a way that comports with the people it interacts with, wherever they are.

Advertising Inquiries: https://redcircle.com/brands
Jan 15, 2026 • 54min

When Metrics Make Us Happy, or Miserable

When we’re playing a game or a sport, we like being measured. We want a high score, we want to beat the game. Measurement makes it fun. But in work, being measured, hitting our numbers, can make us miserable. Why does measuring ourselves sometimes enhance and sometimes undermine our happiness and sense of fulfillment? That’s the question C. Thi Nguyen tackles in his new book “The Score: How to Stop Playing Somebody Else’s Game.” Thi is one of the most interesting philosophers I know - enjoy!
Jan 8, 2026 • 48min

We Need International Agreement on AI Standards

When it comes to the foundation models created by the likes of Google, Anthropic, and OpenAI, we need to treat their makers as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that creates and enforces ethical and safety standards for AI. Why such a coalition is necessary, who might be part of it, how plausible it is that we can create such a thing, and more are covered in our conversation.
Dec 18, 2025 • 52min

Rewriting History with AI

What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, misinformation spreading, and biased interpretations of history getting the upper hand. Exactly how bad this can get and what we’re supposed to do about it isn’t obvious, but Nuno has some suggestions.
Dec 11, 2025 • 46min

AI is Not a Normal Technology

When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Fin Moorhouse, a research fellow at Forethought, a nonprofit research organization, thinks that neither of these extremes is the most likely outcome. In fact, he thinks that one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.
Dec 4, 2025 • 59min

We Are All Responsible for AI, Part 2

In the last episode, Brian Wong argued that there’s a “gap” between the harms that developing and using AI causes, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In part 2, Brian explains how he thinks about what responsibility is and how it has implications for our social responsibilities.
Nov 20, 2025 • 1h 4min

We Are All Responsible for AI, Part 1

We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geopolitical risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits” - situations in which it looks like no one is accountable for the harms, and in which people are not compensated by anyone after they’re harmed. There’s a lot here - buckle up!
Nov 13, 2025 • 44min

Orchestrating Ethics

In a thought-provoking conversation, David Danks, a Professor of Philosophy and Data Science known for his work on AI ethics, explores the crucial concept of ethical interoperability. He discusses the risks of differing ethical standards when companies integrate AI models from multiple sources. Danks emphasizes the need for case-by-case ethical alignment and the challenges of accountability in AI deployment. He also delves into how transparency and operational clarity can enhance ethical assessments, urging firms and governments to recognize mismatched ethical practices.
Nov 6, 2025 • 46min

The Military is the Safest Place to Test AI

How can one of the highest-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and that all of this happens against the backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger political issues, including China’s use of military AI.
Oct 30, 2025 • 46min

Should We Make Digital Copies of People?

Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions in your will about what can be done with your digital identity? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise longer-standing philosophical issues: can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better, thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.