
Ethical Machines

Latest episodes

Jul 17, 2025 • 47min

Is AI a Person or a Thing… or Neither?

It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for doing so. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person, Thing, Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.
Jul 10, 2025 • 52min

How Do You Control Unpredictable AI?

Walter Haydock, a former national security policy advisor and founder of StackAware, dives into the intriguing complexities of unpredictable AI. He discusses the dual nature of large language models—capable of both creativity and chaos. Haydock emphasizes the critical need for structured risk assessments to navigate the pitfalls of integrating agentic AI into organizations. He highlights dangers like data poisoning and calls for stricter testing and monitoring to ensure responsible AI deployment while fostering innovation.
Jun 26, 2025 • 42min

The AI Job Interviewer

AI can stand between you and getting a job. That means that to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet, AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Ph.D., an Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era. Originally aired in season one.
Jun 19, 2025 • 49min

Accuracy Isn’t Enough

Will Landecker, CEO of Accountable Algorithm and former responsible AI practitioner at notable tech companies, dives into why accuracy is just one metric in AI performance. He discusses balancing accuracy with relevance and ethical considerations, stressing the importance of explainability in AI systems. The conversation also touches on the challenges of multi-agent AI working together and the need for interdisciplinary collaboration to ensure ethical outcomes in algorithmic decision-making.
Jun 12, 2025 • 39min

Beware of Autonomous Weapons

Should we allow autonomous AI weapons systems? Who is accountable if things go sideways? And how is AI going to transform the future of military work? All this and more with my guest, Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies at the University of Oxford.
Jun 5, 2025 • 54min

How Do We Construct Intelligence?

The Silicon Valley titans talk a lot about AI intelligence and superintelligence…but what is intelligence, anyway? My guest, Philip Walsh, a former philosophy professor and now a Director at Gartner, argues that the Silicon Valley folks are fundamentally confused about what intelligence is. It’s not, he argues, like horsepower, which can be objectively measured. Instead, whether we ascribe intelligence to something is a matter of communal agreement: we have to collectively agree on the criteria for intelligence, and only then does it make sense to say, “Yeah, this thing is intelligent.” But we don’t really have a settled collective agreement, and that’s why we sort of want to say “this is not intelligence” at the same time we ask, “How is this thing so smart?!” I think this is a crucial discussion for anyone who wants to think deeply about what to make of our new quasi/proto/faux intelligent companions.
May 29, 2025 • 34min

AI Needs Historians

The podcast explores the essential role of historians in addressing modern tech challenges. By examining the historical roots of AI and social media, the speakers highlight how past values shape current issues. They argue that understanding these contexts is crucial for effective policymaking and ethical AI development. The discussion emphasizes a humanistic approach to technology that aligns with societal values, urging for solutions that consider historical narratives. Ultimately, it’s a call for a more informed and responsible tech movement.
May 22, 2025 • 53min

We’re Not Ready for Agentic AI

In this discussion, Avijit Ghosh, an Applied Policy Researcher at Hugging Face focused on AI safety, reveals the perils of deploying agentic AI without proper safeguards. He highlights the gaps in current AI ethical practices and the challenges in managing autonomy within these systems. Ghosh emphasizes the importance of human oversight in communication protocols between AI agents. The conversation dives into the necessity for robust cybersecurity measures and the ethical implications of AI in critical fields like healthcare.
May 15, 2025 • 48min

How Algorithms Manipulate Us

In this discussion, Michael Klenk, an assistant professor of practical philosophy at Delft University, dives into the complexities of algorithmic manipulation on social media. He unpacks the nuanced distinctions between manipulation and persuasion, emphasizing that not all influence is harmful. Klenk also critiques the ethical implications of algorithms, questioning whether they can guide behavior without crossing moral lines. The conversation explores manipulation in both digital media and personal relationships, highlighting the challenges of honesty amid emotional appeals.
May 8, 2025 • 48min

What Should We Do When AI Knows More Than Us?

We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with theirs. But what if an AI “knows” more than we do? If it’s an authority in the field in which we’re questioning it, should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at Ludwig Maximilian University of Munich.
