
Ethical Machines

Latest episodes

Jun 26, 2025 • 42min

The AI Job Interviewer

AI can stand between you and getting a job. That means that to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Ph.D., Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era. Originally aired in season one. Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
Jun 19, 2025 • 49min

Accuracy Isn’t Enough

Will Landecker, CEO of Accountable Algorithm and former responsible AI practitioner at notable tech companies, dives into why accuracy is just one metric in AI performance. He discusses balancing accuracy with relevance and ethical considerations, stressing the importance of explainability in AI systems. The conversation also touches on the challenges of multi-agent AI working together and the need for interdisciplinary collaboration to ensure ethical outcomes in algorithmic decision-making.
Jun 12, 2025 • 39min

Beware of Autonomous Weapons

Should we allow autonomous AI weapons systems? Who is accountable if things go sideways? And how is AI going to transform the future of military work? All this and more with my guest, Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies at the University of Oxford.
Jun 5, 2025 • 54min

How Do We Construct Intelligence?

The Silicon Valley titans talk a lot about the intelligence and superintelligence of AI…but what is intelligence, anyway? My guest, Philip Walsh, a former philosophy professor and now a Director at Gartner, argues the SV folks are fundamentally confused about what intelligence is. It’s not, he argues, like horsepower, which can be objectively measured. Instead, whether we ascribe intelligence to something is a matter of what we communally agree to ascribe intelligence to. More specifically, we have to collectively agree on the criteria for intelligence, and only then does it make sense to say, “Yeah, this thing is intelligent.” But we don’t really have a settled collective agreement, and that’s why we sort of want to say “this is not intelligence” at the same time we say, “How is this thing so smart?!” I think this is a crucial discussion for anyone who wants to think deeply about what to make of our new quasi/proto/faux intelligent companions.
May 29, 2025 • 34min

AI Needs Historians

The podcast explores the essential role of historians in addressing modern tech challenges. By examining the historical roots of AI and social media, the speakers highlight how past values shape current issues. They argue that understanding these contexts is crucial for effective policymaking and ethical AI development. The discussion emphasizes a humanistic approach to technology that aligns with societal values, urging for solutions that consider historical narratives. Ultimately, it’s a call for a more informed and responsible tech movement.
May 22, 2025 • 53min

We’re Not Ready for Agentic AI

In this discussion, Avijit Ghosh, an Applied Policy Researcher at Hugging Face focused on AI safety, reveals the perils of deploying agentic AI without proper safeguards. He highlights the gaps in current AI ethical practices and the challenges in managing autonomy within these systems. Ghosh emphasizes the importance of human oversight in communication protocols between AI agents. Their conversation dives into the necessity for robust cybersecurity measures and the ethical implications of AI in critical fields like healthcare.
May 15, 2025 • 48min

How Algorithms Manipulate Us

In this discussion, Michael Klenk, an assistant professor of practical philosophy at Delft University, dives into the complexities of algorithmic manipulation on social media. He unpacks the nuanced distinctions between manipulation and persuasion, emphasizing that not all influence is harmful. Klenk also critiques the ethical implications of algorithms, questioning whether they can guide behavior without crossing moral lines. The conversation explores manipulation in both digital media and personal relationships, highlighting the challenges of honesty amid emotional appeals.
May 8, 2025 • 48min

What Should We Do When AI Knows More Than Us?

We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with their beliefs. But what if an AI “knows” more than us? It is an authority in the field in which we’re questioning it. Should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at the Ludwig Maximilians University of Munich.
May 1, 2025 • 42min

Should We Ignore Claims about AI’s Existential Threat?

Are claims about AI destroying humanity just more AI hype we should ignore? My guests today, Risto Uuk and Torben Swoboda, assess three popular arguments for why we should dismiss them and focus solely on the AI risks that are here today. But they find each argument flawed, arguing that, unless some fourth powerful argument comes along, we should devote resources to identifying and avoiding potential existential risks to humanity posed by AI.
Apr 24, 2025 • 47min

AI is Not Intelligent

I have to admit, AI can do some amazing things. More specifically, it looks like it can perform some impressive intellectual feats. But is it actually intelligent? Does it understand? Or is it just really good at statistics? This and more in my conversation with Lisa Titus, former professor of philosophy at the University of Denver and now AI Policy Manager at Meta. Originally aired in season one.
