Ethical Machines

Reid Blackman
Oct 23, 2025 • 40min

How Society Bears AI’s Costs

In this engaging discussion, Karen Yeung, an interdisciplinary fellow in law, ethics, and informatics, dives deep into the social and political costs of AI. She emphasizes how human greed and capitalist incentives, rather than technology itself, consolidate wealth and power. Yeung highlights the dangers of misinformation and the erosion of public discourse, as well as the responsibility of Big Tech as unaccountable gatekeepers. The conversation also addresses the need for regulation over unbridled innovation, urging collective action to shape a fair future with AI.
Oct 16, 2025 • 56min

How Should We Teach Ethics to Computer Science Majors?

In this engaging discussion, Steven Kelts, a lecturer at Princeton's School of Public and International Affairs and the Department of Computer Science, shares insights on teaching ethics to future tech leaders. He argues for more than just philosophical approaches, advocating for practical training that includes moral awareness and ethical decision-making. Kelts emphasizes the importance of recognizing subtle ethical red flags and fostering systemic solutions over individual heroism. He also explores innovative teaching methods, including role plays and integrating LLMs in ethics education.
Oct 9, 2025 • 51min

In Defense of Killer Robots

Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed and/or blown up. Except…maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case that we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
Oct 1, 2025 • 39min

Live Recording: Is AI Creating a Sadder Future?

In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, why the tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.
Jul 31, 2025 • 42min

Season finale: A New Ethics for AI Ethics?

Wendell Wallach, a prominent scholar at Yale's Bioethics Center, shares his extensive insights on AI ethics. He critiques the prevalent concept of 'value alignment' and traditional moral theories, arguing they fall short in the AI domain. Wallach introduces fresh ethical concepts like trade-off ethics and silent ethics, advocating for a universal moral language. He emphasizes the critical role of human responsibility in AI decision-making, especially regarding lethal technologies, making a strong case for a human-centric approach in our technological future.
Jul 17, 2025 • 47min

Is AI a Person or a Thing… or Neither?

It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for doing so. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person, Thing, Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.
Jul 10, 2025 • 52min

How Do You Control Unpredictable AI?

Walter Haydock, a former national security policy advisor and founder of StackAware, dives into the intriguing complexities of unpredictable AI. He discusses the dual nature of large language models—capable of both creativity and chaos. Haydock emphasizes the critical need for structured risk assessments to navigate the pitfalls of integrating agentic AI into organizations. He highlights dangers like data poisoning and calls for stricter testing and monitoring to ensure responsible AI deployment while fostering innovation.
Jun 26, 2025 • 42min

The AI Job Interviewer

AI can stand between you and getting a job. That means that to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Ph.D., an Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era. Originally aired in season one.
Jun 19, 2025 • 49min

Accuracy Isn’t Enough

Will Landecker, CEO of Accountable Algorithm and former responsible AI practitioner at notable tech companies, dives into why accuracy is just one metric in AI performance. He discusses balancing accuracy with relevance and ethical considerations, stressing the importance of explainability in AI systems. The conversation also touches on the challenges of multi-agent AI working together and the need for interdisciplinary collaboration to ensure ethical outcomes in algorithmic decision-making.
Jun 12, 2025 • 39min

Beware of Autonomous Weapons

Should we allow autonomous AI weapons systems? Who is accountable if things go sideways? And how is AI going to transform the future of military work? All this and more with my guest, Rosaria Taddeo, Professor of Digital Ethics and Defense Technologies at the University of Oxford.
