
Ethical Machines
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
Latest episodes

Nov 13, 2024 • 54min
ChatGPT Does Not Understand Anything Part 1
In a captivating discussion, Alex Grzankowski, a philosophy professor at Birkbeck College and director of the London AI and Humanity Project, dives into the depths of understanding in AI versus human cognition. He critiques the common perception that models like ChatGPT truly comprehend language. Exploring the Chinese Room Argument, Alex raises essential questions about machine comprehension, the ethical implications in tech, and the distinction between symbol manipulation and genuine understanding. Get ready to rethink what ‘understanding’ actually means!

Nov 7, 2024 • 46min
Tyranny of the One Best Algorithm
One person driving one car creates a negligible amount of pollution. The problem arises when we have lots of people driving cars. Might this kind of issue arise with AI use as well? What if everyone uses the same hiring or lending or diagnostic algorithm? My guest, Kathleen Creel, argues that this is bad for society and bad for the companies using these algorithms. The solution, in broad strokes, is to introduce randomness into the AI system. But is this a good idea? If so, do we need regulation to pull it off? This and more on today’s episode.

Oct 31, 2024 • 58min
How AI Ends Legal Uncertainty
Abdi Aidid, a visiting associate professor of law at Yale and co-author of "The Legal Singularity," dives into the transformative potential of AI in law. He discusses how AI can synthesize dense legal texts to provide clarity and accessibility for the average person. The conversation touches on AI's role in navigating property rights, its impact on the legal landscape, and the balance between technology and human judgment. The ethical implications of AI in justice are explored, alongside the challenges of making legal advice accessible while avoiding frivolous lawsuits.

Oct 24, 2024 • 37min
Is Tech a Religion that Needs Reformation?
Greg Epstein, the humanist chaplain at Harvard and MIT and author of "Tech Agnostic," dives deep into the notion of technology as a contemporary religion. He explores how technology shapes societal norms and rituals, questioning its ethical implications. Discussions include the existential risks of AI, likening its worship-like fervor to traditional beliefs. Epstein advocates for a much-needed reformation in tech practices, emphasizing accountability among leaders and the necessity for a more equitable approach in the digital landscape.

Oct 17, 2024 • 53min
Should We Care About Data Privacy?
From the best of season 1: You might think it's outrageous that companies collect data about you and use it in various ways to drive profits. The business model of the "attention" economy is often objected to on just these grounds. On the other hand, does it really matter if data about you is collected and no person ever looks at that data? Is that really an invasion of your privacy? Carissa and I discuss all this and more. I push the skeptical line, trying on the position that it doesn't really matter all that much. Carissa has powerful arguments against me. This conversation goes way deeper than the 'privacy good/data collection bad' statements we see all the time. I hope you enjoy!

Oct 10, 2024 • 1h 3min
The AI Mirror
Shannon Vallor, the Bailey Gifford Chair in the Ethics of Data and AI at the University of Edinburgh and author of "The AI Mirror," reframes our understanding of AI. She argues against seeing AI as a human-like entity and instead proposes viewing it as a mirror reflecting our biases and intentions. Vallor critiques how AI perpetuates stereotypes and suggests we prioritize addressing human-centered risks over speculative AI threats. Her insights advocate for a more ethical approach to AI development, emphasizing genuine engagement and innovation.

Oct 3, 2024 • 49min
Holding AI Responsible for What It Says
In this intriguing discussion, philosopher Emma Borg delves into the accountability of AI chatbots after Air Canada lost a lawsuit involving misinformation. She explores the notion of responsibility in AI outputs, questioning whether chatbots should be held accountable for what they say. Through thought experiments, Borg highlights the complex interplay between intention, meaning, and communication, challenging our understanding of AI's role as a responsible entity. This conversation raises profound philosophical queries about the essence of meaning and intentionality in digital dialogues.

Sep 26, 2024 • 48min
Deepfakes and 2024 Election
Dean Jackson and Jon Bateman, experts on deepfakes and disinformation, dive into the alarming implications of deepfake technology for the 2024 election. They discuss California's new legislation targeting online deepfakes and emphasize the need for media literacy and systemic solutions. The conversation touches on the challenges of managing disinformation in a polarized political landscape, the decline of local journalism, and the importance of trust in information sources. Get ready for a thought-provoking discussion on navigating our digital age!

Sep 19, 2024 • 42min
Ethics for People Who Work in Tech
Marc Steen, an author dedicated to weaving ethics into technology practices, shares his insights on the importance of integrating ethical considerations in AI development. He emphasizes ethics as a continuous, participatory process rather than a mere checklist. The conversation dives into the role of facilitation in ethical discussions and the application of virtue ethics, stressing the need for self-reflection and responsible data science. Steen advocates for ongoing stakeholder engagement and continuous ethical assessments, particularly in high-stakes applications.

Sep 12, 2024 • 33min
Calm the Hell Down: AI is Just Software that Learns by Example and No, It’s Not Going to Kill Us All
Doesn’t the title say it all? This is for anyone who wants the very basics on what AI is, why it’s not intelligent, and why it doesn’t pose an existential threat to humanity. If you don’t know anything at all about AI and/or the nature of the mind/intelligence, don’t worry: we’re starting on the ground floor.