

Ethical Machines
Reid Blackman
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
Episodes

Oct 3, 2024 • 49min
Holding AI Responsible for What It Says
In this intriguing discussion, philosopher Emma Borg delves into the accountability of AI chatbots after Air Canada lost a lawsuit involving misinformation. She explores the notion of responsibility in AI outputs, questioning whether chatbots should be held accountable for what they say. Through thought experiments, Borg highlights the complex interplay between intention, meaning, and communication, challenging our understanding of AI's role as a responsible entity. This conversation raises profound philosophical questions about the essence of meaning and intentionality in digital dialogues.

Sep 26, 2024 • 48min
Deepfakes and 2024 Election
Dean Jackson and Jon Bateman, experts on deepfakes and disinformation, dive into the alarming implications of deepfake technology for the 2024 election. They discuss California's new legislation targeting online deepfakes and emphasize the need for media literacy and systemic solutions. The conversation touches on the challenges of managing disinformation in a polarized political landscape, the decline of local journalism, and the importance of trust in information sources. Get ready for a thought-provoking discussion on navigating our digital age!

Sep 19, 2024 • 42min
Ethics for People Who Work in Tech
Marc Steen, an author dedicated to weaving ethics into technology practices, shares his insights on the importance of integrating ethical considerations in AI development. He emphasizes ethics as a continuous, participatory process rather than a mere checklist. The conversation dives into the role of facilitation in ethical discussions and the application of virtue ethics, stressing the need for self-reflection and responsible data science. Steen advocates for ongoing stakeholder engagement and continuous ethical assessments, particularly in high-stakes applications.

Sep 12, 2024 • 33min
Calm the Hell Down: AI is Just Software that Learns by Example and No, It’s Not Going to Kill Us All
Doesn’t the title say it all? This is for anyone who wants the very basics on what AI is, why it’s not intelligent, and why it doesn’t pose an existential threat to humanity. If you don’t know anything at all about AI and/or the nature of the mind/intelligence, don’t worry: we’re starting on the ground floor.

Sep 5, 2024 • 49min
Does Social Media Diminish Our Autonomy?
Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and to come back for more. And we don’t really know the laws of the digital land, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today.

Aug 29, 2024 • 47min
Choosing Who Should Benefit and Who Should Suffer with AI
From the best of season 1: I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee and I discuss what she calls “sustainable AI.” We focus on the environmental impacts of AI, the ethical implications of those impacts, and who pays the social costs while others reap the benefits of AI.

Aug 22, 2024 • 1h 6min
We’re Doing AI Ethics Wrong
Is our collective approach to ensuring AI doesn’t go off the rails fundamentally misguided? Is our approach too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an “AI ethics ecosystem.” It’s a big lift, but without it we face an even bigger problem.

Aug 15, 2024 • 44min
Can AI Do Ethics?
The discussion delves into AI's capability to engage in ethical reasoning akin to human children. Researchers ponder the alignment problem, debating how AI can reflect human values. Complexities arise around teaching AI ethical inquiry, exploring metaethics and the nature of moral truths. The podcast critically examines ethical relativism, questioning the potential for universal standards in AI ethics. By navigating these philosophical challenges, it raises profound implications about AI's role in moral judgment and the future of our ethical constructs.

Aug 8, 2024 • 51min
We Don’t Need AI Regulations
Dean Ball, an expert who argues against new AI regulations, challenges the current narrative that existing laws are insufficient. He emphasizes that current frameworks can manage AI risks like bias and privacy violations. Instead of broad regulations, he advocates for focused governance responses and targeted policies tailored to specific sectors, such as healthcare. The podcast dives into how existing laws can address ethical concerns effectively, urging a more nuanced approach to navigating the complexities of AI.

Aug 1, 2024 • 47min
When Biased AI is Good
David Danks, a professor of data science and philosophy at UCSD, challenges the conventional wisdom about biased AI. He argues that in certain scenarios, biased algorithms can yield positive outcomes when managed effectively. The conversation explores the ethical complexities of AI bias, especially in areas like hiring and judicial decision-making. Danks emphasizes the need for a nuanced approach to AI, suggesting that collaboration between data scientists and ethicists is crucial for developing fairer systems while maintaining human oversight.