
Ethical Machines

Latest episodes

Oct 24, 2024 • 37min

Is Tech a Religion that Needs Reformation?

Greg Epstein, the humanist chaplain at Harvard and MIT and author of "Tech Agnostic," dives deep into the notion of technology as a contemporary religion. He explores how technology shapes societal norms and rituals, questioning its ethical implications. Discussions include the existential risks of AI, likening its worship-like fervor to traditional beliefs. Epstein advocates for a much-needed reformation in tech practices, emphasizing accountability among leaders and the necessity for a more equitable approach in the digital landscape.
Oct 17, 2024 • 53min

Should We Care About Data Privacy?

From the best of season 1: You might think it's outrageous that companies collect data about you and use it in various ways to drive profits. The business model of the "attention" economy is often objected to on just these grounds. On the other hand, does it really matter if data about you is collected and no person ever looks at that data? Is that really an invasion of your privacy? Carissa and I discuss all this and more. I push the skeptical line, trying on the position that it doesn't really matter all that much. Carissa has powerful arguments against me. This conversation goes way deeper than the 'privacy good/data collection bad' statements we see all the time. I hope you enjoy!
Oct 10, 2024 • 1h 3min

The AI Mirror

Shannon Vallor, the Baillie Gifford Chair in the Ethics of Data and AI at the University of Edinburgh and author of "The AI Mirror," reframes our understanding of AI. She argues against seeing AI as a human-like entity and instead proposes viewing it as a mirror reflecting our biases and intentions. Vallor critiques how AI perpetuates stereotypes and suggests we prioritize addressing human-centered risks over speculative AI threats. Her insights advocate for a more ethical approach to AI development, emphasizing genuine engagement and innovation.
Oct 3, 2024 • 49min

Holding AI Responsible for What It Says

In this intriguing discussion, philosopher Emma Borg delves into the accountability of AI chatbots after Air Canada lost a lawsuit involving misinformation. She explores the notion of responsibility in AI outputs, questioning whether chatbots should be held accountable for what they say. Through thought experiments, Borg highlights the complex interplay between intention, meaning, and communication, challenging our understanding of AI's role as a responsible entity. This conversation raises profound philosophical questions about the essence of meaning and intentionality in digital dialogues.
Sep 26, 2024 • 48min

Deepfakes and the 2024 Election

Dean Jackson and Jon Bateman, experts on deepfakes and disinformation, dive into the alarming implications of deepfake technology for the 2024 election. They discuss California's new legislation targeting online deepfakes and emphasize the need for media literacy and systemic solutions. The conversation touches on the challenges of managing disinformation in a polarized political landscape, the decline of local journalism, and the importance of trust in information sources. Get ready for a thought-provoking discussion on navigating our digital age!
Sep 19, 2024 • 42min

Ethics for People Who Work in Tech

Marc Steen, an author dedicated to weaving ethics into technology practices, shares his insights on the importance of integrating ethical considerations in AI development. He emphasizes ethics as a continuous, participatory process rather than a mere checklist. The conversation dives into the role of facilitation in ethical discussions and the application of virtue ethics, stressing the need for self-reflection and responsible data science. Steen advocates for ongoing stakeholder engagement and continuous ethical assessments, particularly in high-stakes applications.
Sep 12, 2024 • 33min

Calm the Hell Down: AI is Just Software that Learns by Example and No, It’s Not Going to Kill Us All

Doesn’t the title say it all? This is for anyone who wants the very basics on what AI is, why it’s not intelligent, and why it doesn’t pose an existential threat to humanity. If you don’t know anything at all about AI and/or the nature of the mind/intelligence, don’t worry: we’re starting on the ground floor.
Sep 5, 2024 • 49min

Does Social Media Diminish Our Autonomy?

Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in ways we don’t understand. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today.
Aug 29, 2024 • 47min

Choosing Who Should Benefit and Who Should Suffer with AI

From the best of season 1: I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee and I discuss what she calls “sustainable AI.” We focus on the environmental impacts of AI, the ethical significance of those impacts, and who bears the social costs while others reap AI’s benefits.
Aug 22, 2024 • 1h 6min

We’re Doing AI Ethics Wrong

Is our collective approach to ensuring AI doesn’t go off the rails fundamentally misguided? Is our approach too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an “AI ethics ecosystem.” It’s a big lift, but without it we face an even bigger problem.
