
Ethical Machines

Latest episodes

Sep 5, 2024 • 49min

Does Social Media Diminish Our Autonomy?

Are we dependent on social media in a way that erodes our autonomy? After all, the platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today.
Aug 29, 2024 • 47min

Choosing Who Should Benefit and Who Should Suffer with AI

From the best of season 1: I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee and I discuss what she calls “sustainable AI.” We focus on the environmental impacts of AI, the ethical implications of those impacts, and who bears the social costs so that others can benefit from AI.
Aug 22, 2024 • 1h 6min

We’re Doing AI Ethics Wrong

Is our collective approach to ensuring AI doesn’t go off the rails fundamentally misguided? Is it too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an “AI ethics ecosystem.” It’s a big lift, but without it we face an even bigger problem.
Aug 15, 2024 • 44min

Can AI Do Ethics?

The discussion delves into whether AI can engage in ethical reasoning the way human children do. Researchers ponder the alignment problem, debating how AI can reflect human values. Complexities arise around teaching AI ethical inquiry, touching on metaethics and the nature of moral truths. The episode critically examines ethical relativism, questioning whether universal standards in AI ethics are possible. Navigating these philosophical challenges raises profound questions about AI's role in moral judgment and the future of our ethical frameworks.
Aug 8, 2024 • 51min

We Don’t Need AI Regulations

Dean Ball, an expert who argues against new AI regulations, challenges the current narrative that existing laws are insufficient. He emphasizes that current frameworks can manage AI risks like bias and privacy violations. Instead of broad regulations, he advocates for focused governance responses and targeted policies tailored to specific sectors, such as healthcare. The podcast dives into how existing laws can address ethical concerns effectively, urging a more nuanced approach to navigating the complexities of AI.
Aug 1, 2024 • 47min

When Biased AI is Good

David Danks, a professor of data science and philosophy at UCSD, challenges the conventional wisdom about biased AI. He argues that in certain scenarios, biased algorithms can yield positive outcomes when managed effectively. The conversation explores the ethical complexities of AI bias, especially in areas like hiring and judicial decision-making. Danks emphasizes the need for a nuanced approach to AI, suggesting that collaboration between data scientists and ethicists is crucial for developing fairer systems while maintaining human oversight.
Jul 25, 2024 • 49min

The Secret Life of Data

Aram Sinnreich and Jesse Gilbert delve into the complexities of data privacy in 'The Secret Life of Data'. They discuss the blurred lines of data control, ethical implications of data collection, societal impacts, historical influences, and the necessity of regulating technology to protect individuals and democratic institutions.
Jul 18, 2024 • 56min

The Necessary Imperfections of AI Content Moderation

With the ocean of social media content, we need AI to identify and remove inappropriate material; humans just can’t keep up. But AI doesn’t assess content the same way we do. It’s not a deliberative body akin to the Supreme Court. Because we think of content moderation as a reflection of human evaluation, we make unreasonable demands of social media companies and ask for regulations that won’t protect anyone. Reframing what AI content moderation is and has to be, my guest argues, leads us to make more reasonable and more effective demands of social media companies and government.
Jul 11, 2024 • 51min

AI Armageddon is Unlikely

AI + nuclear capabilities sounds like a recipe for disaster. Some people think it could cause mass extinction. While it’s easy to let our imaginations run wild, a far better starting point is insight into how the military actually incorporates AI into its weapons and operations. Heather gives us precisely those insights and, with them, the opportunity to think clearly about the threat.
Jul 4, 2024 • 49min

Could AI Undermine Informed Consent?

AI holds a lot of promise for making faster, more accurate diagnoses of our ailments. But if these systems are too influential, might they undermine our doctors’ ability to understand the rationale for a diagnosis? And could they undermine the aspects of the doctor-patient relationship that are crucial for maintaining patient autonomy?
