
Ethical Machines

Latest episodes

Jan 16, 2025 • 16min

Businesses are afraid to say “ethics”

“Sustainability,” “purpose/mission/value driven,” “human-centric design.” These are terms companies use so they don’t have to say “ethics.” My contention is that this is bad for business and bad for society at large. Our world, corporate and otherwise, is confronted with a growing mountain of ethical problems, spurred on by technologies that bring us fresh new ways of realizing our familiar ethical nightmares. These issues do not disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively.
Jan 9, 2025 • 54min

We’re Getting AI and Democracy Wrong

Ted Lechterman, UNESCO Chair in AI Ethics and Governance at IE University, dives deep into the intersection of AI and democracy. He argues that current discussions are too narrow, overlooking critical power dynamics affecting democratic engagement. The conversation challenges misconceptions about AI’s impact, drawing parallels between media influence and public opinion. Lechterman emphasizes the need for genuine stakeholder participation in AI development and advocates for a broader dialogue about ethics and inclusivity in our democratic processes.
Dec 19, 2024 • 53min

Why Copyright Challenges to AI Learning Will Fail and the Ethical Reasons Why They Shouldn’t

From the best of season 1. Well, I didn’t see this coming. Talking about legal and philosophical conceptions of copyright turns out to be intellectually fascinating and challenging. It involves not only concepts of property and theft but also of personhood and invasiveness. Could it be that training AI on an author’s or artist’s work violates their very self? I talked about all this with Darren Hick, who has written a few books on the topic. I definitely didn’t think he was going to bring up Hegel.
Dec 12, 2024 • 50min

Evolving AI Governance

My guest and I have been doing AI governance for businesses for a combined 17+ years. We started well before genAI was a big thing. But I’d say I’m more of a qualitative guy and he’s more quant. Nick Elprin is the CEO of an AI governance software company, after all. How have AI ethics and AI governance evolved over that time, and what does cutting-edge governance look like? Perhaps you’re about to find out…
Dec 5, 2024 • 54min

What’s Wrong With Loving an AI?

People, especially kids under 18, are forming emotional attachments to AI chatbots. At a minimum, this is…weird. Is it also unethical? Does it harm users? Is it, as my guest Robert Mahari argues, an affront to human dignity? Have a listen and find out.
Nov 21, 2024 • 52min

Rationally Believing Conspiracy Theories

You might want more online content moderation so insane conspiracy theories don’t flourish: sex slaves in Democrat pizza shops, climate change is a hoax, and so on. But is it irrational to believe these things? Is content moderation, whether in the form of censoring or labeling something as false, the morally right and/or effective strategy? In this discussion, Neil Levy and I go back to basics about what it is to be rational and how that helps us answer our questions. Neil’s fascinating answer in a nutshell: they’re not irrational, and content moderation isn’t a good strategy. This is, I have to say, great stuff. Enjoy!
Nov 14, 2024 • 60min

AI Understands. A Little. Part 2

From the best of season 1. Part 2 of my conversation with Alex. There’s good reason to think AI doesn’t understand anything: it’s just moving words around according to mathematical rules, predicting the words that come next. But philosopher Alex Grzankowski argues that while AI may not understand what it’s saying, it does understand language. In this episode we do a deep dive into the nature of human and AI understanding, ending with strategies for how AI researchers could pursue AI that has genuine understanding of the world.
Nov 13, 2024 • 54min

ChatGPT Does Not Understand Anything Part 1

In a captivating discussion, Alex Grzankowski, a philosophy professor at Birkbeck College and director of the London AI and Humanity Project, dives into the depths of understanding in AI versus human cognition. He critiques the common perception that models like ChatGPT truly comprehend language. Exploring the Chinese Room Argument, Alex raises essential questions about machine comprehension, the ethical implications in tech, and the distinction between symbol manipulation and genuine understanding. Get ready to rethink what ‘understanding’ actually means!
Nov 7, 2024 • 46min

Tyranny of the One Best Algorithm

One person driving one car creates a negligible amount of pollution. The problem arises when we have lots of people driving cars. Might this kind of issue arise with AI use as well? What if everyone uses the same hiring or lending or diagnostic algorithm? My guest, Kathleen Creel, argues that this is bad for society and bad for the companies using these algorithms. The solution, in broad strokes, is to introduce randomness into the AI system. But is this a good idea? If so, do we need regulation to pull it off? This and more on today’s episode.
Oct 31, 2024 • 58min

How AI Ends Legal Uncertainty

Abdi Aidid, a visiting associate professor of law at Yale and co-author of "The Legal Singularity," dives into the transformative potential of AI in law. He discusses how AI can synthesize dense legal texts to provide clarity and accessibility for the average person. The conversation touches on AI's role in navigating property rights, its impact on the legal landscape, and the balance between technology and human judgment. The ethical implications of AI in justice are explored, alongside the challenges of making legal advice accessible while avoiding frivolous lawsuits.
