Ethical Machines

Reid Blackman
Feb 20, 2025 • 42min

What Psychologists Say About AI Relationships

Every specialist in anything thinks they should have a seat at the AI ethics table. I’m usually skeptical. But psychologist Madeline Reinecke, Ph.D. did a great job defending her view that – you guessed it – psychologists should have a seat at the AI ethics table. Our conversation ranged from the role of psychologists in creating AI that supports healthy human relationships, to when children start and stop attributing sentience to robots, to loving relationships with AI, to the threat of AI-induced self-absorption. I guess I need to have more psychologists on the show.
Feb 13, 2025 • 30min

Am I Wrong About Agentic AI?

A fun format for this episode. In Part I, I talk about how I see agentic AI unfolding and what ethical, social, and political risks come with it. In Part II, Eric Corriel, digital strategist at the School of Visual Arts and a close friend, tells me why he thinks I’m wrong. Debate ensues.
Feb 6, 2025 • 54min

What Do VCs Want to Know About AI Ethics?

Jaahred Thomas is a VC friend of mine who wanted to talk about the evolving landscape of AI ethics in startups and business generally. So rather than have a normal conversation like people do, we made it an episode! Jaahred asks me a bunch of questions about AI ethics and startups, investors, Fortune 500 companies, and more, and I tell him the unvarnished truths about where corporate America is in the AI ethics journey and what startup founders should and shouldn’t spend their time doing.
Jan 30, 2025 • 51min

The Peril of Principles in AI Ethics

From the best of season 1: The hospital faced an ethical question: should we deploy robots to help with elder care? Let’s look at a standard list of AI ethics values: justice/fairness, privacy, transparency, accountability, explainability. But as Ami points out in our conversation, that standard list doesn’t include a core value at the hospital: the value of caring. That example illustrates one of his three objections to a view he calls “Principlism”: the view that we do AI ethics best by first defining our AI ethics values or principles at that very abstract level. The objection is that any such list will always be incomplete. Given Ami’s expertise in ethics and experience as a clinical ethicist, it was insightful to see how he gets ethics done on the ground and to hear his views on how organizations should approach ethics more generally.
Jan 23, 2025 • 50min

Innovation Hype and Why We Should Wait on AI Regulation

Lee Vinsel, a professor at Virginia Tech specializing in innovation and technology, shares valuable insights on the pitfalls of innovation hype. He argues that overhyping technology can cloud rational decision-making for leaders. Vinsel advocates for reactive regulation rather than proactive regulation, citing our difficulty in accurately predicting how technologies will be applied. He highlights the dual nature of emerging technologies and urges critical assessment over sensationalism, drawing parallels to historical regulatory responses and emphasizing a balanced approach to innovation and regulation.
Jan 16, 2025 • 16min

Businesses Are Afraid to Say “Ethics”

“Sustainability,” “purpose/mission/values-driven,” “human-centric design.” These are terms companies use so they don’t have to say “ethics.” My contention is that this is bad for business and bad for society at large. Our world, corporate and otherwise, is confronted with a growing mountain of ethical problems, spurred on by technologies that bring us fresh new ways of realizing our familiar ethical nightmares. These issues do not disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively.
Jan 9, 2025 • 54min

We’re Getting AI and Democracy Wrong

Ted Lechterman, UNESCO Chair in AI Ethics and Governance at IE University, dives deep into the intersection of AI and democracy. He argues that current discussions are too narrow, overlooking critical power dynamics affecting democratic engagement. The conversation challenges misconceptions about AI’s impact, drawing parallels between media influence and public opinion. Lechterman emphasizes the need for genuine stakeholder participation in AI development and advocates for a broader dialogue about ethics and inclusivity in our democratic processes.
Dec 19, 2024 • 53min

Why Copyright Challenges to AI Learning Will Fail and the Ethical Reasons Why They Shouldn’t

From the best of season 1. Well, I didn’t see this coming. Talking about legal and philosophical conceptions of copyright turns out to be intellectually fascinating and challenging. It involves not only concepts of property and theft but also of personhood and invasiveness. Could it be that training AI on an author’s or artist’s work violates their self? I talked about all this with Darren Hick, who has written a few books on the topic. I definitely didn’t think he was going to bring up Hegel.
Dec 12, 2024 • 50min

Evolving AI Governance

My guest and I have been doing AI governance for businesses for a combined 17+ years. We started way before genAI was a big thing. But I’d say I’m more of a qualitative guy and he’s more of a quant. Nick Elprin is the CEO of an AI governance software company, after all. How have AI ethics and AI governance evolved over that time, and what does cutting-edge governance look like? Perhaps you’re about to find out…
Dec 5, 2024 • 54min

What’s Wrong With Loving an AI?

People, especially kids under 18, are forming emotional attachments to AI chatbots. At a minimum, this is…weird. Is it also unethical? Does it harm users? Is it, as my guest Robert Mahari argues, an affront to human dignity? Have a listen and find out.
