
Ethical Machines

Latest episodes

Mar 27, 2025 • 48min

The Power of Technologists

Behind all those algorithms are the people who create them and embed them into our lives. How did they get that power? What should they do with it? What are their responsibilities? This and more with my guest Chris Wiggins, Chief Data Scientist at the New York Times, Associate Professor of Applied Mathematics at Columbia University, and author of the book "How Data Happened: A History from the Age of Reason to the Age of Algorithms". Originally aired in season one.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Mar 20, 2025 • 49min

AI is Uncontrollable

People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individual person. This is a fascinating discussion with Marcus Arvan, professor of philosophy at The University of Tampa and author of three books on ethics and political theory. You might just leave this conversation wondering if giving more and more capabilities to AI is a huge, potentially life-threatening mistake.
Mar 13, 2025 • 48min

A Culture of Online Manipulation

Developers are constantly testing how online users react to their designs. Will they stay longer on the site because of this shade of blue? Will they get depressed if we show them depressing social media posts? What happens if we intentionally mismatch people on our dating website? When it comes to shades of blue, perhaps that’s not a big deal. But when it comes to mental health and deceiving people? Now we’re in ethically choppy waters. My discussion today is with Cennydd Bowles, Managing Director of NowNext, where he helps organizations develop ethically sound products. He’s also the author of a book called “Future Ethics.” He argues that A/B testing on people is often ethically wrong and creates a culture among developers of a willingness to manipulate people. A great conversation ranging from the ethics of experimentation to marketing and even to capitalism.
Mar 7, 2025 • 40min

AI Risk Mitigation is Insanely Complex

There’s a picture in our heads that’s overly simplistic, and the result is that we don't think clearly about AI risks. The simplistic picture is that a team develops AI and then it gets used. The truth, the more complex picture, is that a thousand hands touch that AI before it ever becomes a product. This means that risk identification and mitigation is spread across a very complex supply chain. My guest, Jason Stanley, is at the forefront of research and application when it comes to managing all this complexity.
Feb 27, 2025 • 46min

Did You Say "Quantum" Computer?

Brian Linehan, founder of the Quantum Strategy Institute, sheds light on the intriguing world of quantum computing. He explains the groundbreaking advancements from Microsoft, emphasizing how quantum technology could revolutionize industries like finance and logistics. Brian simplifies complex concepts, discussing its implications for cybersecurity and the urgent need for quantum-resistant encryption. He also highlights the increasing demand for skilled professionals in the field, making quantum computing an exciting frontier to explore.
Feb 20, 2025 • 42min

What Psychologists Say About AI Relationships

Every specialist in anything thinks they should have a seat at the AI ethics table. I’m usually skeptical. But psychologist Madeline Reinecke, Ph.D., did a great job defending her view that – you guessed it – psychologists should have a seat at the AI ethics table. Our conversation ranged from the role of psychologists in creating AI that supports healthy human relationships, to when children start and stop attributing sentience to robots, to loving relationships with AI, to the threat of AI-induced self-absorption. I guess I need to have more psychologists on the show.
Feb 13, 2025 • 30min

Am I Wrong About Agentic AI?

A fun format for this episode. In Part I, I talk about how I see agentic AI unfolding and what ethical, social, and political risks come with it. In Part II, Eric Corriel, digital strategist at the School of Visual Arts and a close friend, tells me why he thinks I’m wrong. Debate ensues.
Feb 6, 2025 • 54min

What Do VCs Want to Know About AI Ethics?

Jaahred Thomas is a VC friend of mine who wanted to talk about the evolving landscape of AI ethics in startups and business generally. So rather than have a normal conversation like people do, we made it an episode! Jaahred asks me a bunch of questions about AI ethics and startups, investors, Fortune 500 companies, and more, and I tell him the unvarnished truths about where corporate America is in the AI ethics journey and what startup founders should and shouldn’t spend their time doing.
Jan 30, 2025 • 51min

The Peril of Principles in AI Ethics

From the best of season 1: The hospital faced an ethical question: should we deploy robots to help with elder care?

Let’s look at a standard list of AI ethics values: justice/fairness, privacy, transparency, accountability, explainability. But as Ami points out in our conversation, that standard list doesn’t include a core value at the hospital: the value of caring.

And that’s one example of one of his three objections to a view he calls “principlism.” Principlism is the view that we do AI ethics best by first defining our AI ethics values or principles at a very abstract level. This objection is that any such list will always be incomplete.

Given Ami’s expertise in ethics and experience as a clinical ethicist, it was insightful to see how he gets ethics done on the ground and to hear his views on how organizations should approach ethics more generally.
Jan 23, 2025 • 50min

Innovation Hype and Why We Should Wait on AI Regulation

Lee Vinsel, a professor at Virginia Tech specializing in innovation and technology, shares valuable insights on the pitfalls of innovation hype. He argues that overhyping technology can cloud rational decision-making for leaders. Vinsel advocates for reactive regulations instead of proactive ones, citing our difficulty in predicting tech applications accurately. He highlights the dual nature of emerging technologies, urging critical assessments over sensationalism, while drawing parallels to historical regulatory responses, emphasizing a balanced approach to innovation and regulation.