Ethical Machines

Season finale: A New Ethics for AI Ethics?

Jul 31, 2025
Wendell Wallach, a prominent scholar at Yale's Bioethics Center, shares his extensive insights on AI ethics. He critiques the prevalent concept of 'value alignment' and traditional moral theories, arguing they fall short in the AI domain. Wallach introduces fresh ethical concepts like trade-off ethics and silent ethics, advocating for a universal moral language. He emphasizes the critical role of human responsibility in AI decision-making, especially regarding lethal technologies, making a strong case for a human-centric approach in our technological future.
INSIGHT

Value Alignment Shrunk A Rich Field

  • "Value alignment" narrowed a rich philosophical project into a technical framing favored by AI researchers.
  • Wendell Wallach argues this shift drained nuance from earlier machine ethics work and left gaps unsolved.
INSIGHT

Classical Theories Don’t Offer Algorithms

  • Traditional ethical theories such as deontology and consequentialism lack operational algorithms for difficult real-world dilemmas.
  • Wallach argues that neither provides clear decision procedures, whether for machines or for complex human dilemmas today.
ADVICE

Weigh Options And Ameliorate Harms

  • Use trade-off ethics: list the options, weigh their benefits and harms, and actively reduce harms before choosing.
  • Factor the amelioration of harms into decisions rather than simply maximizing aggregate good.