

Should AI Be Trusted with Moral Decisions? | Cambridge & Oxford Philosophers Discuss
Sep 4, 2025
Philosophers Alex Carter of Cambridge and Amna Whiston of Oxford tackle the ethical quagmires of AI. They unravel the complexities of moral decision-making machines, questioning whether AI can ever truly bear responsibility. The conversation spans the trolley problem, the implications of AI for human relationships, and the pressing challenge of intellectual property in a tech-driven age. As automation rises, compelling questions emerge about potential dehumanization in our society and the balance between technology and essential human qualities.
AI Snips
AI As An Evolving Tool, Not A Person
- Amna Whiston defines AI as an evolving artificial tool that mimics certain features of human intelligence without true human capacities such as emotion or meaning.
- She warns that AI can be used and misused, and likens it to a "modern slave" whose existence is justified by human purposes.
AI Executes Intent, It Doesn't Intend
- Alex Carter frames AI as "thinking the noun, not the verb": it performs thinking-at-a-distance using code that humans provide.
- He stresses that AI acts as its programmers intend and cannot autonomously choose meanings or emotions.
Ethics Is How We Live, Not Puzzle Solving
- Alex reframes ethics as the question "how should we live?" rather than as a set of solvable puzzles like the trolley problem.
- He argues that many ethical dilemmas aren't meant to have a single correct answer; they require navigating conflicting duties.