
Peter Railton on AI and Ethics
Philosophy Bites
Is There a Risk That Our Moral Thinking Will Atrophy?
Imagine a world in which we outsource some of our moral thinking. If AI gets good at modelling moral intuitions, why would we need to use our intuitions any more? And isn't there a risk that our capacity for making judgments on the fly could atrophy as a result? Yes. One way to think about the idea of learning here is that learning is a continuing process. No machine can know everything, so there will always be some bias from the amount or kind of data that's going in, and humans would be irresponsible if they resigned their role.