Is There a Risk That Our Moral Thinking Will Atrophy?
The question imagines a world in which we outsource some of our moral thinking. If AI gets good at modelling moral intuitions, why would we need to use our own intuitions any more? And isn't there a risk that our capacity for making judgments on the fly could atrophy as a result?

Yes. One way to think about the idea of learning here is that learning is a continuing process. No machine can know everything, so there will always be some bias from the amount or kind of data that's going in, and humans would be irresponsible if they resigned their role.