
Kurt Gray on human-robot interaction and mind perception

The Sentience Institute Podcast


AI Decision Making: Why People Don't Like It

People generally don't like AIs making moral decisions because they think that AI, again, doesn't have the capacity for experience. And so people want those making moral decisions to care about the value of other human lives, and robots, at least for now, do not. But there is some promise for AIs being less biased, right? If you program them right with unbiased data, assuming that exists, they'll be less biased. So I don't think we should scrap AI decision-making systems because they're biased. I think they have a lot of promise, and we should take that promise seriously.

