Kurt Gray on human-robot interaction and mind perception

The Sentience Institute Podcast

CHAPTER

AI Decision Making: Why People Don't Like It

People generally don't like AIs making moral decisions because they think that AI, again, doesn't have the capacity for experience. And so people want those making moral decisions to care about the value of other human lives, and robots, at least for now, do not. But there is some promise for AIs being less biased, right? If you program them with unbiased data, assuming that exists, they'll be less biased. So I don't think we should scrap, you know, AI decision-making systems because they're biased. I think they have a lot of promise, and we should take that promise seriously.

