3min chapter

Kurt Gray on human-robot interaction and mind perception

The Sentience Institute Podcast

CHAPTER

AI Decision Making: Why People Don't Like It

People generally don't like AIs making moral decisions because they think that AI, again, lacks the capacity for experience. People want those making moral decisions to care about the value of other human lives, and robots, at least for now, do not. But there is some promise in AIs being less biased: if you program them with unbiased data, assuming that exists, they'll be less biased. So I don't think we should scrap AI decision-making systems because they're biased. I think they have a lot of promise, and we should take that promise seriously.
