The Symbol Grounding Problem in AI
The idea that this is a pure axioms-to-logical-conclusions process isn't actually true, because the argument either lands as true or not based on its correlation with experience. And what's not completely clear is that presumably there's some type of experiential evidence a person could have that would change their mind. People do change their minds, and presumably it's for reasons. It could be just an honest intellectual mistake, a conclusion they reach when they're not quite thinking well, but I don't know exactly what that is. But I keep hoping that we can find it.