Is There Need to Be an Element of Explainability in These Kinds of Models?
I think most of behavioral science is basically realizing that our own explanations of our actions are quite poor, right? Like, you kind of do something, and you come up with some, like, back narrative for why you did it. So I do feel like the baseline we should shoot for is getting to a better place than where we are with humans in terms of being able to explain why decisions were made. But on the other hand, let's not kid ourselves: anything like even writing the simplest program is a super complicated task. You need to know this whole library of all these different functions. And I think that to even fit in our brain exactly the, like, ok...