Why Isn't the Explainability Model a Research Project?
We have a very computational-linguistics, grammar-based model that can take the SQL and compositionally produce the natural language explanation step by step.

And just to make sure I understand: the explainer is explaining the result, not the process for obtaining the result. But I don't know, does it even matter?

Well, we think so, because an end user normally just cares about the final answer. So you can think of SQL as some sort of intermediate representation. But this is an intermediate representation that actually could potentially be aligned with how humans think about the problem. This goes back to what I mentioned at the beginning, like the machine's representation of the data of the
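To make the idea of a compositional, grammar-based explainer concrete, here is a minimal sketch, not the speakers' actual system: each SQL clause gets its own rendering rule, and the per-clause phrases compose into one explanation of what the query returns (the result, not the execution process). The query structure and rule set are hypothetical illustrations.

```python
# Hypothetical sketch of a compositional SQL-to-English explainer:
# each clause of a toy SQL AST is rendered by its own rule,
# and the phrases are composed into a single explanation.

def explain_sql(query: dict) -> str:
    """Render a toy SQL query (as a dict) as a natural-language explanation."""
    parts = [f"find the {', '.join(query['select'])} of rows in {query['from']}"]
    if "where" in query:
        col, op, val = query["where"]
        ops = {"=": "equals", ">": "is greater than", "<": "is less than"}
        parts.append(f"where {col} {ops[op]} {val}")
    if "order_by" in query:
        parts.append(f"sorted by {query['order_by']}")
    # The composed sentence describes the *result* the query computes,
    # not how a database engine would execute it.
    return ", ".join(parts) + "."

print(explain_sql({
    "select": ["name"],
    "from": "employees",
    "where": ("salary", ">", 50000),
    "order_by": "name",
}))
```

Because each rule handles one clause independently, the explanation grows step by step with the query, which is what makes the grammar-based approach compositional rather than a black-box paraphrase.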