It feels pretty intuitive that, in a single pass, a transformer can't really think through a compositional task step by step; it's more like jumping to a conclusion on instinct. They have theoretical results as well as empirical ones, so again we are accumulating more understanding of these kinds of models, transformers and language models. This kind of result on compositional tasks is very relevant to something like SwiftSage, where we create an agent that works iteratively instead of just producing one output.