How Does the Manifold of All Natural Language Change During Training?
So we have this cube of all possible data, and our manifold describes some, like, oddly shaped circle in this cube. And you build a neural network that can basically output any point in the cube initially. And now you train it, and you're basically asking, OK, how do the things that it outputs change during training? Would they be more clustered, or would they cover more of the cube? Well, so, yeah, here's the question: is that how we should think about it?
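The geometric picture above can be made concrete with a toy sketch. This is purely illustrative and not from the episode: the "manifold" is a hypothetical circle embedded in the unit cube, the "untrained network" is a uniform sampler over the cube, and "training" is caricatured as pulling each output toward its nearest manifold point, so you can watch the outputs collapse from covering the cube to clustering on the manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data manifold: a circle ("oddly shaped loop") embedded
# in the unit cube [0, 1]^3, standing in for the manifold of real data.
t = rng.uniform(0, 2 * np.pi, size=2000)
manifold = np.stack([0.5 + 0.3 * np.cos(t),
                     0.5 + 0.3 * np.sin(t),
                     np.full_like(t, 0.5)], axis=1)

def mean_dist_to_manifold(points):
    """Average distance from each point to its nearest manifold sample —
    a crude proxy for how spread out the outputs are relative to the data."""
    d = np.linalg.norm(points[:, None, :] - manifold[None, :, :], axis=2)
    return d.min(axis=1).mean()

# "Untrained network": outputs cover the whole cube uniformly.
outputs = rng.uniform(0, 1, size=(500, 3))
print(f"before training: mean distance = {mean_dist_to_manifold(outputs):.3f}")

# Caricature of training: each step moves every output halfway toward its
# nearest manifold point. (Real training adjusts an output *distribution*
# via gradients, but the geometric effect — collapse from the cube onto
# the manifold — is what this illustrates.)
for step in range(5):
    nearest_idx = np.linalg.norm(
        outputs[:, None, :] - manifold[None, :, :], axis=2).argmin(axis=1)
    outputs += 0.5 * (manifold[nearest_idx] - outputs)
    print(f"step {step}: mean distance = {mean_dist_to_manifold(outputs):.3f}")
```

The printed mean distance shrinks each step: the outputs become more clustered around the manifold rather than covering more of the cube, which is one way to phrase the question being asked in the episode.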