LLMs, and deep learning models more generally, map tokens (or other discrete items) into a vector space in which the distance between points reflects their semantic similarity. The idea echoes Hebbian learning, where neurons that fire together wire together, with the connection strength between neurons playing the role of a distance. That distance is better thought of as topological than geometric.
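
A minimal sketch of the distance-as-similarity idea, using toy 3-dimensional vectors. The words and their coordinates are purely illustrative assumptions, not values from any trained model; real embeddings have hundreds or thousands of dimensions.

```python
from math import sqrt

# Toy "embeddings"; the values are illustrative, not from a trained model.
king  = [0.9, 0.8, 0.1]
queen = [0.85, 0.75, 0.2]
apple = [0.1, 0.2, 0.9]

def cosine(u, v):
    """Cosine similarity: near 1.0 when the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Semantically related words should land closer together than unrelated ones.
print(cosine(king, queen))
print(cosine(king, apple))
```

Cosine similarity, rather than raw Euclidean distance, is the usual measure here because it compares the direction of the vectors and ignores their magnitude.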