
Artificial Intelligence & Large Language Models: Oxford Lecture — #35

Manifold

CHAPTER

The Transformer Architecture: How Word Order Matters

This is based on a paper published by researchers at Google Brain back in 2017. They introduced an architecture that could look at large chunks of training data, i.e. human-generated text, and extract an embedding model from it, and, more than that, do so in an automated way. It's not perfect, and I'm about to get into why, but it was a little bit shocking to me to realize that, at least to a first approximation, one could think of the embedding space as a vector space. So we've already talked a little bit about how this thing called the embedding model is built; we're going to discuss that in more detail.
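To make that "vector space" picture concrete, here is a minimal sketch of the classic analogy-arithmetic test. The words, dimensions, and numeric values below are hypothetical toys invented for illustration; a real embedding model would learn such vectors from text:

```python
import numpy as np

# Hypothetical toy embedding vectors (a trained model would learn these).
embeddings = {
    "king":  np.array([0.8, 0.65, 0.1]),
    "queen": np.array([0.8, 0.05, 0.7]),
    "man":   np.array([0.2, 0.7, 0.05]),
    "woman": np.array([0.2, 0.1, 0.65]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# If the space behaves approximately like a vector space,
# king - man + woman should land near queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max(embeddings, key=lambda w: cosine(target, embeddings[w]))
print(best)  # "queen" with these toy vectors
```

With real learned embeddings the result is only approximate, which is part of why, as the lecture says, the vector-space view holds at first approximation rather than exactly.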
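On the chapter's question of how word order matters: the 2017 paper's architecture has no built-in notion of sequence order, so it adds fixed sinusoidal positional encodings to the token embeddings. Below is a short sketch of those encodings using the paper's published formula; the sequence length and model dimension are chosen arbitrarily for the example:

```python
import numpy as np

def sinusoidal_positions(seq_len, d_model):
    """Fixed sinusoidal positional encodings from the 2017 paper:
    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    """
    positions = np.arange(seq_len)[:, None]   # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]  # shape (1, d_model/2)
    angles = positions / (10000 ** (dims / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions get cosine
    return pe

# Token embeddings alone are order-blind; adding this encoding
# gives every position a distinct, smoothly varying signature.
pe = sinusoidal_positions(seq_len=10, d_model=8)
print(pe.shape)  # (10, 8)
```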
