Raphaël Millière on large language models

The Sentience Institute Podcast

CHAPTER

The Power of Attention in Language Models

In NLP, words are represented as vectors in a high-dimensional vector space. These vectors can capture some aspects of the meaning, and potentially the syntactic roles, of words. You can then look at the distance between two word vectors to get some insight into the semantic distance between those two words. So people decided that working with these word embedding models could be very useful for machine learning.
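
To make the idea concrete, here is a minimal sketch of comparing word vectors by their distance in the vector space, using cosine similarity. The toy 3-dimensional vectors and the `cosine_similarity` helper are illustrative assumptions, not anything from the episode; real embeddings have hundreds of dimensions and are learned from data.

```python
import numpy as np

# Toy word vectors (values made up purely for illustration;
# real embeddings are learned and much higher-dimensional).
embeddings = {
    "cat": np.array([0.8, 0.1, 0.3]),
    "dog": np.array([0.7, 0.2, 0.35]),
    "car": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words end up closer together in the space.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # relatively high
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # lower
```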
