Language Understanding and LLMs with Christopher Manning - #686

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Exploring Attention Mechanisms in Transformers and Contextual Influence

This chapter explores the key differences between transformers and traditional sequence models such as LSTMs, focusing on the attention mechanism that sets transformers apart. Whereas an LSTM processes tokens one at a time and must carry information forward through a recurrent state, attention lets every token score its relevance against every other token directly, so nearby and distant elements in a sequence are treated on equal footing. The chapter also highlights the significance of large context windows in training.
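As a rough illustration of the mechanism the chapter describes, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a transformer. The function name, toy shapes, and self-attention setup are illustrative assumptions, not material from the episode.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key, so a token can weight a distant
    position as easily as an adjacent one (illustrative sketch)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_len, seq_len) relevance scores
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of value vectors

# Toy example: 5 tokens with 8-dimensional representations
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)  # (5, 8)
```

Note how nothing in the score computation depends on the distance between two positions; this is the contrast with an LSTM's sequential state that the chapter draws out.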
