Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Techniques to Reduce Transformer State Size

Sam asks about alternatives; Jacob analyzes windowed attention, grouped-query attention (GQA), latent attention, and layer hybrids as state-size reductions.
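
The techniques named in this chapter all shrink the per-token key/value state that attention must keep in memory. Below is a minimal back-of-the-envelope sketch, not taken from the episode: the layer count, head count, head dimension, window size, and latent dimension are assumed purely for illustration of how each approach cuts KV-cache size.

```python
# Rough comparison of attention "state" (KV-cache) size for the techniques
# discussed in this chapter. All model dimensions below are assumed, not
# taken from the episode.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, cached_tokens, bytes_per_elem=2):
    """Bytes held in the KV cache: keys + values for every cached token."""
    return 2 * n_layers * n_kv_heads * head_dim * cached_tokens * bytes_per_elem

# Assumed baseline: 32 layers, 32 heads, head_dim 128, fp16, 128k-token context.
LAYERS, HEADS, HEAD_DIM, CONTEXT = 32, 32, 128, 128_000

full_mha = kv_cache_bytes(LAYERS, HEADS, HEAD_DIM, CONTEXT)               # one K/V head per query head
windowed = kv_cache_bytes(LAYERS, HEADS, HEAD_DIM, min(CONTEXT, 4096))    # sliding window of 4k tokens
gqa      = kv_cache_bytes(LAYERS, 8, HEAD_DIM, CONTEXT)                   # 8 shared K/V heads (GQA)
# Latent attention caches one compressed vector per token per layer instead of
# full per-head K/V; modeled here as an assumed 512-dim fp16 latent.
latent   = LAYERS * 512 * CONTEXT * 2

for name, size in [("full attention", full_mha), ("windowed (4k)", windowed),
                   ("GQA (8 kv heads)", gqa), ("latent (512-d)", latent)]:
    print(f"{name:>18}: {size / 2**30:6.1f} GiB")
```

With these assumed dimensions, full attention holds roughly 62 GiB of K/V state at 128k tokens, while windowing caps it near 2 GiB, GQA cuts it to about a quarter, and a 512-dimensional latent lands around 4 GiB; the point is the scaling behavior, not the specific numbers.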
