Encoding and decoding semantic representations with Alexander Huth

The Language Neuroscience Podcast

Transformers vs RNNs: why GPT-style models help

Huth contrasts recurrent networks with transformer/GPT architectures, highlighting how self-attention gives transformers a far larger usable context than the fixed-size hidden state of an RNN.
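To make that contrast concrete, here is a minimal NumPy sketch (illustrative only, not code from the episode; all weights, sizes, and names are toy assumptions): an RNN must compress the entire past into one fixed-size hidden state, while GPT-style causal self-attention lets every position read every earlier position directly.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                        # toy sequence length and embedding size
x = rng.normal(size=(T, d))        # hypothetical token embeddings

# --- RNN view: the past reaches step t only through one vector h ---
W_h = rng.normal(size=(d, d)) * 0.1
W_x = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(T):
    h = np.tanh(h @ W_h + x[t] @ W_x)  # h is the entire "context"

# --- Transformer view: causal self-attention over all earlier tokens ---
W_q = rng.normal(size=(d, d)) * 0.1
W_k = rng.normal(size=(d, d)) * 0.1
W_v = rng.normal(size=(d, d)) * 0.1
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)
mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # GPT-style: no peeking ahead
scores[mask] = -np.inf
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ V                     # row t mixes all visible past tokens
```

The capacity difference is visible in `attn`: row t carries explicit weights over every earlier token, whereas the RNN's only record of the past is the single vector `h`.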
