
Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere

Gradient Dissent: Conversations on AI


Transformers: Almost a Multi-Layer Perceptron

I think Transformers are almost MLPs. So they do look like just a bunch of matmuls. They add in one more axis to do matmuls across, and that's the length of your sequence. But I think it's basically trying to cut as close to a massive MLP as you possibly can, because those saturate compute the best. What you want to avoid are tons of little ops: softmaxes, little activation functions, dropout layers. All of these little things break up those big matmuls.
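To make the point concrete, here is a minimal sketch of a single simplified transformer block in NumPy. All names, shapes, and the single-head, no-norm, no-dropout structure are illustrative assumptions, not the speaker's code; the sketch just shows where the FLOPs live: a few big matmuls, with small elementwise ops (softmax, ReLU) interleaved between them.

```python
import numpy as np

def softmax(x, axis=-1):
    # A "little op": cheap reduction/elementwise work that interrupts
    # the big matmuls and tends not to saturate accelerator compute.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2):
    """One simplified transformer block (single head, no norm/dropout).

    x: (seq_len, d_model). Nearly all FLOPs are in the matmuls below;
    attention is where the "one more axis" (sequence length) comes in.
    """
    # Big matmuls: projections over the model dimension.
    q, k, v = x @ Wq, x @ Wk, x @ Wv

    # Attention adds matmuls *across the sequence axis*.
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (seq_len, seq_len)
    attn = softmax(scores) @ v               # little op, then a matmul
    attn = attn @ Wo                         # output projection

    # The MLP part: two more big matmuls with one small activation between.
    h = np.maximum(0.0, (x + attn) @ W1)     # ReLU is another "little op"
    return (x + attn) + h @ W2

# Tiny usage example with assumed sizes.
rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 8, 16, 64
x = rng.normal(size=(seq_len, d_model))
Ws = [rng.normal(size=s) * 0.1 for s in
      [(d_model, d_model)] * 4 + [(d_model, d_ff), (d_ff, d_model)]]
print(transformer_block(x, *Ws).shape)  # (8, 16)
```

Counting operations in this sketch, six of the eight tensor ops are matrix multiplications; only the softmax and the ReLU are the small ops the quote says you want to minimize.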
