
Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere
Gradient Dissent: Conversations on AI
Transformers: A Multi-Layer Perceptron Network
I think Transformers are almost MLPs. So they do look like just a bunch of matmuls. They add in one more axis to do matmuls across, and that's the length of your sequence. But I think it's basically trying to cut as close to a massive MLP as you possibly can, because those saturate compute the best. What you want to avoid are tons of little ops: softmaxes, little activation functions, dropout layers. All of these little things break up those big matmuls.
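To make that concrete, below is a minimal sketch of a single transformer block in plain NumPy, written so the large matmuls stand out from the small ops Aidan mentions. The shapes and names (seq_len, d_model, w_qkv, and so on) are illustrative assumptions, not Cohere's implementation.

# Minimal, illustrative transformer block: a few big matmuls interrupted
# by small elementwise ops (softmax, ReLU). Names and shapes are assumed
# for the example only.

import numpy as np

def softmax(x, axis=-1):
    # One of the "little ops" from the quote: cheap, but it breaks up
    # the chain of large matrix multiplications.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, params):
    """x: (seq_len, d_model) -- seq_len is the extra axis the matmuls run across."""
    # Big matmul 1: project the whole sequence to queries/keys/values at once.
    qkv = x @ params["w_qkv"]                    # (seq_len, 3 * d_model)
    q, k, v = np.split(qkv, 3, axis=-1)

    # Big matmul 2: attention scores across the sequence axis.
    scores = q @ k.T / np.sqrt(q.shape[-1])      # (seq_len, seq_len)
    attn = softmax(scores)                       # small op

    # Big matmul 3: mix values, then project back with a residual connection.
    ctx = attn @ v                               # (seq_len, d_model)
    x = x + ctx @ params["w_out"]

    # The MLP part: two more large matmuls with one small activation between.
    h = np.maximum(0.0, x @ params["w_ff1"])     # ReLU is another "little op"
    return x + h @ params["w_ff2"]

# Usage: random weights just to show the shapes involved.
seq_len, d_model, d_ff = 128, 512, 2048
rng = np.random.default_rng(0)
params = {
    "w_qkv": rng.normal(size=(d_model, 3 * d_model)) * 0.02,
    "w_out": rng.normal(size=(d_model, d_model)) * 0.02,
    "w_ff1": rng.normal(size=(d_model, d_ff)) * 0.02,
    "w_ff2": rng.normal(size=(d_ff, d_model)) * 0.02,
}
y = transformer_block(rng.normal(size=(seq_len, d_model)), params)
print(y.shape)  # (128, 512)

Nearly all of the floating-point work here sits in the five matrix multiplications; the softmax and ReLU are tiny by comparison, which is why they matter mainly as interruptions between the big matmuls rather than as compute.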