Gradient Dissent: Conversations on AI

Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems


CHAPTER

The Pain of TPUs in Machine Learning

GPT-J is typically trained on these A100s. And what happens is that puts pressure on the calculation done in the attention head. You're doing an analysis where you're understanding each gene, in this case, within the context of the entire genome. That combination of very long sequence lengths, which is the relevance window, the attention window, plus big parameter counts was brutally memory-intensive and caused the GPUs to buckle on the big NLP networks you care about.
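A rough back-of-the-envelope sketch (my illustration, not from the episode) of why the attention window drives memory: standard self-attention materializes a seq_len × seq_len score matrix per head, so memory grows quadratically with the sequence length. The head count and precision below are assumptions for illustration.

```python
# Hypothetical illustration: memory for raw attention score matrices in one
# layer of standard self-attention, assuming fp16 (2 bytes per element).
# Each head builds a seq_len x seq_len matrix, so cost is quadratic in the
# attention window.

def attention_score_bytes(seq_len, n_heads, bytes_per_elem=2):
    """Bytes for one layer's attention score matrices across all heads."""
    return n_heads * seq_len * seq_len * bytes_per_elem

# Assumed GPT-J-like config: 16 heads per layer.
for seq_len in (2_048, 32_768):
    gib = attention_score_bytes(seq_len, n_heads=16) / 2**30
    print(f"seq_len={seq_len:>6}: {gib:8.2f} GiB per layer")
```

At a 2,048-token window this is a fraction of a GiB per layer, but a genome-scale window of 32,768 tokens pushes it to tens of GiB per layer, which is exactly the kind of pressure on GPU memory described above.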
