Gradient Dissent: Conversations on AI

Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems

The Pain of TPUs in Machine Learning

GPT-J is typically trained on these A100s, and what happens is that puts pressure on the calculation done in the attention head. You're doing an analysis where you're understanding each gene, in this case, within the context of the entire genome. That combination of very long sequence lengths (which is the relevance window, the attention window) plus big parameter counts was brutally memory intensive and caused the GPUs to choke. It should be pretty darn close to push-button for the big NLP networks you care about.
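The memory pressure described here comes largely from the attention score matrix, which grows with the square of the sequence length. A minimal back-of-the-envelope sketch of that scaling, with illustrative shapes (the head count, batch size, and precision below are assumptions, not GPT-J's or Cerebras's actual configuration):

```python
# Rough, illustrative estimate of attention-score memory for ONE layer.
# Model shapes here are assumed for illustration only.

def attention_score_bytes(seq_len, n_heads, batch=1, bytes_per_elem=2):
    """Bytes for the seq_len x seq_len attention score matrix per layer.

    This term grows quadratically with the attention window, which is
    why very long sequences are brutally memory intensive.
    """
    return batch * n_heads * seq_len * seq_len * bytes_per_elem

# Widening the attention window 16x inflates this term 256x:
short = attention_score_bytes(seq_len=2_048, n_heads=16)   # fp16 scores
long = attention_score_bytes(seq_len=32_768, n_heads=16)
print(f"2k-token window:  {short / 2**30:.3f} GiB per layer")
print(f"32k-token window: {long / 2**30:.3f} GiB per layer")
```

Multiplied across dozens of layers (and combined with the parameters, gradients, and optimizer state of a big model), this quadratic term is what overwhelms a fixed-memory GPU at genome-scale context lengths.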

