650: SparseGPT: Remove 100 Billion Parameters but Retain 100% Accuracy
Feb 3, 2023
07:47
Episode notes
SparseGPT is a noteworthy one-shot pruning technique that can remove roughly half the parameters of large language models at GPT-3 scale with negligible loss of accuracy. In this episode, Jon Krohn provides an overview of this development and explains its commercial and environmental implications.
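To make the idea of one-shot pruning concrete, here is a minimal sketch. Note that SparseGPT's actual method is far more sophisticated (it solves a layer-wise weight-reconstruction problem using approximate second-order information); the toy example below uses simple magnitude pruning, a hypothetical `one_shot_magnitude_prune` helper, only to illustrate what it means to zero out half a weight matrix in a single pass with no retraining:

```python
import numpy as np

def one_shot_magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights in a single pass (no retraining)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Find the k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Prune a toy 64x64 weight matrix to 50% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = one_shot_magnitude_prune(w, sparsity=0.5)
print(f"sparsity: {np.mean(pruned == 0):.2f}")  # prints "sparsity: 0.50"
```

At GPT-3 scale (175 billion parameters), 50% sparsity corresponds to the roughly 100 billion weights the episode title refers to; the appeal of a one-shot method is that this is done without the expensive retraining that earlier pruning approaches required.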