LessWrong (Curated & Popular)

“You can remove GPT2’s LayerNorm by fine-tuning for an hour” by StefanHex

Exploring Modifications to GPT-2's Layer Normalization

This chapter examines the effect of removing Layer Normalization from GPT-2, focusing on how the modification affects model performance. It also covers theoretical insights and empirical results on generalization and on stability during training and inference.
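The titular claim, that GPT-2's LayerNorm can be removed after a short fine-tune, rests on an intuition: once activations settle at a roughly fixed scale, LayerNorm acts like a fixed linear map that neighbouring weight matrices can absorb. A minimal pure-Python sketch of that intuition (the `layernorm` helper and toy values are illustrative, not taken from the post):

```python
import math

def layernorm(x, gamma, beta, eps=1e-5):
    """LayerNorm over a single vector: normalize, then scale and shift."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [(v - mean) / math.sqrt(var + eps) * g + b
            for v, g, b in zip(x, gamma, beta)]

# Zero-mean toy activations; gamma=1, beta=0 (identity affine parameters).
x = [-2.0, 0.0, 2.0]
std = math.sqrt(sum(v * v for v in x) / len(x))  # std of the zero-mean input
out = layernorm(x, [1.0] * len(x), [0.0] * len(x))

# For inputs at a fixed scale, LayerNorm reduces to division by that scale --
# a linear map the surrounding layers could absorb, which is one way to see
# why a short fine-tune can compensate for deleting LayerNorm entirely.
print([round(v, 4) for v in out])  # approximately [v / std for v in x]
```

In the real model the activation scale varies across positions and layers, which is exactly the gap the hour of fine-tuning is there to close.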
