Tom Henighan
Member of technical staff at OpenAI, working on the safety team, and co-author of the paper "Scaling Laws for Autoregressive Generative Modeling". He completed his PhD in physics at Stanford.
Best podcasts with Tom Henighan
Ranked by the Snipd community
Nov 8, 2020 • 33min
OpenAI's "Scaling Laws for Autoregressive Generative Modeling"
Tom Henighan, a member of OpenAI's safety team and co-author of a groundbreaking paper on scaling laws in generative modeling, shares his insights on model performance. He discusses how scaling influences test loss in autoregressive models, revealing power-law behavior. He emphasizes the importance of balancing model size against compute budget, advocating for an optimal 'Goldilocks' range of model sizes. Tom also highlights the impact of transformer architectures and model pruning on generative capabilities, sparking excitement for future AI advancements.
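As context for the power-law claim discussed in the episode, here is a sketch of the functional form such scaling laws typically take (the specific symbols below are illustrative assumptions, not quoted from the episode):

L(x) = L_\infty + \left(\frac{x_0}{x}\right)^{\alpha_x}

where x stands for compute, dataset size, or parameter count, L_\infty is the irreducible loss the model approaches at infinite scale, and x_0 and \alpha_x are fitted constants. The 'Goldilocks' idea corresponds to choosing a model size that, for a fixed compute budget, sits near the minimum of this loss curve.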