Hattie Zhou: Lottery Tickets and Algorithmic Reasoning in LLMs

The Gradient: Perspectives on AI

NOTE

Exploring Masks in Probing Pre-trained Models

Probing pre-trained models with masks, to identify which parts of a model are responsible for specific characteristics of the input data, connects to earlier work on steering models and intervening on the concepts they rely on. Concept bottleneck models, and more recent studies that guide pre-trained large language models in particular directions via weight differences, have shown promising results in this vein. Using masks to intervene on and steer pre-trained models toward desirable behavior, as in the Supermasks in Superposition paper, can also address challenges such as catastrophic forgetting in continual learning.
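
To make the mask-based steering concrete, here is a minimal, illustrative PyTorch sketch of the supermask idea: the pre-trained weights of a layer stay frozen, and training only learns a binary mask (via per-weight scores and a straight-through estimator) that selects which weights remain active. The class name `MaskedLinear`, the top-k scoring rule, and the hyperparameters are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Frozen pre-trained linear layer steered only by a learned binary weight mask.

    Illustrative sketch of the supermask idea; names and details are assumptions,
    not the Supermasks in Superposition reference code.
    """

    def __init__(self, pretrained: nn.Linear, sparsity: float = 0.5):
        super().__init__()
        # Freeze the pre-trained weights and bias; only the mask scores are trained.
        self.weight = nn.Parameter(pretrained.weight.detach().clone(), requires_grad=False)
        self.bias = nn.Parameter(pretrained.bias.detach().clone(), requires_grad=False)
        # One learnable score per weight; the mask keeps the highest-scoring weights.
        self.scores = nn.Parameter(torch.randn_like(self.weight) * 0.01)
        self.sparsity = sparsity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Keep the top (1 - sparsity) fraction of weights by score, zero the rest.
        n = self.scores.numel()
        k = max(1, int((1.0 - self.sparsity) * n))
        threshold = self.scores.flatten().kthvalue(n - k + 1).values
        mask = (self.scores >= threshold).float()
        # Straight-through estimator: gradients flow to the scores, not the hard mask.
        mask = mask + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask, self.bias)


# Usage: wrap an existing layer (a randomly initialised stand-in here) and train
# only the mask scores while the underlying weights stay fixed.
layer = MaskedLinear(nn.Linear(16, 8))
out = layer(torch.randn(4, 16))
```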
