Localizing and Editing Knowledge in LLMs with Peter Hase - #679

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Understanding Neural Network Interpretability

This chapter examines why interpretability matters for neural networks, focusing on how knowledge is stored and accessed within models. The discussion covers methods for model editing, the question of whether localization informs editing, and techniques such as causal tracing for attributing model behavior to specific components. It also considers fine-tuning approaches and their implications for model integrity and privacy.
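The chapter names causal tracing without walking through it. As a rough illustration of the idea (corrupt part of the input, then restore individual clean hidden states and measure how much of the original prediction returns), here is a minimal Python sketch. The model ("gpt2"), prompt, subject token positions, and noise scale are illustrative assumptions, not the setup used in the episode or in Peter Hase's work.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

torch.manual_seed(0)
tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The Eiffel Tower is located in the city of"
ids = tok(prompt, return_tensors="pt").input_ids
# Token positions covering the subject ("Eiffel Tower") -- assumed here; check the tokenization.
subject_pos = [1, 2, 3]

with torch.no_grad():
    # 1. Clean run: cache every transformer block's output and note the predicted next token.
    cache = {}
    def save(layer):
        def hook(module, inputs, output):
            cache[layer] = output[0].detach().clone()
        return hook
    handles = [model.transformer.h[l].register_forward_hook(save(l))
               for l in range(model.config.n_layer)]
    target = model(ids).logits[0, -1].argmax().item()
    for h in handles:
        h.remove()

    # 2. Corrupted run: add Gaussian noise to the subject-token embeddings.
    embeds = model.transformer.wte(ids).clone()
    embeds[0, subject_pos] += 0.5 * torch.randn_like(embeds[0, subject_pos])
    corrupt_p = torch.softmax(model(inputs_embeds=embeds).logits[0, -1], -1)[target].item()

    # 3. Patched runs: restore the clean hidden state at one layer (subject positions only)
    #    and measure how much of the original prediction's probability comes back.
    for layer in range(model.config.n_layer):
        def restore(module, inputs, output, layer=layer):
            output[0][0, subject_pos] = cache[layer][0, subject_pos]
            return output
        handle = model.transformer.h[layer].register_forward_hook(restore)
        patched_p = torch.softmax(model(inputs_embeds=embeds).logits[0, -1], -1)[target].item()
        handle.remove()
        print(f"layer {layer:2d}: corrupted p={corrupt_p:.3f} -> restored p={patched_p:.3f}")

Layers and positions where restoring the clean state recovers most of the lost probability are the ones a causal-tracing analysis would flag as carrying the relevant information, which is the kind of localization result whose usefulness for editing the episode discusses.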
