Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)

Machine Learning Street Talk (MLST)

Out-of-Context Generalization in Language Models

This chapter discusses a research paper on out-of-context generalization in language models. In the study's experiment, a model fine-tuned only on distances involving an unnamed city is able to deduce that city's identity, showing that it can draw on latent knowledge to reason rather than merely memorize.
