Neel Nanda - Mechanistic Interpretability

Machine Learning Street Talk (MLST)

Cognitive Structures in Language Models (00:00)

This chapter discusses the cognitive structures of language models, focusing on the distinctive characteristics of transformer models and their parallels to evolutionary biology. It examines the Chomsky hierarchy, critiques the theoretical frameworks used to categorize the algorithms models can learn, and acknowledges the unexpected success of transformers at language processing.
