35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization

AXRP - the AI X-risk Research Podcast

CHAPTER

Understanding Residual Layers and Information Flow in Language Models

This chapter examines the role of residual layers in transformer architectures and how they shape information flow during a forward pass. It also discusses what layer-swapping experiments suggest about that flow, and ongoing research aimed at improving model-editing techniques.
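To make the residual-stream idea concrete, here is a minimal sketch (not code from the episode) of a pre-norm transformer block in PyTorch; the class name, dimensions, and layer-ordering experiment are all illustrative assumptions. Because each sublayer adds its output to the stream rather than replacing it, the input survives every layer unchanged, which is what makes layer-swapping experiments like those discussed in the chapter possible at all.

```python
# Minimal illustrative sketch: a pre-norm transformer block and a toy
# layer-swapping experiment. Names and sizes are hypothetical.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Each sublayer only *adds* to the residual stream; the input x
        # passes through untouched alongside the sublayer's contribution.
        h = self.ln1(x)
        a, _ = self.attn(h, h, h)
        x = x + a
        x = x + self.mlp(self.ln2(x))
        return x

blocks = nn.ModuleList([Block() for _ in range(4)])
x = torch.randn(1, 8, 64)  # (batch, sequence, d_model)

def run(order):
    h = x
    for i in order:
        h = blocks[i](h)
    return h

out_normal = run([0, 1, 2, 3])
out_swapped = run([0, 2, 1, 3])  # swap the two middle layers
# The swap perturbs the output rather than destroying it outright,
# since every layer reads from and writes to the same residual stream.
print((out_normal - out_swapped).abs().mean())
```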
