AXRP - the AI X-risk Research Podcast

35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization


CHAPTER

Challenging Assumptions in Model Knowledge Editing

This chapter examines a research paper that challenges the conventional view of how knowledge is stored across neural network layers and what that implies for model editing methods. Its unexpected findings suggest a more intricate relationship between where knowledge resides and editing efficacy than previously believed.

