
#107 – Chris Olah on what the hell is going on inside neural networks

80,000 Hours Podcast

CHAPTER

Navigating Neural Network Interpretability

This chapter discusses interpretability in language models, exploring methods for analyzing linguistic features and how they relate to neural activations. The speakers highlight the growing community dedicated to 'BERTology', the challenges posed by polysemanticity in neural circuits, and the importance of interpretability for AI safety and understanding. They close with a hopeful vision for AI's future and a call for diverse research agendas to advance interpretability and manage the risks of advanced systems.
