LessWrong (Curated & Popular)

What’s up with LLMs representing XORs of arbitrary features?

Introduction

This chapter explores the claim that LLMs can represent XORs of arbitrary features, and what that claim implies for AI safety research and interpretability work.
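The claim can be illustrated with a toy probing experiment (a hypothetical sketch, not code from the episode): XOR of two raw bits is famously not linearly separable, but if a model's activations contain an explicit direction for `a XOR b`, a simple linear probe recovers it. The directions, dimensions, and noise scale below are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical setup: synthetic "activations" in a d-dimensional space that
# linearly encode feature a, feature b, AND their XOR along random directions.
rng = np.random.default_rng(0)
d, n = 64, 2000                              # assumed hidden dim / sample count
v_a, v_b, v_xor = rng.normal(size=(3, d))    # random feature directions

a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
y = a ^ b                                    # the XOR label we probe for

# Activations = base features + an explicit XOR direction + small noise
X = (a[:, None] * v_a + b[:, None] * v_b
     + y[:, None] * v_xor + 0.1 * rng.normal(size=(n, d)))

# Linear probe: least-squares fit against {-1, +1} targets
w, *_ = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)
acc = np.mean((X @ w > 0) == (y == 1))
print(f"probe accuracy: {acc:.2f}")          # near 1.0 when v_xor is present
```

If the `v_xor` term is dropped from `X`, the same probe falls to chance, which is what makes the empirical finding that such XOR directions exist for arbitrary feature pairs surprising.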
