
Cameron Jones & Sean Trott: Understanding, Grounding, and Reference in LLMs

The Gradient: Perspectives on AI


Understanding the Internalist Account of Belief Sensitivity in Language Models

This chapter explores the internalist account of belief sensitivity in large language models (LLMs) and its implications for how we interpret their behavior. The speakers discuss the challenges of assessing representational structure and information encoding in LLMs, emphasizing the importance of careful task design. They also compare human cognition and LLMs on theory-of-mind and reasoning tasks, arguing that claims of comparable internal representations require human-like performance across a range of tasks rather than on any single one.
