
35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization
AXRP - the AI X-risk Research Podcast
Exploring Beliefs in Language Models
This chapter examines the nature of beliefs in large language models (LLMs) and methodologies for detecting and visualizing them. It discusses the philosophical implications of attributing beliefs to AI systems, contrasting truth-seeking ideals with pragmatic utility, and explores belief coherence, evidence acquisition, and the challenges of belief revision as language models evolve.