

[HUMAN VOICE] "Meaning & Agency" by Abram Demski
Jan 7, 2024
Abram Demski, an AI alignment researcher and writer, clarifies concepts in AI alignment, focusing on optimization, reference, endorsement, and legitimacy. The episode explores the implications of agency as a natural phenomenon for AI risk analysis, and delves into naturalistic representation theorems, denotation versus connotation in language, and conditional endorsement and legitimacy. It also discusses the distinction between selection and control processes and their impact on trust and inner alignment.
Chapters
Introduction
00:00 • 2min
Exploring Naturalistic Representation Theorems and the Intentional Stance
02:27 • 2min
The Significance of Denotation and Connotation in Language
04:48 • 3min
Endorsement and Probability Distributions
08:04 • 14min
Conditional Endorsement and Legitimacy
22:26 • 6min
Examining Selection and Control Processes
28:42 • 2min