LessWrong (Curated & Popular)

[HUMAN VOICE] "Meaning & Agency" by Abram Demski

Jan 7, 2024
Abram Demski, an AI alignment researcher and writer, clarifies core concepts of AI alignment, focusing on optimization, reference, endorsement, and legitimacy. The episode explores the implications of treating agency as a natural phenomenon for AI risk analysis, and delves into naturalistic representation theorems, denotation vs. connotation in language, and conditional endorsement and legitimacy. It also discusses the distinction between selection and control processes and its impact on trust and inner alignment.