
LessWrong (Curated & Popular) [HUMAN VOICE] "A case for AI alignment being difficult" by jessicata
Jan 2, 2024
The episode explores the challenges of AGI alignment, including ontology identification and the difficulty of defining human values. It examines approaches to modeling the human brain as a utility maximizer, alignment as a normative criterion, and the role of consequentialism, and closes with the technological difficulties of high-fidelity brain emulation and remaining misalignment issues.
Chapters
Introduction — 00:00 • 2min
Modeling Human Values and the Alignment of AI — 02:07 • 3min
Exploring Alignment as a Normative Criterion and Human Values — 04:40 • 2min
Challenges of AI Alignment and Consequentialism — 06:16 • 21min
Understanding the Technological Difficulties and Misalignment Issues in AI Alignment — 27:03 • 2min
