LessWrong (Curated & Popular)

[HUMAN VOICE] "A case for AI alignment being difficult" by jessicata

Jan 2, 2024
This episode explores the challenges of AGI alignment, including ontology identification and the difficulty of defining human values. It discusses approaches to modeling the human brain as a utility maximizer, criteria for aligning AI with human values, alignment as a normative criterion, and the role of consequentialism. It also covers the technological difficulty of high-fidelity brain emulation and sources of misalignment in AI systems.