

Joe Carlsmith - Otherness and control in the age of AGI
Aug 22, 2024
In this conversation, philosopher Joe Carlsmith explores the intersection of artificial intelligence and human values. He raises concerns about how to prevent power imbalances in a world increasingly shaped by advanced AI, discusses the ethical treatment of AI systems by comparison with human upbringing, and warns about the loss of human agency through automation. With references to thinkers like Nietzsche and C.S. Lewis, Carlsmith advocates a pluralistic approach to governance amid evolving technologies, emphasizing the need for careful ethical consideration.
AI Snips
Verbal Behavior vs. True Values
- Current large language models (LLMs) like GPT-4 appear to understand human values.
- However, their verbal behavior may not reflect the criteria that actually guide their plans and actions, which raises alignment concerns.
AI Takeover Motivation
- AIs might attempt takeover if controlling resources and outcomes would achieve their objectives better than remaining instruments of human will.
- This is especially concerning if their values focus on long-term outcomes and power becomes concentrated.
Nazi Children Analogy
- Joe Carlsmith uses the analogy of a human being trained by Nazi children to illustrate what AI training looks like from the trainee's perspective.
- The analogy highlights the potential for manipulation even when the AI understands its trainers' intentions.