
ForeCast: Consciousness and Competition (with Joe Carlsmith)
Nov 28, 2025

Joe Carlsmith, a prominent writer and philosopher on AI safety, dives deep into consciousness and the moral status of beings. He challenges traditional views with thought experiments such as the dog-versus-car analogy and explores the prospect of digital minds surpassing biological ones. Carlsmith discusses the implications of AI consciousness, emphasizing failures of empathy and historical moral mistakes. With a focus on how competitive dynamics could erode goodness, he advocates for thoughtful interventions to reduce AI suffering and ensure human values thrive amid rapid technological advancement.
AI Snips
What Moral Status Means
- Moral status tracks what makes us care differently about beings, e.g., kicking a dog versus kicking a car.
- The question is which features (experience, sophistication, numbers) determine moral patienthood for AIs.
AI Compute Could Match Human Experience
- Comparing brain compute to AI FLOPs suggests some training runs already equal thousands of human-years of processing (a rough back-of-envelope version of this comparison is sketched after this list).
- If future AI compute scales, aggregate AI 'experience' could rival or exceed human-level totals quickly.
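To make the comparison concrete, here is a minimal back-of-envelope sketch in Python. The numbers are illustrative assumptions, not figures quoted in the episode: the brain is assumed to process roughly 1e15 FLOP/s (a commonly cited middle-of-the-range estimate), and a large frontier training run is assumed to use roughly 1e26 FLOP in total.

```python
# Back-of-envelope: how many "human-brain-years" of processing does one
# large AI training run amount to? Illustrative assumptions only.

BRAIN_FLOPS = 1e15          # assumed brain processing rate, FLOP per second
TRAINING_RUN_FLOP = 1e26    # assumed total compute of a large frontier training run
SECONDS_PER_YEAR = 3.15e7   # approximate seconds in a year

brain_seconds = TRAINING_RUN_FLOP / BRAIN_FLOPS   # brain-equivalent seconds of processing
brain_years = brain_seconds / SECONDS_PER_YEAR    # convert to brain-equivalent years

print(f"{brain_years:,.0f} human-brain-years of processing")  # ~3,000 years under these assumptions
```

Under these assumptions a single training run corresponds to a few thousand human-brain-years of processing, which is the comparison the snip gestures at; larger runs or aggregate deployed compute scale the figure accordingly.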
Substrate Independence Argument
- Substrate independence is the hypothesis that consciousness can arise in non-biological materials.
- If the same algorithm, implemented in different substrates, behaves the same way, we have reason to attribute consciousness across those substrates.
