LessWrong (30+ Karma)

Takeaways from the Eleos Conference on AI Consciousness and Welfare

Nov 26, 2025
The discussion explores philosophers' hesitation around AI consciousness, with notable references to David Chalmers. Questions arise about how the intentional stance applies to LLMs and what a reductionist approach can achieve without first defining consciousness. Legal and social dimensions are also examined, emphasizing the conditions needed for trading with AIs and holding them accountable. Technical insights cover emergent introspection in LLMs and the need for character-training experiments that align model goals with ethical reasoning.
INSIGHT

Philosophy Is Hedging On AI Consciousness

  • Philosophers hedge on AI consciousness and often avoid calling LLM mentality 'conscious' even when the intentional stance fits.
  • A reductionist, capability-focused approach can advance the science without committing to the 'C-word'.
INSIGHT

What Are We Attributing The Intentional Stance To?

  • We should ask whether the intentional stance applies to the base model, a simulacrum, or a conversation thread, per Chalmers' framing.
  • Treating consciousness research as pre-paradigmatic invites reductionist study of capabilities rather than premature definitions.
INSIGHT

Skepticism About Biology's Special Status

  • There's no strong empirical reason to privilege biological systems as candidates for mentality; treating biology as special looks more like a normative stance than an empirical finding.
  • Capable models don't require biological messiness, so claims about moral status need reflective equilibrium to decide whether biology is even relevant.