Odd Lots

The Movement That Wants Us to Care About AI Model Welfare

Oct 30, 2025
Larissa Schiavo, communications lead at Eleos AI, explores the intersection of AI consciousness and welfare. The discussion covers the emerging idea of AI systems as potential moral patients that may experience forms of pleasure and pain, and the ethical dilemmas this raises about how they are treated. Larissa explains why mechanistic interpretability matters for both AI safety and AI welfare, and suggests that society may need to establish rules governing AI rights. The conversation also considers whether politeness toward AIs matters, and what acknowledging conscious models would mean for governance and societal values.
AI Snips
INSIGHT

Researching AI Moral Patienthood

  • Eleos AI studies whether and when AI systems deserve moral consideration for their own sake.
  • They combine consciousness science and AI engineering to build a research checklist for potential sentience.
INSIGHT

Global Workspace As A Benchmark

  • Global Workspace Theory is a leading consciousness model researchers use to evaluate AI.
  • Current LLMs don't clearly implement it, but future or accidental architectures might.
ADVICE

Prioritize Mechanistic Interpretability

  • Improve mechanistic interpretability to serve both AI safety and welfare goals.
  • Use tools that inspect model internals to detect motives and risky behaviors early (a minimal sketch of this kind of activation probing follows below).
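
A minimal sketch (not from the episode) of the kind of internals-inspection tooling this advice points to: capturing a layer's hidden activations with a forward hook and projecting them onto a probe direction. The model, layer choice, and probe direction are all illustrative placeholders, not any specific interpretability method discussed by Eleos AI.

```python
import torch
import torch.nn as nn

# A stand-in model; in practice this would be a real LLM.
model = nn.Sequential(
    nn.Embedding(1000, 64),   # token embeddings
    nn.Linear(64, 64),        # the "hidden layer" we want to inspect
    nn.ReLU(),
    nn.Linear(64, 1000),      # output logits
)

captured = {}

def save_activation(name):
    # Returns a hook that stores the layer's output for later inspection.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Attach the hook to the layer whose internals we want to examine.
model[1].register_forward_hook(save_activation("hidden"))

tokens = torch.randint(0, 1000, (1, 8))   # a dummy input sequence
_ = model(tokens)

# A hypothetical probe: project activations onto a direction assumed to
# correlate with some behavior of interest (e.g. a risky "motive").
probe_direction = torch.randn(64)
scores = captured["hidden"] @ probe_direction
print("per-token probe scores:", scores.squeeze())
```

In a real setting the probe direction would be learned from labeled examples rather than random, but the core move is the same: read out intermediate activations instead of relying only on the model's final outputs.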