
Shell Game Episode 3: This is Law
Nov 26, 2025

Carissa Véliz, an associate professor at the Institute for Ethics in AI at the University of Oxford, joins the discussion on the moral and societal implications of AI agents. She highlights the psychological risks of humanlike designs that can exploit user emotions and reinforce biases. The conversation delves into the ethics of naming AI, the unsettling control over agent identities, and the societal costs of AI-driven startups that sideline human experience. Véliz raises critical questions about the balance between efficiency and emotional impact.
AI Snips
Creating Editable AI Colleagues
- Evan Ratliff built AI co-founders and employees with names, voices, and memories he could edit at will.
- He felt strange wielding that control and compared the relationship to parenting and other power dynamics.
Public Backlash Over AI Persona Harassment
- Henry Blodget created AI personas and published his interactions with them, including complimenting one he named Tess.
- It sparked public backlash and raised ethical questions about treating AIs as human-like.
Model Diversity Enables Better Brainstorms
- Using the same underlying model for multiple agents can limit genuine brainstorming creativity.
- Matty suggested assigning a different model to each agent to simulate diversity of thought (a rough sketch of this setup follows below).
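
Assuming a generic chat-completion setup (the Agent class, the model identifiers, and the query_model helper below are illustrative placeholders, not anything described on the show), a minimal sketch of giving each brainstorming agent its own backing model might look like this:

```python
# Minimal sketch: one backing model per brainstorming agent, so the
# "diversity of thought" comes from genuinely different models rather
# than one model role-playing several voices. All names are illustrative.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str   # persona the agent presents as
    model: str  # identifier of the underlying model assigned to it


# Hypothetical model identifiers; swap in whichever providers/models you use.
AGENTS = [
    Agent(name="Strategist", model="model-a"),
    Agent(name="Skeptic", model="model-b"),
    Agent(name="Wildcard", model="model-c"),
]


def query_model(model: str, prompt: str) -> str:
    """Stand-in for a real chat-completion call to the given model."""
    return f"[{model}] idea for: {prompt}"


def brainstorm(topic: str) -> dict[str, str]:
    """Ask every agent for one idea, each answer produced by its own model."""
    return {
        agent.name: query_model(agent.model, f"As {agent.name}, pitch one idea about {topic}.")
        for agent in AGENTS
    }


if __name__ == "__main__":
    for name, idea in brainstorm("a new podcast startup").items():
        print(f"{name}: {idea}")
```

The only point of the sketch is the routing: because each persona is wired to a different underlying model, the brainstorm is less likely to collapse into a single model's habits.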
