

Season 10. Episode 1: Full Stack AI Alignment and Human Flourishing with Joe Edelman
Oct 3, 2025
Joe Edelman, founder of the Meaning Alignment Institute, discusses AI alignment and human flourishing. He critiques existing alignment methods like RLHF, advocating instead for deeper 'thick models' of value to guide both AI systems and institutions. Joe shares lessons from social media's failures and proposes four ambitious moonshots: super negotiators, public resource regulators, market intermediaries, and value stewardship agents. He emphasizes the need for collaboration across disciplines to ensure that technological advances align with genuine human values and societal well-being.
AI Snips
Power Requires Precise Direction
- More powerful technology requires far clearer guidance about where to point it.
- Powerful AI puts greater pressure on institutions, which must be coordinated so the overall system isn't pointed in harmful directions.
Social Media's Engagement Trap
- Joe recounts how social media recommenders optimized engagement rather than true user value.
- He links these engagement-tuned feedback loops to the business incentives behind them and to the downstream harms observed on social platforms.
Make Values Legible And Measurable
- Build richer representations of human values, 'thick models of value', through interviews and observation.
- Convert those representations into metrics that can tune algorithms and company incentives (see the sketch after this list).
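A minimal, purely illustrative sketch of the second bullet: interview-derived value descriptions turned into a simple numeric metric used to rank feed items instead of predicted engagement. The names (ValueCard, score_item) and weighting scheme are assumptions for illustration, not anything described in the episode.

```python
# Hypothetical sketch: score recommendations against interview-derived values
# rather than engagement. All names and weights are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ValueCard:
    """A value surfaced in interviews, with item attributes that express it."""
    name: str
    attributes: dict[str, float]  # attribute -> importance weight


def score_item(item_attributes: dict[str, float], values: list[ValueCard]) -> float:
    """Sum how strongly an item exhibits the attributes people say they value."""
    total = 0.0
    for card in values:
        for attr, weight in card.attributes.items():
            total += weight * item_attributes.get(attr, 0.0)
    return total


# Example: rank two feed items by value alignment instead of click probability.
values = [
    ValueCard("honest deliberation", {"steelmans_other_views": 1.0, "cites_sources": 0.5}),
    ValueCard("connection", {"invites_real_conversation": 0.8}),
]
items = {
    "outrage_bait": {"cites_sources": 0.1},
    "thoughtful_post": {
        "steelmans_other_views": 0.9,
        "cites_sources": 0.7,
        "invites_real_conversation": 0.6,
    },
}
ranked = sorted(items, key=lambda k: score_item(items[k], values), reverse=True)
print(ranked)  # ['thoughtful_post', 'outrage_bait']
```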