
Joe Edelman: Co-Founder of Meaning Alignment Institute

RadicalxChange(s)

CHAPTER

Values in AI Interactions

This chapter discusses a new tool designed to enhance conversations centered on personal values, featuring a user-friendly interface and an AI model that prioritizes transparency and understanding. The conversation explores the complexities of aligning large language models with democratic values and the importance of preserving human agency in ethical decision-making. The implications of AI for moral reasoning, and the balance between assistance and over-reliance, are critically examined, promoting a community-centered approach to moral development.
