Exploring the ethical considerations of using AI to decode emotions, this episode highlights strategies for optimizing AI models for health and well-being rather than manipulative engagement. It covers how feedback collected through surveys can measure user satisfaction and mental health, and why balancing positive and negative emotions matters for long-term well-being. The conversation also examines a shift in business strategy: optimizing for well-being and user experience instead of traditional metrics like revenue.
This AI can read emotions better than you can.
It was created by Alan Cowen, the co-founder and CEO of Hume, an AI research lab developing models that can read your face and your voice with uncanny accuracy. Before starting Hume, Alan helped set up Google’s research into affective computing, and he holds a Ph.D. in computational psychology from Berkeley.
Hume’s ultimate goal is to build AI models that can optimize for human well-being, and in this episode I sat down with Alan to understand how that might be possible.
We get into:
What an emotion actually is
Why traditional psychological theories of emotion are inadequate
How Hume is able to model human emotions
How Hume's API enables developers to build empathetic voice interfaces
Applications of the model in customer service, gaming, and therapy
Why Hume is designed to optimize for human well-being instead of engagement
The ethical concerns around creating an AI that can interpret human emotions
The future of psychology as a science
This is a must-watch for anyone interested in the science of emotion and the future of human-AI interactions.
If you found this episode interesting, please like, subscribe, comment, and share!
Want even more?
Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It’s usually only for paying subscribers, but you can get it here for free.
To hear more from Dan Shipper:
Timestamps:
Dan tells Hume’s empathetic AI model a secret: 00:00:00
Introduction: 00:01:13
What traditional psychology tells us about emotions: 00:10:17
Alan’s radical approach to studying human emotion: 00:13:46
Methods that Hume’s AI model uses to understand emotion: 00:16:46
How the model accounts for individual differences: 00:21:08
Dan’s pet theory on why it’s been hard to make progress in psychology: 00:27:19
The ways in which Alan thinks Hume can be used: 00:38:12
How Alan is thinking about the API vs. consumer product question: 00:41:22
Ethical concerns around developing AI that can interpret human emotion: 00:44:42
Links to resources mentioned in the episode: