The Behavioral Design Podcast

Samuel Salzer and Aline Holzwarth
Jul 31, 2025 • 47min

Our Most Controversial AI Opinions – Season 4 Finale

Season 4 Finale: Our Most Controversial AI Takes

We wrap up Season 4 of the Behavioral Design Podcast with a different kind of conversation. Instead of looking outward at our guests’ insights, Aline and Samuel turn the mic on themselves, reflecting on the season, what we’ve learned, and the boldest, most controversial opinions we hold about AI.

From questions about whether AI can truly emulate human qualities to fears of a future where we slowly de-skill ourselves by over-relying on machines, this episode is part reflection, part confessional.

Highlights include:
- A look back at the season’s most surprising and provocative guest takes on AI
- Why AI optimism often lives closest to where experts work—and where skepticism still lingers
- The heated debate over AI companions: comforting helpers or human connection killers?
- Our personal, unfiltered takes on AI’s hidden risks, including cognitive offloading and the myth of collaboration
- The strange and perhaps surprisingly useful role of AI “oracles” in our own lives

This is the perfect sendoff for Season 4: a candid, wide-ranging discussion about the future of AI, human behavior, and what it all means for how we live, think, and connect.
Jul 9, 2025 • 46min

Season 4 Recap: Meet Our AI Co-Hosts

The hosts dive into how AI is transforming human behavior and relationships. They explore the emotional landscape shaped by empathic chatbots and the ethical challenges of algorithmic profiling. The discussion reflects on AI's role in personal communication and its risks in navigating authenticity. They also tackle the fine line between beneficial nudges and manipulation in decision-making. In a twist, AI co-hosts summarize the season's ideas, sparking thoughts on the future of behavioral science amid evolving technology.
Jun 11, 2025 • 1h 4min

Productivity and AI with Oliver Burkeman

Join Oliver Burkeman, a journalist and bestselling author of 'Four Thousand Weeks', as he delves into the complex relationship between productivity and AI. He discusses how AI tools might entrap us in productivity loops, urging a deeper understanding of what time well spent truly means. The conversation highlights the psychological toll of outsourcing decisions to machines and the value of embracing life's uncertainties. Burkeman also stresses the importance of prioritizing personal agency and human connections in an age of hyper-efficiency.
May 28, 2025 • 52min

AI Therapy with Alison Cerezo

In a fascinating discussion with Alison Cerezo, a clinical psychologist and Senior VP of Research at Mpathic, the conversation dives into AI's transformative role in therapy. They uncover how AI tools can enhance empathy and provide real-time feedback without replacing human therapists. Alison raises intriguing questions about the potential for AI to feel empathy and warns against over-reliance on technology. The talk also touches on the future of mental health, emphasizing the need for a balance between innovation and maintaining genuine human connection.
May 15, 2025 • 1h 4min

Empathy and AI with Michael Inzlicht

Empathic Machines with Michael Inzlicht

In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Michael Inzlicht, professor of psychology at the University of Toronto and co-host of the podcast Two Psychologists Four Beers. Together, they explore the surprisingly effortful nature of empathy—and what happens when artificial intelligence starts doing it better than we do.

Michael shares insights from his research into empathic AI, including findings that people often rate AI-generated empathy as more thoughtful, emotionally satisfying, and effortful than human responses—yet still prefer to receive empathy from a human. They unpack the paradox behind this preference, what it tells us about trust and connection, and whether relying on AI for emotional support could deskill us over time.

This conversation is essential listening for anyone interested in the intersection of psychology, emotion, and emerging AI tools—especially as machines get better at sounding like they care.
May 1, 2025 • 1h 22min

Building Moral AI with Jana Schaich Borg

How Do You Build a Moral AI? with Jana Schaich Borg

In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Jana Schaich Borg, Associate Research Professor at Duke University and co-author of the book “Moral AI and How We Get There”. Together they explore one of the thorniest and most important questions in the AI age: How do you encode human morality into machines—and should you even try?

Drawing from neuroscience, philosophy, and machine learning, Jana walks us through bottom-up and top-down approaches to moral alignment, why current models fall short, and how her team’s hybrid framework may offer a better path. Along the way, they dive into the messy nature of human values, the challenges of AI ethics in organizations, and how AI could help us become more moral—not just more efficient.

This conversation blends practical tools with philosophical inquiry and leaves us with a cautiously hopeful perspective: that we can, and should, teach machines to care.

Topics Covered:
- What AI alignment really means (and why it’s so hard)
- Bottom-up vs. top-down moral AI systems
- How organizations get ethical AI wrong—and what to do instead
- The messy reality of human values and decision making
- Translational ethics and the need for AI KPIs
- Personalizing AI to match your values
- When moral self-reflection becomes a design feature

Timestamps:
00:00 Intro: AI Alignment — Mission Impossible?
04:00 Why Moral AI Is So Hard (and Necessary)
07:00 The “Spec” Story & Reinforcement Gone Wrong
10:00 Anthropomorphizing AI — Helpful or Misleading?
12:00 Introducing Jana & the Moral AI Project
15:00 What “Moral AI” Really Means
18:00 Interdisciplinary Collaboration (and Friction)
21:00 Bottom-Up vs. Top-Down Approaches
27:00 Why Human Morality Is Messy
31:00 Building a Hybrid Moral AI System
41:00 Case Study: Kidney Donation Decisions
47:00 From Models to Moral Reflection
52:00 Embedding Ethics Inside Organizations
56:00 Moral Growth Mindset & Training the Workforce
01:03:00 Why Trust & Culture Matter Most
01:06:00 Comparing AI Labs: OpenAI vs. Anthropic vs. Meta
01:10:00 What We Still Don’t Know
01:11:00 Quickfire: To AI or Not To AI
01:16:00 Jana’s Most Controversial Take
01:19:00 Can AI Make Us Better Humans?

🎧 Like this episode? Share it with a friend or leave us a review to help others discover the show.
Apr 16, 2025 • 1h 10min

State of AI Risk with Peter Slattery

Understanding AI Risks with Peter Slattery

In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Peter Slattery, behavioral scientist and lead researcher at MIT’s FutureTech lab, where he spearheads the groundbreaking AI Risk Repository project. Together, they dive into the complex and often overlooked risks of artificial intelligence—ranging from misinformation and malicious use to systemic failures and existential threats.

Peter shares the intellectual and emotional journey behind categorizing over 1,000 documented AI risks, how his team built a risk taxonomy from 17,000+ sources, and why shared understanding and behavioral science are critical for navigating the future of AI.

This one is a must-listen for anyone curious about AI safety, behavioral science, and the future of technology that’s moving faster than most of us can track.

LINKS:
- Peter's LinkedIn Profile
- MIT FutureTech Lab: futuretech.mit.edu
- AI Risk Repository
Mar 20, 2025 • 48min

Enter the AI Lab

Dive into how AI is transforming behavioral design from discovery to testing. The hosts explore insights from LinkedIn polls about AI's role and human expertise. They analyze various AI tools for literature reviews, weighing their strengths against potential pitfalls. Discussions highlight the importance of maintaining human oversight, as AI outputs can lack depth. Plus, get excited about upcoming innovations from an AI Lab and a case study on Peloton that combines AI and behavioral insights to enhance user experiences!
Mar 6, 2025 • 1h 7min

When to AI, and When Not to AI with Eric Hekler

When to AI, and When Not to AI with Eric Hekler

"People are different. Context matters. Things change."

In this episode of the Behavioral Design Podcast, Aline is joined by Eric Hekler, professor at UC San Diego, to explore the nuances of AI in behavioral science and health interventions. Eric’s mantra—emphasizing the importance of individual differences, context, and change—serves as a foundation for the conversation as they discuss when AI enhances behavioral interventions and when human judgment is indispensable.

The discussion explores just-in-time adaptive interventions (JITAI), the efficiency trap of AI, and the jagged frontier of AI adoption—where machine learning excels and where it falls short. Eric shares his expertise on control systems engineering, human-AI collaboration, and the real-world challenges of scaling adaptive health interventions. The episode also covers teachable moments, the importance of domain knowledge, and the need for AI to support rather than replace human decision-making.

The conversation wraps up with a quickfire round, where Eric debates AI’s role in health coaching, mental health interventions, and optimizing human routines.

LINKS:
Eric Hekler:

TIMESTAMPS:
02:01 Introduction and Correction
05:21 The Efficiency Trap of AI
08:02 Human-AI Collaboration
11:04 Conversation with Eric Hekler
14:12 Just-in-Time Adaptive Interventions
15:19 System Identification Experiment
28:27 Control Systems vs. Machine Learning
39:44 Challenges with Classical Machine Learning
43:16 Translating Research to Real-World Applications
49:49 Community-Based Research and Context Matters
59:46 Quickfire Round: To AI or Not to AI
01:08:27 Final Thoughts on AI and Human Evolution
Feb 20, 2025 • 1h 7min

Sci-Fi and AI: Exploring Annie Bot with Sierra Greer

Sci-Fi and AI: Exploring Annie Bot with Sierra Greer

In this episode of the Behavioral Design Podcast, hosts Aline and Samuel dive into the ethical, emotional, and societal complexities of AI companionship with special guest Sierra Greer, author of Annie Bot. This thought-provoking novel explores AI-human relationships, autonomy, and the blurred line between artificial intelligence and the human experience.

Sierra shares her inspiration for Annie Bot and how sci-fi can serve as a lens to explore real-world ethical dilemmas in AI development. The conversation covers the concept of reinforcement learning in AI and how it mirrors human conditioning, the gender dynamics embedded in AI design, and the ethical implications of AI companions. The discussion also examines real-life cases of people forming deep emotional bonds with AI chatbots.

The episode rounds out with a lively quickfire round, where Sierra debates whether AI should replace lost loved ones, act as conversational assistants for introverts, or intervene in human arguments.

This is a must-listen for fans of sci-fi, behavioral science, and those fascinated by the future of AI companionship and emotional intelligence.

LINKS:
- Sierra Greer website
- Annie Bot – Official Book Page
- Goodreads Profile

TIMESTAMPS:
01:43 AI Companions: A Controversial Opinion
05:48 Exploring Sci-Fi and AI in Literature
07:42 Introducing Sierra Greer and Her Book
09:12 Reinforcement Learning Explained
15:47 Diving into the World of Annie Bot
23:17 Power Dynamics and Human-Robot Relationships
32:31 Humanity and Artificial Intelligence
41:31 Autonomy vs. Agreeableness in Relationships
43:20 Reinforcement Learning in AI and Humans
46:13 Ethics and Gaslighting in AI
48:57 Gender Dynamics in AI Design
57:18 AI Companions and Human Relationships
01:06:45 Quickfire Round: To AI or Not to AI
01:12:39 Final Thoughts and Controversial Opinions
