
Humans of Martech 198: Pam Boiros: 10 Ways to Support Women and Build More Inclusive AI
What’s up everyone, today we have the pleasure of sitting down with Pam Boiros, Fractional CMO and marketing advisor, and co-founder of Women Applying AI.
- (00:00) - Intro
- (01:13) - In This Episode
- (03:49) - How To Audit Data Fingerprints For AI Bias In Marketing
- (07:39) - Why Emotional Intelligence Improves AI Prompting Quality
- (10:14) - Why So Many Women Hesitate
- (15:40) - Why Collaborative AI Practice Builds Confidence In Marketing Ops Teams
- (18:31) - How to Go From AI Curious to AI Confident
- (24:32) - Joining The 'Women Applying AI' Community
- (27:18) - Other Ways to Support Women in AI
- (28:06) - Role Models and Visibility
- (32:55) - Leadership’s Role in Inclusion
- (35:57) - Mentorship for the AI Era
- (38:15) - Why Story Driven Communities Strengthen AI Adoption for Women
- (42:17) - AI’s Role in Women’s Work-Life Harmony
- (45:22) - Why Personal History Strengthens Creative Leadership
Summary: Pam delivers a clear, grounded look at how women learn and lead with AI, moving from biased datasets to late-night practice sessions inside Women Applying AI. She brings sharp examples from real teams, highlights the quiet builders shaping change, and roots her perspective in the resilience she learned from the women in her own family. If you want a straightforward view of what practical, human-centered AI adoption actually looks like, this episode is worth your time.
About Pam
Pam Boiros is a consultant who helps marketing teams find direction and build plans that feel doable. She leads Marketing AI Jump Start and works as a fractional CMO for clients like Reclaim Health, giving teams practical ways to bring AI into their day-to-day work. She’s also a founding member of Women Applying AI, a community launched in September 2025 that creates a supportive space for women to learn AI together and grow their confidence in the field.
Earlier in her career, Pam spent 12 years at a fast-growing startup that Skillsoft later acquired, then stepped into senior marketing and product leadership there for another three and a half years. That blend of startup pace and enterprise structure shapes how she guides her clients today.
How To Audit Data Fingerprints For AI Bias In Marketing
AI bias spreads quietly in marketing systems, and Pam treats it as a pattern problem rather than a mistake problem. She explains that models repeat whatever they have inherited from the data, and that repetition creates signals that look normal on the surface. Many teams read those signals as truth because the outputs feel familiar. Pam has watched marketing groups make confident decisions on top of datasets they never examined, and she believes this is how invisible bias gains momentum long before anyone sees the consequences.
Pam describes every dataset as carrying a fingerprint. She studies that fingerprint by zooming into the structure, the gaps, and the repetition. She looks for missing groups, inflated representation, and subtle distortions baked into the source. She builds this into her workflow because she has seen how quickly a model amplifies the same dominant voices that shaped the data. She brings up real scenarios from her own career where women were labeled as edge cases in models even though they represented half the customer base. These patterns shape everything from product recommendations to retention scores, and she believes many teams never notice because the numbers look clean and objective.
"Every dataset has a fingerprint. You cannot see it at first glance, but it becomes obvious once you look for who is overrepresented, who is underrepresented, or who is misrepresented."
Pam organizes her process into three cycles that marketers can use immediately:
- Before building, trace the data source, the people represented, and the people missing (a rough version of this check is sketched below).
- While building, stress test the system across groups that usually sit at the margins.
- After launch, monitor outputs with the same rhythm you use for performance analysis.
The habit works because it forces scrutiny at every stage, not just at kickoff.
She treats these cycles as an operational discipline. She compares the scale of bias to a compounding effect, since one flawed assumption can multiply into hundreds of outputs within hours. She has seen pressure to ship faster push teams into trusting defaults, which creates the illusion of objectivity even when the system leans heavily toward one group’s behavior. She wants marketers to recognize that AI audits function like quality control, and she encourages them to build review rituals that continue as the model learns. She believes this daily maintenance protects teams from subtle drift where the model gradually leans toward the patterns it already prefers.
Pam views long term monitoring as the part that matters most. She knows how fast AI systems evolve once real customers interact with them. Bias shifts as new data enters the mix. Entire segments disappear because the model interprets their silence as disengagement. Other segments dominate because they participate more often, which reinforces the skew. Pam advocates for ongoing alerts, periodic evaluations, and cross-functional reviews that bring different perspectives into the monitoring loop. She believes that consistent visibility keeps the model grounded in the full customer base.
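One way to operationalize those ongoing alerts, sketched here under assumptions rather than taken from Pam’s own tooling, is to track a segment’s share of model outputs per period and flag when it drifts away from a reference window; the file, column names, frequency, and threshold below are illustrative.

```python
import pandas as pd

def segment_drift(outputs, segment_col, timestamp_col, segment, freq="W", threshold=0.05):
    """Track one segment's share of model outputs per period and flag drift.

    Each period's share is compared against the first (reference) period.
    A gap larger than `threshold` suggests the model is leaning toward or
    away from that segment and deserves a human review.
    """
    df = outputs.copy()
    df[timestamp_col] = pd.to_datetime(df[timestamp_col])
    per_period = (
        df.set_index(timestamp_col)
          .resample(freq)[segment_col]
          .apply(lambda s: (s == segment).mean())  # share of outputs for this segment
          .rename("segment_share")
          .to_frame()
    )
    reference = per_period["segment_share"].iloc[0]
    per_period["gap_vs_reference"] = per_period["segment_share"] - reference
    per_period["alert"] = per_period["gap_vs_reference"].abs() > threshold
    return per_period

# Illustrative usage: the file and column names are assumptions.
recs = pd.read_csv("recommendations.csv")
weekly = segment_drift(
    recs,
    segment_col="audience_segment",
    timestamp_col="served_at",
    segment="returning_women_customers",
)
print(weekly[weekly["alert"]])
```

Cross-functional reviews can then focus on the flagged periods instead of rereading every output.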
Key takeaway: You can reduce AI bias by treating audits as part of your standard workflow. Trace the origin of every dataset so you understand who shapes the patterns. Stress test during development so you catch distortions early. Monitor outcomes after launch so you can identify drift before it influences targeting, scoring, and personalization. This rhythm gives you a reliable way to detect biased fingerprints, keep systems accountable, and protect real customers from skewed automation.
Why Emotional Intelligence Improves AI Prompting Quality
Emotional intelligence shapes how people brief AI, and Pam focuses on the practical details behind that pattern. She sees prompting as a form of direction setting, similar to guiding a creative partner who follows every instruction literally. Women often add richer context because they instinctively think through tone, audience, and subtle cues before giving direction. That depth produces output that carries more human texture and brand alignment, and it reduces the amount of rewriting teams usually do when prompts feel thin.
Pam also talks about synthetic empathy and how easily teams misread it. AI can generate warm language, but users often sense a hollow quality once they reread the output. She has seen teams trust the first fluent result because it looks polished on the surface. People with stronger emotional intelligence detect when the writing lacks genuine feeling or when it leans on clichés instead of real understanding. Pam notices this most in content meant for sensitive moments, such as apology emails or customer care messages, where the emotional miss becomes obvious.
"Prompting is basically briefing the AI, and women are natural context givers. We think about tone and audience and nuance, and that is what makes AI output more human and more aligned with the brand."
Pam brings even sharper clarity when she moves into analytics. She observes that many marketers chase the top performer without questioning who drove the behavior. She describes moments where curiosity leads someone to discover that a small, highly engaged audience segment pulled the numbers upward. She sees women interrogating patterns by asking:
- Who showed up
- Why they behaved the way they did
- What made the pattern appear more universal than it is
Those questions shift analytics from scoreboar...
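To make those questions concrete, a minimal sketch (the file and column names are assumptions, not from the episode) breaks the headline number down by who actually drove it:

```python
import pandas as pd

# Illustrative only: one row per recipient, with a 0/1 "converted" column.
results = pd.read_csv("campaign_results.csv")

overall_rate = results["converted"].mean()
by_segment = results.groupby("audience_segment").agg(
    recipients=("converted", "size"),
    conversion_rate=("converted", "mean"),
    conversions=("converted", "sum"),
)
by_segment["share_of_all_conversions"] = (
    by_segment["conversions"] / by_segment["conversions"].sum()
)

print(f"Overall conversion rate: {overall_rate:.1%}")
# A small, highly engaged segment with a large share_of_all_conversions is
# often what made a "winning" pattern look more universal than it is.
print(by_segment.sort_values("share_of_all_conversions", ascending=False))
```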
