Humans of Martech

Phil Gamache
Dec 16, 2025 • 56min

200: Matthew Castino: How Canva measures marketing

What’s up everyone, today we have the pleasure of sitting down with Matthew Castino, Marketing Measurement Science Lead @ Canva.

(00:00) - Intro
(01:10) - In This Episode
(03:50) - Canva’s Prioritization System for Marketing Experiments
(11:26) - What Happened When Canva Turned Off Branded Search
(18:48) - Structuring Global Measurement Teams for Local Decision Making
(24:32) - How Canva Integrates Marketing Measurement Into Company Forecasting
(31:58) - Using MMM Scenario Tools To Align Finance And Marketing
(37:05) - Why Multi Touch Attribution Still Matters at Canva
(42:42) - How Canva Builds Feedback Loops Between MMM and Experiments
(46:44) - Canva’s AI Workflow Automation for Geo Experiments
(51:31) - Why Strong Coworker Relationships Improve Career Satisfaction

Summary: Canva operates at a scale where every marketing decision carries huge weight, and Matt leads the measurement function that keeps those decisions grounded in science. He leans on experiments to challenge assumptions that models inflate. As the company grew, he reshaped measurement so centralized models stayed steady while embedded data scientists guided decisions locally, and he built one forecasting engine that finance and marketing can trust together. He keeps multi touch attribution in play because user behavior exposes patterns MMM misses, and he treats disagreements between methods as signals worth examining. AI removes the bottlenecks around geo tests, data questions, and creative tagging, giving his team space to focus on evidence instead of logistics.

About Matthew

Matthew Castino blends psychology, statistics, and marketing intuition in a way that feels almost unfair. With a PhD in Psychology and a career spent building measurement systems that actually work, he’s now the Marketing Measurement Science Lead at Canva, where he turns sprawling datasets and ambitious growth questions into evidence that teams can trust.

His path winds through academia, health research, and the high-tempo world of sports trading. At UNSW, Matt taught psychology and statistics while contributing to research at CHETRE. At Tabcorp, he moved through roles in customer profiling, risk systems, and US/domestic sports trading; spaces where every model, every assumption, and every decision meets real consequences fast. Those years sharpened his sense for what signal looks like in a messy environment.

Matt lives in Australia and remains endlessly curious about how people think, how markets behave, and why measurement keeps getting harder, and more fun.

Canva’s Prioritization System for Marketing Experiments

Canva’s marketing experiments run in conditions that rarely resemble the clean, product controlled environment that most tech companies love to romanticize. Matthew works in markets filled with messy signals, country level quirks, channel specific behaviors, and creative that behaves differently depending on the audience. Canva built a world class experimentation platform for product, but none of that machinery helps when teams need to run geo tests or channel experiments across markets that function on completely different rhythms. Marketing had to build its own tooling, and Matthew treats that reality with a mix of respect and practicality.

His team relies on a prioritization system grounded in two concrete variables.

- Spend
- Uncertainty

Large budgets demand measurement rigor because wasted dollars compound across millions of impressions.
Matthew cares about placing the most reliable experiments behind the markets and channels with the biggest financial commitments. He pairs that with a very sober evaluation of uncertainty. His team pulls signals from MMM models, platform lift tests, creative engagement, and confidence intervals. They pay special attention to MMM intervals that expand beyond comfortable ranges, especially when historical spend has not varied enough for the model to learn. He reads weak creative engagement as a warning sign because poor engagement usually drags efficiency down even before the attribution questions show up.

“We try to figure out where the most money is spent in the most uncertain way.”

The next challenge sits in the structure of the team. Matthew ran experimentation globally from a centralized group for years, and that model made sense when the company footprint was narrower. Canva now operates in regions where creative norms differ sharply, and local teams want more authority to respond to market dynamics in real time. Matthew sees that centralization slows everything once the company reaches global scale. He pushes for embedded data scientists who sit inside each region, work directly with marketers, and build market specific experimentation roadmaps that reflect local context. That way experimentation becomes a partner to strategy instead of a bottleneck.

Matthew avoids building a tower of approvals because heavy process often suffocates marketing momentum. He prefers a model where teams follow shared principles, run experiments responsibly, and adjust budgets quickly. He wants measurement to operate in the background while marketers focus on creative and channel strategies with confidence that the numbers can keep up with the pace of execution.

Key takeaway: Run experiments where they matter most by combining the biggest budgets with the widest uncertainty. Use triangulated signals like MMM bounds, lift tests, and creative engagement to identify channels that deserve deeper testing. Give regional teams embedded data scientists so they can respond to real conditions without waiting for central approval queues. Build light guardrails, not heavy process, so experimentation strengthens day to day marketing decisions with speed and confidence.

What Happened When Canva Turned Off Branded Search

Geographic holdout tests gave Matt a practical way to challenge long-standing spend patterns at Canva without turning measurement into a philosophical debate. He described how many new team members arrived from environments shaped by attribution dashboards, and he needed something concrete that demonstrated why experiments belong in the measurement toolkit. Experiments produced clearer decisions because they created evidence that anyone could understand, which helped the organization expand its comfort with more advanced measurement methods.

The turning point started with a direct question from Canva’s CEO. She wanted to understand why the company kept investing heavily in bidding on the keyword “Canva,” even though the brand was already dominant in organic search. The company had global awareness, strong default rankings, and a product that people searched for by name. Attribution platforms treated branded search as a powerhouse channel because those clicks converted at extremely high rates.
Matt knew attribution would reinforce the spend by design, so he recommended a controlled experiment that tested actual incrementality.

"We just turned it off or down in a couple of regions and watched what happened."

The team created several regional holdouts across the United States. They reduced bids in those regions, monitored downstream behavior, and let natural demand play out. The performance barely moved. Growth held steady and revenue held steady. The spend did not create additional value at the level the dashboards suggested. High intent users continued converting, which showed how easily attribution can exaggerate impact when a channel serves people who already made their decision.

The outcome saved Canva millions of dollars, and the savings were immediately reallocated to areas with better leverage. The win carried emotional weight inside the company because it replaced speculati...
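The readout of a holdout like this can stay deliberately simple. Below is a minimal sketch of a difference-in-differences style comparison between holdout and control regions; the region names, column names, and figures are illustrative assumptions, not Canva’s actual data or tooling.

```python
# Minimal readout for a geo holdout test: compare how holdout regions moved
# versus control regions after spend was turned down. All values are made up.
import pandas as pd

df = pd.DataFrame({
    "region":  ["US-NE", "US-SE", "US-MW", "US-W"],
    "holdout": [True, True, False, False],        # bids reduced in these regions
    "pre":     [10400, 9800, 11200, 12700],       # avg weekly signups before the test
    "during":  [10250, 9750, 11350, 12900],       # avg weekly signups during the test
})

df["pct_change"] = (df["during"] - df["pre"]) / df["pre"]

holdout_change = df.loc[df["holdout"], "pct_change"].mean()
control_change = df.loc[~df["holdout"], "pct_change"].mean()

# A small gap between the two changes means the channel added little
# incremental value, whatever last-click attribution reports.
print(f"Holdout change: {holdout_change:+.2%}")
print(f"Control change: {control_change:+.2%}")
print(f"Estimated incremental effect: {holdout_change - control_change:+.2%}")
```

If the holdout regions track the control regions after spend comes down, the incremental contribution of the channel is small, which is the pattern the Canva team saw.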
Dec 9, 2025 • 60min

199: Anna Aubuchon: Moving BI workloads into LLMs and using AI to build what you used to buy

What’s up everyone, today we have the pleasure of sitting down with Anna Aubuchon, VP of Operations at Civic Technologies.

(00:00) - Intro
(01:15) - In This Episode
(04:15) - How AI Flipped the Build Versus Buy Decision
(07:13) - Redrawing What “Complex” Means
(12:20) - Why In House AI Provides Better Economics And Control
(15:33) - How to Treat AI as an Insourcing Engine
(21:02) - Moving BI Workloads Out of Dashboards and Into LLMs
(31:37) - Guardrails That Keep AI Querying Accurate
(38:18) - Using Role Based AI Guardrails Across MCP Servers
(44:43) - Ops People are Creators of Systems Rather Than Maintainers of Them
(48:12) - Why Natural Language AI Lowers the Barrier for First-Time Builders
(52:31) - Technical Literacy Requirements for Next Generation Operators
(56:46) - Why Creative Practice Strengthens Operational Leadership

Summary: AI has reshaped how operators work, and Anna lays out that shift with the clarity of someone who has rebuilt real systems under pressure. She breaks down how old build versus buy habits hold teams back, how yearly AI contracts quietly drain momentum, and how modern integrations let operators assemble powerful workflows without engineering bottlenecks. She contrasts scattered one-off AI tools with the speed that comes from shared patterns that spread across teams. Her biggest story lands hard. Civic replaced slow dashboards and long queues with orchestration that pulls every system into one conversational layer, letting people get answers in minutes instead of mornings. That speed created nerves around sensitive identity data, but tight guardrails kept the team safe without slowing anything down. Anna ends by pushing operators to think like system designers, not tool babysitters, and to build with the same clarity her daughter uses when she describes exactly what she wants and watches the system take shape.

About Anna

Anna Aubuchon is an operations executive with 15+ years building and scaling teams across fintech, blockchain, and AI. As VP of Operations at Civic Technologies, she oversees support, sales, business operations, product operations, and analytics, anchoring the company’s growth and performance systems.

She has led blockchain operations since 2014 and built cross-functional programs that moved companies from early-stage complexity into stable, scalable execution. Her earlier roles at Gyft and Thomson Reuters focused on commercial operations, enterprise migrations, and global team leadership, supporting revenue retention and major process modernization efforts.

How AI Flipped the Build Versus Buy Decision

AI tooling has shifted so quickly that many teams are still making decisions with a playbook written for a different era. Anna explains that the build versus buy framework people lean on carries assumptions that no longer match the tool landscape. She sees operators buying AI products out of habit, even when internal builds have become faster, cheaper, and easier to maintain. She connects that hesitation to outdated mental models rather than actual technical blockers.

AI platforms keep rolling out features that shrink the amount of engineering needed to assemble sophisticated workflows. Anna names the layers that changed this dynamic. System integrations through MCP act as glue for data movement. Tools like n8n and Lindy give ops teams workflow automation without needing to file tickets. Then ChatGPT Agents and Cloud Skills launched with prebuilt capabilities that behave like Lego pieces for internal systems.
Direct LLM access removed the fear around infrastructure that used to intimidate nontechnical teams. She describes the overall effect as a compression of technical overhead that once justified buying expensive tools.

She uses Civic’s analytics stack to illustrate how she thinks about the decision. Analytics drives the company’s ability to answer questions quickly, and modern integrations kept the build path light. Her team built the system because it reinforced a core competency. She compares that with an AI support bot that would need to handle very different audiences with changing expectations across multiple channels. She describes that work as high domain complexity that demands constant tuning, and the build cost would outweigh the value. Her team bought that piece. She grounds everything in two filters that guide her decisions: core competency and domain complexity.

Anna also calls out a cultural pattern that slows AI adoption. Teams buy AI tools individually and create isolated pockets of automation. She wants teams to treat AI workflows as shared assets. She sees momentum building when one group experiments with a workflow and others borrow, extend, or remix it. She believes this turns AI adoption into a group habit rather than scattered personal experiments. She highlights the value of shared patterns because they create a repeatable way for teams to test ideas without rebuilding from scratch.

She closes by urging operators to update their decision cycle. Tooling is evolving at a pace that makes six month old assumptions feel stale. She wants teams to revisit build versus buy questions frequently and to treat modern tools as a prompt to redraw boundaries rather than defend old ones. She frames it as an ongoing practice rather than a one time decision.

Key takeaway: Reassess your build versus buy decisions every quarter by measuring two factors. First, identify whether the workflow strengthens a core competency that deserves internal ownership. Second, gauge the domain complexity and decide whether the function needs constant tuning or specialized expertise. Use modern integration layers, workflow builders, and direct LLM access to assemble internal systems quickly. Build the pieces that reinforce your strengths, buy the pieces that demand specialized depth, and share internal workflows so other teams can expand your progress.

Why In House AI Provides Better Economics And Control

AI tooling has grown into a marketplace crowded with vendors who promise intelligence, automation, and instant transformation. Anna watches teams fall into these patterns with surprising ease. Many of the tools on the market run the same public models under new branding, yet buyers often assume they are purchasing deeply specialized systems trained on inaccessible data. She laughs about driving down the 101 and seeing AI billboards every few minutes, each one selling a glossy shortcut to operational excellence. The overcrowding makes teams feel like they should buy something simply because everyone else is buying something, and that instinct shifts AI procurement from a strategic decision into a reflex.

"A one year agreement might as well be a decade in AI right now."

Anna has seen how annual vendor contracts slow companies down. The moment a team commits to a year long agreement, the urgency to evaluate alternatives vanishes. They adopt a “set it and forget it” mindset because the tool is already purchased, the budget is already allocated, and the contract already sits in legal. AI development moves fast.
Contract cycles do not. That mismatch creates friction that becomes expensive, especially when new models launch every few weeks and outperform the ones you purchased only months earlier. Teams do not always notice the cost of stagnation because it creeps in quietly.

Anna lays out a practical build versus buy framework. Teams should inspect whether the capability touches their core competency, their customer experience, or their strategic distinctiveness. If it does, then in house AI provides more long term value. It lets the company shape the model around real customer patterns. It keeps experimentation in motion instead...
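Anna’s two filters lend themselves to a lightweight checklist you can rerun as tooling changes. The sketch below encodes them as a toy decision helper; the thresholds and example workflows are assumptions for illustration, not a tool Civic actually runs.

```python
# Toy encoding of the two build-versus-buy filters: core competency and domain
# complexity. Thresholds and examples are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    core_competency: bool   # does it strengthen something the company should own?
    domain_complexity: int  # 1 (simple) .. 5 (needs constant specialist tuning)

def build_or_buy(w: Workflow) -> str:
    if w.core_competency and w.domain_complexity <= 3:
        return "build"  # reinforces a strength, and modern tooling keeps the build light
    if not w.core_competency and w.domain_complexity >= 4:
        return "buy"    # specialized depth the team does not need to own
    return "revisit next quarter"  # boundary cases shift as tooling improves

for w in [
    Workflow("analytics question answering", core_competency=True, domain_complexity=2),
    Workflow("multi-channel AI support bot", core_competency=False, domain_complexity=5),
]:
    print(f"{w.name}: {build_or_buy(w)}")
```

The value is less in the code than in the cadence: rerunning the same checklist every quarter is what keeps six month old assumptions from hardening into contracts.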
Dec 2, 2025 • 49min

198: Pam Boiros: 10 Ways to support women and build more inclusive AI

What’s up everyone, today we have the pleasure of sitting down with Pam Boiros, Fractional CMO and Marketing advisor, and Co-Founder of Women Applying AI.

(00:00) - Intro
(01:13) - In This Episode
(03:49) - How To Audit Data Fingerprints For AI Bias In Marketing
(07:39) - Why Emotional Intelligence Improves AI Prompting Quality
(10:14) - Why So Many Women Hesitate
(15:40) - Why Collaborative AI Practice Builds Confidence In Marketing Ops Teams
(18:31) - How to Go From AI Curious to AI Confident
(24:32) - Joining The 'Women Applying AI' Community
(27:18) - Other Ways to Support Women in AI
(28:06) - Role Models and Visibility
(32:55) - Leadership’s Role in Inclusion
(35:57) - Mentorship for the AI Era
(38:15) - Why Story Driven Communities Strengthen AI Adoption for Women
(42:17) - AI’s Role in Women’s Worklife Harmony
(45:22) - Why Personal History Strengthens Creative Leadership

Summary: Pam delivers a clear, grounded look at how women learn and lead with AI, moving from biased datasets to late-night practice sessions inside Women Applying AI. She brings sharp examples from real teams, highlights the quiet builders shaping change, and roots her perspective in the resilience she learned from the women in her own family. If you want a straightforward view of what practical, human-centered AI adoption actually looks like, this episode is worth your time.

About Pam

Pam Boiros is a consultant who helps marketing teams find direction and build plans that feel doable. She leads Marketing AI Jump Start and works as a fractional CMO for clients like Reclaim Health, giving teams practical ways to bring AI into their day-to-day work. She’s also a founding member of Women Applying AI, a new community launched in Sep 2025 that creates a supportive space for women to learn AI together and grow their confidence in the field.

Earlier in her career, Pam spent 12 years at a fast-growing startup that Skillsoft later acquired, then stepped into senior marketing and product leadership there for another three and a half years. That blend of startup pace and enterprise structure shapes how she guides her clients today.

How To Audit Data Fingerprints For AI Bias In Marketing

AI bias spreads quietly in marketing systems, and Pam treats it as a pattern problem rather than a mistake problem. She explains that models repeat whatever they have inherited from the data, and that repetition creates signals that look normal on the surface. Many teams read those signals as truth because the outputs feel familiar. Pam has watched marketing groups make confident decisions on top of datasets they never examined, and she believes this is how invisible bias gains momentum long before anyone sees the consequences.

Pam describes every dataset as carrying a fingerprint. She studies that fingerprint by zooming into the structure, the gaps, and the repetition. She looks for missing groups, inflated representation, and subtle distortions baked into the source. She builds this into her workflow because she has seen how quickly a model amplifies the same dominant voices that shaped the data. She brings up real scenarios from her own career where women were labeled as edge cases in models even though they represented half the customer base. These patterns shape everything from product recommendations to retention scores, and she believes many teams never notice because the numbers look clean and objective.

"Every dataset has a fingerprint.
You cannot see it at first glance, but it becomes obvious once you look for who is overrepresented, who is underrepresented, or who is misrepresented."

Pam organizes her process into three cycles that marketers can use immediately. The habit works because it forces scrutiny at every stage, not just at kickoff.

- Before building, trace the data source, the people represented, and the people missing.
- While building, stress test the system across groups that usually sit at the margins.
- After launch, monitor outputs with the same rhythm you use for performance analysis.

She treats these cycles as an operational discipline. She compares the scale of bias to a compounding effect, since one flawed assumption can multiply into hundreds of outputs within hours. She has seen pressure to ship faster push teams into trusting defaults, which creates the illusion of objectivity even when the system leans heavily toward one group’s behavior. She wants marketers to recognize that AI audits function like quality control, and she encourages them to build review rituals that continue as the model learns. She believes this daily maintenance protects teams from subtle drift where the model gradually leans toward the patterns it already prefers.

Pam views long term monitoring as the part that matters most. She knows how fast AI systems evolve once real customers interact with them. Bias shifts as new data enters the mix. Entire segments disappear because the model interprets their silence as disengagement. Other segments dominate because they participate more often, which reinforces the skew. Pam advocates for ongoing alerts, periodic evaluations, and cross-functional reviews that bring different perspectives into the monitoring loop. She believes that consistent visibility keeps the model grounded in the full customer base.

Key takeaway: You can reduce AI bias by treating audits as part of your standard workflow. Trace the origin of every dataset so you understand who shapes the patterns. Stress test during development so you catch distortions early. Monitor outcomes after launch so you can identify drift before it influences targeting, scoring, and personalization. This rhythm gives you a reliable way to detect biased fingerprints, keep systems accountable, and protect real customers from skewed automation.

Why Emotional Intelligence Improves AI Prompting Quality

Emotional intelligence shapes how people brief AI, and Pam focuses on the practical details behind that pattern. She sees prompting as a form of direction setting, similar to guiding a creative partner who follows every instruction literally. Women often add richer context because they instinctively think through tone, audience, and subtle cues before giving direction. That depth produces output that carries more human texture and brand alignment, and it reduces the amount of rewriting teams usually do when prompts feel thin.

Pam also talks about synthetic empathy and how easily teams misread it. AI can generate warm language, but users often sense a hollow quality once they reread the output. She has seen teams trust the first fluent result because it looks polished on the surface. People with stronger emotional intelligence detect when the writing lacks genuine feeling or when it leans on clichés instead of real understanding. Pam notices this most in content meant for sensitive moments, such as apology emails or customer care messages, where the emotional miss becomes obvious.

"Prompting is basically briefing the AI, and women are natural context givers.
We think about tone and audience and nuance, and that is what makes AI output more human and more aligned with the brand."

Pam brings even sharper clarity when she moves into analytics. She observes that many marketers chase the top performer without questioning who drove the behavior. She describes moments where curiosity leads someone to discover that a small, highly engaged audience segment pulled the numbers upward. She sees women interrogating patterns by asking:

- Who showed up
- Why they behaved the way they did
- What made the pattern appear more universal than it is

Those questions shift analytics from scoreboar...
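The dataset “fingerprint” Pam describes earlier can be checked with a few lines of analysis before any model training starts. Here is a minimal sketch that compares group representation in a training extract against the known make-up of the customer base; the column name, group labels, and percentages are illustrative assumptions, not a specific team’s data.

```python
# Compare how groups appear in a training dataset versus the customer base it
# is meant to describe. All figures below are invented for illustration.
import pandas as pd

training = pd.DataFrame({
    "gender": ["f"] * 1800 + ["m"] * 7200 + ["nonbinary"] * 50,
})

# Known or estimated composition of the actual customer base.
customer_base_share = {"f": 0.49, "m": 0.49, "nonbinary": 0.02}

observed = training["gender"].value_counts(normalize=True)

for group, expected in customer_base_share.items():
    actual = float(observed.get(group, 0.0))
    ratio = actual / expected
    flag = "underrepresented" if ratio < 0.8 else "overrepresented" if ratio > 1.2 else "ok"
    print(f"{group:>10}: {actual:.1%} of training data vs {expected:.1%} of customers -> {flag}")
```

Running the same comparison on the model’s outputs after launch is what turns the before, during, and after cycles she outlines into a repeatable audit rather than a one-time review.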
Nov 25, 2025 • 56min

197: Anna Leary: The Art of saying no and other mental health strategies in marketing ops

Join Anna Leary, Director of Marketing Operations at Alma, as she dives into the art of maintaining boundaries in high-pressure environments. With expertise gleaned from her work at Uber and Bitly, Anna reveals why saying no can be a key strategy for preventing burnout. She discusses the importance of visibility in marketing operations and how to handle constant pushback from stakeholders. Learn about the benefits of asynchronous communication, smart planning tactics, and how to evaluate Martech tools based on their real business impact.
Nov 18, 2025 • 55min

196: Blair Bendel: The World of casino marketing and the tech that brings it to life

Blair Bendel, Senior VP of Marketing at Foxwoods Resort Casino, brings two decades of experience to the table. He discusses the evolution of casino marketing technology, emphasizing the shift towards personalized communication and how it enhances guest experiences. Blair shares insights on migrating to MoEngage to unify data, balancing marketing strategies with privacy concerns, and the role of AI in the industry. He highlights the importance of human elements in marketing and the need for a resourceful team to thrive in the fast-paced casino environment.
Nov 11, 2025 • 55min

195: Megan Kwon: How One of Canada’s largest retailers orchestrates messaging, and structures martech

What’s up folks, today we have the pleasure of sitting down with Megan Kwon, Director, Digital Customer Communications at Loblaw Digital.

(00:00) - Intro
(01:26) - In This Episode
(04:11) - Building a Career Around Conversations That Scale
(06:25) - Customer Journey Pods and Martech Team Structures
(09:08) - Martech Team Structures
(11:23) - Customer Journey Martech Pods
(12:54) - How to Assign Martech Tool Ownership and Drive Real Adoption
(14:54) - Martech Training and Onboarding
(17:30) - How To Integrate New Martech Into Daily Habits
(19:59) - Why Change Champions Work in Martech Transformation
(24:11) - Change Champion Example
(28:25) - How To Manage Transactional Messaging Across Multiple Brands
(32:35) - Frequency and Recency Capping
(35:59) - Why Shared Ownership Improves Transactional Messaging
(41:50) - Why Human Governance Still Matters in AI Messaging
(47:11) - Why Curiosity Matters in Adapting to AI
(53:08) - Creating Sustainable Energy in Marketing Leadership

Summary: Megan leads digital customer communications at Loblaw Digital, turning enterprise-scale messaging into something that feels personal. She built her teams around the customer journey, giving each pod full creative and data ownership. The people driving results also own the tools, learning by building and celebrating small wins. Her “change champions” make new ideas stick, and her view on AI is grounded; use it to go faster, not think for you. Curiosity, she says, is what keeps marketing human.

About Megan

Megan Kwon runs digital customer communications at Loblaw Digital, the team behind how millions of Canadians hear from brands like Loblaws, Shoppers Drug Mart, and President’s Choice. She’s part strategist, part systems thinker, and fully obsessed with how data can make marketing feel more human, not less.

Before returning to Loblaw, Megan helped reshape how people discover and trust local marketplaces at Kijiji, and before that, she built growth engines in the fintech world at NorthOne. Her career has been a study in scale; from scrappy e-commerce tests to national lifecycle programs that touch nearly every Canadian household. What sets her apart is the way she leads: with deep curiosity, radical ownership, and a bias for collaboration. She believes numbers tell stories, and that the best marketing teams build movements around insight, empathy, and accountability.

Building a Career Around Conversations That Scale

Running digital messaging at Loblaw means coordinating communication at a scale that few marketers ever experience. Megan oversees the systems that deliver millions of emails and texts across brands Canadians interact with daily, including Loblaws, Shoppers Drug Mart, and President’s Choice. Her team manages both marketing and transactional messages, making sure each one aligns with a specific stage in the customer journey. The workload is immense. Each division has its own priorities, and every campaign needs to fit within a shared infrastructure that still feels personal to the customer.

“We work with a lot of different business divisions across the entire organization. Our job is to make sure their strategies and programs come to life through the customer lifecycle.”

Megan’s team operates more like a connective tissue than a broadcast engine. They bridge the gaps between marketing, product, and data teams, translating disconnected strategies into a unified experience.
That work involves building systems capable of:

- Managing multiple brand voices while keeping messaging consistent
- Triggering real-time communications that respond to customer behavior
- Integrating old and new technologies without breaking operational flow

Every campaign becomes part of a continuous conversation with the customer. Each message is one step in a long dialogue, not a one-off announcement.

Megan’s perspective comes from experience earned in very different industries. She began her career at Loblaw during the early days of online grocery, a time when digital operations were experimental and resourceful. She later worked across fintech, marketplaces, and paid media before returning to Loblaw. That journey helped her understand every layer of the customer funnel, from acquisition through retention. It also taught her how to combine growth marketing tactics with enterprise-level communication systems, that way she can scale personalization without losing humanity.

Most large organizations still treat messaging as a collection of isolated programs. Megan treats it as an ecosystem. Her work shows that when lifecycle and acquisition efforts operate within a shared framework, communication becomes more coherent and far more effective. Alignment between data, channels, and teams reduces noise and builds trust with customers who engage across multiple brands.

Key takeaway: Building a unified messaging ecosystem starts with structure, not volume. Create systems that connect channels, data, and brand voices into one coordinated experience. Treat messaging as a relationship that continues long after the first conversion. That way you can make enterprise-scale communication feel personal, intentional, and consistent across every touchpoint.

Customer Journey Pods and Martech Team Structures

Running digital communications at Loblaw means managing one of the largest customer ecosystems in the country. The team sends millions of messages across grocery, pharmacy, and e-commerce brands every week. Each interaction has to feel personal, relevant, and timely, even when it comes from a massive organization. Megan explains that the only way to handle that kind of scale is to treat data as the operating system and collaboration as the backbone.

Her team relies on analytics to shape every message. Real-time signals from dozens of digital properties guide what customers see, when they see it, and how those experiences evolve. It is a constant feedback loop between behavior and communication. “We lean a lot into the data that we gather,” Megan says. “That pretty much drives almost everything that we do.” The systems are only half the story, though. The other half is how her team stays connected across offices, divisions, and projects. They share knowledge in Coda, manage progress in Jira, and rely on Slack to keep conversations fluid. Even their emojis have purpose, creating a shared language that makes collaboration faster and more human.

“Everything that we do, we share that knowledge back and forth so that we can continue to learn off each other,” Megan said.

The team structure used to follow the company’s business units. Each division had its own specialists who acted like small internal agencies. It worked for speed, but it made collaboration harder. Megan reorganized everything around the customer journey instead. Her teams now work in “pods” that align with stages such as onboarding, discovery, shopping, and post-purchase. Each pod has both data and creative ownership over its domain.
That way, a single team can experiment, learn, and apply what works across multiple brands.

Megan also built intentional overlap between pods to keep ideas moving. For example, the loyalty and early engagement pod owns both new-member activation and retention. That connection helps them understand the full customer arc, from first purchase to repeat visits. The result is a flexible structure that shares expertise fluidly without losing focus. Large enterprises tend to slow down under their own weight, but this model keeps Loblaw’s marketing engine fast, synchronized, and grounded in customer behavior.

The work Megan’s team does might look complex from the out...
Nov 4, 2025 • 53min

194: Jane Menyo: How Gong democratized customer proof with AI research and standardized prompts

What’s up everyone, today we have the pleasure of sitting down with Jane Menyo, Sr. Director, Solutions & Customer Marketing @ Gong.

(00:00) - Jane-audio
(01:01) - In This Episode
(04:43) - How Solutions Marketing Turns Customer Insights Into Strategy
(09:22) - Using AI to Mine Real Customer Intelligence from Conversations
(13:18) - Why Stitching Research Sequences Works in Customer Marketing
(17:09) - Using AI Trackers to Uncover Buyer Behavior in Sales Conversations
(23:21) - How Standardized Prompts Improve Sales Enablement Systems
(29:43) - Building Messaging Systems That Scale Across Industries
(34:15) - How Gong’s Research Assistant Slack Bot Delivers Instant Customer Proof
(38:26) - Avoiding Mediocre AI Marketing Research
(43:42) - Why Customer Proof Outperforms AI-Generated Marketing
(45:41) - Why Rest Strengthens Creative Output in Marketing

Summary: Jane built her marketing practice around listening. At Gong, she turned raw customer conversations into a live feedback system that connects sales calls, product strategy, and messaging in real time. Her team uses AI to surface patterns from the field and feed them back into content that actually reflects how people buy. She runs on curiosity and recovery, finding her best ideas mid-run. In a world obsessed with producing more, Jane’s work reminds marketers to listen better. The smartest strategies start in the quiet moments when someone finally hears what the customer’s been saying all along.

About Jane

Jane Menyo leads Solutions and Customer Marketing at Gong, where she’s known for fusing strategy with storytelling to turn customers into true advocates. She built Gong’s customer marketing engine from the ground up, scaling programs that drive adoption, retention, and community impact across the company’s revenue intelligence ecosystem.

Before Gong, Jane led customer and solutions marketing at ON24, where she developed go-to-market playbooks and launched large-scale advocacy initiatives that connected customer voice to product innovation. Earlier in her career, she helped shape demand generation and brand strategy at Comprehend Systems (a Y Combinator and Sequoia-backed life sciences startup), laying the operational groundwork that fueled growth.

A former NCAA All-American and U.S. Olympic Trials contender, Jane brings a rare blend of discipline, creativity, and competitive energy to her leadership. Her approach to marketing is grounded in empathy and powered by data; a balance that turns customer stories into growth engines.

How Solutions Marketing Turns Customer Insights Into Strategy

Jane’s role at Gong evolved from building customer advocacy programs to leading both customer and solutions marketing. What began as storytelling and adoption work expanded into shaping how Gong positions its products for different personas and industries. The shift moved her from celebrating customer wins to architecting how those wins inform the company’s broader go-to-market strategy.

Persona marketing only works when it goes beyond demographics and titles. Jane treats it as an operational system that connects customer understanding with product truth. Her team studies how real people use Gong, where they get stuck, what outcomes they care about, and how their teams actually make buying decisions. Those details guide every message Gong sends into the market. It is a constant feedback loop that keeps the company close to how customers think and work.

Her solutions marketing team functions like a mirror to product marketing.
Product marketers focus on what the product can do, while Jane’s team translates that into why it matters to specific audiences. They do not write from feature lists. They write from the field. When a sales manager spends half her day in Gong but still struggles to coach reps efficiently, Jane’s team crafts stories and materials that speak directly to that pain. The goal is to make every communication feel like it was written from inside the customer’s daily workflow.

“Our work is about meeting customers where they are and helping them get to outcomes faster,” Jane said.

That perspective only works when every team in the company has equal access to the customer’s voice. Gong’s own technology makes that possible. Conversations, feedback, and usage patterns are captured and shared automatically, so customer knowledge is no longer limited to those on the front lines. Jane’s group uses that visibility to deepen persona profiles, test new positioning, and identify emerging trends before they reach scale. It makes the company more responsive and keeps messaging grounded in real behavior instead of assumption.

For anyone building customer marketing systems, the lesson is practical. Treat persona development as a live system, not a static report. Use customer data to update your understanding regularly. Create tools that let everyone in your company hear what customers say in their own words. That way you can write content, sales materials, and product messaging that actually aligns with how people buy, not how you wish they did.

Key takeaway: Persona marketing works when it functions as an always-on loop between customer data and company action. Map real behaviors, refresh those insights often, and share them widely. When everyone in your company hears the customer directly, you can shape messaging that feels relevant, personal, and authentic. That way you can scale customer understanding instead of guessing at it.

Using AI to Mine Real Customer Intelligence from Conversations

AI is reshaping how teams understand their customers. Jane uses it as a force multiplier for customer research, not a replacement for human interpretation. Her process starts inside Gong’s platform, where every call, email, and deal interaction holds untapped evidence of what customers actually think. Instead of relying on small surveys or intuition, her team digs into those real conversations to extract patterns that explain why deals move forward or stall.

When the team explores a new persona or market, they begin with what customers have already said. They gather every interaction tied to that persona and run it through a standardized set of research questions. In one project focused on CIOs, Jane’s team analyzed hundreds of calls to understand how these executives engage in deals. They wanted to know what information CIOs request, what they challenge, and how their questions differ from other buyers.

“We were able to run a series of questions across hundreds of calls and get standardized insights in a couple of days,” Jane said. “That changed the tempo of how we learn.”

Once they finish mining internal conversations, they widen their view to external data. They use AI tools like ChatGPT to scan analyst reports, trade publications, and articles that mention the same personas. That process identifies what topics are rising in the market and how those trends align with what Gong’s customers are discussing in their calls.
The result is a dual-layered map of reality: what customers say in private conversations and what the market signals in public forums.

This kind of research produces better decisions because it pairs scale with nuance. AI speeds up analysis across thousands of data points, but empathy gives meaning to those patterns. That way you can identify where customer perception shifts are happening and adjust messaging, enablement, or product focus before the market catches up.

Key takeaway: Use AI to process the noise, not to replace your judgment. Start with the data you already have; call recordings, customer emails, and deal transcripts, and create a structured framework for what you want to learn. Th...
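A structured framework like that can be as simple as one fixed question set applied to every transcript. The sketch below shows the pattern with the OpenAI Python client; the questions, model name, and hard-coded transcript are assumptions for illustration, and this is not Gong’s internal tooling.

```python
# Run one standardized set of research questions across many call transcripts
# so the answers stay comparable from call to call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTIONS = [
    "What information does the CIO ask for in this call?",
    "What objections or challenges does the CIO raise?",
    "How does the CIO's involvement differ from other stakeholders on the call?",
]

def analyze_transcript(transcript: str) -> str:
    prompt = (
        "Answer each question using only evidence from the transcript below, "
        "quoting the speaker where possible.\n\n"
        + "\n".join(f"{i + 1}. {q}" for i, q in enumerate(QUESTIONS))
        + "\n\nTRANSCRIPT:\n"
        + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Transcripts would normally come from a call-recording export.
sample = "[00:01] CIO: Before we go further, walk me through your security review process..."
print(analyze_transcript(sample))
```

Keeping the question set identical across calls is what makes the answers comparable, which is the property that let Jane’s team standardize insights across hundreds of calls in a couple of days.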
Oct 28, 2025 • 1h 3min

193: David Joosten: The Politics and architecture of martech transformation

What’s up everyone, today we have the pleasure of sitting down with David Joosten, Co-Founder and President at GrowthLoop and the co-author of ‘First-Party Data Activation’.

(00:00) - Intro
(01:02) - In This Episode
(03:47) - Earning The Right To Transform Martech
(08:17) - Why Internal Roadshows Make Martech Wins Stick
(10:52) - Architecture Shapes How Teams Move and What They Believe
(16:25) - Bring Order to Customer Data With the Medallion Framework
(21:33) - The Real Enemy of Martech is Fragmented Data
(28:39) - Stop Calling Your CRM the Source of Truth
(34:47) - Building the Tech Stack People Rally Behind
(38:18) - Why Most CDP Failures Start With Organizational Misalignment
(44:18) - Why Tough Conversations Strengthen Lifecycle Marketing
(55:15) - Why Experimentation Culture Strengthens Martech Leadership
(01:00:00) - How to Use a North Star to Stay Focused in Leadership

Summary: David learned that martech transformation begins with proof people can feel. Early in his career, he built immaculate systems that looked impressive but delivered nothing real. Everything changed when a VP asked him to show progress instead of idealistic roadmaps. From that moment, David focused on momentum and quick wins. Those early victories turned into stories that spread across the company and built trust naturally. Architecture became his silent advantage, shaping how teams worked together and how confidently they moved.

About David

David is the co-founder of GrowthLoop, a composable customer data platform that helps marketers connect insights to action across every channel. He previously worked at Google, where he led global marketing programs and helped launch the Nexus 5 smartphone. Over the years, he has guided teams at Indeed, Priceline, and Google in building first-party data strategies that drive clarity, collaboration, and measurable growth.

He is the co-author of First-Party Data Activation: Modernize Your Marketing Data Platform, a practical guide for marketers who want to understand their customers through direct, consent-based interactions. David helps teams move faster by removing data friction and building marketing systems that adapt through experimentation. His work brings energy and empathy to the challenge of modernizing data-driven marketing.

Earning The Right To Transform Martech

Every marketing data project starts with ambition. Teams dream of unified dashboards, connected pipelines, and a flawless single source of truth. Then the build begins, and progress slows to a crawl. David remembers one project vividly. His team at GrowthLoop had connected more than 200 data fields for a global tech company, yet every new campaign still needed more. The setup looked impressive, but nothing meaningful was shipping.

“We spent quarters building the perfect setup,” David said. “Then the VP of marketing called me and said, ‘Where are my quick wins?’”

That question changed his thinking. The VP wasn’t asking for reports or architecture diagrams. He wanted visible proof that the investment was worth it. He needed early wins he could show to leadership to keep momentum alive. David realized that transformation happens through demonstration, not design. Theoretical perfection means little when no one in marketing can point to progress.

From then on, he started aiming for traction over theory. That meant focusing on use cases that delivered impact quickly. He looked for under-supported teams that were hungry to try new tools, small markets that moved fast, and forgotten product lines desperate for attention.
Those early adopters created visible success stories. Their enthusiasm turned into social proof that carried the project forward.

Momentum built through results is what earns the right to transform. When others in the organization see evidence of progress, they stop questioning the system and start asking how to join it.

Key takeaway: Martech transformations thrive on proof, not perfection. Target high-energy teams where quick wins are possible, deliver tangible outcomes fast, and use that momentum to secure organizational buy-in. Transformation is granted to those who prove it works, one visible success at a time.

Why Internal Roadshows Make Martech Wins Stick

An early martech win can disappear as quickly as it arrives. A shiny dashboard, a clean sync, or a new workflow can fade into noise unless you turn it into something bigger. David explains that the real work begins when you move beyond Slack celebrations and start building visibility across the company. The most effective teams bring their success to where influence actually happens. They show up in weekly leadership meetings for sales, data, and marketing, and they connect their progress to the company’s larger mission. That connection transforms an isolated result into shared purpose.

“If you can get invited to those regular meetings and actually tie the win back to the larger vision, you’ll bring people along in a much bigger way,” David said.

The mechanics of this matter. A martech team can create genuine momentum by turning their story into a live narrative that other departments care about. Each meeting becomes a checkpoint where others see how their world benefits. Instead of flooding channels with metrics, show impact in person. When people see faces, hear real stories, and feel included in the mission, adoption follows naturally.

David has seen that the most credible voices are not the ones who built the system, but those who benefited from it. He encourages marketers to bring those users along. When a sales manager or a CX leader shares how a workflow saved hours or unlocked new visibility, trust deepens. One authentic endorsement in a meeting will do more for your reputation than a dozen slide decks.

Momentum also depends on rhythm. Passionate advocates move ideas forward, not mass announcements. David’s playbook involves building a few strong allies who believe in your work, keeping promises, and maintaining a consistent drumbeat of delivery. Predictable progress creates confidence, and confidence earns permission to take bigger swings next time.

Key takeaway: Wins that stay private fade fast. Present them live, in front of the right rooms, and connect them to the company’s shared mission. Bring along the people most impacted to tell their side of the story, and focus on nurturing a few genuine allies instead of broadcasting to everyone. That way you can turn one early success into a pattern of momentum that fuels every project that follows.

Architecture Shapes How Teams Move and What They Believe

Technology architecture does more than keep the lights on. It defines how much teams trust each other, how quickly they adapt, and how confidently a brand competes. David describes it as invisible scaffolding, the kind that quietly dictates how an organization moves. Once the systems are in place, the defaults harden into habits. Those habits shape behavior long after anyone remembers who set them.

“People can get used to almost anything,” David said. “You acquire habits from architectural decisions made long ago, and it’s not conscious.
You just walk into the context and act within it.”

That pattern shows up inside every marketing organization. Data teams often build for accuracy and control, while marketers push for agility and access. The architecture decides which side wins. When the design prioritizes risk management, marketers spend months waiting for queries to be approved. When it prioritizes freedom without governance, trust breaks down the first time a campaign misfires. Neither version scales.

Composable system...
Oct 21, 2025 • 1h 6min

192: Angela Vega: Expedia’s Martech leader on ADHD, discernment, and the art of picking battles in martech

Angela Vega, Director at Expedia Group, shares her unique insights on leveraging ADHD in martech leadership. She discusses building an ADHD tech stack that turns distractions into productivity through a structured workflow. Angela explains how her late diagnosis reshaped her leadership style, emphasizing that execution is more critical than strategy. She offers a framework for discernment in decision-making and highlights the importance of energy management for effective marketing operations.
Oct 14, 2025 • 1h 3min

191: Aboli Gangreddiwar: Self healing data agents, hivemind memory curators and living documentation

What’s up everyone, today we have the pleasure of sitting down with Aboli Gangreddiwar, Senior Director of Lifecycle and Product Marketing at Credible.

(00:00) - Intro
(01:10) - In This Episode
(04:54) - Agentic Infrastructure Components in Marketing Operations
(09:52) - Self Healing Data Quality Agents
(16:36) - Data Activation Agents
(26:56) - Campaign QA Agents
(32:53) - Compliance Agents
(39:59) - Hivemind Memory Curator
(51:22) - AI Browsers Could Power Living Documentation
(58:03) - How to Stay Balanced as a Marketing Leader

Summary: Aboli and Phil explore AI agent use cases and the operational efficiency potential of AI for marketing ops teams. Data quality agents promise self-healing pipelines, though their value depends on strong metadata. QA agents catch broken links, design flaws, and compliance issues before launch, shrinking review cycles from days to minutes. An AI hivemind memory curator records every experiment and outcome, giving teams durable knowledge instead of relying on long-tenured employees. Documentation agents close the loop, with AI browsers hinting at a future where SOPs and playbooks stay accurate by default.

About Aboli

Aboli Gangreddiwar is the Senior Director of Lifecycle and Product Marketing at Credible, where she leads growth, retention, and product adoption for the personal finance marketplace. She has previously led lifecycle and product marketing at Sundae, helping scale the business from Series A to Series C, and held senior roles at Prosper Marketplace and Wells Fargo. Aboli has built and managed high-performing teams across acquisition, lifecycle, and product marketing, with a track record of driving customer growth through a data-driven, customer-first approach.

Agentic Infrastructure Components in Marketing Operations

Agentic infrastructure depends on layers that work together instead of one-off experiments. Aboli starts with the data layer because every agent needs the same source of truth. If your data is fragmented, agents will fail before they even start. Choosing between Snowflake, Databricks, or another warehouse becomes less about vendor preference and more about creating a system where every agent reads from the same place. That way you can avoid rework and inconsistencies before anything gets deployed.

Orchestration follows as the layer that turns isolated tools into workflows. Most teams play with a single agent at a time, like one that generates subject lines or one that codes email templates. Those agents may produce something useful, but orchestration connects them into a process that runs without human babysitting. In lifecycle marketing, that could mean a copy agent handing text to a Figma agent for design, which then passes to a coding agent for HTML. The difference is night and day: disconnected experiments versus a relay where agents actually collaborate.

“If I am sending out an email campaign, I could have a copy agent, a Figma agent, and a coding agent. Right now, teams are building those individually, but at some point you need orchestration so they can pass work back and forth.”

Execution is where many experiments stall. An agent cannot just generate outputs in a vacuum. It needs an environment where the work lives and runs. Sometimes this looks like a custom GPT creating copy inside OpenAI. Other times it connects directly to a marketing automation platform to publish campaigns. Execution means wiring agents into systems that already matter for your business.
That way you can turn novelty into production-level work.

Feedback and human oversight close the loop. Feedback ensures agents learn from results instead of repeating the same mistakes, and human review protects brand standards, compliance, and legal requirements. Tools like Zapier already help agents talk across systems, and protocols like MCP push the idea even further. These pieces are developing quickly, but most teams still treat them as experiments. Building infrastructure means treating feedback and oversight as required layers, not extras.

Key takeaway: Agentic infrastructure requires more than a handful of isolated agents. Build it in five layers: a unified data warehouse, orchestration to coordinate handoffs, execution inside production tools, feedback loops that improve performance, and human oversight for brand safety. Draw this stack for your own team and map what exists today. That way you can see the gaps clearly and design the next layer with intention instead of chasing hype.

Self Healing Data Quality Agents

Autonomous data quality agents are being pitched as plug-and-play custodians for your warehouse. Vendors claim they can auto-fix more than 200 common data problems using patterns they have already mapped from other customers. Instead of ripping apart your stack, you “plug in” the agent to your warehouse or existing data layer. From there, the system runs on the execution layer, watching data as it flows in, cleaning and correcting records without waiting for human approval. The promise is speed and proactivity: problems handled in real time rather than reports generated after the damage is already done.

The mechanics are ambitious. These agents rely on pre-mapped patterns, best practices, and the accumulated experience of diverse customer sources. Their features go beyond simple alerts. Vendors market capabilities like:

- Data issue detection that flags anomalies as records arrive.
- Auto-generated rules so you do not have to write manual SQL for every edge case.
- Auto-resolution workflows that decide which record wins in conflict scenarios.
- Self-healing pipelines that reroute or repair flows before they break downstream dashboards.

Aboli noted that the concept makes sense in theory but still depends heavily on the quality of metadata. She recalled using Snowflake Copilot and asking it for user lists by specific criteria. The model understood her intent, but it pulled from the wrong tables.

“If it had the right metadata, the right dictionary, or if I had access to the documentation, I could have navigated it better and corrected the tables it was looking at,” Aboli said.

Phil highlighted how this overlaps with data observability tools. Companies like Informatica, Qlik, and Ataccama already dominate Gartner’s “augmented data quality” quadrant, while newcomers are rebranding the category as “agentic data management.” DQ Labs markets itself as a leader in this space. Startups like Acceldata in India and Delpha in France are pitching autonomous agents as the future, while Alation has gone further by releasing a suite of agents under an “Agentic Data Intelligence” platform. The buzz is loud, but the mechanics echo tools that ops teams have worked with for years.

Aboli stressed that marketers and ops leaders should resist jumping straight to procurement. Demoing these tools can spark useful ideas, and sometimes the exposure itself inspires practical fixes in-house. The key is to connect adoption to a specific pain point.
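One way to pressure test that pain point before committing to a contract is to prototype the simplest version of the checks in-house. Below is a minimal sketch of the detect-then-resolve loop these agents automate, assuming a pandas DataFrame of contact records; the column names, the dedup rule, and the “most recently updated record wins” policy are illustrative, not any vendor’s actual behavior.

```python
# Detect common record issues, then apply a conservative auto-resolution rule.
# Column names and the resolution policy are illustrative assumptions.
import pandas as pd

records = pd.DataFrame({
    "email":      ["a@x.com", "a@x.com", "b@x.com", None],
    "plan":       ["pro", "pro", "free", "free"],
    "updated_at": pd.to_datetime(["2025-11-01", "2025-12-01", "2025-10-15", "2025-09-30"]),
})

# Detection: flag issues as records arrive.
issues = {
    "missing_email": int(records["email"].isna().sum()),
    "rows_with_duplicate_email": int(records.duplicated(subset="email", keep=False).sum()),
}
print("detected:", issues)

# Auto-resolution: drop rows with no usable key, then keep the most recently
# updated record for each email address.
clean = (
    records.dropna(subset=["email"])
           .sort_values("updated_at")
           .drop_duplicates(subset="email", keep="last")
)
print(clean)
```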
If your team loses days untangling duplicates and broken joins, the ROI might be obvious. If your pipelines already hold together through strict governance, then the spend may not pay off.

Key takeaway: Autonomous data quality agents can detect issues, generate rules, resolve conflicts, and even heal pipelines in real time. Their effectiveness depends on metadata discipline and the actual pain of bad data in your org. Use vendor demos as a scouting tool, then match the investment to measurable business problems. That way you can avoid buzzword chasing and apply agentic tools where they drive the most immediate value.

Data Activation Agents
