
Humans of Martech

Latest episodes

Jul 15, 2025 • 1h 7min

178: Guta Tolmasquim: Connecting brand to revenue with attribution algorithms that reflect brand complexity

What’s up everyone, today we have the pleasure of sitting down with Guta Tolmasquim, CEO at Purple Metrics. Summary: Brand measurement often feels like a polite performance nobody fully believes, and Guta learned this firsthand moving from performance marketing spreadsheets to startup rebrands that showed clear sales bumps everyone could feel. She kept seeing blind spots, like a bank’s soccer sponsorship that quietly cut churn or old LinkedIn pages driving conversions no one tracked. When she built Purple Metrics, she refused to pretend algorithms could explain everything, designing tools that encourage gradual shifts over sudden upheaval. She watched CMOs massage attribution settings to fit their instincts and knew real progress demanded something braver: smaller experiments, simpler language, and the courage to say, “We tried, we learned,” even when results stung. Her TikTok videos in Portuguese became proof that brand work can pay off fast if you track it honestly. If you’re tired of clean stories masking messy reality, her perspective feels like a breath of fresh air.

How Brand Measurement Connects to Revenue

Brand measurement drifted away from commercial reality when marketers decided to chase every click and impression. Guta traced this pattern back to the 1970s, when companies decided to separate branding and sales into distinct functions. Before that split, teams treated branding as a sales lever that directly supported revenue. The division created two camps that rarely spoke the same language. One camp focused on lavish creative campaigns, and the other became fixated on dashboards filled with shallow metrics.

Guta started her career in performance marketing because she valued seeing every dollar accounted for. She described those years as productive but ultimately unsatisfying. She moved to big enterprises and spent nearly a decade trying to make brand lift reports feel credible in boardrooms.
She eventually turned her focus to startups and noticed a clearer path. Startups often have budgets that force prioritization. They pick one initiative, implement it, and measure its direct impact on revenue without dozens of overlapping campaigns.

“When you only have money to do one thing, it becomes obvious what’s working,” Guta explained. “You almost get this A/B test without even planning for it.”

That clarity shaped her view of brand measurement. She learned that disciplined isolation of variables makes results easier to trust. When a startup rebranded, sales moved in a way that confirmed the decision. The data was hard to ignore. Guta saw purchase volumes increase after brand updates, and she knew these signals were stronger than any generic awareness metric. The companies she worked with never relied on sentiment scores alone because they tracked actual transactions.

Guta later built her own product to modernize brand research with a sharper focus on financial outcomes. She designed the system to map brand activities to revenue signals so marketing could prove its impact without resorting to vague reports. The product found traction because it respected the mindset of finance leaders and offered direct evidence that branding drives growth. Guta believed this connection was essential for any team that wants to secure resources and build trust across departments.

Key takeaway: Brand measurement works best when you focus on one clear change at a time and track its impact on revenue without distractions. You can earn credibility with your finance partners by showing how brand decisions move purchase behavior in measurable ways. When you build discipline into measurement and align it with actual sales, you transform branding from a creative exercise into a proven growth lever.

Examples Where Brand Investments Shifted Real Business Outcomes

Brand investments often get treated as trophies that decorate a budget presentation.
Guta shared a story that showed how sponsorships can drive specific business results when you track them properly. A Brazilian bank decided to sponsor a soccer championship. On the surface, the campaign looked like a glossy PR move. When Guta’s team measured what they called “mindset metrics,” they found that soccer fans reported higher loyalty toward the bank. The data set off a chain reaction that forced everyone involved to reconsider how they viewed sponsorships.

The bank pulled internal reports and discovered a clear pattern. Fans who followed the soccer sponsorship churned at much lower rates than other customers. Guta said the marketing team realized they were sitting on a revenue engine they never fully understood. They began to see sponsorship as a serious retention tool rather than a vanity spend. That shift did not happen automatically. Someone had to ask whether the big brand push was connected to any measurable outcomes, and then look carefully for the link between sentiment and behavior.

Guta described another client who rebranded their product suite under one name. They planned to delete the old LinkedIn pages that showed the previous brand identities. The team assumed nobody cared about those pages because LinkedIn conversions looked low in standard reports. Guta’s data proved otherwise. Those profiles accounted for more than 10% of conversions. Even though LinkedIn often buries links and limits reach, buyers visited those profiles before searching on Google and converting later.

“Organic is a myth. It’s just conversions you forgot to measure.”

Guta said this with the calm certainty of someone who has studied enough attribution to see where the gaps live. She explained that once you recognize how long it takes for a sponsorship impression to spark a branded search or a sale, you change how you plan. You stop guessing about campaign timing. You start working backward from the conversion window.
If you expect a surge in July, you begin your campaigns in May so your budget has time to mature into real conversions instead of wasted impressions.

Key takeaway: Map the path between your brand investments and your conversions with concrete data instead of assumptions. Use mindset metrics to identify early loyalty signals, then confirm whether those signals correlate with retention and branded search. When you see exactly how long each channel takes to drive revenue, you can plan campaigns months in advance and protect your budget with evidence that proves your strategy is working.

The Tangible Outcomes of Brand: Purchase Intent and Memory Structures

Branding often carries a reputation as a soft layer of sentiment layered on top of performance campaigns, but Guta shares that it operates through a more rigorous mechanism than most teams realize. Branding creates memory structures that store signals in a person’s mind. When customers enter the market ready to buy, they retrieve those signals almost instantly. Their brains pull up familiar visuals, a sense of trust, or a specific promise that speeds up the choice. Guta has seen this happen repeatedly when people move straight from awareness to purchase without even visiting the company’s website again.

Guta describes the reality that many marketing teams get stuck in a single-track mindset. They keep trying to hammer home immediate behaviors without any effort to create longer-term recall. She shares that brands can think about their work in two tracks running side by side:

- One track plants attributes in memory so customers can recall the brand later.
- The other track activates specific behaviors like trying, subscribing, or purchasing.

When companies only focus on activation, they may end up with viral content that does not translate into any buying behavior. Guta has watched teams measure short-term engagement while ignoring whether the campaign ...
Jul 8, 2025 • 58min

177: Chris O’Neill: GrowthLoop CEO on how AI agent swarms and reinforcement learning boost velocity

What’s up everyone, today we have the pleasure of sitting down with Chris O'Neill, CEO at GrowthLoop. Summary: Chris explains how leading marketing teams are deploying swarms of AI agents to automate campaign workflows with speed and precision. By assigning agents to tasks like segmentation, testing, and feedback collection, marketers build fast-moving loops that adapt in real time. Chris also breaks down how reinforcement learning helps avoid a sea of sameness by letting campaigns evolve mid-flight based on live data. To support velocity without sacrificing control, top teams are running red team drills, assigning clear data ownership, and introducing internal AI regulation roles that manage risk while unlocking scale.

The 2025 AI and Marketing Performance Index

The 2025 AI and Marketing Performance Index that GrowthLoop put together is excellent; we’re honored to have gotten our hands on it before it went live and to unpack it with Chris in this episode. The report answers timely questions a lot of teams are wrestling with:

- Are top performers ahead of the AI curve or just focused on solid foundations?
- Are top performers focused on speed and quantity, or does quality still win in a sea of sameness?

We’ve chatted with plenty of folks that are betting on patience and polish. But GrowthLoop’s data shows the opposite.

🤖🏃 Top performing marketing teams are already scaling with AI, and their focus on speed is driving growth. For some, this might be a wake-up call. But for others, it’s confirmation and might seem obvious: teams that are using AI and working fast are growing faster. We all get the why. But the big mystery is the how. So let’s dig into how teams can implement AI to grow faster and how to prepare marketers and marketing ops folks for the next 5 years.

Reframing AI in Marketing Around Outcomes and Velocity

Marketing teams love speed. AI vendors promise it. Founders crave it.
The problem is most people chasing speed have no idea where they’re going. Chris prefers velocity. Velocity means you are moving fast in a defined direction. That requires clarity. Not hype. Not generic goals. Clarity.

AI belongs in your toolkit once you know exactly which metric needs to move. Chris puts it plainly: revenue, lifetime value, or cost. Pick one. Write it down. Then explain how AI helps you get there. Not in vague marketing terms. In business terms. If you cannot describe the outcome in a sentence your CFO would nod at, you are wasting everyone’s time.

“Being able to articulate with precision how AI is going to drive and improve your profit and loss statement, that’s where it starts.”

Too many teams start with tools. They get caught up in features and launch pilots with no destination. Chris sees this constantly. The projects that actually work begin with a clearly defined business problem. Only after that do they start choosing systems that will accelerate execution. AI helps when it fits into a system that already knows where it’s going.

Velocity also forces prioritization. If your AI project can't show directional impact on a core business metric, it does not deserve resources. That way you can protect your time, your budget, and your credibility. Chris doesn’t get excited by experiments. He gets excited when someone shows him how AI will raise net revenue by half a percent this quarter. That’s the work.

Key takeaway: Start with a business problem. Choose one outcome: revenue, lifetime value, or cost reduction. Define how AI contributes to that outcome in concrete terms. Use speed only when you know the direction. That way you can build systems that deliver velocity, not chaos.

How to Use Agentic AI for Marketing Campaign Execution

Many marketing teams still rely on AI to summarize campaign data, but stop there. They generate charts, read the output, and then return to the same manual workflows they have used for years. Chris sees this pattern everywhere.
Teams label themselves as “data-driven,” while depending on outdated methods like list pulls, rigid segmentation, and one-off blasts that treat everyone in the same group the same way.

Chris calls this “waterfall marketing.” A marketer decides on a goal like improving retention or increasing lifetime value. Then they wait in line for the data team to write SQL, generate lists, and pass it back. That process often takes days or weeks, and the result is usually too narrow or too broad. The entire workflow is slow, disconnected, and full of friction.

Teams that are ahead have moved to agent-based execution. These systems no longer depend on one-off requests or isolated tools. AI agents access a shared semantic layer, interpret past outcomes, and suggest actions that align with business goals. These actions include:

- Identifying the best-fit audience based on past conversions
- Suggesting campaign timing and sequencing
- Launching experiments automatically
- Feeding all results back into a single data source

“You don’t wait in line for a data pull anymore,” Chris said. “The agent already knows what audience will likely move the needle, based on what’s worked in the past.”

Marketing teams using this model no longer debate which list to use or when to launch. They build continuous loops where agents suggest, execute, and learn at every stage. These agents now handle tasks better than most humans, especially when volume and speed matter. Marketers remain in the loop for creative decisions and audience understanding, but the manual overhead is no longer the cost of doing business.

Key takeaway: AI agents become effective when they handle specific steps across your marketing workflow. By assigning agents to segmentation, timing, testing, and feedback collection, you can move faster and operate with more precision.
That way you can replace the long list of disconnected tasks with a tight loop of execution that adapts in real time.

How Reinforcement Learning Optimizes GenAI Content

Reinforcement learning gives marketers a way to optimize AI-generated content without falling into repetition. Chris has seen firsthand how most outbound sequences feel eerily similar. Templates dominate, personalization tags glitch, and every message sounds like it was assembled by the same spreadsheet. The problem does not stem from the idea of automation but from its poor execution. Teams copy tactics without refining their inputs or measuring what actually works.

Chris points to reinforcement learning as the fix for this stagnation. He contrasts it with more rigid machine learning models, which make predictions but often lack adaptability. Reinforcement learning works differently. It learns by doing. It tracks real-world feedback and updates decision-making logic in motion. That gives marketers an edge in adjusting timing, sequencing, and delivery based on signals from actual behavior.

“It would be silly to ignore all the data from previous experiments,” Chris said. “Reinforcement learning gives us a way to build on it without starting over each time.”

Chris believes this creates space for creative work rather than replacing it. Agents should own the tedious tasks. That includes segmenting lists, building reports, and managing repetitive logic. Human teams can then focus on storytelling, taste, and trend awareness. Chris referenced a conversation with a senior designer at Gap who shared a similar view. This designer believes AI lets him expand his creative range by clearing room for deep work. Chris sees the same opportunity in marketing. The system works best when agents handle the mechanical layers, and humans bring energy, weirdness, and originality.

Many leaders are still caught in operational quicksand. Their teams wrestle with bl...
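The episode doesn't reveal how GrowthLoop implements this, but the "learn by doing" loop Chris describes can be illustrated with a minimal epsilon-greedy bandit: the system sends message variants, watches which ones convert, and shifts traffic toward the winner while still exploring. Everything here (variant names, conversion rates, parameters) is invented for illustration only.

```python
import random

# Invented conversion rates for three message variants (illustration only).
TRUE_RATES = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}

def choose(stats, epsilon=0.1):
    """Explore a random variant with probability epsilon, else exploit the best so far."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v]["wins"] / max(stats[v]["sends"], 1))

def simulate(n_sends=20_000, seed=7):
    """Run the send -> observe -> update loop and return per-variant tallies."""
    random.seed(seed)
    stats = {v: {"sends": 0, "wins": 0} for v in TRUE_RATES}
    for _ in range(n_sends):
        v = choose(stats)
        stats[v]["sends"] += 1
        if random.random() < TRUE_RATES[v]:  # observed conversion feeds back into stats
            stats[v]["wins"] += 1
    return stats

if __name__ == "__main__":
    stats = simulate()
    print({v: s["sends"] for v, s in stats.items()})
```

The point of the sketch is the shape of the loop, not the algorithm choice: every send produces feedback, and the next decision uses it, instead of waiting for a quarterly report to reshuffle templates. Production systems would use richer state and reward signals than a three-arm bandit.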
Jul 1, 2025 • 1h 9min

176: Rajeev Nair: Causal AI and a unified measurement framework

Rajeev Nair, Co-Founder and Chief Product Officer at Lifesight, shares insights on the future of marketing measurement. He advocates for a unified approach that combines multi-touch attribution, incrementality, and causal AI to uncover real driver insights. Rajeev dismantles traditional attribution myths, emphasizing that clarity beats correlation. By innovating a system beyond simple dashboards, he reveals the challenges marketers face with sparse data and tests. His work paves the way for improved decision-making and effective marketing strategies.
Jun 24, 2025 • 1h 3min

175: Hope Barrett: SoundCloud’s Martech Leader reflects on their huge messaging platform migration and structuring martech like a product

What’s up everyone, today we have the pleasure of sitting down with Hope Barrett, Sr Director of Product Management, Martech at SoundCloud. Summary: In twelve weeks, Hope led a full messaging stack rebuild with just three people. They cut 200 legacy campaigns down to what mattered, partnered with MoEngage for execution, and shifted messaging into the product org. Now, SoundCloud ships notifications like features that are part of a core product. Governance is clean, data runs through BigQuery, and audiences sync everywhere. The migration was wild and fast, but incredibly meticulous, and the ultimate gain was making the whole system make sense again.

About Hope

Hope Barrett has spent the last two decades building the machinery that makes modern marketing work, long before most companies even had names for the roles she was defining. As Senior Director of Product Management for Martech at SoundCloud, she leads the overhaul of their martech stack, making every tool in the chain pull its weight toward growth. She directs both the performance marketing and marketing analytics teams, ensuring the data is not just collected but used with precision to attract fans and artists at the right cost.

Before SoundCloud, she spent over six years at CNN scaling their newsletter program into a real asset, not just a vanity list. She laid the groundwork for data governance, built SEO strategies that actually stuck, and made sure editorial, ad sales, and business development all had the same map of who their readers were. Her career also includes time in consulting, digital analytics agencies, and leadership roles at companies like AT&T, Patch, and McMaster-Carr.
Across all of them, she has combined technical fluency with sharp business instincts.

SoundCloud’s Big Messaging Platform Migration and What it Taught Them About Future-Proofing Martech

Diagnosing Broken Martech Starts With Asking Better Questions

Hope stepped into SoundCloud expecting to answer a tactical question: what could replace Nielsen’s multi-touch attribution? That was the assignment. Attribution was being deprecated. Pick something better. What she found was a tangle of infrastructure issues that had very little to do with attribution and everything to do with operational blind spots. Messages were going out, campaigns were triggering, but no one could say how many or to whom with any confidence. The data looked complete until you tried to use it for decision-making.

The core problem wasn’t a single tool. It was a decade of deferred maintenance. The customer engagement platform dated back to 2016. It had been implemented when the vendor’s roadmap was still theoretical, so SoundCloud had built their own infrastructure around it. That included external frequency caps, one-off delivery logic, and measurement layers that sat outside the platform. The platform said it sent X messages, but downstream systems had other opinions. Hope quickly saw the pattern: legacy tooling buried under compensatory systems no one wanted to admit existed.

That initial audit kicked off a full system teardown. The MMP wasn’t viable anymore. Google Analytics was still on Universal. Even the question that brought her in—how to replace MTA—had no great answer. Every path forward required removing layers of guesswork that had been quietly accepted as normal. It was less about choosing new tools and more about restoring the ability to ask direct questions and get direct answers. How many users received a message? What triggered it? Did we actually measure impact or just guess at attribution?

“I came in to answer one question and left rebuilding half the stack.
You start with attribution and suddenly you're gut-checking everything else.”

Hope had done this before. At CNN, she had run full vendor evaluations, owned platform migrations, and managed post-rollout adoption. She knew what bloated systems looked like. She also knew they never fix themselves. Every extra workaround comes with a quiet cost: more dependencies, more tribal knowledge, more reasons to avoid change. Once the platforms can’t deliver reliable numbers and every fix depends on asking someone who left last year, you’re past the point of iteration. You’re in rebuild territory.

Key takeaway: If your team can't trace where a number comes from, the stack isn’t helping you operate. It’s hiding decisions behind legacy duct tape. Fixing that starts with hard questions. Ask what systems your data passes through, which rules live outside the platform, and how long it’s been since anyone challenged the architecture. Clarity doesn’t come from adding more tools. It comes from stripping complexity until the answers make sense again.

Why Legacy Messaging Platforms Quietly Break Your Customer Experience

Hope realized SoundCloud’s customer messaging setup was broken the moment she couldn’t get a straight answer to a basic question: how many messages had been sent? The platform could produce a number, but it was useless. Too many things happened after delivery. Support infrastructure kicked in. Frequency caps filtered volume. Campaign logic lived outside the actual platform. There was no single system of record. The tools looked functional, but trust had already eroded.

The core problem came from decisions made years earlier. The customer engagement platform had been implemented in 2016 when the vendor was still early in its lifecycle. At the time, core features didn’t exist, so SoundCloud built their own solutions around it. Frequency management, segmentation logic, even delivery throttling ran outside the tool. These weren’t integrations. They were crutches.
And they turned what should have been a centralized system into a loosely coupled set of scripts, API calls, and legacy logic that no one wanted to touch.

Hope had seen this pattern before. At CNN, she dealt with similar issues and recognized the symptoms immediately. Legacy platforms tend to create debt you don’t notice until you start asking precise questions. Things work, but only because internal teams built workarounds that silently age out of relevance. Tech stacks like that don’t fail loudly. They fail in fragments. One missing field, one skipped frequency cap, one number that doesn’t reconcile across tools. By the time it’s clear something’s wrong, the actual root cause is buried under six years of operational shortcuts.

“The platform gave me a number, but it wasn’t the real number. Everything important was happening outside of it.”

Hope’s philosophy around messaging is shaped by how she defines partnership. She prefers vendors who act like partners, not ticket responders. Partners should care about long-term success, not just contract renewals. But partnership also means using the tool as intended. When the platform is bent around missing features, the relationship becomes strained. Every workaround is a vote of no confidence in the roadmap. Eventually, you're not just managing campaigns. You’re managing risk.

Key takeaway: If your customer messaging platform can't report true delivery volume because critical logic happens outside of it, you're already in rebuild territory. Don’t wait for a total failure. Audit where key rules live. Centralize what matters. And only invest in tools where out-of-the-box features can support your real-world use cases. That way you can grow without outsourcing half your stack to workaround scripts and tribal knowledge.

Why Custom Martech Builds Quietly Punish You Later

The worst part of SoundCloud’s legacy stack wasn’t the duct-taped infrastructure. It was how long it took to admit it had become a problem.
The platform had been in place since 2016, back when the vendor was still figuring out core features. Instead of switching, SoundCloud stayed locked in ...
Jun 17, 2025 • 1h 5min

174: Joshua Kanter: A 4-time CMO on the case against data democratization

What’s up everyone, today we have the pleasure of sitting down with Joshua Kanter, Co-Founder & Chief Data & Analytics Officer at ConvertML. Summary: Joshua spent the earliest parts of his career buried in SQL, only to watch companies hand out dashboards and call it strategy. Teams skim charts to confirm hunches while ignoring what the data actually says. He believes access means nothing without translation. You need people who can turn vague business prompts into clear, interpretable answers. He built ConvertML to guide those decisions. GenAI only raises the stakes. Without structure and fluency, it becomes easier to sound confident and still be completely wrong. That risk scales fast.

About Joshua

Joshua started in data analytics at First Manhattan Consulting, then co-founded two ventures: Mindswift, focused on marketing experimentation, and Novantas, a consulting firm for financial services. From there, he rose to Associate Principal at McKinsey, where he helped companies make real decisions with messy data and imperfect information. Then he crossed into operating roles, leading marketing at Caesars Entertainment as SVP of Marketing, where budgets were wild.

After Caesars, he became a 3-time CMO (basically 4-time) at PetSmart, International Cruise & Excursions, and Encora, each time walking into a different industry with new problems. He now co-leads ConvertML, where he’s focused on making machine learning and measurement actually usable for the people in the trenches.

Data Democratization Is Breaking More Than It’s Fixing

Data democratization has become one of those phrases people repeat without thinking. It shows up in mission statements and vendor decks, pitched like some moral imperative. Give everyone access to data, the story goes, and decision-making will become magically enlightened.
But Joshua has seen what actually happens when this ideal collides with reality: chaos, confusion, and a lot of people confidently misreading the same spreadsheet in five different ways.

Joshua isn’t your typical out-of-the-weeds CMO; he’s lived in the guts of enterprise data for 25 years. His first job out of college was grinding SQL for 16 hours a day. He’s been inside consulting rooms, behind marketing dashboards, and at the head of data science teams. Over and over, he’s seen the same pattern: leaders throwing raw dashboards at people who have no training in how to interpret them, then wondering why decisions keep going sideways.

There are several unspoken assumptions built into the data democratization pitch. People assume the data is clean. That it’s structured in a meaningful way. That it answers the right questions. Most importantly, they assume people can actually read it. Not just glance at a chart and nod along, but dig into the nuance, understand the context, question what’s missing, and resist the temptation to cherry-pick for whatever narrative they already had in mind.

“People bring their own hypotheses and they’re just looking for the data to confirm what they already believe.”

Joshua has watched this play out inside Fortune 500 boardrooms and small startup teams alike. People interpret the same report with totally different takeaways. Sometimes they miss what’s obvious. Other times they read too far into something that doesn’t mean anything. They rarely stop to ask what data is not present or whether it even makes sense to draw a conclusion at all.

Giving everyone access to data only works when people have the skills to use it responsibly. That means more than teaching Excel shortcut keys. It requires real investment in data literacy, mentorship from technical leads, and repeated, structured practice.
Otherwise, what you end up with is a very expensive system that quietly fuels bias, bad decisions, and work for the sake of work.

Key takeaway: Widespread access to dashboards does not make your company data-informed. People need to know how to interpret what they see, challenge their assumptions, and recognize when data is incomplete or misleading. Before scaling access, invest in skills. Make data literacy a requirement. That way you can prevent costly misreads dressed up as data-driven decision-making.

How Confirmation Bias Corrupts Marketing Decisions at Scale

Executives love to say they are “data-driven.” What they usually mean is “data-selective.” Joshua has seen the same story on repeat. Someone asks for a report. They already have an answer in mind. They skim the results, cherry-pick what supports their view, and ignore everything else. It is not just sloppy thinking. It’s organizational malpractice that scales fast when left unchecked.

To prevent that, someone needs to sit between business questions and raw data. Joshua calls for trained data translators: people who know how to turn vague executive prompts into structured queries. These translators understand the data architecture, the metrics that matter, and the business logic beneath the request. They return with a real answer, not just a number in bold font, but a sentence that says: “Here’s what we found. Here’s what the data does not cover. Here’s the confidence range. Here’s the nuance.”

“You want someone who can say, ‘The data supports this conclusion, but only under these conditions.’ That’s what makes the difference.”

Joshua has dealt with both extremes. There are instinct-heavy leaders who just want validation. There are also data purists who cannot move until the spreadsheet glows with statistical significance. At a $7 billion retailer, he once saw a merchandising exec demand 9,000 survey responses, just so he could slice and dice every subgroup imaginable later. That was not rigor.
It was decision paralysis wearing a lab coat.

The answer is to build maturity around data use. That means investing in operators who can navigate ambiguity, reason through incomplete information, and explain caveats clearly. Data has power, but only when paired with skill. You need fluency, not dashboards. You need interpretation, and above all, you need to train teams to ask better questions before they start fishing for answers.

Key takeaway: Every marketing org needs a data translation layer: real humans who understand the business problem, the structure of the data, and how to bridge the two with integrity. That way you can protect against confirmation bias, bring discipline to decision-making, and stop wasting time on reports that just echo someone's hunch. Build that capability into your operations. It is the only way to scale sound judgment.

You’re Thinking About Statistical Significance Completely Wrong

Too many marketers treat statistical significance like a ritual. Hit the 95 percent confidence threshold and it's seen as divine truth. Miss it, and the whole test gets tossed in the trash. Joshua has zero patience for that kind of checkbox math. It turns experimentation into a binary trap, where nuance gets crushed under false certainty and anything that misses the 0.05 p-value cutoff is labeled a failure. That mindset is lazy, expensive, and wildly limiting.

95% statistical significance does not mean your result matters. It just means your result is probably not random, assuming your test is designed well and your assumptions hold up. Even then, you can be wrong 1 out of every 20 times, which no one seems to talk about in those Monday growth meetings. Joshua’s real concern is how this thinking cuts off all the good stuff that lives in the grey zone: tests that come in at 90 percent confidence, show a consistent directional lift, and still get ignored because someone only trusts green checkmarks.

“People believe that if it doesn’t hit statistical significance, the result isn’t meaningful.
That’s false. And danger...
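The grey-zone point is easy to see with a quick calculation. Below is a minimal sketch (illustrative numbers, not from the episode) of a two-proportion z-test: the same 5 percent to 6 percent lift clears the conventional threshold at one sample size and lands well short of it at half the traffic, even though the directional evidence is identical.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test on the difference between two conversion rates.

    conv_*: number of conversions, n_*: number of visitors per variant.
    Returns (absolute lift of B over A, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return p_b - p_a, p_value

# Same 5% -> 6% lift, two different sample sizes (illustrative numbers)
lift_big, p_big = two_proportion_ztest(200, 4000, 240, 4000)
lift_small, p_small = two_proportion_ztest(100, 2000, 120, 2000)
print(f"4k per arm: lift={lift_big:+.3f}, p={p_big:.3f}")
print(f"2k per arm: lift={lift_small:+.3f}, p={p_small:.3f}")
```

The smaller test shows the exact same directional lift but a much weaker p-value. Tossing it outright throws away real evidence, which is the grey zone Joshua is describing.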
Jun 10, 2025 • 60min

173: Samia Syed: Dropbox's Director of Growth Marketing on rethinking martech like HR efforts

What’s up everyone, today we have the pleasure of sitting down with Samia Syed, Director of Growth Marketing at Dropbox.

Summary: Samia Syed treats martech like hiring. If it costs more than a headcount, it needs to prove it belongs. She scopes the problem first, tests tools on real data, and talks to people who’ve lived with them, not just vendor reps. Then she tracks usage and outcomes from day one. If adoption stalls or no one owns it, the tool dies. She once watched a high-performing platform get orphaned after a reorg. Great tech doesn’t matter if no one’s accountable for making it work.

Don’t Buy the Tool Until You’ve Scoped the Job

Martech buying still feels like the Wild West. Companies drop hundreds of thousands of dollars on tools after a single vendor call, while the same teams will debate for weeks over whether to hire a junior coordinator. Samia calls this out plainly. If a piece of software costs more than a person, why wouldn’t it go through the same process as a headcount request?

She maps it directly: recruiting rigor should apply to your tech stack. That means running a structured scoping process before you ever look at vendors. In her world, no one gets to pitch software until three things are clear:

- What operational problem exists right now
- What opportunities are lost by not fixing it
- What the strategic unlock looks like if you do

Most teams skip that. They hear about a product, read a teardown on LinkedIn, and spin up a trial to “explore options.” Then the feature list becomes the job description, and suddenly there’s a contract in legal. At no point did anyone ask whether the team actually needed this, what it was costing them not to have it, or what they were betting on if it worked.

Samia doesn’t just talk theory. She has seen this pattern lead to ballooning tech stacks and stale tools that nobody uses six months after procurement. A shiny new platform feels like progress, but if no one scoped the actual need, you’re not moving forward.
You’re burying yourself in debt, disguised as innovation.

> “Every new tool should be treated like a strategic hire. If you wouldn’t greenlight headcount without a business case, don’t greenlight tech without one either.”

And it goes deeper. You can’t just build a feature list and call that a justification. Samia breaks it into a tiered case: quantify what you lose without the tool, and quantify what you gain with it. How much time saved? How much revenue unlocked? What functions does it enable that your current stack can’t touch? Get those answers first. That way you can decide like a team investing in long-term outcomes, not like a shopper chasing the next product demo.

Key takeaway: Treat every Martech investment like a senior hire. Before you evaluate vendors, run a scoping process that defines the current gap, quantifies what it costs you to leave it open, and identifies what your team can achieve once it’s solved. Build a business case with numbers, not just feature wishlists. If you start by solving real problems, you’ll stop paying for shelfware.

Your Martech Stack Is a Mess Because MOPS Wasn’t in the Room Early

Most marketing teams get budget the same way they get unexpected leftovers at a potluck. Something shows up, no one knows where it came from, and now it’s your job to make it work. You get a number handed down from finance. Then you try to retroactively justify it with people, tools, and quarterly goals like you’re reverse-engineering a jigsaw puzzle from the inside out.

Samia sees this happen constantly. Teams make decisions reactively because their budget arrived before their strategy. A renewal deadline pops up, someone hears about a new tool at a conference, and suddenly marketing is onboarding something no one asked for. That’s how you end up with shelfware, disconnected workflows, and tech debt dressed up as innovation.

This is why she pushes for a different sequence. Start with what you want to achieve.
Define the real gaps that exist in your ability to get there. Then use that to build a case for people and platforms. It sounds obvious, but it rarely happens that way. In most orgs, Marketing Ops is left out of the early conversations entirely. They get handed a brief after the budget is locked. Their job becomes execution, not strategy.

> “If MOPS is treated like a support team, they can’t help you plan. They can only help you scramble.”

Samia has seen two patterns when MOPS lacks influence. Sometimes the head of MOPS is technically in the room but lacks the confidence, credibility, or political leverage to speak up. Other times, the org’s workflows never gave them a shot to begin with. Everything is set up as a handoff. Business leaders define targets, finance approves the budget, then someone remembers to loop in the people who actually have to make it all run. That structure guarantees misalignment. If you want a smarter stack, you have to fix how decisions get made.

Key takeaway: Build your Martech plan around strategic goals, not leftover budget. Start with what needs to be accomplished, define the capability gaps that block it, and involve MOPS from the beginning to shape how tools and workflows can solve those problems. If Marketing Ops is looped in only after the fact, you’re not planning. You’re cleaning up.

Build Your Martech Stack Like You’re Hiring a Team

Most teams buy software like they’re following a recipe they’ve never tasted. Someone says “we need a CDP,” and suddenly everyone’s firing off RFPs, demoing the usual suspects, and comparing price tiers on platforms they barely understand. Samia draws a clean line between hiring and buying here. In both cases, the smartest teams treat the process as exploration, not confirmation.

Hiring isn’t static. You open a rec, start meeting candidates, and quickly realize the original job description is outdated by the third interview. A standout candidate shows up, and suddenly the scope expands.
You rewrite the role to fit the opportunity, not the other way around. Samia thinks buying Martech should work the same way. Instead of assuming a fixed category solves the problem, you should:

- Map your actual use case
- Talk to vendors and real users
- Compare radically different paths, not just direct competitors

> “You almost need to challenge yourself to zoom out and ask if this tool fits where your company is actually headed.”

Samia’s lived the pain of teams chasing big-budget platforms with promises of deep functionality, only to realize no one has the bandwidth to implement them properly. The tool ends up shelved or duct-taped into place while marketing burns cycles trying to retrofit workflows around something they were never ready for. That kind of misalignment doesn’t show up in vendor decks or curated testimonials. You only catch it by doing your own research and talking to people who don’t have a sales quota.

Buying tech is easy. Building capability is hard. Samia looks for tools that match the company’s maturity and provide room to grow. Not everything needs to be composable, modular, and future-proofed into infinity. Sometimes the right move is choosing what works today, then layering in complexity as your team levels up. Martech isn’t one-size-fits-all, and most vendor conversations are just shiny detours away from that uncomfortable truth.

Key takeaway: Treat your Martech search like a hiring process in motion. Start with a goal, not a category. Stay open to evolving the solution as new context surfaces. Talk to actual users who’ve implemented the tool under real constraints. Ask what broke, what surprised them, and what they’d do differently. Choose the tech that fits your team’s real capabili...
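Samia's tiered business case lends itself to simple arithmetic. Here is a minimal sketch of weighing a tool's annual cost against time saved and revenue unlocked, the same way you would justify a headcount; all figures are hypothetical placeholders, not from the episode.

```python
def tool_business_case(annual_cost, hours_saved_per_week, loaded_hourly_rate,
                       revenue_unlocked):
    """Rough annual business case for a martech purchase (illustrative only).

    Converts weekly time savings into a dollar figure, adds revenue the tool
    unlocks, and compares the total against the annual contract cost.
    """
    time_value = hours_saved_per_week * 52 * loaded_hourly_rate
    total_gain = time_value + revenue_unlocked
    return {
        "annual_gain": total_gain,
        "net": total_gain - annual_cost,
        "roi": total_gain / annual_cost,
    }

# Hypothetical numbers: a $60k/year platform saving 10 hours a week
# for people costing $75/hour fully loaded, plus $40k in new revenue
case = tool_business_case(annual_cost=60_000, hours_saved_per_week=10,
                          loaded_hourly_rate=75, revenue_unlocked=40_000)
print(case)
```

If the net is negative or the ROI hovers near 1, that is the "don't greenlight" signal, the same way a marginal business case kills a headcount request.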
Jun 3, 2025 • 53min

172: Ankur Kothari: A practical guide on implementing AI to improve retention and activation through personalization

What’s up everyone, today we have the pleasure of sitting down with Ankur Kothari, Adtech and Martech Consultant who’s worked with big tech names and finance/consulting firms like Salesforce, JPMorgan, and McKinsey.

The views and opinions expressed by Ankur in this episode are his own and do not necessarily reflect the official position of his employer.

Summary: Ankur explains how most AI personalization flops because teams ignore the basics. He helped a brand recover millions just by making the customer journey actually make sense, not by faking it with names in emails. It’s all about fixing broken flows first, using real behavior, and keeping things human even when it’s automated. Ankur is super sharp; he shares a practical maturity framework for AI personalization so you can assess where you currently fit and how you get to the next stage.

AI Personalization That Actually Increases Retention - Practical Example

Most AI personalization in marketing is either smoke, mirrors, or spam. People plug in a tool, slap a customer’s first name on a subject line, then act surprised when the retention numbers keep tanking. The tech isn't broken. The execution is lazy. That’s the part people don’t want to admit.

Ankur worked with a mid-sized e-commerce brand in the home goods space that was bleeding revenue: $2.3 million a year lost to customers who made one purchase and never returned. Their churn rate sat at 68 percent. Think about that. For every 10 new customers, almost 7 never came back. And they weren’t leaving because the product was bad or overpriced. They were leaving because the whole experience felt like a one-size-fits-all broadcast. No signal, no care, no relevance.

So he rewired their personalization from the ground up. No gimmicks. No guesswork. Just structured, behavior-based segmentation using first-party data.
They looked at:

- Website interactions
- Purchase history
- Email engagement
- Customer service logs

Then they fed that data into machine learning models to predict what each customer might actually want to do next. From there, they built 27 personalized customer journeys. Not slides in a strategy deck. Actual, functioning sequences that shaped content delivery across the website, emails, and mobile app.

> “Effective AI personalization is only partly about the tech but more about creating genuinely helpful customer experiences that deliver value rather than just pushing products.”

The results were wild. Customer retention rose 42 percent. Lifetime value jumped from $127 to $203. Repeat purchase rate grew by 38 percent. Revenue climbed by $3.7 million. ROI hit 7 to 1. One customer who previously spent $45 on a single sustainable item went on to spend more than $600 in the following year after getting dropped into a relevant, well-timed, and non-annoying flow.

None of this happened because someone clicked "optimize" in a tool. It happened because someone actually gave a damn about what the customer experience felt like on the other side of the screen. The lesson isn’t that AI personalization works. The lesson is that it only works if you use it to solve real customer problems.

Key takeaway: AI personalization moves the needle when you stop using it as a buzzword and start using it to deliver context-aware, behavior-driven customer experiences. Focus on first-party data that shows how customers interact. Then build distinct journeys that respond to actual behavior, not imagined personas. That way you can increase retention, grow customer lifetime value, and stop lighting your acquisition budget on fire.

Why AI Personalization Fails Without Fixing Basic Automation First

Signing up for YouTube ads should have been a clean experience. A quick onboarding, maybe a personalized email congratulating you for launching your first campaign, a relevant tip about optimizing CPV.
Instead, the email that landed was generic and mismatched—“Here’s how to get started”—despite the fact the account had already launched its first ad. This kind of sloppiness doesn’t just kill momentum. It exposes a bigger problem: teams chasing personalization before fixing basic logic.

Ankur saw this exact issue on a much more expensive stage. A retail bank had sunk $2.3 million into an AI-driven loan recommendation engine. Sophisticated architecture, tons of fanfare. Meanwhile, their onboarding emails were showing up late and recommending products users already had. That oversight translated to $3.7 million in missed annual cross-sell revenue. Not because the AI was bad, but because the foundational workflows were broken.

The failure came from three predictable sources:

- Teams operated in silos. Innovation was off in its own corner, disconnected from marketing ops and customer experience.
- The tech stack was split in two. Legacy systems handled core functions, but were too brittle to change. AI was layered on top, using modern platforms that didn’t integrate cleanly.
- Leaders focused on innovation metrics, while no one owned the state of basic automation or email logic.

To fix it, Ankur froze the AI rollout for 120 days and focused on repair work. The team rebuilt the essential customer journeys, cleaned up logic gaps, and restructured automation to actually respond to user behavior. This work lifted product adoption by 28 percent and generated an additional $4.2 million in revenue. Once the base was strong, they reintroduced the AI engine. Its impact increased by 41 percent, not because the algorithm improved, but because the environment finally supported it.

> “The institutions that win with AI are the ones that execute flawlessly across all technology levels, from simple automation to cutting-edge applications.”

That lesson applies everywhere, including in companies far smaller than Google or JPMorgan.
When you skip foundational work, every AI project becomes a band-aid over a broken funnel. It might look exciting, but it can’t hold.

Key takeaway: Stop using AI to compensate for broken customer journeys. Fix your onboarding logic, clean up your automation triggers, and connect your systems across teams. Once the fundamentals are working, you can layer AI on top of a system that supports it. That way you can generate measurable returns, instead of just spinning up another dashboard that looks good in a QBR.

Step by Step Approach to AI Personalization With a Maturity Framework - The First Steps You Can Take on The Path To AI Personalization

Most AI personalization projects start with a 50-slide vision deck, three vendors, and zero working use cases. Then teams wonder why things stall. What actually works is starting small and surgical. One product. One journey. Clear data. Clear upside.

Ankur advised a regional bank that had plenty of customer data but zero AI in play. No need for new tooling or a six-month roadmap. They focused on one friction-heavy opportunity with direct payoff: mortgage pre-approvals. Forget trying to personalize every touchpoint. They picked the one that mattered and did it well.

They built a clustering algorithm using transaction patterns, savings trends, and credit utilization to detect home-buying intent. From there, they pushed pre-approvals with tailored rates and terms. The bank already had the raw data in its core systems. No scraping, no extra collection, no “data enrichment” vendor needed.

That decision paid off fast:

- The data already existed, so implementation moved quickly
- The scope was limited to a single high-stakes journey
- The impact landed hard: mortgage application rates jumped 31 percent and approval-to-close conversions climbed 24 percent within 60 days

> “Start with a high-value product journey where pers...
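A toy version of the intent-clustering idea can be sketched in a few lines. Everything below is illustrative: the features (savings growth, rent-like outflow share, credit utilization) and the data points are invented, and a production system would use a proper library rather than this hand-rolled k-means.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=25, seed=0):
    """Hand-rolled k-means; returns (centroids, cluster label per point)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute centroids
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, labels

# Hypothetical per-customer features:
# (savings growth rate, rent-like outflow share, credit utilization)
customers = [
    (0.08, 0.35, 0.20),  # saving hard, paying rent, cards under control
    (0.07, 0.30, 0.25),
    (0.01, 0.10, 0.80),  # flat savings, maxed-out credit
    (0.00, 0.12, 0.75),
]
centroids, labels = kmeans(customers, k=2)
# The cluster whose centroid has the highest savings growth is the
# candidate "home-buying intent" segment to target with pre-approvals.
intent_cluster = max(range(2), key=lambda c: centroids[c][0])
```

The point of the sketch is the scoping, not the math: three features the bank already holds, one segment worth acting on, no new data collection.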
undefined
May 27, 2025 • 1h 1min

171: Kim Hacker: Reframing tool FOMO, making AI face real work and catching up on AI skills

What’s up everyone, today we have the pleasure of sitting down with Kim Hacker, Head of Business Ops at Arrows.

Summary: Tool audits miss the mess. If you’re trying to consolidate without talking to your team, you’re probably breaking workflows that were barely holding together. The best ops folks already know this: they’re in the room early, protecting momentum, not patching broken rollouts. Real adoption spreads through peer trust, not playbooks. And the people thriving right now are the generalists automating small tasks, spotting hidden friction, and connecting dots across sales, CX, and product. If that’s you (or you want it to be) keep reading or hit play.

About Kim

- Kim started her career in various roles like Design intern and Exhibit designer/consultant
- She later became an Account exec at a Marketing Agency
- She then moved over to Sawyer in a Partnerships role and later Customer Onboarding
- Today Kim is Head of Business Operations at Arrows

Most AI Note Takers Just Parrot Back Junk

Kim didn’t set out to torch 19 AI vendors. She just wanted clarity.

Her team at Arrows was shipping new AI features for their digital sales room, which plugs into HubSpot. Before she went all in on messaging, she decided to sanity check the market. What were other sales teams in the HubSpot ecosystem actually *doing* with AI? Over a dozen calls later, the pattern was obvious: everyone was relying on AI note takers to summarize sales calls and push those summaries into the CRM.

But no one was talking about the quality. Kim realized if every downstream sales insight starts with the meeting notes, then those notes better be reliable. So she ran her own side-by-side teardown of 22 AI note takers. No configuration. No prompt tuning. Just raw, out-of-the-box usage to simulate what real teams would experience.

> “If the notes are garbage, everything you build on top of them is garbage too.”

She was looking for three things: accuracy, actionability, and structure.
The kind of summaries that help reps do follow-ups, populate deal intelligence, or even just remember the damn call. Out of 22 tools, only *three* passed that bar. The rest ranged from shallow summaries to complete misinterpretations. Some even skipped entire sections of conversations or hallucinated action items that never came up.

It’s easy to assume an AI-generated summary is “good enough,” especially if it sounds coherent. But sounding clean is not the same as being useful. Most note takers aren't designed for actual sales workflows. They're just scraping audio for keywords and spitting out templated blurbs. That’s fine for keeping up appearances, but not for decision-making or pipeline accuracy.

Key takeaway: Before layering AI on top of your sales stack, audit your core meeting notes. Run a side-by-side test on your current tool, and look for three things: accurate recall, structured formatting, and clear next steps. If your AI notes aren’t helping reps follow up faster or making your CRM smarter, they’re just noise in a different font.

Why Most Teams Will Miss the AI Agent Wave Entirely

The vision is seductive. Sales reps won't write emails. Marketers won’t build workflows. Customer success won’t chase follow-ups. Everyone will just supervise agents that do the work for them. That future sounds polished, automated, and eerily quiet. But most teams are nowhere close. They’re stuck in duplicate records, tool bloat, and a queue of Jira tickets no one’s touching. AI agents might be on the roadmap, but the actual work is still being done by humans fighting chaos with spreadsheets.

Kim sees the disconnect every day. AI fatigue isn’t coming from overuse. It’s coming from bad framing. “A lot of people talking about AI are just showing the most complex or viral workflows,” she explains.
“That stuff makes regular folks feel behind.” People see demos built for likes, not for legacy systems, and it creates a false sense that they’re supposed to be automating their entire job by next quarter.

> “You can’t rely on your ops team to AI-ify the company on their own. Everyone needs a baseline.”

Most reps haven’t written a good prompt, let alone tried chaining tools together. You can’t go from zero to agent management without a middle step. That middle step is building a culture of experimentation. Start with small, daily use cases. Help people understand how to prompt, what clean AI output looks like, and how to tell when the tool is lying. Get the entire org to that baseline, then layer on tools like Zapier Agents or Relay App to handle the next tier of automation.

Skipping the basics guarantees failure later. Flashy agents look great in demos, but they don’t compensate for unclear processes or teams that don’t trust automation. If the goal is to future-proof your workflows, the work starts with people, not tools.

Key takeaway: If your team isn't fluent in basic AI usage, agent-powered workflows are a pipe dream. Build a shared baseline across departments by teaching prompt writing, validating outputs, and experimenting with small use cases. That way you can unlock meaningful automation later instead of chasing trends that no one has the capacity to implement.

When AI Systems Meet The Chaos Of Actual Workplace Processes

AI vendors keep shipping tools like everyone has an intern, a technical co-pilot, and five extra hours a week to configure dream workflows. The real buyers? They’re just trying to fix broken Salesforce fields, write one less follow-up email, or get through the day without copy-pasting notes into Notion. Somewhere between those extremes, the user gets lost in translation.

Kim has felt that gap from both sides. She was hesitant to even start with ChatGPT. “I almost gave up on it,” she said.
“I felt late and overwhelmed, and I just figured maybe I wasn’t going to be an AI person.” Fast forward to today, and it’s one of her most-used tools. She didn’t get there by wiring up agents. She started small. Simple things. Drafting ideas, summarizing content, clarifying messy thoughts. That built trust. Then momentum.

> “There’s a lot that has to happen before your calendar is filled with calls and nothing else. AI can help, but you have to let it earn its spot.”

If you're trying to build that muscle, forget the multi-tool agent orchestration for a second. Focus on everyday wins like:

- Turning a messy Slack thread into a clean summary
- Writing a follow-up email in your tone
- Rewriting a calendar event title so it makes sense to your future self
- Cleaning up action items from a sales call without hallucinations
- Drafting internal documentation from bullet points

The pace is accelerating. People feel it. You don’t need to watch keynote demos to know that change is coming fast. It’s easy to feel like you’re already behind. Kim doesn’t disagree. She just thinks most teams are solving the wrong problem. Vendors are focused on the sprint. Most people haven’t even laced up. “Everyone wants the big leap,” she said. “But most wins come from small, boring tools that actually do what they say they’ll do.”

That’s the root issue. A lot of AI features today are solving theoretical problems. They assume workflows are tidy, perfectly tagged, and documented in Notion. Real work is messier. It happens in Slack threads, half-filled records, and follow-ups that never got logged. If your tool can’t handle that, then it doesn’t matter how shiny your roadmap is.

Key takeaway: Stop evaluating AI features based on potential. Evaluate them based on current chaos. Ask whether the tool handles your worst-case scenario, not your ideal one. Prioritize small, boring use cases that save time immediately. That way yo...
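Kim's three criteria from the note-taker teardown earlier in the episode translate naturally into a small scoring harness. This is a hypothetical sketch: the rubric, section names, and sample summaries are invented, and a real evaluation would compare tool output against a human-written reference of the call.

```python
def score_summary(summary, expected_facts, expected_actions):
    """Score an AI meeting summary on accuracy, actionability, structure.

    Hypothetical rubric: accuracy = share of known call facts recalled,
    actionability = share of real next steps captured, structure = whether
    the summary has labeled sections a rep can scan.
    """
    text = summary.lower()
    accuracy = sum(f.lower() in text for f in expected_facts) / len(expected_facts)
    actionability = sum(a.lower() in text for a in expected_actions) / len(expected_actions)
    structure = all(h in text for h in ("summary:", "next steps:"))
    return {"accuracy": accuracy, "actionability": actionability,
            "structure": structure}

# Invented example: facts and next steps a human noted on the call
facts = ["pricing objection", "renewal in Q3", "two decision makers"]
actions = ["send roi deck", "book demo with cfo"]
good = ("Summary: prospect raised a pricing objection; renewal in Q3, "
        "two decision makers involved. Next steps: send ROI deck, "
        "book demo with CFO.")
bad = "Summary: great call, lots of energy. Team seemed excited about the product."
print(score_summary(good, facts, actions))
print(score_summary(bad, facts, actions))
```

Even this crude substring check separates a summary a rep can act on from one that merely sounds coherent, which is the gap Kim's teardown surfaced.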
May 20, 2025 • 59min

170: Keith Jones: OpenAI’s Head of GTM systems on building judgement with ghost stories, buying martech with cognitive extraction and why data dictionaries prevail

Keith Jones, Head of GTM Systems at OpenAI, has a rich background in sales operations and tech. He reveals that the best way to buy martech isn't through spreadsheets, but through cognitive extraction, combining stakeholder input with AI. Keith shares insights on his career journey from sales to operations, exploring how empathy shapes decision-making. He discusses the future of SaaS with fewer tools and stronger data infrastructure, and emphasizes the importance of a hands-on approach to integrating AI in marketing strategies for better outcomes.
May 13, 2025 • 1h 1min

169: Elena Hassan: Visa acquires your startup but nobody warns you about the tech stack aftermath and enterprise culture shock

Summary: Elena has done what most startup marketers only guess at: made it through multiple acquisitions and now leads global integrated marketing at Visa. In this episode, she breaks down what actually changes when you go from scrappy lead gen to enterprise brand building, why most martech tools don’t survive security reviews, and how leadership without authority is the skill that really matters. We get into messy tech migrations, broken attribution dreams, and why picking up the phone still beats Slack. If you’ve ever wondered why your startup playbook stops working at scale, this conversation spells it out.

What Startup Marketers Learn the Hard Way When They Land at a Big Corporation

Elena does not call herself an “acquisition master,” even though her resume might suggest otherwise. Three startups she worked at were acquired: Sivan by Refinitiv, WorkMarket by ADP, and Currencycloud by Visa, where she works today. Some might spin that track record as a strategic playbook for career navigation. Elena sees it differently. She credits great teams and good companies, not some personal Midas touch.

The truth is, you cannot force an acquisition. What you can do is get really good at reading the room. Elena’s career started deep in the weeds of lead generation and demand marketing, chasing performance metrics and measuring everything that moved. Early on, she dipped into other areas (event planning, employee engagement), but demand gen was where she built muscle. That was her lane at WorkMarket, where the first big learning curve hit.

It turns out the skills that build the lead gen engine are not the same ones you need when a company shifts from hypergrowth to prepping for acquisition. Elena experienced firsthand the moment when leadership stops asking about lead volume and starts asking about brand perception.
Suddenly the focus pivots from how many MQLs you can squeeze out of a campaign to how the company is positioned in the market, what the media is saying, and whether the brand looks credible at scale. She admitted she did not fully appreciate that switch at first.

> "I came there with a mindset of if I can't track it, I'm not gonna do it," Elena said. "Every performance marketer would probably relate."

That perspective doesn’t fly for long in environments where brand and reputation start to outweigh click-through rates. Elena’s time at Visa has only reinforced that lesson. Today, much of her work revolves around brand building and awareness, the same areas she once side-eyed for being soft and unmeasurable. It is one thing to believe in brand. It is another thing entirely to understand how hard it is to build one well.

The scale jump from startup life to a company with over 30,000 employees does not just change the headcount. It rewires the entire pace and process of how work gets done. Elena described the gut-check moment that made it clear she was not at a scrappy startup anymore. It was not a high-level strategy meeting or a sweeping corporate memo. It was the moment she tried to get a simple social graphic approved.

In a startup, that kind of thing takes a few minutes on Canva and the green light from whoever’s closest to the Slack channel. At Visa, especially as a regulated financial institution, it involves legal reviews, vendor contracts, approval workflows, and enough compliance checks to make your head spin. Campaigns that once rolled out in days now take months. Not because anyone is slow, but because the stakes are high and the rules are different.

That culture shock is where many startup marketers either adapt or tap out. What Elena figured out is that the skills that work at one stage of company life are not the ones that get you through the next.
If you want to survive the jump from lean team to enterprise machine, you have to stop resenting the process and start respecting what it protects.

Key takeaway: If you're coming from startup life, expect a painful adjustment when you move into a large, regulated company. The speed, autonomy, and scrappiness you are used to will collide hard with approval chains and compliance processes. The faster you stop fighting it and start learning why those systems exist, the faster you'll find your footing. Metrics-driven marketing only gets you so far. To thrive at scale, you need to understand the power and patience required to build brand trust.

What Nobody Tells You About Merging Tech Stacks After an Acquisition

The fantasy version of an acquisition is clean and celebratory. Two companies come together, the deal closes, the press release goes out, and life moves on. The reality, especially for marketing teams, is a long, often frustrating grind of systems audits, security reviews, and endless conversations about whether your beloved tools will survive the merger.

Elena has lived through that grind more than once. When Visa acquired Currencycloud, she was not navigating that shift alone. Many of her teammates made the journey with her, which helped. But solidarity does not make the process move faster. It just means you have people to vent to while you wait for approvals.

One of the first and hardest parts of that transition was not a debate between marketers. It was the clash between marketing teams and security teams. Every single piece of tech Currencycloud used, whether it was their website hosting, HubSpot marketing automation, or even individual add-ons, had to go under the microscope. Security teams needed to assess, vet, and approve each tool, often asking questions that made sense from a cybersecurity perspective but sounded completely out of touch to anyone in marketing.

The back-and-forth was not casual.
It escalated all the way up to the chief technology officer and the cybersecurity team at HubSpot sitting down with Elena's group to explain, in detail, what the platform could and could not do. None of this was about malice or incompetence. It was about two fundamentally different mindsets trying to find common ground.

> "These are security people. They’re not marketers. They don’t always know why we need a particular tool or what it does," Elena explained.

That learning curve is brutal if you're not prepared for it. The deeper into operations you sit, the more of these conversations you end up having. Elena found herself in rooms with people from multiple marketing ops teams across Visa, comparing tech stacks, workflows, and priorities. There was no easy answer to which system would win out. Sometimes the decision was clear. Other times it came down to questions like: is it really worth fighting for this tool, or is now the time to adapt to what already exists?

She describes it as less like transferring from one job to another and more like moving from a Montessori school to a traditional classroom. Both systems can deliver a good education. They just teach in wildly different ways. One thrives on flexibility and autonomy. The other runs on structure and process. Neither is wrong. They are simply different environments, and surviving the switch requires a willingness to adjust.

The biggest mistake marketers make in these situations is believing the process is about what *they* want. Elena was quick to point out that the companies she has worked for, especially Visa, keep customer experience at the center of these decisions. It is not about which tool is most familiar to the internal team. It is about which systems create the least friction for the end user. That mindset helps keep the process grounded, even when the day-to-day feels like a slow march through bureaucracy.

Patience is not optional in these transitions. You will hit walls. You will repeat yourself.
You will explain the same use case to five different people across three different teams. And eventually, you will e...
