
Humans of Martech

Latest episodes

Jun 24, 2025 • 1h 3min

175: Hope Barrett: SoundCloud’s Martech Leader reflects on their huge messaging platform migration and structuring martech like a product

What’s up everyone, today we have the pleasure of sitting down with Hope Barrett, Sr Director of Product Management, Martech at SoundCloud.

Summary: In twelve weeks, Hope led a full messaging stack rebuild with just three people. They cut 200 legacy campaigns down to what mattered, partnered with MoEngage for execution, and shifted messaging into the product org. Now SoundCloud ships notifications like features that are part of a core product. Governance is clean, data runs through BigQuery, and audiences sync everywhere. The migration was fast but incredibly meticulous, and the ultimate gain was making the whole system make sense again.

About Hope

Hope Barrett has spent the last two decades building the machinery that makes modern marketing work, long before most companies even had names for the roles she was defining. As Senior Director of Product Management for Martech at SoundCloud, she leads the overhaul of their martech stack, making every tool in the chain pull its weight toward growth. She directs both the performance marketing and marketing analytics teams, ensuring the data is not just collected but used with precision to attract fans and artists at the right cost.

Before SoundCloud, she spent over six years at CNN scaling their newsletter program into a real asset, not just a vanity list. She laid the groundwork for data governance, built SEO strategies that actually stuck, and made sure editorial, ad sales, and business development all had the same map of who their readers were. Her career also includes time in consulting, digital analytics agencies, and leadership roles at companies like AT&T, Patch, and McMaster-Carr. Across all of them, she has combined technical fluency with sharp business instincts.

SoundCloud’s Big Messaging Platform Migration and What It Taught Them About Future-Proofing Martech

Diagnosing Broken Martech Starts With Asking Better Questions

Hope stepped into SoundCloud expecting to answer a tactical question: what could replace Nielsen’s multi-touch attribution? That was the assignment. Attribution was being deprecated; pick something better. What she found was a tangle of infrastructure issues that had very little to do with attribution and everything to do with operational blind spots. Messages were going out and campaigns were triggering, but no one could say how many or to whom with any confidence. The data looked complete until you tried to use it for decision-making.

The core problem wasn’t a single tool. It was a decade of deferred maintenance. The customer engagement platform dated back to 2016. It had been implemented when the vendor’s roadmap was still theoretical, so SoundCloud had built their own infrastructure around it: external frequency caps, one-off delivery logic, and measurement layers that sat outside the platform. The platform said it sent X messages, but downstream systems had other opinions. Hope quickly saw the pattern: legacy tooling buried under compensatory systems no one wanted to admit existed.

That initial audit kicked off a full system teardown. The MMP wasn’t viable anymore. Google Analytics was still on Universal. Even the question that brought her in, how to replace MTA, had no great answer. Every path forward required removing layers of guesswork that had been quietly accepted as normal. It was less about choosing new tools and more about restoring the ability to ask direct questions and get direct answers. How many users received a message? What triggered it? Did we actually measure impact or just guess at attribution?

“I came in to answer one question and left rebuilding half the stack. You start with attribution and suddenly you’re gut-checking everything else.”

Hope had done this before. At CNN, she had run full vendor evaluations, owned platform migrations, and managed post-rollout adoption. She knew what bloated systems looked like. She also knew they never fix themselves. Every extra workaround comes with a quiet cost: more dependencies, more tribal knowledge, more reasons to avoid change. Once the platforms can’t deliver reliable numbers and every fix depends on asking someone who left last year, you’re past the point of iteration. You’re in rebuild territory.

Key takeaway: If your team can’t trace where a number comes from, the stack isn’t helping you operate. It’s hiding decisions behind legacy duct tape. Fixing that starts with hard questions. Ask what systems your data passes through, which rules live outside the platform, and how long it’s been since anyone challenged the architecture. Clarity doesn’t come from adding more tools. It comes from stripping complexity until the answers make sense again.

Why Legacy Messaging Platforms Quietly Break Your Customer Experience

Hope realized SoundCloud’s customer messaging setup was broken the moment she couldn’t get a straight answer to a basic question: how many messages had been sent? The platform could produce a number, but it was useless. Too many things happened after delivery. Support infrastructure kicked in. Frequency caps filtered volume. Campaign logic lived outside the actual platform. There was no single system of record. The tools looked functional, but trust had already eroded.

The core problem came from decisions made years earlier. The customer engagement platform had been implemented in 2016, when the vendor was still early in its lifecycle. At the time, core features didn’t exist, so SoundCloud built their own solutions around it. Frequency management, segmentation logic, even delivery throttling ran outside the tool. These weren’t integrations. They were crutches. And they turned what should have been a centralized system into a loosely coupled set of scripts, API calls, and legacy logic that no one wanted to touch.

Hope had seen this pattern before. At CNN, she dealt with similar issues and recognized the symptoms immediately. Legacy platforms tend to create debt you don’t notice until you start asking precise questions. Things work, but only because internal teams built workarounds that silently age out of relevance. Tech stacks like that don’t fail loudly. They fail in fragments: one missing field, one skipped frequency cap, one number that doesn’t reconcile across tools. By the time it’s clear something’s wrong, the actual root cause is buried under six years of operational shortcuts.

“The platform gave me a number, but it wasn’t the real number. Everything important was happening outside of it.”

Hope’s philosophy around messaging is shaped by how she defines partnership. She prefers vendors who act like partners, not ticket responders. Partners should care about long-term success, not just contract renewals. But partnership also means using the tool as intended. When the platform is bent around missing features, the relationship becomes strained. Every workaround is a vote of no confidence in the roadmap. Eventually, you’re not just managing campaigns. You’re managing risk.

Key takeaway: If your customer messaging platform can’t report true delivery volume because critical logic happens outside of it, you’re already in rebuild territory. Don’t wait for a total failure. Audit where key rules live. Centralize what matters. And only invest in tools where out-of-the-box features can support your real-world use cases. That way you can grow without outsourcing half your stack to workaround scripts and tribal knowledge.

Why Custom Martech Builds Quietly Punish You Later

The worst part of SoundCloud’s legacy stack wasn’t the duct-taped infrastructure. It was how long it took to admit it had become a problem. The platform had been in place since 2016, back when the vendor was still figuring out core features. Instead of switching, SoundCloud stayed locked in ...
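The external frequency caps the episode keeps returning to are a good illustration of logic that ends up living outside the platform. A sliding-window cap is only a few lines of code, which is exactly why these workarounds accumulate so easily. This is a hypothetical sketch (invented names, not SoundCloud’s actual implementation):

```python
import time
from collections import defaultdict, deque

class FrequencyCap:
    """Minimal sliding-window frequency cap: allow at most `max_sends`
    messages per user within `window_seconds`. Hypothetical sketch of the
    kind of out-of-platform workaround described in the episode."""

    def __init__(self, max_sends, window_seconds):
        self.max_sends = max_sends
        self.window = window_seconds
        self.sent = defaultdict(deque)  # user_id -> timestamps of recent sends

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        q = self.sent[user_id]
        # drop sends that have aged out of the window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.max_sends:
            q.append(now)
            return True
        return False

# At most 3 messages per user per day
cap = FrequencyCap(max_sends=3, window_seconds=86_400)
decisions = [cap.allow("user-1", now=t) for t in (0, 10, 20, 30)]
print(decisions)  # [True, True, True, False]
```

The catch Hope describes follows directly: once this check lives in a script rather than the messaging platform, the platform’s "messages sent" count and the true delivered count diverge, and no single system of record exists.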
Jun 17, 2025 • 1h 5min

174: Joshua Kanter: A 4-time CMO on the case against data democratization

What’s up everyone, today we have the pleasure of sitting down with Joshua Kanter, Co-Founder & Chief Data & Analytics Officer at ConvertML.

Summary: Joshua spent the earliest parts of his career buried in SQL, only to watch companies hand out dashboards and call it strategy. Teams skim charts to confirm hunches while ignoring what the data actually says. He believes access means nothing without translation. You need people who can turn vague business prompts into clear, interpretable answers. He built ConvertML to guide those decisions. GenAI only raises the stakes. Without structure and fluency, it becomes easier to sound confident and still be completely wrong. That risk scales fast.

About Joshua

Joshua started in data analytics at First Manhattan Consulting, then co-founded two ventures: Mindswift, focused on marketing experimentation, and Novantas, a consulting firm for financial services. From there, he rose to Associate Principal at McKinsey, where he helped companies make real decisions with messy data and imperfect information. Then he crossed into operating roles, leading marketing at Caesars Entertainment as SVP of Marketing, where budgets were wild.

After Caesars, he became a three-time CMO (effectively a fourth, counting Caesars): at PetSmart, International Cruise & Excursions, and Encora, each time walking into a different industry with new problems. He now co-leads ConvertML, where he’s focused on making machine learning and measurement actually usable for the people in the trenches.

Data Democratization Is Breaking More Than It’s Fixing

Data democratization has become one of those phrases people repeat without thinking. It shows up in mission statements and vendor decks, pitched like some moral imperative. Give everyone access to data, the story goes, and decision-making will become magically enlightened. But Joshua has seen what actually happens when this ideal collides with reality: chaos, confusion, and a lot of people confidently misreading the same spreadsheet in five different ways.

Joshua isn’t your typical out-of-the-weeds CMO; he’s lived in the guts of enterprise data for 25 years. His first job out of college was grinding SQL for 16 hours a day. He’s been inside consulting rooms, behind marketing dashboards, and at the head of data science teams. Over and over, he’s seen the same pattern: leaders throwing raw dashboards at people who have no training in how to interpret them, then wondering why decisions keep going sideways.

There are several unspoken assumptions built into the data democratization pitch. People assume the data is clean. That it’s structured in a meaningful way. That it answers the right questions. Most importantly, they assume people can actually read it. Not just glance at a chart and nod along, but dig into the nuance, understand the context, question what’s missing, and resist the temptation to cherry-pick for whatever narrative they already had in mind.

“People bring their own hypotheses and they’re just looking for the data to confirm what they already believe.”

Joshua has watched this play out inside Fortune 500 boardrooms and small startup teams alike. People interpret the same report with totally different takeaways. Sometimes they miss what’s obvious. Other times they read too far into something that doesn’t mean anything. They rarely stop to ask what data is not present, or whether it even makes sense to draw a conclusion at all.

Giving everyone access to data sounds great, but it only works when people have the skills to use it responsibly. That means more than teaching Excel shortcut keys. It requires real investment in data literacy, mentorship from technical leads, and repeated, structured practice. Otherwise, what you end up with is a very expensive system that quietly fuels bias, bad decisions, and work for the sake of work.

Key takeaway: Widespread access to dashboards does not make your company data-informed. People need to know how to interpret what they see, challenge their assumptions, and recognize when data is incomplete or misleading. Before scaling access, invest in skills. Make data literacy a requirement. That way you can prevent costly misreads dressed up as data-driven decision-making.

How Confirmation Bias Corrupts Marketing Decisions at Scale

Executives love to say they are “data-driven.” What they usually mean is “data-selective.” Joshua has seen the same story on repeat. Someone asks for a report. They already have an answer in mind. They skim the results, cherry-pick what supports their view, and ignore everything else. It is not just sloppy thinking. It’s organizational malpractice that scales fast when left unchecked.

To prevent that, someone needs to sit between business questions and raw data. Joshua calls for trained data translators: people who know how to turn vague executive prompts into structured queries. These translators understand the data architecture, the metrics that matter, and the business logic beneath the request. They return with a real answer, not just a number in bold font, but a sentence that says: “Here’s what we found. Here’s what the data does not cover. Here’s the confidence range. Here’s the nuance.”

“You want someone who can say, ‘The data supports this conclusion, but only under these conditions.’ That’s what makes the difference.”

Joshua has dealt with both extremes. There are instinct-heavy leaders who just want validation. There are also data purists who cannot move until the spreadsheet glows with statistical significance. At a $7 billion retailer, he once saw a merchandising exec demand 9,000 survey responses just so he could slice and dice every subgroup imaginable later. That was not rigor. It was decision paralysis wearing a lab coat.

The answer is to build maturity around data use. That means investing in operators who can navigate ambiguity, reason through incomplete information, and explain caveats clearly. Data has power, but only when paired with skill. You need fluency, not dashboards. You need interpretation, and above all, you need to train teams to ask better questions before they start fishing for answers.

Key takeaway: Every marketing org needs a data translation layer: real humans who understand the business problem, the structure of the data, and how to bridge the two with integrity. That way you can protect against confirmation bias, bring discipline to decision-making, and stop wasting time on reports that just echo someone’s hunch. Build that capability into your operations. It is the only way to scale sound judgment.

You’re Thinking About Statistical Significance Completely Wrong

Too many marketers treat statistical significance like a ritual. Hit the 95 percent confidence threshold and it’s seen as divine truth. Miss it, and the whole test gets tossed in the trash. Joshua has zero patience for that kind of checkbox math. It turns experimentation into a binary trap, where nuance gets crushed under false certainty and anything under 0.05 is labeled a failure. That mindset is lazy, expensive, and wildly limiting.

Statistical significance at the 95 percent level does not mean your result matters. It just means your result is probably not random, assuming your test is designed well and your assumptions hold up. Even then, you can be wrong 1 out of every 20 times, which no one seems to talk about in those Monday growth meetings. Joshua’s real concern is how this thinking cuts off all the good stuff that lives in the grey zone: tests that come in at 90 percent confidence, show a consistent directional lift, and still get ignored because someone only trusts green checkmarks.

“People believe that if it doesn’t hit statistical significance, the result isn’t meaningful. That’s false. And danger...
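Joshua’s grey-zone point is easy to make concrete. Below is a minimal sketch (hypothetical conversion numbers, Python standard library only) of the two-proportion z-test that most A/B testing tools run under the hood:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical A/B test: 5.00% vs 5.54% conversion, 10k users per arm
z, p = two_proportion_ztest(500, 10_000, 554, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z ≈ 1.7, p ≈ 0.09
```

A result like this, a consistent directional lift at roughly 91 percent confidence, fails the 0.05 ritual, yet it is exactly the kind of evidence Joshua argues teams throw away when they treat p < 0.05 as a binary pass/fail gate.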
Jun 10, 2025 • 60min

173: Samia Syed: Dropbox's Director of Growth Marketing on rethinking martech like HR efforts

What’s up everyone, today we have the pleasure of sitting down with Samia Syed, Director of Growth Marketing at Dropbox.

Summary: Samia Syed treats martech like hiring. If it costs more than a headcount, it needs to prove it belongs. She scopes the problem first, tests tools on real data, and talks to people who’ve lived with them, not just vendor reps. Then she tracks usage and outcomes from day one. If adoption stalls or no one owns it, the tool dies. She once watched a high-performing platform get orphaned after a reorg. Great tech doesn’t matter if no one’s accountable for making it work.

Don’t Buy the Tool Until You’ve Scoped the Job

Martech buying still feels like the Wild West. Companies drop hundreds of thousands of dollars on tools after a single vendor call, while the same teams will debate for weeks over whether to hire a junior coordinator. Samia calls this out plainly. If a piece of software costs more than a person, why wouldn’t it go through the same process as a headcount request?

She maps it directly: recruiting rigor should apply to your tech stack. That means running a structured scoping process before you ever look at vendors. In her world, no one gets to pitch software until three things are clear:

- What operational problem exists right now
- What opportunities are lost by not fixing it
- What the strategic unlock looks like if you do

Most teams skip that. They hear about a product, read a teardown on LinkedIn, and spin up a trial to “explore options.” Then the feature list becomes the job description, and suddenly there’s a contract in legal. At no point did anyone ask whether the team actually needed this, what it was costing them not to have it, or what they were betting on if it worked.

Samia doesn’t just talk theory. She has seen this pattern lead to ballooning tech stacks and stale tools that nobody uses six months after procurement. A shiny new platform feels like progress, but if no one scoped the actual need, you’re not moving forward. You’re burying yourself in debt disguised as innovation.

“Every new tool should be treated like a strategic hire. If you wouldn’t greenlight headcount without a business case, don’t greenlight tech without one either.”

And it goes deeper. You can’t just build a feature list and call that a justification. Samia breaks it into a tiered case: quantify what you lose without the tool, and quantify what you gain with it. How much time saved? How much revenue unlocked? What functions does it enable that your current stack can’t touch? Get those answers first. That way you can decide like a team investing in long-term outcomes, not like a shopper chasing the next product demo.

Key takeaway: Treat every martech investment like a senior hire. Before you evaluate vendors, run a scoping process that defines the current gap, quantifies what it costs you to leave it open, and identifies what your team can achieve once it’s solved. Build a business case with numbers, not just feature wishlists. If you start by solving real problems, you’ll stop paying for shelfware.

Your Martech Stack Is a Mess Because MOPS Wasn’t in the Room Early

Most marketing teams get budget the same way they get unexpected leftovers at a potluck. Something shows up, no one knows where it came from, and now it’s your job to make it work. You get a number handed down from finance. Then you try to retroactively justify it with people, tools, and quarterly goals, like you’re reverse-engineering a jigsaw puzzle from the inside out.

Samia sees this happen constantly. Teams make decisions reactively because their budget arrived before their strategy. A renewal deadline pops up, someone hears about a new tool at a conference, and suddenly marketing is onboarding something no one asked for. That’s how you end up with shelfware, disconnected workflows, and tech debt dressed up as innovation.

This is why she pushes for a different sequence. Start with what you want to achieve. Define the real gaps that exist in your ability to get there. Then use that to build a case for people and platforms. It sounds obvious, but it rarely happens that way. In most orgs, Marketing Ops is left out of the early conversations entirely. They get handed a brief after the budget is locked. Their job becomes execution, not strategy.

“If MOPS is treated like a support team, they can’t help you plan. They can only help you scramble.”

Samia has seen two patterns when MOPS lacks influence. Sometimes the head of MOPS is technically in the room but lacks the confidence, credibility, or political leverage to speak up. Other times, the org’s workflows never gave them a shot to begin with. Everything is set up as a handoff. Business leaders define targets, finance approves the budget, then someone remembers to loop in the people who actually have to make it all run. That structure guarantees misalignment. If you want a smarter stack, you have to fix how decisions get made.

Key takeaway: Build your martech plan around strategic goals, not leftover budget. Start with what needs to be accomplished, define the capability gaps that block it, and involve MOPS from the beginning to shape how tools and workflows can solve those problems. If Marketing Ops is looped in only after the fact, you’re not planning. You’re cleaning up.

Build Your Martech Stack Like You’re Hiring a Team

Most teams buy software like they’re following a recipe they’ve never tasted. Someone says “we need a CDP,” and suddenly everyone’s firing off RFPs, demoing the usual suspects, and comparing price tiers on platforms they barely understand. Samia draws a clean line between hiring and buying here. In both cases, the smartest teams treat the process as exploration, not confirmation.

Hiring isn’t static. You open a rec, start meeting candidates, and quickly realize the original job description is outdated by the third interview. A standout candidate shows up, and suddenly the scope expands. You rewrite the role to fit the opportunity, not the other way around. Samia thinks buying martech should work the same way. Instead of assuming a fixed category solves the problem, you should:

- Map your actual use case
- Talk to vendors and real users
- Compare radically different paths, not just direct competitors

“You almost need to challenge yourself to zoom out and ask if this tool fits where your company is actually headed.”

Samia’s lived the pain of teams chasing big-budget platforms with promises of deep functionality, only to realize no one has the bandwidth to implement them properly. The tool ends up shelved or duct-taped into place while marketing burns cycles trying to retrofit workflows around something they were never ready for. That kind of misalignment doesn’t show up in vendor decks or curated testimonials. You only catch it by doing your own research and talking to people who don’t have a sales quota.

Buying tech is easy. Building capability is hard. Samia looks for tools that match the company’s maturity and provide room to grow. Not everything needs to be composable, modular, and future-proofed into infinity. Sometimes the right move is choosing what works today, then layering in complexity as your team levels up. Martech isn’t one-size-fits-all, and most vendor conversations are just shiny detours away from that uncomfortable truth.

Key takeaway: Treat your martech search like a hiring process in motion. Start with a goal, not a category. Stay open to evolving the solution as new context surfaces. Talk to actual users who’ve implemented the tool under real constraints. Ask what broke, what surprised them, and what they’d do differently. Choose the tech that fits your team’s real capabili...
Jun 3, 2025 • 53min

172: Ankur Kothari: A practical guide on implementing AI to improve retention and activation through personalization

What’s up everyone, today we have the pleasure of sitting down with Ankur Kothari, an adtech and martech consultant who’s worked with big tech, finance, and consulting names like Salesforce, JPMorgan, and McKinsey. The views and opinions expressed by Ankur in this episode are his own and do not necessarily reflect the official position of his employer.

Summary: Ankur explains how most AI personalization flops because teams ignore the basics. He helped a brand recover millions just by making the customer journey actually make sense, not by faking it with names in emails. It’s all about fixing broken flows first, using real behavior, and keeping things human even when it’s automated. Ankur is super sharp; he shares a practical maturity framework for AI personalization so you can assess where you currently fit and how you get to the next stage.

AI Personalization That Actually Increases Retention: A Practical Example

Most AI personalization in marketing is either smoke, mirrors, or spam. People plug in a tool, slap a customer’s first name on a subject line, then act surprised when the retention numbers keep tanking. The tech isn’t broken. The execution is lazy. That’s the part people don’t want to admit.

Ankur worked with a mid-sized e-commerce brand in the home goods space that was bleeding revenue: $2.3 million a year lost to customers who made one purchase and never returned. Their churn rate sat at 68 percent. Think about that. For every 10 new customers, almost 7 never came back. And they weren’t leaving because the product was bad or overpriced. They were leaving because the whole experience felt like a one-size-fits-all broadcast. No signal, no care, no relevance.

So he rewired their personalization from the ground up. No gimmicks. No guesswork. Just structured, behavior-based segmentation using first-party data. They looked at:

- Website interactions
- Purchase history
- Email engagement
- Customer service logs

Then they fed that data into machine learning models to predict what each customer might actually want to do next. From there, they built 27 personalized customer journeys. Not slides in a strategy deck. Actual, functioning sequences that shaped content delivery across the website, emails, and mobile app.

> “Effective AI personalization is only partly about the tech but more about creating genuinely helpful customer experiences that deliver value rather than just pushing products.”

The results were wild. Customer retention rose 42 percent. Lifetime value jumped from $127 to $203. Repeat purchase rate grew by 38 percent. Revenue climbed by $3.7 million. ROI hit 7 to 1. One customer who previously spent $45 on a single sustainable item went on to spend more than $600 in the following year after getting dropped into a relevant, well-timed, and non-annoying flow.

None of this happened because someone clicked “optimize” in a tool. It happened because someone actually gave a damn about what the customer experience felt like on the other side of the screen. The lesson isn’t that AI personalization works. The lesson is that it only works if you use it to solve real customer problems.

Key takeaway: AI personalization moves the needle when you stop using it as a buzzword and start using it to deliver context-aware, behavior-driven customer experiences. Focus on first-party data that shows how customers interact. Then build distinct journeys that respond to actual behavior, not imagined personas. That way you can increase retention, grow customer lifetime value, and stop lighting your acquisition budget on fire.

Why AI Personalization Fails Without Fixing Basic Automation First

Signing up for YouTube ads should have been a clean experience. A quick onboarding, maybe a personalized email congratulating you for launching your first campaign, a relevant tip about optimizing CPV. Instead, the email that landed was generic and mismatched, “Here’s how to get started”, despite the fact that the account had already launched its first ad. This kind of sloppiness doesn’t just kill momentum. It exposes a bigger problem: teams chasing personalization before fixing basic logic.

Ankur saw this exact issue on a much more expensive stage. A retail bank had sunk $2.3 million into an AI-driven loan recommendation engine. Sophisticated architecture, tons of fanfare. Meanwhile, their onboarding emails were showing up late and recommending products users already had. That oversight translated to $3.7 million in missed annual cross-sell revenue. Not because the AI was bad, but because the foundational workflows were broken.

The failure came from three predictable sources:

- Teams operated in silos. Innovation was off in its own corner, disconnected from marketing ops and customer experience.
- The tech stack was split in two. Legacy systems handled core functions but were too brittle to change. AI was layered on top, using modern platforms that didn’t integrate cleanly.
- Leaders focused on innovation metrics, while no one owned the state of basic automation or email logic.

To fix it, Ankur froze the AI rollout for 120 days and focused on repair work. The team rebuilt the essential customer journeys, cleaned up logic gaps, and restructured automation to actually respond to user behavior. This work lifted product adoption by 28 percent and generated an additional $4.2 million in revenue. Once the base was strong, they reintroduced the AI engine. Its impact increased by 41 percent, not because the algorithm improved, but because the environment finally supported it.

> “The institutions that win with AI are the ones that execute flawlessly across all technology levels, from simple automation to cutting-edge applications.”

That lesson applies everywhere, including in companies far smaller than Google or JPMorgan. When you skip foundational work, every AI project becomes a band-aid over a broken funnel. It might look exciting, but it can’t hold.

Key takeaway: Stop using AI to compensate for broken customer journeys. Fix your onboarding logic, clean up your automation triggers, and connect your systems across teams. Once the fundamentals are working, you can layer AI on top of a system that supports it. That way you can generate measurable returns instead of just spinning up another dashboard that looks good in a QBR.

A Step-by-Step Approach to AI Personalization With a Maturity Framework: The First Steps You Can Take on the Path to AI Personalization

Most AI personalization projects start with a 50-slide vision deck, three vendors, and zero working use cases. Then teams wonder why things stall. What actually works is starting small and surgical. One product. One journey. Clear data. Clear upside.

Ankur advised a regional bank that had plenty of customer data but zero AI in play. No need for new tooling or a six-month roadmap. They focused on one friction-heavy opportunity with direct payoff: mortgage pre-approvals. Forget trying to personalize every touchpoint. They picked the one that mattered and did it well.

They built a clustering algorithm using transaction patterns, savings trends, and credit utilization to detect home-buying intent. From there, they pushed pre-approvals with tailored rates and terms. The bank already had the raw data in its core systems. No scraping, no extra collection, no “data enrichment” vendor needed.

That decision paid off fast:

- The data already existed, so implementation moved quickly
- The scope was limited to a single high-stakes journey
- The impact landed hard: mortgage application rates jumped 31 percent and approval-to-close conversions climbed 24 percent within 60 days

> “Start with a high-value product journey where pers...
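The clustering step Ankur describes can be sketched in miniature. Everything below is illustrative: the feature names and numbers are invented for the example, and the toy two-cluster k-means stands in for whatever model the bank actually used.

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    n = len(points)
    return [sum(col) / n for col in zip(*points)]

def kmeans2(points, iters=10):
    """Tiny two-cluster k-means with a deterministic init:
    the first point, plus the point farthest from it."""
    centroids = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            nearer = 0 if dist2(p, centroids[0]) <= dist2(p, centroids[1]) else 1
            clusters[nearer].append(p)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids

# Hypothetical per-customer features on a 0-1 scale, echoing the signals
# mentioned in the episode:
# [transaction_volume, savings_growth, credit_utilization]
customers = [
    [0.90, 0.80, 0.20],  # saving hard, low card utilization
    [0.80, 0.90, 0.30],
    [0.85, 0.75, 0.25],
    [0.20, 0.10, 0.90],  # little saving, high utilization
    [0.30, 0.20, 0.80],
    [0.25, 0.15, 0.85],
]
hi_intent, low_intent = kmeans2(customers)
print(hi_intent)  # centroid of the high-savings / low-utilization segment
```

The cluster with rising savings and low credit utilization is the home-buying-intent segment that would receive the tailored pre-approval offers; the point of the sketch is only that intent detection on existing first-party data does not require exotic tooling.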
May 27, 2025 • 1h 1min

171: Kim Hacker: Reframing tool FOMO, making AI face real work and catching up on AI skills

What’s up everyone, today we have the pleasure of sitting down with Kim Hacker, Head of Business Ops at Arrows. Summary: Tool audits miss the mess. If you’re trying to consolidate without talking to your team, you’re probably breaking workflows that were barely holding together. The best ops folks already know this: they’re in the room early, protecting momentum, not patching broken rollouts. Real adoption spreads through peer trust, not playbooks. And the people thriving right now are the generalists automating small tasks, spotting hidden friction, and connecting dots across sales, CX, and product. If that’s you (or you want it to be) keep reading or hit play.About KimKim started her career in various roles like Design intern and Exhibit designer/consultantShe later became an Account exec at a Marketing AgencyShe then moved over to Sawyer in a Partnerships role and later Customer OnboardingToday Kim is Head of Business Operations at Arrows Most AI Note Takers Just Parrot Back JunkKim didn’t set out to torch 19 AI vendors. She just wanted clarity.Her team at Arrows was shipping new AI features for their digital sales room, which plugs into HubSpot. Before she went all in on messaging, she decided to sanity check the market. What were other sales teams in the HubSpot ecosystem actually *doing* with AI? Over a dozen calls later, the pattern was obvious: everyone was relying on AI note takers to summarize sales calls and push those summaries into the CRM.But no one was talking about the quality. Kim realized if every downstream sales insight starts with the meeting notes, then those notes better be reliable. So she ran her own side-by-side teardown of 22 AI note takers. No configuration. No prompt tuning. Just raw, out-of-the-box usage to simulate what real teams would experience.> “If the notes are garbage, everything you build on top of them is garbage too.”She was looking for three things: accuracy, actionability, and structure. 
The kind of summaries that help reps do follow-ups, populate deal intelligence, or even just remember the damn call. Out of 22 tools, only *three* passed that bar. The rest ranged from shallow summaries to complete misinterpretations. Some even skipped entire sections of conversations or hallucinated action items that never came up.

It’s easy to assume an AI-generated summary is “good enough,” especially if it sounds coherent. But sounding clean is not the same as being useful. Most note takers aren't designed for actual sales workflows. They're just scraping audio for keywords and spitting out templated blurbs. That’s fine for keeping up appearances, but not for decision-making or pipeline accuracy.

Key takeaway: Before layering AI on top of your sales stack, audit your core meeting notes. Run a side-by-side test on your current tool, and look for three things: accurate recall, structured formatting, and clear next steps. If your AI notes aren’t helping reps follow up faster or making your CRM smarter, they’re just noise in a different font.

Why Most Teams Will Miss the AI Agent Wave Entirely
The vision is seductive. Sales reps won't write emails. Marketers won’t build workflows. Customer success won’t chase follow-ups. Everyone will just supervise agents that do the work for them. That future sounds polished, automated, and eerily quiet. But most teams are nowhere close. They’re stuck in duplicate records, tool bloat, and a queue of Jira tickets no one’s touching. AI agents might be on the roadmap, but the actual work is still being done by humans fighting chaos with spreadsheets.

Kim sees the disconnect every day. AI fatigue isn’t coming from overuse. It’s coming from bad framing. “A lot of people talking about AI are just showing the most complex or viral workflows,” she explains.
“That stuff makes regular folks feel behind.” People see demos built for likes, not for legacy systems, and it creates a false sense that they’re supposed to be automating their entire job by next quarter.

> “You can’t rely on your ops team to AI-ify the company on their own. Everyone needs a baseline.”

Most reps haven’t written a good prompt, let alone tried chaining tools together. You can’t go from zero to agent management without a middle step. That middle step is building a culture of experimentation. Start with small, daily use cases. Help people understand how to prompt, what clean AI output looks like, and how to tell when the tool is lying. Get the entire org to that baseline, then layer on tools like Zapier Agents or Relay App to handle the next tier of automation.

Skipping the basics guarantees failure later. Flashy agents look great in demos, but they don’t compensate for unclear processes or teams that don’t trust automation. If the goal is to future-proof your workflows, the work starts with people, not tools.

Key takeaway: If your team isn't fluent in basic AI usage, agent-powered workflows are a pipe dream. Build a shared baseline across departments by teaching prompt writing, validating outputs, and experimenting with small use cases. That way you can unlock meaningful automation later instead of chasing trends that no one has the capacity to implement.

When AI Systems Meet The Chaos Of Actual Workplace Processes
AI vendors keep shipping tools like everyone has an intern, a technical co-pilot, and five extra hours a week to configure dream workflows. The real buyers? They’re just trying to fix broken Salesforce fields, write one less follow-up email, or get through the day without copy-pasting notes into Notion. Somewhere between those extremes, the user gets lost in translation.

Kim has felt that gap from both sides. She was hesitant to even start with ChatGPT. “I almost gave up on it,” she said.
“I felt late and overwhelmed, and I just figured maybe I wasn’t going to be an AI person.” Fast forward to today, and it’s one of her most-used tools. She didn’t get there by wiring up agents. She started small. Simple things. Drafting ideas, summarizing content, clarifying messy thoughts. That built trust. Then momentum.

“There’s a lot that has to happen before your calendar is filled with calls and nothing else. AI can help, but you have to let it earn its spot.”

If you're trying to build that muscle, forget the multi-tool agent orchestration for a second. Focus on everyday wins like:
Turning a messy Slack thread into a clean summary
Writing a follow-up email in your tone
Rewriting a calendar event title so it makes sense to your future self
Cleaning up action items from a sales call without hallucinations
Drafting internal documentation from bullet points

The pace is accelerating. People feel it. You don’t need to watch keynote demos to know that change is coming fast. It’s easy to feel like you’re already behind. Kim doesn’t disagree. She just thinks most teams are solving the wrong problem. Vendors are focused on the sprint. Most people haven’t even laced up. “Everyone wants the big leap,” she said. “But most wins come from small, boring tools that actually do what they say they’ll do.”

That’s the root issue. A lot of AI features today are solving theoretical problems. They assume workflows are tidy, perfectly tagged, and documented in Notion. Real work is messier. It happens in Slack threads, half-filled records, and follow-ups that never got logged. If your tool can’t handle that, then it doesn’t matter how shiny your roadmap is.

Key takeaway: Stop evaluating AI features based on potential. Evaluate them based on current chaos. Ask whether the tool handles your worst-case scenario, not your ideal one. Prioritize small, boring use cases that save time immediately. That way yo...
May 20, 2025 • 59min

170: Keith Jones: OpenAI’s Head of GTM systems on building judgement with ghost stories, buying martech with cognitive extraction and why data dictionaries prevail

Keith Jones, Head of GTM Systems at OpenAI, has a rich background in sales operations and tech. He reveals that the best way to buy martech isn't through spreadsheets, but through cognitive extraction, combining stakeholder input with AI. Keith shares insights on his career journey from sales to operations, exploring how empathy shapes decision-making. He discusses the future of SaaS with fewer tools and stronger data infrastructure, and emphasizes the importance of a hands-on approach to integrating AI in marketing strategies for better outcomes.
May 13, 2025 • 1h 1min

169: Elena Hassan: Visa acquires your startup but nobody warns you about the tech stack aftermath and enterprise culture shock

Summary: Elena has done what most startup marketers only guess at: made it through multiple acquisitions and now leads global integrated marketing at Visa. In this episode, she breaks down what actually changes when you go from scrappy lead gen to enterprise brand building, why most martech tools don’t survive security reviews, and how leadership without authority is the skill that really matters. We get into messy tech migrations, broken attribution dreams, and why picking up the phone still beats Slack. If you’ve ever wondered why your startup playbook stops working at scale, this conversation spells it out.

What Startup Marketers Learn the Hard Way When They Land at a Big Corporation
Elena does not call herself an “acquisition master,” even though her resume might suggest otherwise. Three startups she worked at were acquired: Sivan by Refinitiv, WorkMarket by ADP, and Currencycloud by Visa, where she works today. Some might spin that track record as a strategic playbook for career navigation. Elena sees it differently. She credits great teams and good companies, not some personal Midas touch.

The truth is, you cannot force an acquisition. What you can do is get really good at reading the room. Elena’s career started deep in the weeds of lead generation and demand marketing, chasing performance metrics and measuring everything that moved. Early on, she dipped into other areas (event planning, employee engagement), but demand gen was where she built muscle. That was her lane at WorkMarket, where the first big learning curve hit.

It turns out the skills that build the lead gen engine are not the same ones you need when a company shifts from hypergrowth to prepping for acquisition. Elena experienced firsthand the moment when leadership stops asking about lead volume and starts asking about brand perception.
Suddenly the focus pivots from how many MQLs you can squeeze out of a campaign to how the company is positioned in the market, what the media is saying, and whether the brand looks credible at scale. She admitted she did not fully appreciate that switch at first.

> "I came there with a mindset of if I can't track it, I'm not gonna do it," Elena said. "Every performance marketer would probably relate."

That perspective doesn’t fly for long in environments where brand and reputation start to outweigh click-through rates. Elena’s time at Visa has only reinforced that lesson. Today, much of her work revolves around brand building and awareness, the same areas she once side-eyed for being soft and unmeasurable. It is one thing to believe in brand. It is another thing entirely to understand how hard it is to build one well.

The scale jump from startup life to a company with over 30,000 employees does not just change the headcount. It rewires the entire pace and process of how work gets done. Elena described the gut-check moment that made it clear she was not at a scrappy startup anymore. It was not a high-level strategy meeting or a sweeping corporate memo. It was the moment she tried to get a simple social graphic approved.

In a startup, that kind of thing takes a few minutes on Canva and the green light from whoever’s closest to the Slack channel. At Visa, especially as a regulated financial institution, it involves legal reviews, vendor contracts, approval workflows, and enough compliance checks to make your head spin. Campaigns that once rolled out in days now take months. Not because anyone is slow, but because the stakes are high and the rules are different.

That culture shock is where many startup marketers either adapt or tap out. What Elena figured out is that the skills that work at one stage of company life are not the ones that get you through the next.
If you want to survive the jump from lean team to enterprise machine, you have to stop resenting the process and start respecting what it protects.

Key takeaway: If you're coming from startup life, expect a painful adjustment when you move into a large, regulated company. The speed, autonomy, and scrappiness you are used to will collide hard with approval chains and compliance processes. The faster you stop fighting it and start learning why those systems exist, the faster you'll find your footing. Metrics-driven marketing only gets you so far. To thrive at scale, you need to understand the power and patience required to build brand trust.

What Nobody Tells You About Merging Tech Stacks After an Acquisition
The fantasy version of an acquisition is clean and celebratory. Two companies come together, the deal closes, the press release goes out, and life moves on. The reality, especially for marketing teams, is a long, often frustrating grind of systems audits, security reviews, and endless conversations about whether your beloved tools will survive the merger.

Elena has lived through that grind more than once. When Visa acquired Currencycloud, she was not navigating that shift alone. Many of her teammates made the journey with her, which helped. But solidarity does not make the process move faster. It just means you have people to vent to while you wait for approvals.

One of the first and hardest parts of that transition was not a debate between marketers. It was the clash between marketing teams and security teams. Every single piece of tech Currencycloud used, whether it was their website hosting, HubSpot marketing automation, or even individual add-ons, had to go under the microscope. Security teams needed to assess, vet, and approve each tool, often asking questions that made sense from a cybersecurity perspective but sounded completely out of touch to anyone in marketing.

The back-and-forth was not casual.
It escalated all the way up to the chief technology officer and the cybersecurity team at HubSpot sitting down with Elena's group to explain, in detail, what the platform could and could not do. None of this was about malice or incompetence. It was about two fundamentally different mindsets trying to find common ground.

> "These are security people. They’re not marketers. They don’t always know why we need a particular tool or what it does," Elena explained.

That learning curve is brutal if you're not prepared for it. The deeper into operations you sit, the more of these conversations you end up having. Elena found herself in rooms with people from multiple marketing ops teams across Visa, comparing tech stacks, workflows, and priorities. There was no easy answer to which system would win out. Sometimes the decision was clear. Other times it came down to questions like, is it really worth fighting for this tool, or is now the time to adapt to what already exists?

She describes it as less like transferring from one job to another and more like moving from a Montessori school to a traditional classroom. Both systems can deliver a good education. They just teach in wildly different ways. One thrives on flexibility and autonomy. The other runs on structure and process. Neither is wrong. They are simply different environments, and surviving the switch requires a willingness to adjust.

The biggest mistake marketers make in these situations is believing the process is about what *they* want. Elena was quick to point out that the companies she has worked for, especially Visa, keep customer experience at the center of these decisions. It is not about which tool is most familiar to the internal team. It is about which systems create the least friction for the end user. That mindset helps keep the process grounded, even when the day-to-day feels like a slow march through bureaucracy.

Patience is not optional in these transitions. You will hit walls. You will repeat yourself.
You will explain the same use case to five different people across three different teams. And eventually, you will e...
May 6, 2025 • 54min

168: AI's Talent Crunch: Marketing jobs on the brink and those set to thrive

What’s up folks, today we’re diving into the AI talent crunch and exploring which marketing roles have the strongest staying power and which are most likely to be replaced by AI.

Summary: Shit is changing fast. Don’t wait for someone to guide you. Navigate this transition by focusing on judgment tasks while letting AI handle predictions. At risk are campaign operators, generic content creators, and report-pulling analysts. Set to thrive are resident AI implementation experts who select worthy tools, data orchestrators connecting proprietary data to AI, product/customer marketers with genuine empathy, ethics guardians preventing bias issues, and localization specialists understanding cultural nuances.

Marketing Jobs AI Will Kill (And What Skills Actually Matter Now)
AI tools have cut strange new patterns into the marketing job market. Pay attention and you'll spot which roles face extinction risk, which command premium salaries, and which hang precariously in the balance. We've watched marketing teams across dozens of companies scramble to realign their talent strategies around this new reality. Some roles vanish while entirely new job titles materialize almost weekly.

One silver lining is that AI impacts marketing jobs based on task predictability and context, not seniority or experience. A CMO who mostly approves creative and manages schedules faces more displacement risk than a junior analyst who excels at extracting bizarre but valuable insights from data chaos. You probably feel this tension already. Half your marketing tasks could disappear next quarter, but the other half suddenly requires superpowers you're frantically trying to develop before your next performance review.

This episode is meant to give you something to think about in terms of your particular role in marketing. We’ll explore roles we think are at risk of vanishing and roles that are well positioned to become even more valuable.
Shit is changing fast, and no one is going to hold your hand through this transition. You need to own it and take action.

Marketing Roles Most at Risk of Being Replaced by AI

AI's Coming for Your Campaign Ops Job (Unless You Evolve Now)
Phil and Darrell explored which campaign operations roles will vanish first and which might actually strengthen in the algorithmic storm ahead. Darrell struck first with brutal honesty about traditional campaign operations: "The role of configuring marketing automation tools to spec will be definitely at risk." He's talking about those roles where marketers simply implement predefined elements: predetermined images, pre-written text, established CTAs, and mapped-out lead routing. AI already handles this configuration work. Darrell has witnessed actual demos from startups building tools where marketers type requirements and, poof, the system builds it automatically. What seemed like science fiction months ago now exists in alpha versions across the industry.

Phil slightly pushed back by referencing one of Darrell's recent posts, fracturing campaign ops into distinct categories rather than treating it as one vulnerable block. "Campaign ops encompasses way more than pressing buttons in Marketo," he insisted. He sorted these functions into two buckets:

* **Highly vulnerable to AI replacement**:
  * Reporting execution
  * Campaign analysis and performance tracking
  * Paid media bid adjustments
  * Email automation and nurture flows
  * Landing page and form creation
* **Likely to survive the AI wave**:
  * Setting strategic objectives and KPIs
  * Creative decision-making requiring business understanding
  * Budget planning involving cross-functional negotiation
  * QA processes demanding human judgment
  * Development of truly innovative best practices

> "I had it in the unclear bucket because there's a box of some things under there that I feel like are still pretty likely to survive," Phil explained.
"Coming up with campaign goals requires so much business understanding, strategic alignment, and political navigation."

The conversation crystallized around evolution rather than extinction. Darrell sees campaign ops professionals transforming from button-pushers to strategic partners: "What it's going to evolve into is actually looking at objectives and KPIs, changing requirements, and modifying briefs." He advocated for campaign ops to shift toward continuous "always-on programs" requiring constant optimization rather than churning out repetitive one-off campaigns, a far more AI-resistant position.

Key takeaway: To keep your campaign operations job when AI comes knocking, immediately shift your focus from tactical execution to strategic functions. Master business alignment skills, develop creative decision-making capabilities, and build continuous optimization programs. The marketers who survive will be those who stop configuring systems to spec and start reshaping campaign requirements based on deep business understanding and cross-functional collaboration.

AI Will Eat Generic Content Creation (But Experts Will Thrive)
Phil explored a pretty obvious category of marketing roles: "I think a lot of folks are really excited about Generative AI and using it to create basic posts and pages without editing any of the text." The bloodbath has already begun. Copywriters and content marketers producing unremarkable work find themselves outpaced by algorithms that can churn out mediocre content at scale, for pennies. The particularly exposed are those creating "routine content without a distinctive voice or cultural nuance," especially when working across global markets where nuance matters deeply.

Darrell pulled no punches on what's coming: "Bad content is going to become obsolete." AI tools supercharge this dynamic, flooding channels with generated material that looks competent but lacks soul. What's truly valuable is content that actually connects with people.
Content that makes them feel something. Content that solves real problems in ways that show genuine understanding.

What struck me as particularly insightful was Darrell's observation about subject matter experts potentially winning big in this new reality. These experts:
* Often possess deep knowledge but lack time or writing skills
* Can now leverage AI to amplify their expertise with minimal effort
* Only need to provide "the spark of an idea and a few bullet points"
* Create output that vastly outperforms generic content from disconnected marketers

> "All it takes is like the spark of an idea and a few bullet points. And you have a full post and it's gonna be way better than someone, like a marketer for example, that doesn't really care about the product or about the industry and is writing like crappy content."

This represents a fundamental power shift in content creation. The value no longer sits with those who can string sentences together but with those who bring authentic expertise, perspective, and lived experience. AI struggles with these human elements, the exact qualities that make readers stop scrolling and actually pay attention.

Key takeaway: Your content survival strategy requires becoming either irreplaceably human or strategically AI-augmented. Build genuine subject matter expertise, develop a distinctive voice that reflects your unique perspective, and learn to use AI as an amplifier rather than a replacement for any kind of original thought. The future belongs to the specialized expert who can provide the strategic direction that AI can't generate on its own.

Which Data Analyst Jobs Will Survive the AI Revolution?
Marketing data analysts who build dashboards for a li...
Apr 29, 2025 • 1h 3min

167: Moni Oloyede: The marketing ops identity paradox, why attribution is a waste of time and why GTM engineering is just sales ops

Moni Oloyede, founder of MO Martech, is a veteran in marketing operations with a passion for teaching. She discusses the flawed nature of attribution systems, emphasizing that buyers often forget why they purchased. Moni advocates for understanding content performance over tracking random touchpoints and argues that marketing efforts need not always be tied to revenue. Additionally, she critiques job title inflation in GTM engineering and explores the balance between digital strategies and in-person events, all while sharing her love for teaching.
Apr 22, 2025 • 1h 8min

166: Constantine Yurevich: Visit Scoring, an alternative to MMM and MTA few marketers know about

What’s up everyone, today we have the pleasure of sitting down with Constantine Yurevich, CEO and Co-Founder at SegmentStream. Summary: Multi-touch attribution is a beautifully crafted illusion we all pretend to believe in while knowing deep down it's flawed. The work is mysterious, but is it important? The big ad platforms sell us sophisticated solutions they don't even trust for their own internal decisions. Is it time we accept marketing causation is a thing we can’t measure? Visitor behavior scoring is a really interesting alternative or extra ingredient to consider. Often thought of as a tool for lead management to help prioritize your SDRs’ time, the scoring methodology gets an attribution application in the hands of the SegmentStream team. Enter synthetic conversions. Instead of just tracking conversions, track meaningful visits: time spent, pages explored, comparisons made. This allows you to connect upper-funnel campaigns to real behavior patterns rather than just looking at who converted in a single session.

About Constantine/SegmentStream
SegmentStream was founded in 2018 in London
In Feb 2022 it raised a first funding round of $2.7M
SegmentStream is now trusted by more than 100 leading customers across the globe including L’Oreal, KitchenAid, Synthesia, Carshop, InstaHeadShots, and many others

The Messy Truth About B2B vs B2C Attribution Models
Price tags and decision timeframes obliterate the B2B/B2C attribution divide faster than most marketers realize. Constantine shatters conventional wisdom by showing how his team leverages their own attribution tools to measure website engagement because enterprise software purchases rarely follow predictable patterns. "Trusting last click is impossible," he explains, "because it takes too much time before conversion happens."

You've likely noticed this pattern in your own marketing stack.
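The synthetic-conversion idea above lends itself to a short sketch. This is an illustrative model only, with invented weights and thresholds, not SegmentStream's actual methodology: score each session by engagement depth, then treat sufficiently engaged visits as conversion signals so upper-funnel campaigns get credit before a purchase ever happens.

```python
# Illustrative visit-scoring sketch ("synthetic conversions").
# All weights, caps, and the 0.6 threshold are invented for demonstration.

from dataclasses import dataclass


@dataclass
class Visit:
    seconds_on_site: int
    pages_viewed: int
    compared_products: bool  # e.g. used a pricing or comparison page


def visit_score(v: Visit) -> float:
    """Score one session 0.0-1.0 by engagement depth."""
    score = 0.0
    score += min(v.seconds_on_site / 300, 1.0) * 0.4  # time spent, capped at 5 min
    score += min(v.pages_viewed / 5, 1.0) * 0.4       # pages explored, capped at 5
    score += 0.2 if v.compared_products else 0.0      # comparison intent
    return round(score, 2)


def is_synthetic_conversion(v: Visit, threshold: float = 0.6) -> bool:
    """Count a deeply engaged visit as a conversion signal for attribution."""
    return visit_score(v) >= threshold


deep_visit = Visit(seconds_on_site=420, pages_viewed=6, compared_products=True)
bounce = Visit(seconds_on_site=15, pages_viewed=1, compared_products=False)
```

In a sketch like this, `deep_visit` scores 1.0 and counts as a synthetic conversion while `bounce` scores 0.1 and does not, which is the behavioral distinction that lets upper-funnel campaigns be compared on engagement rather than waiting for a single-session purchase.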
A $2,000 direct-to-consumer exercise bike creates the same multi-touch, 60-day consideration journey as many supposedly "straightforward" B2B software purchases. Meanwhile, those $30/month SaaS tools targeting small businesses convert with the immediacy of consumer products. Constantine points out how this pricing reality creates measurement challenges that transcend business categories:

High-ticket B2C products demand extended 30-60 day consideration windows
SMB-focused B2B subscriptions ($20-30/month) behave like impulse purchases
Enterprise B2B sales cycles stretch beyond a year with critical offline components

The offline measurement void plagues marketers everywhere. Constantine admits many of his most valuable marketing activities resist quantification. "I write a lot of LinkedIn posts, newsletters, we do podcasts. Some of these activities are very hard to measure unless you explicitly ask someone, 'How did you hear about us?'" Your gut tightens reading this because you've felt this same tension between attribution models and marketing reality.

Scale transforms your attribution approach more dramatically than business classification ever could. Small operations handling 100 monthly leads can simply ask each prospect about their discovery journey. Large enterprises processing thousands of conversions require sophisticated multi-touch models regardless of whether they sell to businesses or consumers. Constantine explains this convergence clearly: "When we talk about larger B2B businesses with thousands of leads and purchases, it becomes more similar to B2C with a long sales cycle plus an offline component."

The unmeasurable brand-building activities require a leap of faith that makes data-driven marketers squirm. Constantine embraces this uncertainty with refreshing honesty: "When you post on LinkedIn, build your personal brand, share content—that's really hard to measure and I don't even want to go there."
His team focuses on delivering value through content, trusting that results will materialize. "You just share your content and eventually you see how it plays off." This pragmatic acceptance of attribution limitations feels like cool water in the desert of measurement obsession.

Key takeaway: Match your attribution model to purchase complexity rather than business category. Implement multi-touch attribution with lead scoring for high-consideration purchases across both B2B and B2C, while accepting that valuable brand-building work often exists beyond the reach of your measurement tools.

Why Marketing Attribution Still Matters Despite Its Flaws
Attribution chaos continues to haunt marketers drowning in competing methodologies and high-priced solutions. Constantine blasts through the measurement fog with brutal practicality when tackling the Multi-Touch Attribution (MTA) debate. While many have written MTA's obituary due to its diminishing visibility into customer journeys, his take might surprise you.

The attribution landscape brims with alternatives that look impressive in PowerPoint presentations but crumble under real business conditions:

Geo holdout testing sounds brilliant: Turn off ads in half your markets, keep them running in others, measure the difference. Simple! Except it'll cost you millions in lost revenue during testing. Constantine points out the brutal math: "For some businesses, this is like losing 1 million, $2 million during the test. Would you be willing to run a test that's gonna cost you $1 million?" These tests require a minimum 5% revenue contribution from the channel to even register effects, making them impractical for anything but your biggest channels.

MMM promises statistical rigor, but it demands absurd amounts of data covering everything from your competitors' moves to presidential elections and global conflicts.
Good luck collecting that comprehensive dataset spanning 2-3 years, then validating whether the TV attribution your fancy model spits out actually reflects reality.

> "Mathematically, everything works fine, but when you apply it in reality, there is no way to test it. You just see some numbers and there is no way to test it."

For scrappy D2C brands, SaaS startups, and lead gen businesses, Constantine argues MTA still delivers more practical value than its supposedly superior alternatives. You won't achieve perfect attribution, but you can compare campaigns at the same funnel stage against each other. Your lower-funnel campaigns can be measured against other lower-funnel efforts. Mid-funnel initiatives can compete with similar tactics.

Constantine drops a bombshell observation that should make you question the industry's MMM evangelism: "If Google and Facebook so willingly open-source different MMM technologies and they really believe in this technology, why wouldn't they implement it into their own product?" These data behemoths with unparalleled user visibility still rely on variations of touch-based attribution internally. Something doesn't add up.

Key takeaway: Stop chasing perfect attribution unicorns. MTA delivers practical campaign comparisons within funnel stages despite its flaws. For most businesses, sophisticated alternatives cost more than they're worth in lost revenue during testing or impossible data requirements. Compare apples to apples (lower-funnel to lower-funnel campaigns) with MTA, test different creatives, and focus on relative performance improvement. The big platforms themselves don't fully trust their publicly promoted alternatives, so why should you bet your marketing budget on them?

Simplified MMM is a Measurement Fantasy You're Being Sold
Marketing Mix Modeling has roared back into fashion as third-party cookies crumble and marketers scramble for measurement alternatives. Constantine cuts through the hype with brutal clarity.
Traditional MMM demands...
