FutureCraft GTM

Erin Mills & Ken Roden
Dec 18, 2025 • 43min

Special Episode: Why Customer Success Can’t Be Automated (And What AI Can Actually Do)

In this special year-end episode of the FutureCraft GTM Podcast, hosts Ken Roden and Erin Mills sit down with Amanda Berger, Chief Customer Officer at Employ, to tackle the biggest question facing CS leaders in December 2026: What can AI actually do in customer success, and where do humans remain irreplaceable?

Amanda brings 20+ years at the intersection of data and human decision-making, from AI-powered e-commerce personalization at RichRelevance, to human-led security at HackerOne, to now implementing AI companions for recruiters. Her journey is a masterclass in understanding where the machine ends and the human begins. This conversation delivers hard truths about metrics, change management, and the future of CS roles, plus Amanda's controversial take that "if you don't use AI, AI will take your job."

Unpacking the Human vs. Machine Balance in Customer Success

Amanda returns with a reality check: AI doesn't understand business outcomes or motivation; humans do. She reveals how her career evolved from philosophy major studying "man versus machine" to implementing AI across radically different contexts (e-commerce, security, recruiting), giving her unique pattern recognition about what AI can genuinely do versus where it consistently fails.

The Lagging Indicator Problem: Why NRR, churn, and NPS tell you what already happened (six months ago) instead of what you can influence. Amanda makes the case for verified outcomes, leading indicators, and real-time CSAT at decision points.

The 70% Rule for CS in Sales: Why most churn starts during implementation, not at renewal, and exactly when to bring CS into the deal to prevent it (technical win stage / vendor of choice).

Segmentation ≠ Personalization: The jumpsuit story that proves AI is still just sophisticated bucketing, even with all the advances in 2026. True personalization requires understanding context, motivation, and individual goals.
The Delegation Framework: Don't ask "what can AI do?" Ask "what parts of my job do I hate?" Delegate the tedious (formatting reports, repetitive emails, data analysis) so humans can focus on what makes them irreplaceable.

Timestamps

00:00 - Introduction and AI Updates from Ken & Erin
01:28 - Welcoming Amanda Berger: From Philosophy to Customer Success
03:58 - The Man vs. Machine Question: Where AI Ends and Humans Begin
06:30 - The Jumpsuit Story: Why AI Personalization Is Still Segmentation
09:06 - Why NRR Is a Lagging Indicator (And What to Measure Instead)
12:20 - CSAT as the Most Underrated CS Metric
17:34 - The $4M Vulnerability: House Security Analogy for Attribution
21:15 - Bringing CS Into Sales at 70% Probability (The Non-Negotiable)
25:31 - Getting Customers to Actually Tell You Their Goals
28:21 - AI Companions at Employ: The Recruiting Reality Check
32:50 - The Delegation Mindset: What Parts of Your Job Do You Hate?
36:40 - Making the Case for Humans in an AI-First World
40:15 - The Framework: When to Use Digital vs. Human Touch
43:10 - The 8-Hour Workflow Reduced to 30 Minutes (Real ROI Examples)
45:30 - By 2027: The Hardest CX Role to Hire
47:49 - Lightning Round: Summarization, Implementation, Data Themes
51:09 - Wrap-Up and Key Takeaways

Edited Transcript

Introduction: Where Does the Machine End and Where Does the Human Begin?

Erin Mills: Your career reads like a roadmap of enterprise AI evolution, from AI-powered e-commerce personalization at RichRelevance, to human-powered collective intelligence at HackerOne, and now augmented recruiting at Employ. This doesn't feel random; it feels intentional. How has this journey shaped your philosophy on where AI belongs in customer experience?

Amanda Berger: It goes back even further than that. I started my career in the late '90s in what was first called decision support, then business intelligence. All of this is really just data and how data helps humans make decisions.
What's evolved through my career is how quickly we can access data and how spoon-fed those decisions are. Back then, you had to drill around looking for a needle in a haystack. Now, does that needle just pop out at you so you can make decisions based on it? I got bit by the data bug early on, realizing that information is abundant, and it becomes more abundant as the years go on. The way we access that information is the difference between making good business decisions and poor business decisions.

In customer success, you realize it's really just about humans helping humans be successful. That convergence of "where's the data, where's the human" has been central to my career.

The Jumpsuit Story: Why AI Personalization Is Still Just Segmentation

Ken Roden: Back in 2019, you talked about being excited for AI to become truly personal, not segment-based. Flash forward to December 2026. How close are we to actual personalization?

Amanda Berger: I don't think we're that close. I'll give you an example. A friend suggested I ask ChatGPT whether I should buy a jumpsuit. So I sent ChatGPT a picture and my measurements. I'm 5'2". ChatGPT's answer? "If you buy it, you should have it tailored."

That's segmentation, not personalization. "You're short, so here's an answer for short people." Back in 2019, I was working on e-commerce personalization. If you searched for "black sweater" and I searched for "black sweater," we'd get different results: men's vs. women's. We called it personalization, but it was really segmentation. Fast forward to now. We have exponentially more data and better models, but we're still segmenting and calling it personalization. AI makes segmentation faster and more accessible, but it's still segmentation.

Erin Mills: But did you get the jumpsuit?

Amanda Berger: (laughs) No, I did not get the jumpsuit. But maybe I will.

The Philosophy Degree That Predicted the Future

Erin Mills: You started as a philosophy major taking "man versus machine" courses.
What would your college self say? And did philosophy prepare you in ways a business degree wouldn't have?

Amanda Berger: I actually love my philosophy degree because it taught me to think critically about issues like this. I don't think I would have known back then that I was thinking about "where does the machine end and where does the human begin," and that this was going to have so many applicable decision points throughout my career.

What you're really learning in philosophy is logical thought process. If this happens, then this. And that's fundamentally the foundation for AI. "If you're short, you should get your outfit tailored." "If you have a customer with predictive churn indicators, you should contact that customer." It's enabling that logical thinking at scale.

The Metrics That Actually Matter: Leading vs. Lagging Indicators

Erin Mills: You've called NRR, churn rate, and NPS "lagging indicators." That's going to ruffle boardroom feathers. Make the case: what's broken, and what should we replace it with?

Amanda Berger: By the time a customer churns or tells you they're gonna churn, it's too late. The best thing you can do is offer them a crazy discount. And when you're doing that, you've already kind of lost.

What CS teams really need to be focused on is delivering value. We all have so many competing things to do; if a SaaS tool is delivering value, you're probably not going to question it. If there's a question about value, then you start introducing lower price or competitors. And especially in enterprise, customers decide way, way before they tell you whether they're gonna pull the technology out. You usually miss the signs.

So you've gotta look at leading indicators. What are the signs? And they're different everywhere I've gone. I've worked for companies where, if there's a lot of engagement with support, that's a sign customers really care and are trying to make the technology work. It's a good sign; churn risk is low.
At other companies I've worked at, when customers are heavily engaged with support, they're frustrated and it's not working; churn risk is high. You've got to do the work to figure out what those churn indicators are and how they factor into leading indicators: Are they achieving verified outcomes? Are they healthy? Are there early risk warnings?

CSAT: The Most Underrated Metric

Ken Roden: You're passionate about customer satisfaction as a score because it's granular and actionable. Can you share a time when CSAT drove a change and produced a measurable business result?

Amanda Berger: I spent a lot of my career in security. And that's tough for attribution. In e-commerce, attribution is clear: a person saw recommendations, put them in a cart, bought them. In hiring, time-to-fill is faster; pretty clear. But in security, it's less clear.

I love this example: We all live in houses, right? None of our houses got broken into last night. You don't go to work saying, "I had such a good night because my house didn't get broken into." You just expect that. And when your house didn't get broken into, you don't know what to attribute that to. Was it the locked doors? Alarm system? Dog? Safe neighborhood? That's true with security in general. You have to really think through attribution.

Getting that feedback is really important. In surveys we've done, we've gotten actionable feedback. Somebody was able to detect a vulnerability, and we later realized it could have been tied to something that would have cost $4 million to settle. That's the kind of feedback you don't get without really digging around for it. And once you get that once, you're able to tie attribution to other things.

Bringing CS Into the Sales Cycle: The 70% Rule

Erin Mills: You're a religious believer in bringing CS into the sales cycle. When exactly do you insert CS, and how do you build trust without killing velocity?
Amanda Berger: With bigger customers, I like to bring in somebody from CX when the deal is at the technical win stage, or 70% probability: the vendor-of-choice stage. Usually it's for one of two reasons:

One: If CX is gonna have to scope and deliver, I really like CX to be involved. You should always be part of deciding what you're gonna be accountable to deliver. And I think so much churn actually starts to happen when an implementation goes south before anyone even gets off the ground.

Two: In this world of technology, what really differentiates an experience is humans. A lot of our technology is kind of the same. Competitive differentiation is narrower and narrower. But the approach to the humans and the partnership really matters. And that can make the difference during a sales cycle.

Sometimes I have to convince the sales team this is true. But typically, once I'm able to do that, they want it. Because it does make a big difference. Technology makes us successful, but humans do too. That's part of that balance between what's the machine and what is the human.

The Art of Getting Customers to Articulate Their Goals

Ken Roden: One challenge CS teams face is getting customers to articulate their goals. Do customers naturally say what they're looking to achieve, or do you have a process to pull it out?

Amanda Berger: One challenge is that a recruiter's goal might be really different from the CFO's goal. Whose outcome is it? One reason you want to get involved during the sales cycle is that customers tell you what they're looking for then. It's very clear. And nothing frustrates a company more than "I told you that, and now you're asking me again? Why don't you just ask the person selling?" That's infuriating.

Now, you always have legacy customers where a new CSM comes in and has to figure it out. Sometimes the person you're asking just wants to do their job more efficiently and can't necessarily tie it back to the bigger picture.
That's where the art of triangulation and relationships comes in: asking leading discovery questions to understand what the business impact really is. If you can't do that as a CS leader, you probably won't be successful and won't retain customers for the long term.

AI as Companion, Not Replacement: The Employ Philosophy

Erin Mills: At Employ, you're implementing AI companions for recruiters. How do you think about when humans are irreplaceable versus when AI should step in?

Amanda Berger: This is controversial because we're talking about hiring, and hiring is so close to people's hearts. That's why we really think about companions. I earnestly hope there's never a world where AI takes over hiring; that's scary. But AI can help companies and recruiters be more efficient.

Job seekers are using AI. Recruiters tell me they're getting 200-500% more applicants than before because people are using AI to apply to multiple jobs quickly or modify their resumes. The only way recruiters can keep up is by using AI to sort through that and figure out best fits. So AI is a tool and a friend to that recruiter. But it can't take over the recruiter.

The Delegation Framework: What Do You Hate Doing?

Ken Roden: How do you position AI as companion rather than threat?

Amanda Berger: There's definitely fear. Some is compliance-based, which is totally justifiable. There are also people worried about AI taking their jobs. I think if you don't use AI, AI is gonna take your job. If you use AI, it's probably not.

I've always been a big fan of delegation. In every aspect of my life: if there's something I don't want to do, how can I delegate it? Professionally, I'm not very good at putting together beautiful PowerPoint presentations. I don't want to do it. But AI can do that for me now, amazingly well. What I'm really bad at is figuring out bullets and formatting. AI does that. So I think about: What are the things I don't want to do?
Usually we don't want to do the things we're not very good at or that are tedious. Use AI to do those things so you can focus on the things you're really good at. Maybe what I'm really good at is thinking strategically about engaging customers or articulating a message. I can think about that, but AI can build that PowerPoint. I don't have to think about "does my font match here?"

Take the parts of your job that you don't like (sending the same email over and over, formatting things, thinking about icebreaker ideas) and leverage AI for that, so you can do the things that make you special and make you stand out. The people who can figure that out and leverage it the right way will be incredibly successful.

Making the Case to Keep Humans in CS

Ken Roden: Leaders face pressure from boards and investors to adopt AI more, potentially leading to roles being cut. How do you make the case for keeping humans as part of customer success?

Amanda Berger: AI doesn't understand business outcomes and motivation. It just doesn't. Humans understand that. The key to relationships and outcomes is that understanding. The humanity is really important.

At HackerOne, it was basically a human security company. There are millions of hackers who want to identify vulnerabilities before bad actors get to them. There are tons of layers of technology: AI-driven, huge stacks of security technology. And yet, no matter what, there are always vulnerabilities that only a human can detect. You want full-stack security solutions, but you have to have that human solution on top of it, or you miss things.

That's true with customer success too. There's great tooling that makes it easier to find that needle in the haystack. But once you find it, what do you do? That's where the magic comes in. That's where a human being needs to get involved. Customer success is called customer success because it's about success. It's not called customer retention. We do retain, through driving success.
AI can point out when a customer might not be successful, or when there might be an indication of that. But it can't solve that and guide that customer to what they need to be doing to get outcomes that improve their business. What actually makes success is that human element. Without that, we would just be called customer retention.

The Framework: When to Use Digital vs. Human Touch

Erin Mills: We'd love to get your framework for AI-powered customer experience. How do you make those numbers real for a skeptical CFO?

Amanda Berger: It's hard to talk about customer approach without thinking about customer segmentation. It's very different in enterprise versus a scaled model. I've dealt with a lot of scale at my last couple of companies. I believe that the things we do to support that long tail, those digital customers, we need to do for all customers. Because while everybody wants human interaction, they don't always want it.

Think about: As a person, where do I want to interact digitally with a machine? If it's a bot, I only want to interact with it until it stops giving me good answers. Then I want to say, "Stop, let me talk to an operator." If I can find a document or video that shows me how to do something quickly rather than talking to a human, it's human nature to want to do that. There are obvious limits. If I can change my flight on my phone app, I'm gonna do that rather than stand at a counter.

Come back to thinking: As a human, what's the framework for where I need a human to get involved? Second, it's figuring out: How do I predict what's gonna happen with my customers? What are the right ways of looking and saying "this is a risk area"? Creating that framework.

Once you've got that down, it's an evolution of combining: Where does the digital interaction start? Where does it stop? What am I looking for that's going to trigger a human interaction? Being able to figure that out and scale that: that's the thing everybody is trying to unlock.
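As a rough, purely illustrative sketch of the kind of escalation logic this framework implies: start every interaction digital, watch for the signals that mark a risk area, and trigger a human touch when one fires. All signal names and thresholds below are hypothetical, not from the episode.

```python
# Hypothetical sketch of a digital-vs-human escalation rule.
# Signal names and thresholds are illustrative assumptions, not Amanda's.

def next_touch(signals: dict) -> str:
    """Decide whether the next customer interaction stays digital or goes human."""
    # The bot stopped giving good answers and the customer asked for an operator.
    if signals.get("bot_escalation_requested"):
        return "human"
    # Early risk warning: no verified outcomes long after kickoff.
    if signals.get("verified_outcomes", 0) == 0 and signals.get("days_since_kickoff", 0) > 60:
        return "human"
    # Frustration-style support engagement (what this means varies per business).
    if signals.get("support_tickets_30d", 0) > 5 and signals.get("csat", 5.0) < 3.0:
        return "human"
    # Default: self-serve docs, videos, and in-app guidance.
    return "digital"
```

The point of the sketch is the shape, not the numbers: each business has to do the work Amanda describes to learn which signals actually predict risk before wiring them into a rule like this.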
The 8-Hour Workflow Reduced to 30 Minutes

Erin Mills: You've mentioned turning some workflows from an 8-hour task into 30 minutes. What roles absorbed the time dividend? What was rescoped?

Amanda Berger: The roles with a lot of repetition and repetitive writing. AI is incredible when it comes to repetitive writing and templatization. A lot of times that's more in support or managed services functions. And coding: any role where you're coding, compiling code, or checking code. There's so much efficiency AI has already provided.

I think less so in the traditional customer success management role. There are definitely efficiencies, but not that dramatic. Where I've seen it be really dramatic is in managed service examples where people are doing repetitive tasks; they have to churn out reports. It's made their jobs so much better. When they provide those services now, they can add so much more value. Rather than thinking about churning out reports, they're able to think about: What's the content in my reports? That's very beneficial for everyone.

By 2027: The Hardest CX Role to Hire

Erin Mills: Mad Libs time. By 2027, the hardest CX job to hire will be _______ because of _______.

Amanda Berger: I think it's these forward-deployed-engineer types of roles. These subject matter experts. One challenge in CS for a while has been: What's the value of my customer success manager? Are they an expert? Or are they revenue-driven? Are they the retention person? There's been an evolution toward maybe they need to be the expert. And what does that mean? There'll continue to be evolution on that. And that'll be the hardest role. That standard will be very, very hard.

Lightning Round

Ken Roden: What's one AI workflow go-to-market teams should try this week?

Amanda Berger: Summarization. Put your notes in, get a summary, get the bullets. AI is incredible for that.

Ken Roden: What's one role in go-to-market that's underusing AI right now?

Amanda Berger: Implementation.
Ken Roden: What's a non-obvious AI use case that's already working?

Amanda Berger: Data-related. People are still scared to put data in and ask for themes. Putting in data and asking for input on what the anomalies are.

Ken Roden: For the go-to-market leader who's not seeing value in AI, what should they start doing differently tomorrow?

Amanda Berger: They should start having real conversations about why they're not seeing value. Take a more human-led, empathetic approach to: Why aren't they seeing it? Are they not seeing adoption, or not seeing results? I would guess it's adoption, and then it's drilling into the why.

Ken Roden: If you could DM one thing to all go-to-market leaders, what would it be?

Amanda Berger: Look at your leading indicators. Don't wait. Understand your customer, be empathetic, try to get results that matter to them.

Key Takeaways

The Human-AI Balance in Customer Success: AI doesn't understand business outcomes or motivation; humans do. The winning teams use AI to find patterns and predict risk, then deploy humans to understand why it matters and what strategic action to take.

The Lagging Indicator Trap: By the time NRR, churn rate, or NPS move, customers decided six months ago. Focus on leading indicators you can actually influence: verified outcomes, engagement signals specific to your business, early risk warnings, and real-time CSAT at decision points.

The 70% Rule: Bring CS into the sales cycle at the technical win stage (70% probability) for two reasons: (1) CS should scope what they'll be accountable to deliver, and (2) capturing customer goals early prevents the frustrating "I already told your sales rep" moment later.

Segmentation ≠ Personalization: AI makes segmentation faster and cheaper, but true personalization requires understanding context, motivation, and individual circumstances. The jumpsuit story proves we're still just doing sophisticated bucketing, even with 2026's advanced models.
The Delegation Framework: Don't ask "what can AI do?" Ask "what parts of my job do I hate?" Delegate the tedious (formatting, repetitive emails, data analysis) so humans can focus on strategy, relationships, and outcomes that only humans can drive.

"If You Don't Use AI, AI Will Take Your Job": The people resisting AI out of fear are most at risk. The people using AI to handle drudgery while focusing on what makes them irreplaceable (strategic thinking, relationship-building, understanding nuanced goals) are the future leaders.

Customer Success ≠ Customer Retention: The name matters. Your job isn't preventing churn through discounts and extensions. Your job is driving verified business outcomes that make customers want to stay because you're improving their business.

Stay Connected

To listen to the full episode and stay updated on future episodes, visit the FutureCraft GTM website.

Connect with Amanda Berger: Connect with Amanda on LinkedIn | Employ

Disclaimer: This podcast is for informational and entertainment purposes only and should not be considered advice. The views and opinions expressed in this podcast are our own and do not represent those of any company or business we currently work for/with or have worked for/with in the past.
Nov 13, 2025 • 38min

Why AI Rollouts Failed in 2025, And What's Actually Working in Go-to-Market

Join hosts Ken Roden and Erin Mills as they reflect on an incredible Season 2 of the FutureCraft GTM podcast. From pilot purgatory to agent swarms, they unpack how AI in go-to-market evolved throughout the year, share their biggest lessons learned, and make bold predictions for 2026.

Key Topics Covered

Season 2 Reflections [00:01:00]
The slow start vs. strong finish of AI adoption
Pilot purgatory and why 95% of AI rollouts struggled
The accordion effect of AI tools throughout the year

Guest Predictions Review - "They Called It" [00:04:00]
Rachel Truair on AI SDRs: Still a work in progress
Chase Hannegan on no-code agentic systems: Ahead of the curve
Liza Adams on EQ being the edge: Called it perfectly

Major Themes That Emerged [00:08:00]
Adoption over tools as the key to success
AI as teammate vs. AI as output generator
The "sandwich model": humans at both ends, AI in the middle
Curiosity and EQ as critical differentiators

What Failed This Year [00:10:00]
AI vendor spray-and-pray marketing
Custom GPT overload (600 GPTs at one company!)
Rolling out LLMs without proper change management

Business Impact Wins [00:17:00]
Speed to market improvements
Analytics accessibility for non-technical users
600% more time on site from AI-driven traffic
Time auditing as a measurement strategy

2026 Predictions [00:24:00]
Agent swarms and workforces (Erin's pick)
Digital twins as the hero (Ken's pick)
Closed company-specific LLMs
Fractional AI experts with their own agent teams
New organizational structures emerging

Personal Lightning Round [00:32:00]
Most overhyped buzzword: AIEO
Underrated tool: n8n
Biggest personal unlock: Self-regulation with AI use
Best use case: Digital twins and content workflows

Notable Quotes

"AI is like an intern with a PhD who doesn't have any business experience" - Ken
"Digital twins are great, but I think it's gonna be swarms" - Erin
"It's 90% focus on the people and 10% on the execution now, not the other way around" - Erin
"Get your hands dirty.
Because this is new to everybody, there's a real need to understand what your team is going through" - Erin

Guests Mentioned This Episode

Liza Adams
Rachel Truair (Simpro)
Chase Hannegan
Sheena Miles
Rebecca Shaddix
Chris Penn

Key Takeaways

Change management is critical - 90% focus on people, 10% on execution
Start with boring problems - Don't chase the sexiest AI use cases
Define acceptable mistakes - Know when to call a pilot a failure
Agent swarms are the future - Moving beyond single-purpose tools
Communities matter - AI has opened unprecedented knowledge sharing
Speed to market - Months-long processes now taking days or hours

Resources Mentioned

n8n workflow automation platform
Relevance AI
Lindy
ElevenLabs (voice)
Planet Money AI recruiting segment
Chris Penn's analytics community

Coming in Season 3 (March 2026)

Human agentic workflows with verification stopgaps
Agent swarm implementations
New modalities: voice and video applications
More on the Iron Man suit approach to fractional AI work

Share what you want to see in Season 3 & Connect with the Hosts: Ken Roden | Erin Mills

About FutureCraft

Stay tuned for more insightful episodes from the FutureCraft podcast, where we continue to explore the evolving intersection of AI and GTM. Check out the full episode for in-depth discussions and much more. To listen to the full episode and stay updated on future episodes, visit our website: https://www.futurecraftai.media/

Disclaimer: This podcast is for informational and entertainment purposes only and should not be considered advice. The views and opinions expressed in this podcast are our own and do not represent those of any company or business we currently work for/with or have worked for/with in the past.

Music: Far Away - MK2
Nov 6, 2025 • 51min

Boring Problems, Big Wins, Community‑Driven AI Adoption

AI is not overhyped; it is under-implemented. Ken Roden and Erin Mills chat with Sheena Miles about how to move from tool obsession to behavior change, her three-stage framework, and the practical KPIs that prove progress before revenue shows up. We also talk AI policy that unlocks safe experimentation and community as an accelerator, and Sheena demos how she spins up n8n workflows from a prompt.

Chapter markers

00:00 - Cold open and disclaimer
01:00 - Is AI overhyped? What is really failing
03:20 - Early indicators versus lagging revenue; set better goals
04:20 - Exec view: target 3 percent faster time to market
06:00 - Avoid AI slop; find repetitive, boring work
07:00 - Guest intro
09:00 - Real state of adoption: dual-speed orgs and siloed champions
10:45 - Teach concepts, not tools
12:00 - Policy, security review, AI council
14:00 - Behavior beats features
15:30 - Community for accountability and shared assets
17:30 - Live n8n demo: import a skeleton workflow and adapt it
35:00 - AI-first versus AI-native: embed into workflows
36:30 - Influence without authority: solve a champion's boring problem
38:00 - Inclusion and usage gaps: why it matters to the business
40:00 - Skills that matter now: prompting, rapid testing, communicating thought process
43:00 - Why to be optimistic
45:00 - Lightning round
48:00 - Host debrief and takeaways

Key takeaways

Hype versus reality: most failures are vague goals and tool-first rollouts, not AI itself.
• Measure what you can now (speed to market, cycle time, sprint throughput, ticket deflection) before revenue.
• Framework: Activate, Amplify, Accelerate. Start small, spread what works, then institutionalize.
• Policy unlocks velocity: simple rules for data and tool vetting, plus a cross-functional council.
• Behavior over features: learn inputs and outputs so skills transfer across tools.
• Community compounds: accountability and shared templates speed learning.
• Start with boring problems: compliance questionnaires, asset generation, ticket clustering, call insights.
• AI-first versus AI-native: move from sidecar to embedded, with human review gates.
• Inclusion is a business lever: close usage gaps or accept a productivity gap.

Sheena's three-stage framework

Activate: prove value safely
• Define the problem, validate AI fit, run a small pilot.
• Track accuracy thresholds and time saved.
• Example: auto-draft responses to repetitive compliance questionnaires from a vetted knowledge base.

Amplify: spread what works
• Connect adjacent teams, add light governance, share patterns.
• Run cross-team pilots and publish playbooks.
• Example: connect support tickets, payments, compliance, and partner success to detect issues proactively.

Accelerate: institutionalize
• Assign ownership, embed training, integrate tools, set ROI guardrails.
• Roll out across channels and systems with quality gates.
• Example: an ad copy system owned by demand gen, with content as QA, used across paid, email, and social.

Hot Takes from Sheena

"Policy enables speed if you write it to unblock safe experiments."
"Stop memorizing tool steps; learn the concepts so they transfer."
"Solve the boring problem first; that is where AI pays for itself."
"If NRR belongs to someone, it belongs to everyone."

Resources & Links

Sheena Miles on LinkedIn
Women Defining AI, podcast and community
n8n

About FutureCraft

Stay tuned for more insightful episodes from the FutureCraft podcast, where we continue to explore the evolving intersection of AI and GTM. Check out the full episode for in-depth discussions and much more. To listen to the full episode and stay updated on future episodes, visit our website: https://www.futurecraftai.media/

Disclaimer: This podcast is for informational and entertainment purposes only and should not be considered advice.
The views and opinions expressed in this podcast are our own and do not represent those of any company or business we currently work for/with or have worked for/with in the past. Music: Far Away - MK2
Oct 30, 2025 • 52min

From Funnels to Playgrounds: Atlassian's Ashley Faus on Human-Centered Marketing in the AI Era

What happens when an Atlassian marketing veteran who decorates cakes and rides motorcycles decides the traditional marketing funnel is completely broken? You get Ashley Faus, Head of Lifecycle Marketing Portfolio at Atlassian, author of "Human-Centered Marketing," and today's guest on FutureCraft.

Ashley has spent 8+ years at Atlassian revolutionizing how B2B marketers think about customer journeys, replacing linear funnels with her "content playground" framework, where audiences can go up, down, and sideways through your content, just like kids on an actual playground.

In this episode, we get into:

Why ChatGPT 5 might be getting worse for marketing professionals (and what to use instead)
Erin's live demo of Gemini's deep research for account-based marketing that analyzes hundreds of sources
Ashley's content playground framework that treats audiences like humans, not funnel steps
How trust becomes your only defensible moat when AI can fake everything else
Why organizational silos are killing your customer experience (and how to fix them)
The "18-month rule" for career evolution in an AI-accelerated world

Whether you're a CMO fighting for budget, a product marketer drowning in requests, or a lifecycle specialist trying to prove ROI, Ashley breaks down how to keep humans at the center while leveraging AI as your creative co-pilot.

🛠 Tools & Mentions:

ChatGPT 5 (improved memory but weaker professional responses)
Gemini Pro (superior deep research capabilities)
Atlassian Rovo (AI agents and integrations)
NotebookLM (content analysis and mind mapping)
CASINO Framework (Context, Audience, Scope, Intent, Narrator, Outcome)
Content Playground Model (conceptual, strategic, tactical content depths)

🎯 Try This: Map your existing content using Ashley's playground framework: sticky note brainstorm → group themes → classify by depth (conceptual/strategic/tactical) and intent (buy/use/trust/help/learn).
🧠 Learn More from Ashley:
- Follow Ashley Faus on LinkedIn
- Read "Human-Centered Marketing: How to Connect with Audiences in the Age of AI"
- Explore Atlassian's Team Playbook

Timestamps:
00:00 Introduction and Disclaimer
02:15 Ken's ChatGPT 5 Reality Check
05:45 Erin's Gemini Deep Research Breakthrough
07:30 Live Demo: Account Research That Actually Works
18:20 Interview with Ashley Faus Begins
20:15 From Classical Singer to Marketing Revolutionary
25:40 Why She Wrote "Human-Centered Marketing" Now
32:10 Trust: The Thing You Can't Automate
38:25 Content Playground Framework Deep Dive
52:30 Breaking Down Marketing Silos Without Losing Your Mind
58:45 The 18-Month Rule for Career Evolution
01:02:15 Gladiator Round: AI-Powered Debate Prep
01:08:30 Lightning Round Rapid Fire
01:12:45 Key Takeaways and Episode Wrap

📥 Subscribe & Share: New episodes drop weekly. If Ashley's playground framework changed how you think about customer journeys, leave a review, share it with a friend, and tag us with your biggest takeaway.

About our Guest: Ashley Faus is the Head of Lifecycle Marketing Portfolio at Atlassian, author of "Human-Centered Marketing: How to Connect with Audiences in the Age of AI," and a Forbes contributor. In more than eight years at Atlassian, she's spanned corporate communications, product marketing, and lifecycle leadership. She's become known for replacing traditional marketing funnels with her content playground model, advocating for audience trust over vanity metrics, and showing how creativity—from musical theater to elaborate cakes—makes us better marketers.

Resources:
- Ashley's Book: "Human-Centered Marketing: How to Connect with Audiences in the Age of AI"
- LinkedIn: Ashley Faus

Stay tuned for more insightful episodes from the FutureCraft podcast, where we continue to explore the evolving intersection of AI and GTM. Catch the full episode for in-depth discussions and much more.
To listen to the full episode and stay updated on future episodes, visit the website.

Disclaimer: This podcast is for informational and entertainment purposes only and should not be considered advice. The views and opinions expressed in this podcast are our own and do not represent those of any company or business we currently work for/with or have worked for/with in the past.

Music: Far Away - MK2
Aug 14, 2025 • 58min

Acceptable Mistakes & Ruthless Prioritization: How Top PMMs Are Winning in AI GTM

Episode Summary: Rebecca Shaddix joins Erin and Ken to blow up tired go-to-market tropes and rewrite what it means to lead with product marketing in an AI-native era. She shares the frameworks behind "acceptable mistakes," why critical thinking is the superpower in a world of noisy AI outputs, and how to avoid chasing 80 experiments that go nowhere. If you're a CMO, PMM, or founder trying to separate signal from AI hype, this is your roadmap.

About Our Guest: Rebecca Shaddix is the Head of Product & Lifecycle Marketing at Garner Health, Forbes contributor, and GTM strategy pioneer. She's built GTM engines for high-growth SaaS and EdTech, founded Strategica, and is known for making complex data actionable (without losing trust or speed). Her frameworks are shaping the new AI playbook for marketers who want repeatable results, not just activity.

Timestamps:
00:59 Ken's AI Sandwich Framework
04:26 Erin's AI-Powered Book Series
07:10 Interview with Rebecca Shaddix
08:24 Rebecca on Acceptable Mistakes in AI Implementation
17:44 AI's Impact on Product Marketing
23:30 Balancing AI Training and Deep Research
28:41 AI Tools and Budget Constraints
30:32 Navigating the Rapid Evolution of AI in Business
30:59 Balancing Risk and Reward in AI Tool Selection
32:44 Effective Team Collaboration and AI Integration
37:08 Building Trust in AI Insights
45:15 The Future of Product Marketing
54:13 Lightning Round and Final Thoughts

Quote of the Episode: "Trust in AI starts with transparency and ends with collaboration. Bring your teams in early, and let them own the process." – Rebecca Shaddix

🎧 What You'll Learn:
- How to Make (the Right) Acceptable Mistakes: Rebecca's "acceptable mistakes" framework—why defining what you won't optimize is the move that unlocks true speed and clarity for GTM leaders.
- Experiments Without Strategy Are Chaos: Why most teams run too many experiments and how to build a ruthless prioritization model that gets buy-in before the test.
- The Real Role of AI in Product Marketing: How AI gives PMMs "junior analyst superpowers" but why human discernment, critical thinking, and cross-functional trust still win the day.
- Segmentation, ICP, and the New Power User: How machine learning is uncovering hidden patterns in the middle of your user base (not just among your superfans)—and why most marketers overweight the wrong signals.
- Building Trust in AI-Generated Insights: Rebecca's battle-tested approach to cross-functional buy-in, demystifying black box outputs, and making AI actionable across the org.
- Budgeting for AI When Cash Is Tight: The no-BS guide to picking AI tools (hint: treat it like every other investment—hypothesis, use case, ROI) and why you should always start manual.

🧠 Next-Level Insights:
- The difference between motion and momentum in modern marketing—why activity ≠ impact
- Why the "blank page" problem is now dead for good (and why that changes who wins in marketing teams)
- How to democratize AI experimentation without losing control—or trust
- The hidden risk: over-relying on your top users for feedback and missing the 10x opportunity in the "middle layer"

Action Steps:
- Audit your own "acceptable mistakes." What are you over-optimizing that doesn't matter?
- Try running a single, ruthlessly prioritized experiment—get buy-in, define the problem, THEN launch
- Empower your team to bring AI wins (and failures) to the table—share the learning
- Stop listening only to your power users. Find what the "middle" is doing and why.

Resources Mentioned:
- Human-Centered Marketing by Ashley Faus
- ChatGPT, Claude, Gemini (and why switching tools is easier than switching marketing automation)
- LinkedIn: Connect with Rebecca Shaddix

Stay tuned for more FutureCraft episodes at futurecraftai.media

Liked this episode? Rate us on Spotify/Apple, share with a forward-thinking marketer, or DM us with what you want to hear next. Let's keep crafting the future of GTM, together.

Music: Far Away - MK2
Aug 7, 2025 • 47min

On AI: Replacing Recruiters, Scaling Agents, and Getting Out of the Pilot Phase

Lennard Kooy, CEO of Lleverage, discusses how AI is reshaping business operations by emphasizing that companies care more about outcomes than the technology itself. He highlights the importance of 'assist before replace' strategies to drive AI adoption and reveals how Lleverage automated 70% of its hiring process. Lennard also demonstrates building a cold outreach AI agent live and shares practical tips for GTM leaders to stay relevant in an evolving landscape. This conversation is a wake-up call for those in recruitment and marketing!
Jul 31, 2025 • 40min

The AI Adoption Plateau: Why Change Management Still Rules Everything

Liza Adams, AI MarketBlazer and influential LinkedIn personality, returns to discuss the pressing challenges of AI adoption within marketing teams. She emphasizes that despite advancements in technology, human change management remains a significant hurdle. Liza shares a practical framework for implementing AI, highlights the innovative 'digital twin' strategy to enhance organizational readiness, and advocates for the '80% rule' that leverages human oversight with AI outputs. Her engaging live demonstration turns a dense marketing report into an interactive Jeopardy game, showcasing her dynamic approach.
Jul 24, 2025 • 48min

AI, AEO, and GTM Engineering: How to Build a B2B Marketing Engine

Episode Summary: In this packed episode, Lacey Miller joins Erin and Ken to demystify what it means to be a "Go-to-Market Engineer" in today's AI-fueled marketing landscape. She breaks down how she uses agentic AI workflows to build repeatable, high-output growth systems without the team bloat. If you've ever wondered how AI changes content strategy, brand building, or TikTok for B2B... this is your playbook.

🎧 What You'll Learn:
- Why the first 100 days in a marketing role now demands a full AI-first mindset
- How AEO (Answer Engine Optimization) flips SEO on its head and what it means for your content
- The creative tech stack behind Lacey's agent-led outbound machine (yes, Lindy + Replit + persona recognition)
- Why brand visuals and voice need prompt engineering too
- How AI is changing buying behavior—and what smart marketers are doing in response
- Practical ways to use podcasts, transcripts, and conversational content to win in LLM search
- The real ROI of TikTok for B2B and why your team needs a "TikTok hour"

🛠️ Gladiator Round: Behind the Scenes of Lacey's Stack
Lacey walks through how she classifies inbound leads, triggers AI workflows, and scales one-to-one GTM—all without a dev team. You'll see her build live in Replit and Lindy.
📢 Next Steps:
- Try using NotebookLM to create your own podcast content
- Share your AI wins or roadblocks with us on Twitter @FutureCraftpod

Timestamps:
01:00 Ken's AI Journey: Building Connectors
02:16 Erin's AI Research Project
03:37 Guest Introduction: Lacey Miller
04:24 Lacey Miller's Marketing Insights
07:43 The Role of AI in Modern Marketing
11:37 AI-Driven Search and Content Strategies
16:57 Challenges and Opportunities in AI Marketing
22:12 Future of AI in B2B Marketing
25:01 SEO Strategies for AI Enthusiasts
25:38 Trends in User Engagement
26:50 The Evolution of Search Behavior
29:22 Disrupting Traditional Advertising
36:01 The Rise of TikTok in B2B Marketing
40:20 Practical AI Tools for Marketers
41:17 Lightning Round: Quick Marketing Insights
44:09 Final Thoughts and Takeaways

About our Guest: Lacey Miller is a highly experienced and impactful marketing executive, currently spearheading Growth Marketing at Perigon, an AI context engine. She is recognized for her innovative approaches to AI Visibility Optimization (AIVO) and Answer Engine Optimization (AEO), for pioneering TikTok-for-B2B playbooks, and for her focus on leveraging AI in B2B SaaS content marketing. As a proven first-hire marketing leader, Lacey specializes in building robust Go-to-Market (GTM) strategies from the ground up, particularly for B2B, developer, and enterprise AI SaaS products. Her expertise lies in translating complex technical capabilities into compelling narratives that resonate with diverse audiences and drive revenue growth. At Perigon, she is actively driving category creation, positioning the company as a leader in real-time AI context. Prior to Perigon, Lacey served as Head of Marketing at Bezi, where she built the initial GTM framework and integrated AI into product strategy. Her experience also includes building high-performing marketing functions at LoudCrowd, where she significantly contributed to ARR and fundraising, and leading full-funnel GTM strategies at VertifyData.

Notable Quotes:
- "Robots are making us sound more human."
- "You can't blog your way into LLMs—you need conversation."
- "AI is not your replacement, it's your multiplier."
- "Your GPT isn't a toy. It's your co-pilot."
- "We're not just building campaigns anymore—we're building products."

Resources: Lindy.ai

Stay tuned for more insightful episodes from the FutureCraft podcast, where we continue to explore the evolving intersection of AI and GTM. Catch the full episode for in-depth discussions and much more.

To listen to the full episode and stay updated on future episodes, visit the FutureCraft GTM website.

Disclaimer: This podcast is for informational and entertainment purposes only and should not be considered advice. The views and opinions expressed in this podcast are our own and do not represent those of any company or business we currently work for/with or have worked for/with in the past.

Music: Far Away - MK2
Jul 17, 2025 • 52min

No More Slop: AI That Actually Works for GTM

In this engaging discussion, Christopher Penn, co-founder and Chief Data Scientist at Trust Insights, shares his insights on cutting through the fluff surrounding AI in marketing. He highlights why much AI content fails and introduces effective prompting frameworks to drive better results. Chris also demonstrates how to transform chaotic data into actionable strategies. Furthermore, he addresses the real challenges facing SaaS and education, emphasizing the need for personal branding as a safety net in an AI-driven world. This conversation is packed with practical wisdom and spicy truths!
Jul 10, 2025 • 48min

Inside the AI Agent Workflow: What n8n Makes Possible for Non-Technical Builders with Chase Hannegan

Chase Hannegan, founder of Chase AI and former Marine Corps Osprey pilot, shares his journey from military aviation to becoming a viral sensation in AI. He reveals why n8n is his go-to platform for building scalable agents without coding. Chase explains the crucial differences between AI workflows and true agents, and how to sidestep tutorial overload for effective building. He also walks through his personal assistant agent and offers practical tips like starting with a 'minimum viable agent' to enhance productivity using AI.
