The User Research Strategist: UXR | Impact | Career

Nikki Anderson
Aug 21, 2025 • 31min

Using AI as Your Research Intern | Ryan Glasgow (Sprig)

Listen now on Apple, Spotify, and YouTube.

Ryan Glasgow is the CEO of Sprig, an AI-native survey platform built to replace legacy tools like Qualtrics. Sprig combines advanced survey capabilities, AI-powered analysis, and an intuitive user experience to help research and product teams get richer insights, faster. Before starting Sprig, Ryan led product at Weebly and Vurb, where he saw how slow, fragmented research workflows could limit product velocity. Today, Sprig is used by companies like Stripe, DoorDash, Notion, and Netflix to run surveys, analyze results instantly with AI, and deliver insights that shape product decisions.

In our conversation, we discuss:
* How the research community's mindset toward AI has shifted from fear to experimentation.
* What it means to treat AI like an eager intern, and why that mental model changes everything.
* How to break your workflow into "job steps" and plug AI in where it can actually help.
* What Sprig is building to support both qual and quant researchers at different levels.
* Why sharing raw data with stakeholders (not just summaries) might be the next big unlock for research impact.

Some takeaways:
* Many teams outside of research are still figuring out what AI is good for. Meanwhile, UXRs are already experimenting with real tasks, like synthesis, survey creation, and study planning, because they've had to. Ryan points out that researchers are becoming internal AI evangelists, getting asked to present their workflows to other departments. The field's willingness to experiment is turning into a quiet leadership moment.
* The best way to work with AI? Pretend it's a new intern. It's fast, eager, and can take on a ton, but it needs oversight, review, and clear direction. That framing unlocks a very different way of thinking: not "Will it replace me?" but "What can I delegate to it so I can focus on higher-impact work?" That shift is showing up in how researchers manage tasks across their workflow.
* Before you plug AI into your stack, audit your actual workflow. Break it into steps (study planning, stakeholder requests, distribution, synthesis, insight sharing) and decide where AI can support each one. Sprig is built around this approach, helping researchers insert AI at specific job steps. Trying to use one tool for everything often backfires; success comes from surgical fits, not general use.
* Sprig is expanding from in-product surveys into long-form survey support with built-in AI features for study creation, open-text clustering, and synthesis. Ryan shared how qual researchers use AI to draft surveys, get feedback, and even generate first-pass summaries of open ends. Other tools like NotebookLM and Gamma help researchers do faster analysis and deck creation without skipping the rigor.
* One of the most radical ideas Ryan shared: share your raw research data (transcripts, open ends, survey results) so stakeholders can ask their own questions. With tools like ChatGPT or NotebookLM, that data becomes living, queryable insight. It turns research from a static deliverable into an exploratory tool. It also takes the pressure off researchers to have all the answers, all the time.

Where to find Ryan:
* Website
* LinkedIn
* X

Stop piecing it together. Start leading the work.

The Everything UXR Bundle is for researchers who are tired of duct-taping free templates and second-guessing what good looks like. You get my complete set of toolkits, templates, and strategy guides, used by teams across Google, Spotify, and more, to run credible research, influence decisions, and actually grow in your role. It's built to save you time, raise your game, and make you the person people turn to, not around.

→ Save 140+ hours a year with ready-to-use templates and frameworks
→ Boost productivity by 40% with tools that cut admin and sharpen your focus
→ Increase research adoption by 50% through clearer, faster, more strategic delivery

Interested in sponsoring the podcast?

Interested in sponsoring or advertising on this podcast? I'm always looking to partner with brands and businesses that align with my audience. Book a call or email me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!

The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.userresearchstrategist.com/subscribe
Aug 8, 2025 • 31min

Researching for Real Life | Loren Flores & Kathryn Ambroze (JPMorgan Chase)

Kathryn Ambroze, a behavioral neuroscientist and user researcher at JPMorgan Chase, teams up with Loren Flores, a UX researcher specializing in user-centered financial solutions. They delve into the intricacies of end-to-end research, emphasizing its importance for understanding the complete customer journey. The duo discusses using habit loops to influence real user behavior and the value of live account interviews in capturing authentic insights. They also share strategies for fostering collaboration and alignment in corporate environments, ensuring the customer remains central to every design decision.
Jul 24, 2025 • 35min

Business-Savvy Research and Onion Layers of UX | Amanda Stockwell (Stockwell Strategy)

Amanda Stockwell runs Stockwell Strategy, specializing in user experience research and innovation. She discusses the concept of 'research onions,' emphasizing layers that go beyond core skills in UX. Amanda highlights the importance of aligning user research with business goals for actionable insights. She shares tips for solo researchers on effective collaboration and networking, advocating for authentic connections. The conversation lightens up with fascinating lobster facts, showcasing Amanda's love for fun amidst serious discussions.
Jun 26, 2025 • 27min

The Work Research Enables | Dave Hora (Consultant)

Dave Hora, an independent research consultant from Porto, Portugal, shares his extensive experience in helping teams enhance product processes. He stresses the need for researchers to understand broader strategic goals and the importance of contextual awareness in their work. The discussion also covers journey mapping as a tool for identifying decision patterns, navigating ambiguous organizational strategies, and aligning pet projects with company objectives. Tune in for valuable insights on elevating research through collaboration and strategic alignment!
Jun 12, 2025 • 34min

Reporting Without Control | Steve Jenks (MeasuringU)

Steve Jenks, a UX researcher at MeasuringU and faculty member at the University of Denver, shares his journey from academia to user research. He discusses the art of conducting research without direct influence, emphasizing the significance of understanding business needs and shaping decisions effectively. Jenks also dives into client management, offering insights on guiding them toward appropriate methodologies. With tips on collaboration and maintaining client engagement, he underscores the importance of ongoing skill enhancement for researchers, regardless of their organizational maturity.
May 29, 2025 • 35min

Reframing Democratization | Ned Dwyer (Great Question)

Listen now on Apple, Spotify, and YouTube.

Ned Dwyer is the Co-Founder and CEO of Great Question, the all-in-one UX research platform designed to democratize research at scale. After two successful exits as a founder, Ned launched his biggest idea to date: helping enterprise teams better understand their users. Ned has led Great Question in empowering UX researchers, designers, and product teams to collaborate seamlessly and uncover the insights needed to build something great. With over a decade of experience at the intersection of product, design, and research, Ned has driven innovation and scaled businesses that solve complex challenges for enterprises.

Outside of his professional pursuits, Ned loves spending time in sunny Oakland, California with his wife, two kids, and three cats.

In our conversation, we discuss:
* What democratization really means and why it's not just about "everyone doing research."
* The shift in sentiment and adoption, from early-stage startups to 16,000-person enterprises.
* How researchers can avoid being sidelined by becoming facilitators, not gatekeepers.
* The role of tools, policies, and AI in scaling high-quality research safely across teams.
* Strategies for building the business case for tools and training, especially in resource-limited orgs.

Some takeaways:
* Democratization is already happening, whether you're involved or not. Ned emphasizes that research is already being done across organizations by non-researchers, just not always well. The opportunity for researchers is to step into a facilitator role: setting standards, defining guardrails, and ensuring quality without hoarding control.
* Big orgs are leading the way, not just scrappy startups. Contrary to early assumptions, the most aggressive adopters of democratization aren't just startups; they're enterprises with thousands of employees. The difference? These organizations invest in scalable infrastructure, permissions, and training to empower safe, responsible research at scale.
* Guardrails matter more than gatekeeping. With the right systems, democratization doesn't have to mean chaos. Great Question includes features like eligibility criteria, access controls, incentive limits, study approval flows, and AI-powered report validation. These guardrails enable research at scale without compromising integrity or participant experience.
* Make your case by speaking leadership's language. To advocate for democratization tools or training, tie your request to business goals: reduced legal risk, better participant experience, efficiency gains, and fewer headcount needs. Use the "researcher effort score" to quantify pain points and show progress over time.
* Want more influence? Get close to the money. Strategic researchers don't wait for requests; they go to sales, marketing, and product to understand pain points and proactively solve them. Running win/loss research or unblocking customer access helps build trust, grow research demand, and elevate your role beyond usability testing.

Where to find Ned:
* Website
* LinkedIn: Great Question
* LinkedIn: Ned
* Twitter/X
May 19, 2025 • 45min

Resume critique series - Part one

Dive into a resume critique session that reveals essential tips for job seekers. Common mistakes are dissected, emphasizing the importance of effective language and content placement. Learn how to refine bullet points by highlighting unique contributions and quantifiable impacts to stand out. The discussion also covers crafting resumes with specificity and avoiding pitfalls, ensuring your achievements catch the eye of hiring managers. Transform your application strategies and increase your chances of landing interviews!
May 16, 2025 • 31min

Inside Games User Research | Steve Bromley (Games User Research)

Listen now on Apple, Spotify, and YouTube.

Steve is a games user research consultant, helping teams use player insight to create successful games. He works with publishers, platforms, and studios of all sizes to transform their game development process and build product strategies that combine player data with creativity. He works from ideation to post-launch to de-risk game development and make games players love.

Prior to this, he was a senior user researcher for PlayStation and worked on many of their top European titles, including Horizon Zero Dawn, SingStar, the LittleBigPlanet series, and the PlayStation VR lineup.

Steve started the Games User Research mentoring scheme, which has linked hundreds of students with industry professionals from top games companies such as Sony, EA, Valve, Ubisoft, and Microsoft. He wrote the bestselling book How To Be A Games User Researcher to share the expertise needed to work in the games industry.

He regularly speaks at games industry conferences and on podcasts about games user research and playtesting, and has been recognised as a member of BAFTA. He also wrote the bestselling book Building User Research Teams, and helps teams build impactful research practices in-house.

In our conversation, we discuss:
* The evolution of Steve's career from early days at PlayStation to running his own games UX consultancy.
* The difference between research in games vs. traditional tech, especially around the lack of discovery work.
* How to measure subjective experiences like "fun," and why that starts by redefining what "fun" even means.
* The influence of secrecy, creative ownership, and marketing pressure on research methods in the games industry.
* Real-world methods used in games UX, like mass playtesting labs and segment-based multiplayer analysis.

Some takeaways:
* Research in games is heavily evaluative. Unlike traditional UX, which often starts with uncovering user needs, games UX usually kicks in once there's a playable prototype. Because the "user need" in games is often just "make it fun," research focuses more on assessing emotional impact and usability than on early-stage exploration.
* Measuring fun is both subjective and contextual. Teams often ask, "Is this fun?" but that question is too broad to act on. Steve explains that researchers must first help define what kind of fun is intended, whether that's emotional engagement, replay behavior, or challenge. Only then can appropriate metrics or qualitative signals be applied.
* Creative ownership adds complexity to stakeholder management. Games are seen as artistic work. Designers may be deeply emotionally invested in their ideas, which can make it harder to embrace critical feedback. This makes relationship-building, empathy, and framing feedback constructively especially important in games UX.
* Secrecy shapes everything, from methods to sampling. Due to high financial stakes and aggressive marketing timelines, games researchers often can't test publicly. This leads to lab-based studies with high participant control. Mass playtesting labs (20–80 people at once) are common for running controlled, large-scale tests without leaking content.
* Toxicity and matchmaking need research too. Games with multiplayer or social components must test how players interact, especially when strangers are thrown together online. Teams look at voice/chat features, segmentation by playstyle, and matchmaking fairness to reduce toxicity and create balanced experiences.

Where to find Steve:
* Website
* LinkedIn
* Twitter/X
* BlueSky
May 13, 2025 • 43min

Inside Insight: How I use Optimal to set up a prototype test

In this episode, I cover:
* Common mistakes teams make when prototype testing becomes routine or rushed.
* A method for deciding whether a prototype test is even the right approach.
* Clear goal-setting techniques that make your test focused and relevant.
* How to define metrics that show both research quality and product value.
* Writing user tasks that reflect real behavior and reveal friction points.

Key takeaways:
* Low-fidelity prototypes limit learning. If your design doesn't give people room to explore, or fail, you won't see how they truly interact with it. Higher-fidelity versions are much more effective for unmoderated studies.
* Not every question needs a usability test. If you're looking to understand motivations or needs, observing task flows may not be the right method. Start by asking what kind of data you're actually trying to gather.
* Goals guide everything. Strong prototype tests begin with clear goals. They shape the tasks, help with team alignment, and create a direct line between what you learn and what changes.
* Track outcomes that matter to your team. Define a few ways you'll measure success before the test begins, such as friction points found, task completion behaviors, or whether changes from the study affect real usage.
* Write tasks people can relate to. Use short, specific scenarios rooted in familiar behavior. Instead of vague prompts, give people a purpose and context so their actions reflect how they'd use the product in real life.

The prototype guide:
Grab the full prototype guide with all the examples and formulas here and try it out with your next project (or with a project you recently did!).

Try Optimal:
Want to try this out on Optimal? You can grab a 20% discount using code Prototype2025 at checkout.
May 2, 2025 • 33min

Designing for the Real World | Erik Stoltenberg Lahm (The LEGO Group)

Listen now on Apple, Spotify, and YouTube.

Erik is a behavioral scientist with a passion for understanding how people, especially kids, interact with digital experiences. He works at The LEGO Group, where he leads behavioral research to create safer, more inspiring, and more playful digital spaces for children. He specializes in using behavioral science, experimentation, and innovative research methodologies to uncover what kids need and love in digital play.

Beyond his professional role, he is a self-proclaimed research methodology nerd, always exploring better ways to understand and test how kids engage with the digital world.

In our conversation, we discuss:
* Why ecological validity is critical to meaningful product testing and what it means in practice.
* How Erik approaches testing with kids at LEGO, including the need for playful environments and cognitive load considerations.
* The pitfalls of lab-based research and why researchers must move beyond "zoo-like" conditions to see real-world behavior.
* Ways to mitigate social desirability and authority bias, especially when conducting research with children.
* How remote research, diary studies, and mixed methods can provide deeper behavioral insights, if done with context in mind.

Some takeaways:
* Validity is about realism. Erik defines ecological validity as the extent to which research reflects real-world behavior. While traditional labs optimize for internal validity, in product development what matters is whether your findings will translate when people are distracted, tired, or juggling multiple tasks.
* Don't study lions at the zoo. One of Erik's standout metaphors urges researchers to avoid overly sanitized environments. Testing products in sterile labs might remove variables, but it also strips away the chaotic, layered reality where your product must actually succeed. Aim for the "Serengeti," not the zoo.
* Researching with kids requires creativity, play, and caution. Kids aren't small adults; they process and respond differently. Erik emphasizes using play as a language, minimizing cognitive load, and focusing on behavioral observation over verbal responses. A child saying "I loved it" means little if they looked disengaged the whole time.
* Remote testing can work if grounded in real-life context. Remote methods like diary studies and follow-up interviews can capture valuable insights, especially if paired with contextual in-person research first. The key is triangulating methods and validating self-reports with observed behavior.
* Think beyond usability; map the behavior chain. A product's ease of use in isolation means little if the behavior it enables is derailed by real-life obstacles. Erik illustrates this with a simple example: refilling soap sounds easy until you're cold, wet, and have other priorities. Designing for behavior means understanding the entire chain around your product.

Where to find Erik:
* LinkedIn
