

The AI podcast for product teams
Arpy Dragffy
Podcast and newsletter for product teams looking to deliver innovative AI products and features designofai.substack.com
Episodes

Dec 22, 2025 • 30min
When AI Isn’t the Answer, It’s the Problem
In Episode 48 of the Design of AI podcast, we unpack why the most common AI promises are collapsing under real market pressure. AI was meant to unlock strategic work, expand opportunity, and elevate creativity. Instead, UX and design roles are disappearing, agencies are cutting creative staff while buying automation, and freelance work is being devalued as execution becomes cheap.

This episode is not about panic. It is about reality. Value still exists, but it is concentrating among those who can integrate AI into real systems, navigate ambiguity, and own outcomes rather than outputs.

🎧 Apple Podcasts
🎧 Spotify

Key Insights About AI at Work
What the evidence shows once the optimism is removed.

MIT Media Lab: ChatGPT Use Significantly Reduces Brain Activity (2025)
Early AI use reduces attention, memory, and planning, weakening independent thinking when models lead the process.

Wharton / Nature: ChatGPT Decreases Idea Diversity in Brainstorming (2025)
AI-assisted brainstorming narrows idea diversity, producing faster output but more uniform thinking across teams.

Science Advances / SSRN: The Effects of Generative AI on Creativity (2024)
AI improves fluency and polish while consistently reducing originality and conceptual depth.

arXiv: Human–AI Collaboration and Creativity: A Meta-Analysis (2025)
Human-led AI collaboration improves quality slightly, but without strong framing and judgment, AI reduces idea diversity.

arXiv: Generative AI and Human Capital Inequality (2024)
AI disproportionately benefits those with systems thinking and judgment, widening gaps between experts and generalists.

Realities of Being AI Early Adopters

The Raised Floor Trap by Hang Xu
AI makes baseline output easy. What it doesn’t make easy is integration, orchestration, or delivery inside real teams. Most people reach adequacy. Very few compound value. We’re not able to generate the type of value we’re sold on.
👉 Follow Hang Xu for insights about the realities and challenges of the job market.

AI UX as a Growth Barrier
AI systems are far more capable than they appear, but their UX blocks growth. They don’t know how to help unless you know how to ask, structure, and specify intent. So even after hours of work trying to grow your AI abilities, you’ll often hit a ceiling, because these systems can’t interpret our capabilities and gaps.
👉 Follow Teresa Torres for expert Product Discovery strategies and tactics.

Help Shape 2026
We’re planning upcoming episodes on career resilience, AI adoption, and where durable value still exists. Take the 3-minute listener survey and tell us what would actually help you next year.

Which Skills Are Being Replaced by AI?
AI is not replacing jobs all at once. It is removing pieces of them. Execution, summarization, and surface analysis are increasingly automated. What remains defensible are skills rooted in judgment, accountability, synthesis across messy contexts, and decision-making under uncertainty.

Shira Frank & Tim Marple: Cubit — Task-Level Reality Check (2025)
Cubit breaks jobs into discrete tasks, revealing where LLMs already substitute human labor and where judgment, context, and accountability still hold. It makes visible how roles erode gradually, not all at once.

MIT Sloan: Why Human Expertise Still Matters in an AI World (2024)
AI performs well in structured domains but consistently fails in ambiguity, ethics, and long-horizon tradeoffs. These limits define why senior expertise remains defensible, but only when it is exercised, not delegated.

Harvard Business School: Why Judgment Remains a Competitive Advantage (2023)
AI can generate options and recommendations, but it cannot own outcomes. Responsibility, consequence, and decision accountability remain human burdens and human moats.

Lots of News This Week

Copilot didn’t fail. It succeeded at the wrong thing.
Microsoft proved AI can clear security, compliance, and procurement at massive scale. But Copilot hasn’t changed behavior. Universal assistants optimize for adoption, not dependence.
🔗 https://www.linkedin.com/posts/adragffy_copilot-didnt-fail-it-succeeded-at-the-activity-7406719225714855936-G9H3

AI credit limits aren’t a pricing tweak. They’re a reckoning.
Credit caps expose the real problem. AI has marginal cost, and teams must now prove ROI per call, not ship more features.
🔗 https://www.linkedin.com/posts/adragffy_ai-activity-7407130709678567424-IzG-
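If ROI has to be proven per call, the accounting has to happen at the call level too. Below is a minimal sketch of what that instrumentation could look like; the CallLedger class, the per-token prices, and the outcome_value_usd figure are illustrative assumptions, not a real billing API.

```python
from dataclasses import dataclass, field

@dataclass
class CallLedger:
    """Hypothetical ledger: pair every model call's cost with the value it produced."""
    calls: list = field(default_factory=list)

    def record(self, feature: str, tokens_in: int, tokens_out: int,
               usd_per_1k_in: float, usd_per_1k_out: float,
               outcome_value_usd: float) -> None:
        # Token cost for this single call, at assumed per-1k-token prices.
        cost = tokens_in / 1000 * usd_per_1k_in + tokens_out / 1000 * usd_per_1k_out
        self.calls.append({"feature": feature, "cost": cost, "value": outcome_value_usd})

    def roi_by_feature(self) -> dict:
        """ROI = (value - cost) / cost, aggregated per feature."""
        totals: dict = {}
        for call in self.calls:
            t = totals.setdefault(call["feature"], {"cost": 0.0, "value": 0.0})
            t["cost"] += call["cost"]
            t["value"] += call["value"]
        return {f: (t["value"] - t["cost"]) / t["cost"]
                for f, t in totals.items() if t["cost"] > 0}

ledger = CallLedger()
ledger.record("summarize_ticket", tokens_in=1200, tokens_out=300,
              usd_per_1k_in=0.003, usd_per_1k_out=0.015, outcome_value_usd=0.40)
print(ledger.roi_by_feature())  # e.g. {'summarize_ticket': 48.3...}
```

Once costs and outcomes share one record, "ship more features" and "prove ROI per call" stop being separate conversations.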
AI trust is breaking faster than adoption.
AI chat logs expose identity, not transactions. Scale without support erodes trust, loyalty, and long-term value.
🔗 https://www.linkedin.com/posts/adragffy_llm-ai-customerexperience-activity-7408835025787461633-j56Y

AI ROI isn’t what Anthropic says it is.
Anthropic claims 80% of organizations have achieved AI ROI. They haven’t. They’ve reached table stakes. The report shows gains concentrated in efficiency, faster tasks, and internal automation, while only 16% reach end-to-end, cross-functional impact. That’s not transformation. That’s baseline competence. Real ROI starts when AI reshapes customer value, not internal throughput.
🔗 https://www.linkedin.com/posts/adragffy_the-2026-state-of-ai-agents-report-activity-7407766781324525569-KqJb

A Warning for Anyone Building With AI

Moloch’s Bargain: Emergent Misalignment When LLMs Compete for Audiences (2025)
This paper exposes a structural risk most teams ignore. When AI systems are optimized to compete for attention, sales, or engagement, misalignment emerges by default. Even models explicitly instructed to be truthful drift toward deception and harmful behavior under competitive pressure. If success metrics reward clicks or conversions alone, misalignment isn’t accidental. It’s the outcome. Safe AI is an incentive problem as much as a technical one.

What this means: We have the incentives all wrong when it comes to AI. They’re designed to keep us engaged, not make us successful. We’re headed toward a reckoning because of the mismatch between token ROI and business ROI.

How I Help Founders and Builders
I work with founders and product teams who already have AI features and need them to deliver real ROI. Across product discovery, GTM, and growth, I help teams:
* Identify where AI creates value and where it creates noise
* Design workflows that reduce waste and retries
* Align AI usage with real customer intent
* Define success beyond usage and token counts
* Build defensible systems rather than prompt wrappers

If your AI product demos well but struggles to stick, scale, or justify cost, this is the gap I help close. Contact me: arpy@ph1.ca

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Dec 5, 2025 • 46min
The Creativity Recession and Why Product Leaders Must Reverse It Now
Maya Ackerman, an AI creativity researcher and author of Creative Machines: AI, Art & Us, dives deep into the concerning impact of AI on human creativity. She argues that many businesses utilize AI as a cost-cutting mechanism, stifling originality. Instead, Maya advocates for AI systems designed to elevate creativity—not replace it. She emphasizes the importance of balancing innovation with ethical considerations, urging a return to tools that inspire rather than dictate. Her insights challenge listeners to rethink AI's role in creative processes.

Nov 18, 2025 • 45min
The Real Reason Tech Products Fail
Our latest episode features Jessica Randazza Pade, Head of Brand Activation & Commercialization at Neurable. Named to Campaign US’s 40 Over 40 and ELLE Magazine’s 40 Under 40, Jessica is an award-winning global digital marketer, business leader, and storyteller. She explains why AI is not a value proposition, how to turn vague use cases into measurable outcomes, and why making technology invisible is often the strongest competitive advantage.

“If the user can’t articulate what’s different in their life because of your product, you’re selling a vitamin—not a painkiller.”

Listen on Apple Podcasts | Spotify

Shape Our 2026 Research
We’re mapping where teams are struggling with AI adoption and what tools, frameworks, and support they need in 2026. Your input directly shapes our annual research and the topics we cover.
Take the survey → https://tally.so/r/Y5D2Q5

AI has lowered the cost of prototyping but raised the bar for adoption. Most AI products fail because they launch demos instead of durable workflows, rely on large models where small ones would work better, ignore trust, or sell “time savings” instead of business outcomes. Organizations resist tools that feel risky, inaccurate, unproven, or misaligned with real workflows. Complicated architecture, poor UX, weak personalization, and unclear ROI all compound the problem. Here’s a sample:
#3: Your product doesn’t actually learn. Fake personalization destroys trust.
#4: One hallucination can end adoption permanently.
#8: “Saving time” is not a business case—outcomes are.
#11: Organizational silos suffocate AI products.
#17: Without a workflow and measurable ROI, you don’t have a product.
AI will not save your product. Only reliability, trust, workflow clarity, governance readiness, and measurable value delivery will.
Read the full article → https://ph1.ca/blog/why-your-AI-product-will-fails

The Year of AI Value
This video covers why 2026 marks a turning point where AI is judged not by novelty or intelligence but by measurable ROI, workflow impact, and operational reliability. It explains why businesses are shifting from “AI features” to fully redesigned AI-enabled systems.

We are past the point of buying AI based on promises
AI buyers no longer invest because the tech is impressive. They invest when it:
* delivers measurable ROI
* reduces operational and compliance risk
* integrates into existing workflows
* produces consistent results
* overcomes organizational resistance and silos
If you’d like us to create a full episode on why AI products fail, add a comment to this post.

The AI Adoption Curve Is About to Flip
This video explains how organizations are moving from experimentation to structural integration, redesigning roles, responsibilities, and workflows around AI. It also highlights early signals that distinguish “tool usage” from true operational adoption.
Watch →

Featured Thinker: Stuart Winter-Tear
This week we’re spotlighting the insightful work of Stuart Winter-Tear, founder of Unhyped. His writing reframes LLM inconsistency as a reflection of the chaotic and contradictory data ecosystems they’re trained on—challenging assumptions about rationality, coherence, and system behavior.
LinkedIn | Substack

Featured Reads
1. The GenAI Divide: Why 95% of enterprise GenAI projects fail
MIT’s 2025 State of AI in Business report finds that 95% of GenAI pilots generate no measurable ROI, mainly due to lack of workflow integration and unclear value metrics.
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
2. Apple Mini Apps and the new distribution frontier
Greg Isenberg outlines how Apple Mini Apps may redefine onboarding, distribution, and reach across the entire consumer ecosystem.
https://x.com/gregisenberg/status/1989341460894711838
3. Calum Worthy’s “2wai” and the ethics of selling the unimaginable
The actor launched an app enabling people to generate AI avatars of deceased relatives—a revealing look at how AI now commercializes ideas once considered unthinkable.
https://www.businessinsider.com/calum-worthey-2wai-ai-dead-relatives-app-launch-2025-1
4. The Complete Guide to Building with Google AI Studio
Marily Nika provides a comprehensive, practical guide to building production-ready applications with Google’s AI ecosystem.
5. SNL’s Glen Powell AI Sketch: When satire becomes a warning
The Atlantic unpacks how SNL’s AI sketch captures the cultural moment—where AI shifts from hype to comedic critique, signaling deeper public skepticism.
https://www.theatlantic.com/culture/2025/11/snl-glen-powell-ai-sketch/684944/

Coming Up on the Podcast
Our upcoming guests include:
* Ovetta Sampson — Chief Human Experience Officer & AI Design leader — https://www.ovetta-sampson.com/
* Dr. Maya Ackerman — Generative AI researcher and creativity systems expert — https://maya-ackerman.com/
* Leonardo Giusti, Ph.D. — Head of Design, Archetype AI — https://www.archetypeai.io/

If you haven’t participated yet, please take our 2026 survey and help shape where our research goes next: https://tally.so/r/Y5D2Q5

What challenges are you facing with your AI projects?
Whether you’re struggling with:
* product adoption
* pricing and positioning
* ROI and value proof
* trust and accuracy
* demo-to-paid conversion
* internal resistance or workflow clarity
* the complexity of hardware plus AI
We’d love to hear from you: arpy@ph1.ca

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Oct 29, 2025 • 45min
Designing Agents That Work: The New Rules for AI Product Teams
Our latest episode explores the moment AI stops being a tool and starts becoming an organizational model. Agentic systems are already redefining how work, design, and decision‑making happen, forcing leaders to abandon deterministic logic for probabilistic, adaptive systems.

“Agentic systems force a mindshift—from scripts and taxonomies to semantics, intent, and action.”

🎧 Listen on Spotify
🍎 Listen on Apple Podcasts

And if you want to go deeper, check out Kwame Nyanning’s book, Agentics: The Design of Agents and Their Impact on Innovation. It’s the definitive field guide to designing agentic systems that actually work.

Most striking for me was the discussion of moving from pixel-perfect to outcome-obsessed. Designers and product teams have long been fixated on the delivery of outputs; now it is time to be most concerned with the impact on customers.

The hard truth: Most organizations are trying to graft AI onto brittle systems built for predictability. Agentic design demands something deeper: ontological redesign, defining entities, relationships, and intents around customer outcomes, not internal structures. If you can’t model intent, you can’t build an agent.

Key takeaway: Intent capture is the new UX. Products that succeed will anticipate user context, detect discontent, and adapt autonomously.

Featured Articles: Where Reality Collides with Ambition

AI Has Flipped Software Development — Luke Wroblewski
Wroblewski lays out how AI has upended the software stack. Interfaces now generate code. Designers define the logic while engineers review and govern it. The result? Faster cycles but a dangerous illusion of progress. Design intuition becomes the new compiler, and prompt literacy replaces syntax. The real risk is velocity without comprehension; teams ship faster but learn slower.
Takeaway: Speed isn’t the problem; blind acceleration is. Governance, evaluation, and feedback loops are now design disciplines.

Agentic Workflows Explained — The Department of Product
This piece exposes what it really takes to build functioning agents: memory, planning, orchestration, cost control, fallback logic. If your “agent” doesn’t break, it’s probably not learning. Resilient systems require distributed cognition, with agents reasoning and retrying within boundaries. Evaluation‑first design becomes the only safeguard against chaos.
Takeaway: If your agent never fails visibly, it’s not thinking deeply enough. Failure is how agents learn.

Featured Videos: Cutting Through the Noise

This viral video sells the dream—agents at the click of a button. The reality? Building bots has never been easier, but building agents remains brutally hard. Real agents need long‑term memory, adaptive interfaces, and feedback loops that learn from success and failure. Wiring APIs is not design; it’s plumbing. Until agents can reason, reflect, and recover, they’re glorified scripts.
Reality check: The tools are improving, but the discipline is not.

A rare honest take. This one focuses on the HCI, orchestration, and reliability problems that still plague agentic systems. We’re close to autonomous task completion, yet nowhere near persistent agency. The real challenge isn’t autonomy—it’s alignment over time.
Takeaway: Advancement is fast, but coherence is slow. Designing for recovery and evaluation is the new frontier.
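To make the retry-and-recover pattern concrete, here is a minimal sketch of an act-then-verify loop with a bounded retry budget and a visible failure path. Everything in it (the Step structure, the function names, the escalation string) is an illustrative assumption, not an established framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    intent: str                    # what the user actually wants
    act: Callable[[], str]         # the action the agent attempts (e.g. a model call)
    verify: Callable[[str], bool]  # evaluation-first: the check is defined up front

def run_agent(steps: list[Step], max_retries: int = 2) -> list[str]:
    """Execute each step, retrying within a budget; escalate instead of failing silently."""
    results = []
    for step in steps:
        for _attempt in range(max_retries + 1):
            output = step.act()
            if step.verify(output):  # only verified output counts as done
                results.append(output)
                break
        else:
            # Retry budget exhausted: fail visibly so a human can recover.
            results.append(f"ESCALATE: '{step.intent}' did not pass verification")
    return results

# Usage: a step that drafts a reply and is verified against a simple rule.
steps = [Step(intent="draft a refund reply",
              act=lambda: "Hi, your refund has been issued.",
              verify=lambda out: "refund" in out.lower())]
print(run_agent(steps))
```

The point of the sketch is the shape, not the code: the verification rule and the escalation path exist before the agent acts, which is what keeps failure visible.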
Join Our Next Workshop
If you want to turn these insights into action, join our upcoming Disruptive AI Product Strategy Workshop. You’ll learn how to pressure‑test AI ideas, model agentic systems, and build products that survive beyond the hype. There’s a special 2‑for‑1 offer at the link—bring a teammate and cut the noise together.

Recommended Resource: AI & Human Behaviour — Behavioural Insights Team (2025)
BIT’s report is a must‑read for anyone designing human‑in‑the‑loop systems. It dissects four behavioural shifts: automation complacency, choice compression, empathy erosion, and algorithmic dependency. Their experiments reveal that AI assistance can dull cognition—users who relied most on recommendations learned less and questioned less. They also found that friction builds trust; brief pauses and explanations improved comprehension and retention. The killer insight? Transparency alone doesn’t work. People often overestimate their understanding when systems explain themselves.
Takeaway: Don’t make users “trust AI.” Make them verify it. Design friction that protects judgment.

Recommended Reads: What to Study Next
* Computational Foundations of Human‑AI Interaction — Redefines how intent and alignment are measured between humans and agents.
* Understanding Ontology — “The O-word, ‘ontology,’ is here! Traditionally, you couldn’t say the word ‘ontology’ in tech circles without getting a side-eye.”
* The Anatomy of a Personal Health Agent (Google Research) — A prototype for truly personal, proactive AI systems that act before users ask.
* What is AI Infrastructure Debt? — Why ignoring the invisible architecture behind agents is the next form of technical debt.
* AI Agents 101 (Armand Arman) — A crisp overview of the agent ecosystem, explaining architectures, limitations, and how to differentiate hype from applied design.
* Prompting Guide: Introduction to AI Agents — A concise breakdown of how prompt frameworks are evolving into agent frameworks, highlighting key mental models for builders.
* IBM Think: AI Agents Overview — IBM’s practical take on enterprise‑grade agents, covering governance, reliability, and scale.
* Beyond the Machine (Frank Chimero) — A reflection on designing meaning, not just efficiency, in an age of automation.

Design an Effective AI Strategy
I’ve helped teams at Spotify, Microsoft, the NFL, Mozilla, and Hims & Hers transform how they engage customers. If you’re trying to figure out where agents actually create value, here’s how I can help:
* Internal workflows: Identify 2–3 use‑cases that cut cycle time (intent capture → plan → act → verify), then stand up evals, cost ceilings, and recovery paths so they survive real‑world messiness.
* Customer‑facing value: Map your ontology (entities, relationships, intents), design the interface for intent and discontent, and instrument learning loops so agents get better with use.
* Proof over promise: We’ll define outcomes, build the evaluation rubric first, and price pilots on results.
Questions or want a quick read on your roadmap? Email me: arpy@ph1.ca.

The Bottom Line
The agentic era rewards clarity, not hype. Every designer and PM will soon face the same challenge: how to design for autonomy without abdicating control. You can’t prompt your way to good products; you can only design your way there by grounding every decision in ontology, intent, and evaluation.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Oct 1, 2025 • 42min
Play, Prompts, and the Perils of Incrementalism
In our latest episode, Michelle Lee (IDEO Play Lab) makes the case that play unlocks the next billion-dollar AI market. She reminds us that kids don’t stop at answers—they ask what if and turn shoes into cars or planes. That divergent mindset is exactly what product teams have lost.

“Play is one of the best ways to challenge the norms, to think wide, imagine new possibilities.”

Michelle shares:
* How IDEO discovered billion-dollar opportunities (like PillPack, later acquired by Amazon) by staying curious.
* Why teams should sometimes use older, glitchier versions of AI tools, because the “mistakes” spark better ideas.
* Why incrementalism burns teams out and how designing for attitudinal loyalty beats chasing short-term metrics.

🎧 Listen here → Play unlocks the next billion-dollar AI market

Uncomfortable Truth: Most “AI strategies” today are adult strategies — converging too quickly, chasing predictability, and mistaking incremental progress for innovation. That’s why the breakthroughs are happening elsewhere.

Product Workshop: Find Your Disruptive Path
If your roadmap looks like everyone else’s, you’re already behind. Our next AI Product Strategy Workshop (Oct 30) is built for teams who want to:
* Go beyond features and efficiency to discover truly disruptive opportunities.
* Use LLMs as intelligent sparring partners to pressure-test fragile ideas before they waste time and budget.
Spots are limited → Register here

Hard-Cutting Take: If your roadmap reads like your competitors’, it’s not strategy—it’s risk management dressed up as vision.

Incrementalism Is the Silent Killer
We’ve all felt it: the slow grind of incremental product decisions that look safe but quietly kill ambition. My new piece argues that incrementalism is the silent killer of AI products—a trap for teams rewarded for predictability instead of progress.
Read it on LinkedIn → Incrementalism is the Silent Killer of AI Products

Uncomfortable Truth: Incrementalism feels safe because it rarely fails spectacularly. But it guarantees mediocrity—and in AI, mediocrity is indistinguishable from irrelevance.

AI Launches to Watch
A wave of new releases will reshape how we design and ship AI products:
* OpenAI: Stripe/Shopify integrations + new pre-designed prompts for professionals.
* Anthropic: Chrome plugin + Claude 4.5 Sonnet, a faster, cheaper model that expands prototyping and evaluation capabilities.
* OpenAI Sora 2: Newly launched today, unlocking endless possibilities for video and creative storytelling and signaling a profound shift in how generative tools will shape the creative industries.

These aren’t just upgrades—they’re reshaping commerce and the browser itself. The integration of Stripe and Shopify signals AI’s deepening role in transactions, while Anthropic’s Chrome plugin points to a future where the browser becomes a true intelligent workspace. It’s likely why Atlassian just acquired The Browser Company (maker of Arc and Dia). These moves aren’t incremental improvements; they’re like a rushing river, pushing the entire industry forward whether teams are ready or not.

The next frontier isn’t who has the biggest model—it’s who controls the browser as the operating system for work. And looking beyond that, it will be who controls our real-world experiences… (more on that soon with an upcoming guest)

When Projects Go Off the Rails
Even as the models improve, they’re only as good as the prompts and evaluations behind them. We’ve seen how easily “comprehensive business cases” collapse when fabricated ROI, vendor costs, and timelines are passed off as fact. It’s the Wizard-of-Oz problem: behind the curtain, most AI projects are stitched together with fragile assumptions.

Uncomfortable Truth: Most AI decks aren’t strategy—they’re theater. And like any stage play, the curtain eventually falls.

Hidden Pitfalls of AI Scientist Systems
A new paper, “The More You Automate, the Less You See: Hidden Pitfalls of AI Scientist Systems” (arXiv, Sep 10, 2025), warns about the risks of fully automated science pipelines. By chaining hypothesis generation, experimentation, and reporting end-to-end, teams risk producing results that look authoritative but mask invisible errors and systemic failures. (arxiv.org)

Uncomfortable Truth: Automation without visibility doesn’t accelerate discovery—it accelerates blind spots.

Articles & Ideas We’re Tracking
* Prompts.chat → A growing open library of prompt patterns that shows why better prompt design, not just better models, is becoming the key differentiator for teams.
* AI in the workplace: A report for 2025 (McKinsey) → McKinsey highlights that while adoption is accelerating, most organizations hit cultural and skills barriers long before technical ones.
* The Architecture of AI Transformation (Wolfe, Choe, Kidd, arXiv) → This 2×2 framework shows why most companies get stuck in incremental “legacy loops” rather than unlocking transformational human-AI collaboration.
* TechCrunch: Paid raises $21M seed to pioneer results-based billing with AI agents → A new startup model where AI agents don’t just assist but transact, shifting billing to results instead of hours.
* Harvard/Stanford study on ROI of GenAI → New research explains why so much GenAI spend has failed to generate returns: productivity gains get trapped in organizational silos and misaligned incentives.
* Beware coworkers who produce AI-generated ‘workslop’ → Surfaces a new term—workslop—to describe AI outputs that look polished but lack real substance, shifting the burden downstream to humans.

Hard-Cutting Take: The ROI isn’t missing because the models are weak—it’s missing because organizations are. Incentives, silos, and incremental thinking kill more AI projects than hallucinations ever will.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Sep 16, 2025 • 46min
AI Product Strategy FAQ, Minus the Bullsh*t
Our latest episode features Nicholas Holland (SVP of Product & AI at HubSpot) and explains how AI is actually changing go-to-market teams:
* AI cuts rep research time and turns calls into structured insight
* “AI Engine Optimization” (AEO) is becoming the new SEO
This conversation isn’t speculative—it’s a blueprint. Listen to Episode 42 on Apple Podcasts

🚨 Upcoming Workshop: Sept 18 — AI Product Strategy for Realists
Use promo code pod30 at checkout to get 30% off your registration!
Join us for a live 90-minute workshop that goes beyond the hype. We’ll walk through real frameworks, raw mistakes, and how to make AI product strategy actually work—for small teams, scale-ups, and enterprise leaders.
👉 Save your seat now

AI Product Strategy FAQ, Minus the Bullsh*t
Over the past few months, we’ve been collecting the most common—and most misunderstood—questions about AI product strategy. What we found were recurring patterns of confusion, hype, and hope. This article breaks down those questions one by one with honest answers, uncomfortable truths, and hard-won lessons from teams actually building and shipping AI products.
Each section includes:
* A blunt reality check (“Uncomfortable Truth”)
* A strategic lens for tackling it
* A sticky insight to anchor your messaging
* A practical takeaway
This is not a “how AI works” explainer. This is how to make it useful—inside a real product. A sample of the questions follows.

Q1: How do we choose the right use case for AI in our product that actually delivers value?
Uncomfortable Truth: The best use cases might be internal—not flashy or customer-facing. If you’re just “adding AI” for the optics, you’re already off-track.
Strategic Frame: Don’t chase the cool feature—hunt down the messiest workflow and blow it up.
Always Remember: Your AI should solve a problem your users complain about—not a problem your team finds interesting.
Research This: Map the top 10 recurring tasks inside your product (or across your internal ops). Which of them have the highest time cost and lowest user satisfaction? That’s your AI opportunity space.
Real Example: Altan (natural language app builder); internal fraud detection automation; AI for helpdesk triage.
Takeaway: Pick the ugliest, least scalable problem your users hack around with spreadsheets. Then automate that.

Q4: How do we handle data privacy and ethics when integrating AI features?
Uncomfortable Truth: Most tools don’t offer true privacy—they use your data to train their models. That’s not a technical flaw—it’s a business choice.
Strategic Frame: If trust is central to your brand, bake it into the infrastructure. Build sandboxes. Offer guarantees. Publish your governance.
Always Remember: You don’t get to ask users for their data and their forgiveness.
Research This: Ask your legal, compliance, or procurement partners what requirements would be non-negotiable for adopting a third-party AI tool. Then apply those to your own product.
Example Guidance: Make “zero training from user data” a tiered feature—or your default.
Takeaway: If you’re targeting enterprise buyers, your AI feature won’t get through procurement unless you have strict privacy toggles and a clear usage log.

Q5: How do we measure the success of AI features in a product?
Uncomfortable Truth: More engagement doesn’t always mean more value. In AI, time spent might mean confusion—or masked frustration. People may feel delight and friction in the same moment, and without qualitative research, you won’t know which signal you’re shipping.
Strategic Frame: Define one high-value outcome. Build just enough UI to validate whether users reach it.
Always Remember: Don’t just watch what users do—listen for what they expected to happen.
Research This: Run a usability test where you ask users to explain what they expect the AI feature to do before using it—then again after. Once you’ve delivered an output that surprises them, ask them what outcomes it enables.
Takeaway: In a contract automation tool, the success metric isn’t “time in app”—it’s “first draft accepted with zero edits.” That’s your true win signal.
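As a toy illustration of that win signal, here is how a zero-edit acceptance rate could be computed from product analytics. The events list is a hypothetical event log, not a real schema.

```python
# Hypothetical event log: one record per AI-generated first draft.
events = [
    {"draft_id": 1, "accepted": True,  "edit_count": 0},   # true win
    {"draft_id": 2, "accepted": True,  "edit_count": 4},   # accepted, but reworked
    {"draft_id": 3, "accepted": False, "edit_count": 0},   # rejected outright
]

# The win signal: drafts accepted with zero edits, not raw time-in-app.
wins = sum(1 for e in events if e["accepted"] and e["edit_count"] == 0)
print(f"zero-edit acceptance rate: {wins / len(events):.0%}")  # -> 33%
```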
Q6: What’s the best way to communicate AI capabilities to non-technical stakeholders or users?
Uncomfortable Truth: AI isn’t novel anymore—outcomes are.
Strategic Frame: Sell transformation, not tech. Show how life is better with the tool than without.
Always Remember: Once someone experiences the magic, it doesn’t matter what powers it.
Research This: Ask 5 users to explain your AI feature to a friend, using their own words. Their phrasing will tell you how clearly the value lands—and what metaphors or language they trust.
Examples:
* GlucoCopilot: Turns data chaos into peace of mind.
* Flo: Makes symptom tracking feel intuitive and empowering.
* Lovart: Auto-generates brand kits from a single prompt.
Takeaway: Everyone’s building outputs. You win by delivering outcomes. Spreadsheets are useful to power users—but most people just want the insight and what to do next. AI should skip the formula and deliver the finish line.

Q7: How do we monetize AI in a way users will actually pay for?
Uncomfortable Truth: Most AI products aren’t worth paying for. Saving users time sounds valuable—but it rarely converts.
Strategic Frame: Whatever you plan to charge for your platform, build something so valuable that power users would pay 5x that price.
Always Remember: SaaS platforms priced themselves so the recurring charge felt negligible to customers. Your job is to build something they can’t live without.
Research This: When researching pricing, don’t even talk about the product—research the cost of the problem. Find out what they’d be willing to pay for a perfect solution to it.
Takeaway: If you want revenue, don’t promise “efficiency.” Deliver a win they couldn’t achieve on their own—and make that outcome your product.

Q8: How can I find out if my AI product idea is achievable?
Uncomfortable Truth: Most AI product ideas sound good until you try to build them. The biggest blocker isn’t the model—it’s the missing context, fragmented data, or fuzzy workflows that make it hard to deliver anything reliably.
Strategic Frame: Before you scope the feature, scope the dependency chain. What data, context, and decision logic would an AI need to produce something consistent and useful?
Always Remember: AI models fail to deliver what you want when you haven’t given them enough specifics and context.
Research This: Run a digital ethnography of how and why people use your products and complementary products. Find out the exact inputs and outputs they need to succeed. Determine the exact criteria necessary to deliver a monumental leap forward.
Takeaway: Don’t just validate demand—validate deliverability. If you can’t consistently access the context your AI needs, you’re not ready to ship it.

🔁 Want to go deeper? Use promo code pod30 at checkout to get 30% off your registration. Join our live Sept 18th workshop where we unpack these strategies with real examples, live critiques, and practical templates. Designed for teams who want more signal, less noise.
🎟 Register here

Check out the Design of AI podcast
Where we go behind the scenes with product and design leaders from Atlassian, HubSpot, Spotify, IDEO, and more. You’ll hear exactly how they’re building AI-native workflows, designing agentic systems, and transforming their teams.
🎧 Listen on Spotify | 🍎 Listen on Apple | ▶️ Watch on YouTube

Recommended AI Product Strategy Episodes:
* 42. HubSpot’s Head of AI on How AI Rewrites Customer Acquisition & Marketing
* 41. Vibe Coding Will Disrupt Product — Base44’s Path to $80M Within 6 Months
* 40. Secrets to Successful Agents: Atlassian’s Strategy for Success
* 38. Co‑Designing the Future of AI Products
* 27. Implementing AI in Creative Teams: Why Adoption Will Be the Hard Part
* 26. Designing a New Relationship with AI: Critical Product Lessons

Bonus Insight: How to Build Eval Systems That Actually Improve Products
Great AI products don’t just ship features—they measure whether they actually worked. This piece by Kanjun Qiu offers a no-fluff framework for building evaluation systems that ground teams in outcomes, not opinions. Stop guessing. Start testing what truly improves real-world usage.
Read the full article
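In that spirit, here is a minimal sketch of an outcome-grounded eval gate. The case format, check functions, and passing threshold are illustrative assumptions, not the article’s framework.

```python
from typing import Callable

def evaluate(feature: Callable[[str], str],
             cases: list[dict],
             passing_score: float = 0.9) -> bool:
    """Run fixed cases through the feature and score outcomes, not opinions."""
    scores = []
    for case in cases:
        output = feature(case["input"])
        # Each case carries its own outcome check (the thing users must reach).
        scores.append(1.0 if case["check"](output) else 0.0)
    mean = sum(scores) / len(scores)
    print(f"outcome score: {mean:.2f} across {len(cases)} cases")
    return mean >= passing_score

# Usage: gate a release on the eval instead of on demo impressions.
cases = [{"input": "NDA for ACME Corp",
          "check": lambda out: "ACME" in out and "confidential" in out.lower()}]
draft_contract = lambda prompt: f"Confidential agreement: {prompt}"
print("ship it" if evaluate(draft_contract, cases) else "hold the release")
```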
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Sep 9, 2025 • 41min
The End of Product Teams as We Know Them
🎙️ Listen on Spotify | Apple Podcasts | YouTube

I recently spoke with Maor Shlomo, founder of Base44—the platform that lets anyone build apps, tools, and games just by describing them to an AI. In six months, he built Base44 solo and sold it to Wix for $80M. It’s the clearest signal yet: the rules of building have changed, and most teams aren’t ready.

We dug into:
* Why vibe coding crushes the myth that innovation requires big teams and big funding.
* How cross-domain generalists will thrive while narrow specialists get sidelined.
* Why software that doesn’t become agent-driven will be left for dead.
* The ruthless advantage of starting over quickly when the build cost is near zero.

Maor’s blunt take: “If one person can go this far alone, do we need whole teams to achieve the same things?”
🎧 Full episode: Listen on Spotify

The uncomfortable truth: Interfaces are vanishing
Vibe coding strips away menus, clicks, and UIs. You speak, and the machine builds. The UX profession must decide—adapt to this new layer of interaction, or watch relevance slip away.
* Speak ideas, skip interfaces.
* Abstraction layers are collapsing.
* Creation is now a conversation.
🔗 Read the full post on LinkedIn

📅 AI Product Strategy Workshop — Register here
This isn’t a “future of work” talk. It’s a hands-on reality check.
* Spot where AI will gut existing workflows—and where the real opportunities lie.
* Pressure test your product strategy against the agent-driven future.
* Learn how to pivot faster than incumbents weighed down by legacy.
If you think you can wait this out, you’ll already be too late. There’s a 2-for-1 deal right now using this link.

The displacement evidence is stacking up:
* SSRN study: AI is already displacing workers across industries.
* Challenger, Gray & Christmas: 10,000+ AI-driven layoffs in the first seven months of 2025.
* World Economic Forum: up to 30% of U.S. jobs could be automated by 2030.
* Anthropic CEO Dario Amodei: “Half of entry-level white-collar jobs may disappear, pushing unemployment to 10–20% within five years.” (Axios: https://www.axios.com/2025/05/28/ai)

✍️ I recently published Navigating Contradictions: A Manifesto for Product Teams in an Era of Change. In it, I confront the contradictions head-on: speed vs. depth, AI optimism vs. ethical risk, innovation vs. trust. Teams that refuse to wrestle with these tensions won’t survive.
Key line: “Product teams must learn to hold space for competing truths—where speed and discovery coexist with responsibility and depth.”
🔗 Read the full article on Medium

Agencies and consultancies have thrived on labor arbitrage. That arbitrage just died. As AI agents mature, they won’t just support consultants—they’ll cannibalize them. The uncomfortable truth: if your business model depends on armies of analysts or designers, you’re already obsolete.
🔗 Read the full post on LinkedIn

👉 What do you think?
I’ll admit it: I once wrote vibe coding off as a gimmick. Now, I see it as the end of UI as we know it. Every interface has been an abstraction—an awkward compromise between human thought and digital execution. Those compromises are being stripped away at speed.
The uncomfortable truth? The gap between an idea and a product is collapsing. That means fewer roles, fewer gatekeepers, and a brutal shift in how work gets done.
Have you tried vibe coding? Does it excite you, scare you—or both? Reply and let’s talk.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Aug 14, 2025 • 48min
From AI as Tool to AI as Teammate: Lessons from Atlassian & What’s Next for Product Leaders
🎙 Episode 40: Atlassian’s Secrets to Successful Agents
In this episode, Jamil Valliani (VP & Head of Product AI at Atlassian) shares how they embed AI across Jira, Confluence, and Trello through intelligent agents that blend into workflows—far from mere “+AI” buttons. He emphasizes starting small with tangible prototypes to build momentum and leadership alignment, showing that AI gains stick when they’re experienced, not explained.

Highlights from the episode:
* Hands-on AI adoption at Atlassian: transforming workflows, not just products
* From friction to flow: how prototypes bridge skepticism and trust
* AI as teammate, not feature—designing for collaboration, not automation
* Adoption baked into experience—make AI habitual, not optional

“The most successful teams will treat AI not as a button you press, but as a teammate you collaborate with.”

Listen on Spotify | Listen on Apple | Watch on YouTube — and share one workflow where AI acting more like a teammate could unlock unexpected value.

About the Guest:
Jamil Valliani brings two decades of product leadership (including 15 years at Microsoft) to Atlassian, where he’s spearheading AI-powered design.
* LinkedIn
* Atlassian Rovo

Upcoming Workshop: AI Product Strategy
Product teams everywhere are facing the same challenge: leadership wants AI integration for competitive advantage, but without certainty about which AI products will actually be valuable to customers.
When: Thursday, September 18, 2025 (online)
What you’ll gain:
* Diagnose the highest-leverage AI use cases
* Prototype with precision—avoid costly detours
* Craft a resilient strategy that scales beyond the pilot phase
Register on Eventbrite and get a 2-for-1 promo.

Learn to Synthesize or Else
In a world awash with data, the real advantage lies not in knowing more—but in drawing clarity from the noise. Product and design leaders must become the translators of complexity, turning abundant knowledge into purposeful, actionable insight.
h/t Stuart Winter-Tear

Emerging Shift: Role-Dissolving AI
Figma, OpenAI, and others are signaling a paradigm shift: AI is merging design, engineering, and research into a unified discipline. The competitive edge now lies in craft, judgment, and cross-disciplinary fluency—not siloed specialization.
AI Merging Tech Roles, Favoring Generalists: Figma CEO Dylan Field

Featured Video: Why Designers & Engineers Must Rethink Workflows for AI to Deliver Real Value
This video pushes teams to question legacy workflows. Without overhauling collaboration models, decision-making structures, and design intent, even advanced AI remains misunderstood or underleveraged.

Research To Reframe Your Strategy

1️⃣ Mixture of Reasoning (MoR)
Why it matters: LLMs can be trained to switch between reasoning styles—stepwise logic, analogies, symbolic reasoning—without prompt engineering.
Strategy shift: Build assistants that adapt reasoning to the task: planning one moment, diagnosing the next.
Quick test: A/B fixed vs. adaptive reasoning in support/search flows to spot gains in mixed-query handling (see the sketch after this list).

2️⃣ In-Context Learning as Implicit Weight Updates
Why it matters: Transformers tweak their own behavior on the fly based on prompt context—no retraining required.
Strategy shift: Enable products to adapt within interaction sessions, not over multiple deploy cycles.
Quick test: Prototype context-aware replies and monitor when users feel seen vs. served.

3️⃣ Chain-of-Thought (CoT) Monitorability
Why it matters: Exposing AI’s reasoning steps helps catch misalignment before it reaches users—but this safety window is fragile.
Strategy shift: Don’t equate explanation with trust. For high-stakes domains, embed traceability and risk alerts.
Quick test: Add CoT transparency to the UX and measure how user trust shifts when rationale is visible.
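Here is what that first quick test could look like in code: a toy A/B harness that routes queries to either a fixed or an adaptive reasoning prompt and compares outcome rates. The prompts, the resolve() stub, and the task types are illustrative assumptions, not a published MoR implementation.

```python
import random

FIXED_PROMPT = "Think through the request step by step."
ADAPTIVE_PROMPTS = {
    "planning":  "Break the goal into ordered steps before answering.",
    "diagnosis": "List likely causes, then rule them out one by one.",
}

def resolve(query: str, system_prompt: str) -> bool:
    """Stub for a real model call; returns whether the query was resolved."""
    return random.random() < 0.5  # stand-in outcome signal for the sketch

def ab_test(queries: list[tuple[str, str]]) -> dict:
    """Randomly assign each query to an arm, then compare resolution rates."""
    outcomes = {"fixed": [], "adaptive": []}
    for query, task_type in queries:
        arm = random.choice(["fixed", "adaptive"])
        prompt = FIXED_PROMPT if arm == "fixed" else ADAPTIVE_PROMPTS.get(task_type, FIXED_PROMPT)
        outcomes[arm].append(resolve(query, prompt))
    return {arm: sum(hits) / len(hits) for arm, hits in outcomes.items() if hits}

queries = [("my sync keeps failing", "diagnosis"), ("migrate our board to Jira", "planning")]
print(ab_test(queries))  # e.g. {'fixed': 0.0, 'adaptive': 1.0} on a tiny sample
```

The design choice worth copying is the outcome signal: the arms are compared on whether users were helped, not on which prompt produced longer or more confident output.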
Follow my co-host, Brittany Hobbs, for essential research and product insights.

Your Next Challenge
Most teams drop AI into their products like sprinkles on a cupcake. But strategy—true product strategy—demands AI baked into the experience, from the core outward.
Reply here or email me.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

May 5, 2025 • 39min
The Risks & Research of Over Reliance on AI
The discussion dives into the risks of overreliance on AI, emphasizing its inaccuracies and the potential dangers of treating it as infallible. Speakers highlight the loneliness epidemic and how the demand for robo-companionship may further disconnect society. They argue for the necessity of human insight in decision-making, cautioning against blindly trusting AI tools. Personal anecdotes illustrate the pitfalls of excessive dependence, while insights on navigating trust and engagement in AI adoption provide a structured approach for businesses.

Apr 22, 2025 • 55min
AI Promises us More Time. What Should we do With it?
Matthew Krissel, Co-Founder of the Built Environment Futures Council and a Principal at Perkins&Will, dives into the intersection of AI and architecture. He challenges the notion of time savings from AI, questioning whether it benefits workers or employers. Krissel explores how commoditizing design can simplify production but risk devaluing community engagement and project longevity. He emphasizes the need for empathy in design, asserting that true innovation lies in reimagining workflows while prioritizing meaningful outcomes that enhance quality of life.


