
The AI podcast for product teams
What Happens to Your Product When You Don’t Control Your AI?
AI was supposed to help humans think better, decide better, and operate with more agency. Instead, many of us feel slower, less confident, and strangely replaceable.
In this episode of Design of AI, we interviewed Ovetta Sampson about what quietly went wrong. Not in theory—in practice. We examine how frictionless tools displaced intention, how “freedom” became confused with unlimited capability, and how responsibility dissolved behind abstraction layers, vendors, and models no one fully controls.
This is not an anti-AI conversation. It’s a reckoning with what happens when adoption outruns judgment.
Ovetta Sampson is a tech industry leader who has spent more than a decade leading engineers, designers, and researchers across some of the most influential organizations in technology, including Google, Microsoft, IDEO, and Capital One. She has designed and delivered machine learning, artificial intelligence, and enterprise software systems across multiple industries, and in 2023 was named one of Business Insider’s Top 15 People in Enterprise Artificial Intelligence.
Join her mailing list | Right AI | Free Mindful AI Playbook
Why 2026 Will Force Teams to Rethink How Much AI They Actually Need
The risks are no longer abstract. The tradeoffs are no longer subtle. Teams are already feeling the consequences: bloated tool stacks, degraded judgment, unclear accountability, and productivity that looks impressive but feels empty.
The next advantage will not come from adding more AI. It will come from removing it deliberately.
Organizations that adapt will narrow where AI is used—essential systems, bounded experiments, and clearly protected human decision points. The payoff won’t just be cost savings. It will be the return of clarity, ownership, and trust.
This shift will show up first among individuals and small startups that adopted AI early. My prediction is that this year they’ll start cutting the number of AI models they pay for: the era of experimentation is over, and we’re entering a period where deliberate choices matter more than raw model speed.
Read the full article on LinkedIn.
Do You Really Need Frontier Models for Your Product to Work?
For most teams, the honest answer is no.
Open-source and on-device models already cover the majority of real business needs: internal tooling, retrieval, summarization, classification, workflow automation, and privacy-sensitive systems. The capability gap is routinely overstated—often by those selling access.
What open models offer instead is control: over data, cost, latency, deployment, and failure modes. They make accountability visible again. This video explains why the “frontier advantage” is mostly narrative:
Independent evaluations now show that open-source AI models can handle most everyday business tasks—summarizing documents, answering questions, drafting content, and internal analysis—at levels comparable to paid systems. The LMSYS Chatbot Arena, which runs blind human comparisons between models, consistently ranks open models close to top proprietary ones.
Major consultancies now document why enterprises are switching: predictable costs, data control, and fewer legal and governance risks. McKinsey notes that open models reduce vendor lock-in and compliance exposure in regulated environments.
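To make “open and on-device” concrete, here is a minimal sketch of one of those everyday tasks (summarization) running against a locally hosted open model. It assumes a local runtime such as Ollama with an open model already pulled; the model name and prompt are placeholders, not recommendations.

```python
# Minimal sketch: summarization against a locally hosted open model.
# Assumes Ollama is running locally and an open model (e.g. "llama3.1")
# has been pulled; swap in whatever runtime and model your team uses.
import requests

def summarize(text: str, model: str = "llama3.1") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Summarize this document in three bullet points:\n\n{text}",
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(summarize("Q3 revenue grew 12% while support costs fell..."))
```

Everything here runs on hardware you control: the document never leaves your network, the cost is your own compute, and the model only changes when you change it.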
What Happens When “Freedom” Becomes an Excuse Not to Set Boundaries?
We’ve confused freedom with capability. If a system can do something, we assume it should. That logic dissolves moral boundaries and replaces responsibility with abstraction: the model did it, the system allowed it.
When no one owns the boundary, harm becomes an emergent property instead of a design failure.
What If AI Doesn’t Have to Be Owned by Corporations?
We’re going to see more AI experts challenge the expectation that Silicon Valley should control AI.
What if AI doesn’t need to be centralized, rented, or governed exclusively by corporate interests?
On-device models and open ecosystems offer a different future—less extraction, fewer opaque incentives, and more meaningful choice.
Follow Antoine Valot as he and the Postcapitalist Design Club explore new ways of liberating AI.
Are We Using AI for Anything That Actually Matters?
Much of today’s AI usage is performative productivity and ego padding that signals relevance while eroding self-trust. We’re outsourcing thinking we are still capable of doing ourselves.
AI should amplify judgment and creativity. Use this insanely powerful technology to achieve greater outcomes, not to deliver a larger volume of subpar work to the world.
If We Know the Risks Now, Why Are We Still Acting Surprised?
The paper “The AI Model Risk Catalog” removes the last excuse. Failure modes are documented. Harms are mapped. Blind spots are known.
Continuing to deploy without contingency planning is no longer innovation—it’s negligence. If a team can’t explain how its system fails safely, who intervenes, and what happens next, it isn’t ready for real-world use.
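One hedged sketch of what “contingency before deployment” can look like in practice: a failure-mode register the team fills in before shipping. The fields and entries below are illustrative, not taken from the paper.

```python
# Illustrative failure-mode register: document how the system fails,
# how you notice, what it falls back to, and who intervenes.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    detection: str    # how we notice it in production
    fallback: str     # what the system does instead
    human_owner: str  # who intervenes, by role

REGISTER = [
    FailureMode("hallucinated citation", "spot-check sample + user reports",
                "suppress answer, show sources only", "content lead"),
    FailureMode("prompt injection via user input", "input classifier flags",
                "refuse and log", "security on-call"),
]

# Shipping gate: no documented failure modes, no deploy.
assert REGISTER, "No documented failure modes: not ready for real-world use"
```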
If Guardrails Don’t Work, What Actually Protects Us?
Every AI model and product is at risk of major attacks and exploits.
AI systems are structurally vulnerable. The reason we haven’t seen a catastrophic failure yet isn’t safety—it’s limited adoption and permissions.
Guardrails fail under pressure. Policies collapse at scale. The only real protection is limiting blast radius: constraining autonomy and refusing to grant authority systems can’t safely hold.
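One way to make “limiting blast radius” concrete is an explicit allowlist standing between the model and anything with side effects. This is a hypothetical sketch, not a real library API; the tool names and policy are illustrative.

```python
# Hypothetical sketch of blast-radius limiting: the model may *propose*
# any action, but an explicit allowlist decides what can ever execute.

READ_ONLY_TOOLS = {"search_docs", "summarize", "fetch_ticket"}
HUMAN_APPROVED_TOOLS = {"send_email", "issue_refund"}

def run_tool(tool: str, args: dict):
    # Placeholder dispatcher; a real system would invoke the tool here.
    print(f"executing {tool} with {args}")

def execute(tool: str, args: dict, approved_by_human: bool = False):
    if tool in READ_ONLY_TOOLS:
        return run_tool(tool, args)       # no side effects: always allowed
    if tool in HUMAN_APPROVED_TOOLS and approved_by_human:
        return run_tool(tool, args)       # side effects gated on a person
    # Default deny: authority the system can't safely hold is never
    # granted, no matter what the model outputs.
    raise PermissionError(f"'{tool}' is outside this agent's blast radius")

execute("search_docs", {"query": "refund policy"})             # runs
execute("issue_refund", {"order": 42}, approved_by_human=True) # runs
# execute("delete_database", {})  # raises PermissionError
```

The design choice is that safety lives in the boundary, not in the model: nothing the model says can widen its own permissions.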
Why Should Teams Decide Before They Build?
The Decision-Forcing AI Business Case Canvas from Unhyped is essential for planning how to leverage AI in your products.
Before discussing capabilities, teams must answer:
* Who is accountable when this fails?
* What judgment must remain human?
* What harms are unacceptable—even if the system works?
The alignment this canvas creates on vision, responsibility, and impact isn’t bureaucracy. It’s baseline design discipline.
Consider the Tradeoffs
The conversation with Ovetta Sampson challenges a belief that shaped the last phase of AI adoption: that faster is always better, and that dependence on OpenAI, Google, or Anthropic is inevitable.
That belief works during experimentation. It breaks the moment your product starts to matter.
As teams scale, speed stops being the constraint. Trust, cost predictability, and accountability take its place. The question shifts from How fast can we ship? to What are we tying our business to—and what happens when it fails?
One path optimizes for immediate momentum and simplicity. The other requires more upfront effort, but fundamentally changes where risk, data, and control live.
This isn’t a technical choice. It’s a business one. As usage grows, externalized risk stops being abstract and starts showing up in margins, contracts, and customer trust.
As that pressure builds, the impact becomes visible in the product experience itself.
Latency creeps in. Costs compound quietly. Outputs vary in ways teams struggle to explain. What once felt powerful starts to feel fragile. Teams spend more time managing side effects than delivering value.
At that point, you realize you didn’t just choose a model. You chose a UX trajectory.
Frontier models feel impressive early, but often lead to expensive, inconsistent experiences over time. Smaller, tuned models trade spectacle for reliability—and reliability is what users actually trust.
Eventually, the conversation moves from UX to business fundamentals.
Token pricing that felt negligible becomes material. Vendor updates change behavior you didn’t choose. Security and compliance questions become harder to abstract away. You realize that outsourcing intelligence also outsourced leverage.
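A back-of-the-envelope illustration of how “negligible” becomes “material” (every number below is an assumed placeholder, not a quote from any vendor):

```python
# Back-of-the-envelope token economics; all figures are assumptions.
price_per_million_tokens = 10.00  # USD, assumed blended input/output rate
tokens_per_request = 3_000        # prompt + completion, assumed
requests_per_day = 50_000         # assumed production traffic

daily = requests_per_day * tokens_per_request / 1_000_000 * price_per_million_tokens
print(f"${daily:,.0f}/day  ->  ${daily * 30:,.0f}/month")
# $1,500/day -> $45,000/month: invisible per request, material at scale.
```

A cost that rounds to zero in a demo becomes a standing line item once real traffic arrives, and it moves whenever the vendor’s pricing does.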
This final image makes the tradeoff explicit. Paid frontier models buy speed and simplicity. Open or self-managed approaches buy independence, cost control, and long-term defensibility. Pretending these lead to the same outcomes is the mistake.
This transition, from novelty to ownership, is exactly where Right AI Now is focused. Through her consultancy, Ovetta helps teams redesign AI decisions around outcomes that actually matter at scale: customer trust, data sovereignty, operational stability, and long-term value creation.
These are also the themes we hear most consistently from the Design of AI audience. Founders and product leaders aren’t asking for more tools—they’re asking for clearer decisions. They want to know why AI products succeed and fail.
We’ll be going deeper on this shift throughout 2026, including a rebrand of the podcast, name and all.
Improve Your AI Product
If your organization is at the inflection point where AI needs to deliver real value without eroding trust, this is where I can help you. I’ve worked with teams at Microsoft, Spotify, and Mozilla to help leaders decide what to build, how to deliver value, and prioritize roadmaps.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com
