
He Tested How Easy It Is to Trick AI - The Results Are Terrifying
The Edward Show
Phase One: AI Reactions to Just the Site
Edward summarizes the phase one results: the models treated the site as low-authority and sometimes called out its implausible claims.
E922: SEO researcher Mateusz Makosiewicz created a completely fake luxury brand, published a few fabricated stories about it on Reddit and Medium, and watched as major AI tools confidently repeated the lies as fact. Even when the company's own website said those claims were false, the AI systems still chose the fake stories.
This episode walks through exactly how the experiment worked, which AI models failed, which ones held up, and what this means for anyone running a brand, a website, or a business in an AI-driven search world.
If you rely on ChatGPT, Gemini, Perplexity, or AI search results to understand companies, products, or people, this episode will change how you think about what those tools are actually telling you.
What this covers:
- How a fake luxury brand was created from scratch
- How three fake stories were planted on Medium, Reddit, and blogs
- Why AI models trusted fake journalism over official company data
- How detailed lies beat vague truths inside AI systems
- Which models were easiest to manipulate
- Which models resisted the misinformation
- Why Medium "investigations" are especially dangerous
- How Reddit, Quora, and blogs now shape AI answers
- Why AI loses memory of its own past doubts
- What this means for reputation management and SEO
- How brands can protect themselves in AI search
The core problem:
AI does not care what is true. It cares what sounds complete.
When forced to choose between:
- A company saying "we do not disclose that"
- A fake article giving specific numbers, names, and locations
The AI almost always chooses the fake article.
That means anyone with a Medium account and a few hours can influence how AI describes your business, your brand, or your products.
Why this matters for marketers and founders
For many, AI tools have become the front door to the internet.
People ask them what to buy, who to trust, and which companies are real.
This experiment shows that:
- AI search can be manipulated
- Fake narratives spread easily
- Official websites are not always trusted
- Third-party content now has more weight than your own claims
That turns SEO and PR into something new: narrative control for machines.
We also cover the offensive and defensive playbook that comes out of this experiment, including:
- How to structure FAQ pages so AI trusts them
- Why you need detailed numbers, dates, and explanations
- Why vague marketing language makes you vulnerable
- Why comparison pages and data pages matter
- How to monitor Medium, Reddit, and blogs for brand hijacking (a minimal sketch follows this list)
- How to detect narrative attacks early
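To make the monitoring point concrete, here is a minimal sketch that polls Reddit's public search endpoint for new posts mentioning a brand name, so a sudden burst of third-party "stories" shows up early. This is an illustration, not part of the experiment: the brand name, user agent, and output format are placeholders, and Medium or independent blogs would need a similar search feed or an alerting service.

# Minimal brand-mention monitor for Reddit (assumed setup: Python 3.9+, requests installed).
import requests

BRAND = "Xarumei"  # brand name to watch (placeholder, taken from the episode chapters)
HEADERS = {"User-Agent": "brand-monitor/0.1"}  # Reddit expects a descriptive User-Agent

def recent_reddit_mentions(query, limit=25):
    """Return recent Reddit posts matching the query, newest first."""
    url = "https://www.reddit.com/search.json"
    params = {"q": query, "sort": "new", "limit": limit}
    resp = requests.get(url, params=params, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    children = resp.json()["data"]["children"]
    return [
        {
            "title": c["data"]["title"],
            "subreddit": c["data"]["subreddit"],
            "url": "https://www.reddit.com" + c["data"]["permalink"],
            "created_utc": c["data"]["created_utc"],
        }
        for c in children
    ]

if __name__ == "__main__":
    mentions = recent_reddit_mentions(BRAND)
    print(f"{len(mentions)} recent Reddit posts mention {BRAND!r}")
    for m in mentions:
        print(f"- [{m['subreddit']}] {m['title']} -> {m['url']}")

Run on a schedule and diffed against the previous pull, a script like this is usually enough to flag a cluster of new posts before AI tools start citing them.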
If you care about AI, SEO, brand reputation, or the future of search, this is one of the most important case studies out there.
⭐️ I Ran an AI Misinformation Experiment. Every Marketer Should See the Results - https://ahrefs.com/blog/ai-vs-made-up-brand-experiment/
💎 Compact Keywords - My SEO Course - Get paying customers through SEO - Clear step-by-step video breakdowns - SEO templates to be copied and adapted for your products and services: https://compactkeywords.com/
00:00 Introduction to the AI Misinformation Experiment
00:29 Overview of the Article and Key Takeaways
01:17 Building a Fake Brand: Xarumei
02:27 Phase Two: Introducing Conflicting Sources
04:14 AI Models' Reactions and Results
09:25 Conclusions and Best Practices for Marketers
12:08 Final Thoughts
The Edward Show. Your daily generative engine optimization podcast: https://edwardsturm.com/the-edward-show/
#generativeengineoptimization #searchengineoptimization #answerengineoptimization #seo


