The Edward Show

AI & LLM Visibility: A Practical Guide for Ranking in AI Results

Nov 10, 2025
David Quaid, an SEO consultant and founder of Primary Position, dives into the intricate world of large language models (LLMs) and their impact on webpage visibility. He explains how LLMs retrieve real-time data, debunks myths about schema's importance, and discusses the significance of PageRank in search results. Key insights include understanding query fan-outs, prompt drift, and how to detect LLM-driven impressions using Google Search Console. David offers practical steps for optimizing content to enhance visibility in LLMs like ChatGPT and Claude.
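The episode mentions detecting LLM-driven impressions in Google Search Console; the console itself isn't shown here, but as a rough complementary sketch, the snippet below counts hits from well-known AI crawler/fetcher user agents (GPTBot, ChatGPT-User, OAI-SearchBot, PerplexityBot, ClaudeBot) in a plain server access log. The log filename and format are assumptions for illustration, not anything described in the episode.

```python
# Rough, assumed sketch: tally access-log lines attributable to known LLM
# user agents. The agent strings are publicly documented identifiers; the
# log path/format below are placeholders.
from collections import Counter
from pathlib import Path

LLM_AGENTS = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot", "ClaudeBot"]

def count_llm_hits(log_path: str) -> Counter:
    """Count log lines per LLM user agent in a standard access log."""
    hits = Counter()
    for line in Path(log_path).read_text(errors="ignore").splitlines():
        for agent in LLM_AGENTS:
            if agent in line:
                hits[agent] += 1
    return hits

if __name__ == "__main__":
    print(count_llm_hits("access.log"))  # assumed log filename
```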
AI Snips
INSIGHT

LLMs Fetch The Web In Real Time

  • LLMs do not store a cached copy of the web like a search engine does.
  • They fetch pages in real time and rely on external search engines for results (a minimal fetch sketch follows below).
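A minimal illustration of that real-time fetching, assuming a plain HTML page and the Python requests library; the naive tag stripping stands in for real content extraction, and a production pipeline would also render JavaScript and respect robots.txt:

```python
# Sketch: given a URL surfaced by an external search engine, pull the live
# page right now and return a plain-text excerpt.
import re
import requests

def fetch_page_text(url: str, max_chars: int = 2000) -> str:
    """Fetch a page in real time and return a plain-text excerpt."""
    resp = requests.get(url, timeout=10, headers={"User-Agent": "example-fetcher"})
    resp.raise_for_status()
    text = re.sub(r"<[^>]+>", " ", resp.text)   # strip HTML tags (crude)
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace
    return text[:max_chars]
```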
INSIGHT

Query Fan-Outs Drive Which Pages Are Cited

  • LLMs break a user prompt into multiple search queries, a behavior called query fan-out.
  • Different LLMs, and even the same LLM at different times, produce different fan-outs, which changes which pages get surfaced (see the sketch after this list).
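A toy sketch of what a query fan-out might look like, assuming a placeholder search() function that returns URLs. Real assistants generate the sub-queries with the model itself, which is why the fan-out varies between runs and between LLMs; the heuristic sub-queries here are purely illustrative.

```python
# Illustrative fan-out: split one prompt into several narrower queries,
# run each against a search backend, and merge the URLs without duplicates.
from typing import Callable

def fan_out(prompt: str) -> list[str]:
    """Derive several search queries from one prompt (toy heuristic)."""
    return [
        prompt,
        f"{prompt} examples",
        f"{prompt} comparison",
        f"how does {prompt} work",
    ]

def gather_results(prompt: str,
                   search: Callable[[str], list[str]],
                   per_query: int = 5) -> list[str]:
    """Run every fanned-out query and return de-duplicated URLs in order."""
    seen, urls = set(), []
    for query in fan_out(prompt):
        for url in search(query)[:per_query]:
            if url not in seen:
                seen.add(url)
                urls.append(url)
    return urls
```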
INSIGHT

Rank In Google To Appear In LLM Answers

  • Perplexity and similar tools fetch live search results and synthesize them for the user.
  • If you rank in Google, you can appear in those LLM results within minutes.