Filmfolk had page-one Google rankings for "corporate videographers London." Strong backlink profile. Well-optimized content. By every conventional SEO metric, they were doing well.

When we ran their baseline AI audit, they were cited in zero of 63 tracked prompts across ChatGPT, Claude, Perplexity, and Gemini. Not once.

This isn't unusual. It's the norm. And it points to a fundamental misunderstanding about how AI systems retrieve information — one that most SEO practitioners haven't fully internalized yet.

Two Different Retrieval Systems

Google's search engine is a document ranking system. It crawls pages, indexes content, evaluates authority signals (backlinks, E-E-A-T, technical quality), and returns a ranked list of URLs. The goal is to surface the most relevant document for a given query.

AI language models with retrieval capabilities — ChatGPT, Claude, Perplexity, Gemini — work differently. They don't return documents. They generate answers. And to generate accurate answers, they retrieve from a knowledge layer that is structured around entities, not pages.

The distinction matters enormously:

  • Google ranks pages. AI systems retrieve entities.
  • Google rewards backlinks and content volume. AI systems reward entity clarity and source trust.
  • Google measures clicks and rankings. AI systems measure citation rate and brand mention frequency.
  • Google's signals are relatively stable. AI retrieval signals decay as models update.

What AI Systems Actually Look For

When a buyer asks Perplexity "who are the best corporate video agencies in London," the model doesn't crawl Google's index. It draws on its training data and, in real-time retrieval systems, on a curated set of sources it has been configured to trust.

What determines whether your brand gets cited? Three things:

1. Entity Confidence

The model needs to understand who you are, what category you belong to, and what role you play. This is entity confidence — the clarity with which an AI system can place your brand in the right context. If your brand-role-category relationship is ambiguous or inconsistent across sources, the model won't cite you with confidence. It will cite a competitor whose entity signals are cleaner.

2. Source Trust Hierarchy

Not all sources are weighted equally in AI retrieval. Owned content (your website) carries some weight. Earned coverage (press, reviews, third-party mentions) carries more. Structured data (schema markup, knowledge graph entries) carries significant weight. Expert citations (academic papers, industry reports) carry the most.

Traditional SEO focuses heavily on owned content and backlinks. AI retrieval rewards a weighted hierarchy of source types — and most brands have significant gaps in the higher-trust tiers.
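
One way to see why gaps in the higher-trust tiers hurt is a toy weighted score. The tiers come from the hierarchy above; the numeric weights are illustrative assumptions, not values published by any model vendor.

```python
# Hypothetical trust weights per source tier (higher = more trusted).
# The tier names follow the hierarchy in the text; the numbers are
# illustrative assumptions only.
TIER_WEIGHTS = {
    "owned_content": 1.0,      # your own website
    "earned_coverage": 2.0,    # press, reviews, third-party mentions
    "structured_data": 3.0,    # schema markup, knowledge graph entries
    "expert_citations": 4.0,   # academic papers, industry reports
}

def trust_score(signal_counts: dict[str, int]) -> float:
    """Combine per-tier signal counts into one weighted score."""
    return sum(TIER_WEIGHTS[tier] * count
               for tier, count in signal_counts.items())

# A brand heavy on owned content but thin in higher-trust tiers can
# score lower than one with fewer, better-placed signals.
seo_heavy = trust_score({"owned_content": 40, "earned_coverage": 2,
                         "structured_data": 0, "expert_citations": 0})
balanced = trust_score({"owned_content": 10, "earned_coverage": 8,
                        "structured_data": 6, "expert_citations": 3})
print(seo_heavy, balanced)  # 44.0 56.0
```

The point of the sketch is the shape of the problem, not the specific weights: volume in the lowest tier cannot substitute for presence in the higher ones.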

3. Freshness and Consistency

AI models weight newer signals more heavily than older ones. A brand that was well-represented in training data two years ago may have decayed in retrieval priority as the model has been updated with newer information. Maintaining consistent, fresh signals across all source types is an ongoing requirement — not a one-time optimization.
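
As a rough sketch, this decay can be modeled as exponential. The one-year half-life below is an illustrative assumption, not a measured property of any model.

```python
# Sketch: a signal's effective weight decays with age. The half-life
# value is an illustrative assumption, not a measured model property.
HALF_LIFE_DAYS = 365  # assume a signal loses half its weight per year

def decayed_weight(base_weight: float, age_days: int) -> float:
    """Effective weight of a signal after age_days, under exponential decay."""
    return base_weight * 0.5 ** (age_days / HALF_LIFE_DAYS)

fresh = decayed_weight(1.0, 30)       # recent coverage, near full weight
two_years = decayed_weight(1.0, 730)  # two-year-old signal, quartered
```

Under this assumption, a brand that stops generating signals doesn't hold steady; it slides, which is why the text frames freshness as an ongoing requirement.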

The core insight: SEO optimizes for a document ranking system. AI visibility requires engineering a brand's entity representation in a retrieval system. These are different problems that require different solutions.

Why Good SEO Can Actually Mislead You

There's a dangerous assumption that strong SEO performance implies strong AI visibility. It doesn't — and in some cases, SEO-optimized content can actively work against AI retrieval.

SEO content is often written to rank for keywords. It's structured around search intent, not entity clarity. It may be technically excellent but entity-ambiguous — meaning an AI model reading it can't confidently extract a clear brand-role-category relationship.

Filmfolk's website was well-optimized for Google. But the content didn't clearly establish: Filmfolk is a London-based video production agency specializing in corporate event coverage and brand storytelling for enterprise clients. That level of entity specificity — consistently reinforced across multiple trusted source types — is what AI retrieval requires.
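
One concrete way to state a brand-role-category relationship machine-readably is schema.org JSON-LD. The sketch below reuses the case study's description; the markup structure is illustrative, not Filmfolk's actual schema.

```python
import json

# Illustrative schema.org Organization entry that states the
# brand-role-category relationship explicitly. Values follow the
# case study above; this is a sketch, not Filmfolk's real markup.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Filmfolk",
    "description": ("London-based video production agency specializing "
                    "in corporate event coverage and brand storytelling "
                    "for enterprise clients"),
    "areaServed": "London",
    "knowsAbout": ["corporate video production",
                   "event coverage",
                   "brand storytelling"],
}
print(json.dumps(org, indent=2))
```

Note how every claim an AI model would need (who, what category, where, for whom) is an explicit field rather than something to be inferred from keyword-optimized copy.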

What Actually Works

AI visibility engineering requires a different methodology:

  • Prompt Reality Audit — test the actual prompts your buyers use and measure your current citation rate across all major AI models
  • Entity Signal Mapping — identify gaps in brand-role-category clarity across all source types
  • Brand Memory Construction — build a curated, weighted knowledge base that AI retrieval systems can read and trust
  • Source Trust Reinforcement — strengthen signals in the higher-trust tiers (earned coverage, structured data, expert citations)
  • Continuous Measurement — rerun the same prompts, measure citation delta, adapt as models update

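The audit-and-measure loop above can be sketched in a few lines. Here `query_model` is a hypothetical placeholder for whichever model APIs you actually call; the mention check is deliberately naive.

```python
# Sketch of a prompt reality audit: run tracked prompts against each
# model and measure the brand's citation rate. `query_model` is a
# hypothetical stand-in for real model API calls.
def query_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire up the model API here")

def citation_rate(brand: str, prompts: list[str], models: list[str],
                  ask=query_model) -> float:
    """Fraction of (prompt, model) pairs whose answer mentions the brand."""
    results = [brand.lower() in ask(model, prompt).lower()
               for prompt in prompts for model in models]
    return sum(results) / len(results)

# Rerunning the same prompt set after each intervention yields a
# citation delta you can track as models update.
```

Keeping the prompt set fixed between runs is what makes the before/after comparison (like the 0% to 81% figure below) meaningful.
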
After applying this methodology to Filmfolk, their citation rate went from 0% to 81% across 63 tracked prompts in 90 days. Their Google rankings didn't change. Their AI presence transformed completely.

The Practical Implication

If you're investing in SEO on the assumption that it also covers AI visibility, that assumption is wrong. The two disciplines can coexist and reinforce each other — but they require separate, deliberate work.

The good news: AI visibility is an engineering problem, not a content volume problem. It's measurable and improvable, and while retrieval signals do decay as models update, the inputs are more transparent than Google's opaque ranking factors, the methodology is more precise, and the results are directly attributable.

About the author

Bora Kurum is the founder of RAG Signal and a Ph.D. researcher at Istanbul Bilgi University, where his work focuses on LLM retrieval behavior and brand representation in generative AI systems.
