China AI visibility · Category reference
What is generative engine optimization?
Generative engine optimization (GEO) is the discipline of getting your brand cited, quoted and recommended inside the answers AI assistants return — rather than ranked in a list of blue links. This is the full definition, the underlying mechanism, and what the published research actually measures.
Last reviewed 2026-05-09. Citations to peer-reviewed papers and industry studies throughout.
The one-sentence definition
GEO is the practice of optimising web content, infrastructure and off-site source presence so that generative AI engines — ChatGPT, Claude, Perplexity, Gemini, Google AI Overview, DeepSeek, Qwen, Doubao — surface, cite and recommend a brand when answering user prompts.
The term was coined in the 2024 KDD paper "GEO: Generative Engine Optimization" by Aggarwal and colleagues at Princeton and IIT Delhi (n=10,000 queries across 9 measured tactics). It has since absorbed adjacent vocabulary: AEO (answer engine optimization), AI search optimization, LLM SEO and AI visibility. The terms are not strictly synonymous — AEO is narrower (answer-focused), GEO is broader (covering any generative surface) — but in industry use they are interchangeable for most purposes.
The mechanism: a two-stage process, not one
The clearest published model is Zhang Kai & Yao Jingang's 2026 measurement framework (arXiv:2604.25707v1), which separates generative search into citation selection and citation absorption. A page can be selected — pulled into the engine's source pool — without being absorbed, where "absorbed" means the page's language, structure or facts actually shape the answer the user reads.
Tw93's 2026 instrumentation of ChatGPT made this concrete: the engine retrieves roughly 100 pages per query, but only ~15% appear in the answer. The other 85% are selected but not absorbed — present in the candidate pool, invisible in the output. Selection ≠ absorption ≠ user-visible mention. GEO is the discipline of optimising all three layers, in order.
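The layered funnel is easy to express as a toy calculation using the rough Tw93 numbers above. The page identifiers below are illustrative placeholders, not real page IDs:

```python
# Hypothetical per-query log: pages retrieved into the source pool (selected)
# vs. pages whose content actually shapes the final answer (absorbed).
selected = {f"page{i}" for i in range(100)}   # ~100 pages retrieved per query
absorbed = {f"page{i}" for i in range(15)}    # only ~15% appear in the answer

def absorption_rate(selected: set, absorbed: set) -> float:
    """Share of selected pages that are absorbed into the answer."""
    return len(absorbed & selected) / len(selected) if selected else 0.0

print(absorption_rate(selected, absorbed))  # 0.15
```

Tracking selection and absorption as separate rates is what makes the "selected but invisible" 85% measurable at all.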
The citation pyramid
Each layer is a prerequisite for the next:
- Search-engine indexed. Bing for ChatGPT search; Google for Google AI Overview. If your pages aren't in the underlying index, none of the rest matters.
- AI-crawler reachable. Granular robots.txt rules that allow search/retrieval bots (OAI-SearchBot, Claude-SearchBot, PerplexityBot) — not just training crawlers.
- AI-parseable. Clean HTML, semantic URLs, llms.txt, Markdown alternates. Vercel's 2025 crawler study confirmed GPTBot, ClaudeBot and PerplexityBot fetch raw HTML and do not execute JavaScript.
- Selection-worthy. Relevance, specificity, dated facts, named entities, length appropriate to the question.
- Absorption-worthy. Quotable phrasing, evidence density, structured comparisons that the engine can extract as a chunk.
- Third-party validated. Cited from Reddit, Hacker News, Zhihu, Wikipedia, vertical-industry pubs. The published research finds third-party citation is roughly 6.5× more effective than self-citation alone.
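A minimal robots.txt illustrating the AI-crawler-reachable layer — a sketch assuming you want the retrieval bots in while governing training crawlers separately (the disallowed path is a placeholder):

```
# Retrieval/search bots: these power live citations — allow them
User-agent: OAI-SearchBot
Allow: /

User-agent: Claude-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Training crawlers can be governed separately from retrieval
User-agent: GPTBot
Disallow: /private/
```

The point of the granularity: blocking training crawlers wholesale with a blanket rule also blocks the search bots that would otherwise cite you.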
Signals the published research has measured
GEO is one of the few digital-marketing disciplines with peer-reviewed measurement. The headline numbers come from Aggarwal et al. (KDD 2024, Princeton + IIT Delhi):
| Tactic | Citation lift |
|---|---|
| Adding authoritative citations to your page | +115% |
| Adding direct quotes from credible sources | +43% |
| Adding relevant statistics with named sources | +33% |
The KDD paper tested nine tactics across 10,000 queries. Of the nine, only the three above produced statistically reliable lifts. Notably absent from the tested set: FAQ format, FAQPage schema, generic "increase content length" advice. The absence is itself a finding — these are the heuristics the academic measurement work has not validated.
Length sweet spot
Cross-study consensus places the sweet spot at 1,000–3,000 words per page with 10+ headings. Below 500 words, pages function as snippets that rarely match a substantive prompt. Above 3,000 words, marginal value falls and the editorial cost of keeping the page accurate compounds. Low-cited pages average 170 words in published samples; high-cited pages average ~2,000 — a more-than-10× gap.
Specificity beats fluency
The strongest single predictor across studies is semantic similarity between page content and user query. Pages with real numbers, dated comparisons, named entities and clear definitions are cited 50%+ more than vague pages making the same claim. Step-structured content (numbered procedures, decision trees) outperforms prose summaries of the same material.
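The similarity signal can be sketched in miniature. Real engines use dense embeddings; a bag-of-words cosine is only a crude stand-in for the principle, and the example strings are invented:

```python
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (embedding stand-in)."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ta[w] * tb[w] for w in ta)
    na = sqrt(sum(v * v for v in ta.values()))
    nb = sqrt(sum(v * v for v in tb.values()))
    return dot / (na * nb) if na and nb else 0.0

query    = "how long should a geo reference page be"
vague    = "our content is great and very helpful for everyone"
specific = "a geo reference page should be 1000 to 3000 words long"
# The specific page shares far more vocabulary with the query than the vague one.
```

Even at this crude level, the specific sentence scores higher against the query than the vague one — which is the mechanism behind "specificity beats fluency."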
Encyclopedia-style explainer pages outperform news
Wikipedia-style "what is X / how does X work" pages have roughly 3× the influence per citation of news pages in published samples. The mechanism: an explainer page is reusable across many prompts; a news page is locked to a single window of relevance. Brands that invest in canonical reference pages on their own domain compound returns across every prompt that touches the topic.
Why off-site work dominates on-site work
The single highest-leverage signal in the published research is third-party validation: citations from independent sources are roughly 6.5× more effective than self-citation alone.
This is the structural reason GEO is a brand-visibility play, not a content-marketing play. The infrastructure work (robots.txt, llms.txt, sitemap, IndexNow) is a one-hour layer. The content work (1,000–3,000-word reference pages) is a multi-week layer. The third-party source-graph work — Wikipedia, Reddit, Hacker News, vertical media, and in China the Baike / Zhihu / Xiaohongshu / SMZDM stack — is a multi-quarter layer that compounds.
What the research says does not work
The most-cited published anti-patterns in 2025–2026:
- FAQ-formatted pages and FAQPage schema. SE Ranking's 129,000-domain × 216,524-page analysis (covered in Search Engine Journal, 2025) found FAQ-schema pages averaged 3.6 ChatGPT citations versus 4.2 without. The Aggarwal KDD 2024 paper did not include FAQ among its nine measured tactics. Williams-Cook's DUCKYEA controlled test (Feb 2026) confirmed FAQPage schema confers no extraction advantage over visible Q&A copy.
- JSON-LD as a universal AI signal. SearchVIU's 2025 5-system × 8-scenario × 10-query study found 0 of 5 systems extracted price data placed exclusively in JSON-LD. Williams-Cook's fake-schema test showed ChatGPT and Perplexity tokenise `<script type="application/ld+json">` as plain text without structural parsing. Bing/Copilot is the confirmed exception — Fabrice Canel (Microsoft, SMX Munich March 2025) publicly confirmed Bing uses schema for Copilot grounding. Keep schema for that bonus; do not invest engineering hours expanding it as your primary AI-citation tactic.
- User-Agent sniffing to serve different content to bots. This is cloaking. Google penalises it.
- `<meta name="ai-content-url">` and similar speculative meta tags. No spec, no major LLM supports them.
- Generic "increase content length" advice. Length only helps when each additional chunk is independently useful. Padding a page with redundant material lowers signal-to-noise per chunk and reduces citation rates.
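The JSON-LD finding is easy to sanity-check on your own pages: strip script blocks the way a non-JS-executing crawler effectively does, then look for the fact in what remains. A minimal sketch — the regex stripping is an approximation of tokenisation, not any engine's actual pipeline, and the HTML is invented:

```python
import re

def visible_text(html: str) -> str:
    """Drop <script> blocks (incl. JSON-LD) and tags, approximating the
    copy a non-JS-executing crawler sees as page text."""
    no_scripts = re.sub(r"<script\b[^>]*>.*?</script>", " ", html,
                        flags=re.S | re.I)
    return re.sub(r"<[^>]+>", " ", no_scripts)

html = (
    '<script type="application/ld+json">{"price": "49.99"}</script>'
    "<p>Plans start at $29/month.</p>"
)
assert "49.99" not in visible_text(html)  # JSON-LD-only fact is invisible
assert "29" in visible_text(html)         # visible copy survives
```

If a fact you want cited only passes the second assertion when it lives in visible body copy, that is the SearchVIU result in miniature.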
China is a separate generative-search surface
Generic GEO frameworks target ChatGPT, Claude, Perplexity, Gemini and Google AI Overview. None of these are how a Mainland-Chinese consumer asks an AI for a product recommendation. The Mainland surface is DeepSeek, Qwen (Alibaba's Tongyi) and Doubao (ByteDance), with secondary engines including Yuanbao, Kimi and Baidu's ERNIE Bot.
Top-15 source overlap (Jaccard) between the three Chinese engines is 0.20–0.30 in our 540-call panel (May 2026). Overlap between Western engines and Chinese engines is lower still. A GEO playbook built for ChatGPT cannot be ported to DeepSeek without rebuilding the source-substrate model from scratch — different language, different prompt style, different community platforms, different encyclopedic anchors (百度百科 not Wikipedia, 知乎 not Reddit, 小红书 not YouTube).
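Jaccard overlap on top-N source lists is straightforward to compute. The domains below are illustrative placeholders, not our panel data:

```python
def jaccard(a: set, b: set) -> float:
    """Top-N source overlap between two engines' citation pools."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical top-5 cited domains per engine (for illustration only)
deepseek = {"zhihu.com", "baike.baidu.com", "xiaohongshu.com",
            "smzdm.com", "36kr.com"}
qwen     = {"zhihu.com", "baike.baidu.com", "csdn.net",
            "sohu.com", "163.com"}

print(round(jaccard(deepseek, qwen), 2))  # 0.25
```

An overlap of 0.20–0.30 means each engine must be treated as its own distribution target: winning one source graph buys you little on the next.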
Eastbound's research and consultancy work focuses on the China surface specifically. For the deeper treatment, see China AI visibility for global brands, the per-engine playbooks (DeepSeek, Qwen, Doubao), and the measurement methodology.
See what your brand looks like inside an AI answer
The free Eastbound audit runs your URL against a stratified zh-CN consumer prompt panel across DeepSeek, Qwen and Doubao, and reports per-engine selection, absorption and brand-mention scores. No login.