Reference · Updated May 2026

How Eastbound measures China AI visibility.

A reference page on the methodology behind Eastbound's audits and research — the two-stage citation framework, our stratified zh-CN prompt panels, the engine endpoints we query, the reliability discipline we apply, and the labelling system we use for every recommendation.

The two-stage citation framework

Generative search has two stages, not one. The framework Eastbound uses is grounded in Zhang Kai & Yao Jingang's 2026 GEO measurement paper (arXiv:2604.25707v1), which separated citation selection (whether a page enters the engine's source pool) from citation absorption (whether the page actually shapes the answer language). The user-visible mention is a third stage downstream of both — a page can be selected often but absorbed weakly; a brand can be absorbed but mentioned only in a long-tail position.

Most generic AI-visibility tools collapse the three stages into one number. We do not, because the fix for each stage is different and a single score hides which layer is actually limiting your visibility. Our reports break out:

  1. Selection — how often your pages enter the engine's cited source pool.
  2. Absorption — how strongly your pages' language shapes the answer text.
  3. Mention — how often, and how prominently, your brand is named in the user-visible answer.
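The per-stage break-out can be sketched as a simple aggregation over panel run records. The `RunRecord` schema and field names here are hypothetical, not Eastbound's internal format; the point is only that one answer yields three independent booleans, so one panel yields three rates rather than one score.

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    """One engine answer for one prompt (hypothetical schema)."""
    selected: bool   # page entered the engine's cited source pool
    absorbed: bool   # page wording detectably shaped the answer
    mentioned: bool  # brand named in the user-visible answer

def stage_scores(records: list[RunRecord]) -> dict[str, float]:
    """Break a panel into the three per-stage rates instead of one number."""
    n = len(records)
    return {
        "selection": sum(r.selected for r in records) / n,
        "absorption": sum(r.absorbed for r in records) / n,
        "mention": sum(r.mentioned for r in records) / n,
    }

runs = [
    RunRecord(True, True, True),
    RunRecord(True, False, False),   # selected but not absorbed
    RunRecord(True, True, False),    # absorbed but not mentioned
    RunRecord(False, False, False),
]
print(stage_scores(runs))  # selection 0.75, absorption 0.5, mention 0.25
```

A brand with high selection and low absorption needs different fixes from one with high absorption and a weak mention position, which is exactly why the stages are not averaged together.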

Prompt-panel design

We run stratified zh-CN consumer-voice prompt panels — Mainland-Chinese natural-language questions a real consumer would ask their AI assistant in your category. The panel is stratified at two levels:

L1 — broad category

Questions a consumer asks at the category level: "best moisturiser for sensitive skin", "carry-on luggage under 1.5kg", "which mechanical-keyboard switch for typing". L1 prompts surface broad-category brand recommendations and capture how your category is mapped at the highest level.

L2 — positioning niche

Questions a consumer asks at your specific positioning niche: "Korean-style hyaluronic moisturiser for over-thirties", "polycarbonate hardshell with TSA lock under HK$1,500", "linear switches with pre-lubed stems for office use". L2 prompts capture whether your brand surfaces inside the more specific frame your positioning targets.

Each prompt is repeated multiple times per engine to control for run-to-run variance, and consumer-voice prompts are kept distinct from developer/B2B prompts because the source-mix patterns differ materially between consumer and developer queries on DeepSeek specifically.
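The stratification and repetition above expand multiplicatively into a call plan. This is an illustrative sketch with made-up prompts (English stand-ins for the zh-CN originals) and an arbitrary repeat count, not Eastbound's production panel:

```python
import itertools

# Hypothetical skincare panel: L1 = broad category, L2 = positioning niche.
PANEL = {
    "L1": ["best moisturiser for sensitive skin"],
    "L2": ["Korean-style hyaluronic moisturiser for over-thirties"],
}
ENGINES = ["deepseek", "qwen", "doubao"]
REPEATS = 5  # each prompt repeated to control run-to-run variance

def build_call_plan(panel, engines, repeats):
    """Expand a stratified panel into the full list of API calls to make."""
    return [
        {"stratum": level, "prompt": p, "engine": e, "rep": r}
        for level, prompts in panel.items()
        for p, e, r in itertools.product(prompts, engines, range(repeats))
    ]

plan = build_call_plan(PANEL, ENGINES, REPEATS)
print(len(plan))  # 2 prompts x 3 engines x 5 repeats = 30 calls
```

Keeping the stratum label on every call is what later lets an L1 result be reported separately from an L2 result instead of blending the two frames.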

Travel and hospitality categories use multi-turn panels (first turn is "where to go", follow-ups dig into accommodation, authenticity, payment, language), because single-shot prompt panels under-report how recommendation funnels actually unfold for these categories.

Engine endpoints and provider notes

We measure the live API endpoints of each engine, not scraped chat-product output. Provider details matter for reproducibility:

| Engine | API endpoint | Model ID convention |
| --- | --- | --- |
| DeepSeek | DeepSeek API (api.deepseek.com) | deepseek-chat (default), deepseek-reasoner (R1) when explicitly tested |
| Qwen | DashScope international (dashscope-intl.aliyuncs.com/compatible-mode/v1) | qwen-plus (default), qwen-max for high-reasoning runs |
| Doubao | BytePlus ModelArk international (ark.ap-southeast.bytepluses.com/api/v3) | Model IDs logged at session start and end |
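For reproducibility, the endpoint table can be pinned as configuration rather than scattered through client code. A minimal sketch, assuming the endpoints above are the ones queried; the helper name `client_kwargs` and the config shape are illustrative, not a published Eastbound interface:

```python
# Endpoint table as configuration. Default model IDs are taken from the
# table above; neither provider pins versions, so IDs can rotate server-side.
ENGINES = {
    "deepseek": {
        "base_url": "https://api.deepseek.com",
        "model": "deepseek-chat",
    },
    "qwen": {
        "base_url": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
        "model": "qwen-plus",
    },
    "doubao": {
        "base_url": "https://ark.ap-southeast.bytepluses.com/api/v3",
        "model": None,  # resolved and logged at session start and end
    },
}

def client_kwargs(engine: str, api_key: str) -> dict:
    """Connection parameters for one engine; keys stay out of source control."""
    cfg = ENGINES[engine]
    return {"base_url": cfg["base_url"], "api_key": api_key}
```

Centralising the table also makes the Qwen-vs-Doubao distinction in caveat 1 below mechanical: the two engines never share a base URL in any run log.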

Two practical caveats we publish loudly:

  1. Provider labels are commonly confused. Qwen runs on DashScope (Alibaba's API surface). Doubao runs on BytePlus ModelArk (ByteDance's). They are different engines on different infrastructure. Findings on one do not transfer to the other.
  2. Neither endpoint exposes pinned-version handles. We log the model IDs at session start and at session end and report them in every readout, but we cannot guarantee identical model snapshots across runs over weeks. Test-retest reliability runs (described below) are how we control for this.
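The second caveat can be enforced mechanically rather than by habit. A sketch of a session logger that records the reported model ID at start and end and flags drift; the model-ID strings in the example are hypothetical:

```python
import datetime

class SessionLog:
    """Record the model ID an endpoint reports at session start and end."""

    def __init__(self, engine: str):
        self.engine = engine
        self.snapshots: list[tuple[str, str]] = []  # (utc timestamp, model id)

    def record(self, model_id: str) -> None:
        ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.snapshots.append((ts, model_id))

    def drifted(self) -> bool:
        """True if the endpoint reported different model IDs within the session."""
        return len({m for _, m in self.snapshots}) > 1

log = SessionLog("qwen")
log.record("qwen-plus-2026-04-28")  # hypothetical ID at session start
log.record("qwen-plus-2026-05-10")  # hypothetical ID at session end
print(log.drifted())  # True: snapshot changed mid-study, flag it in the readout
```

When `drifted()` is true, the readout cannot claim a like-for-like comparison across the session, which is precisely the case test-retest runs are designed to bound.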

Reliability discipline

Any AI-visibility number is only as good as its reproducibility. We re-run identical prompt panels at controlled intervals and report multiple reliability statistics in every readout — not just the headline number that flatters the result.

Test-retest reliability we report

A reliability table that reports only κ_top-5 (where every engine scores 1.00) and hides κ_top-15 (where Doubao shows the granular instability) is reporting selectively. We disclose Doubao's κ_top-15 = 0.46 even though it is the harder story to tell, because anyone paying for our work deserves to know.
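A κ_top-k statistic of this kind can be computed as Cohen's kappa on top-k membership: every brand seen in either run is labelled in-or-out of the top k for each run, and agreement is corrected for chance. This is a plain-Python sketch of that construction (the brand names are made up); it is not a claim about Eastbound's exact estimator:

```python
def topk_kappa(run_a: list[str], run_b: list[str], k: int) -> float:
    """Cohen's kappa on top-k membership between two ranked brand lists."""
    universe = set(run_a) | set(run_b)
    in_a = {b: b in run_a[:k] for b in universe}
    in_b = {b: b in run_b[:k] for b in universe}
    n = len(universe)
    po = sum(in_a[b] == in_b[b] for b in universe) / n  # observed agreement
    pa = sum(in_a.values()) / n  # marginal "in top-k" rate, run A
    pb = sum(in_b.values()) / n
    pe = pa * pb + (1 - pa) * (1 - pb)  # agreement expected by chance
    return 1.0 if pe == 1.0 else (po - pe) / (1 - pe)

run1 = ["brandA", "brandB", "brandC", "brandD"]
run2 = ["brandA", "brandC", "brandB", "brandD"]
print(topk_kappa(run1, run2, 3))  # 1.0: top-3 sets identical despite reorder
print(topk_kappa(run1, run2, 2))  # 0.0: top-2 sets half-agree, no better than chance
```

The example shows why the cutoff matters: the same pair of runs can be perfectly stable at one k and chance-level at another, which is exactly the κ_top-5 vs κ_top-15 gap described above.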

Sample-size discipline

A 5-niche probe with 125 calls and a 540-call panel are not the same evidence base. We always report n, panel coverage and the categories the sample was drawn from. We do not generalise findings from one category panel to others without separate measurement; for example, the SMZDM 72% mention rate we observed on a handbag panel does not transfer to watches or luggage, and within handbags it collapses at the ultra-luxury price tier (33% in our re-cut).
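One way to make the n-discipline concrete is to attach a confidence interval to every mention rate instead of the bare percentage. A sketch using the Wilson score interval; the counts below are illustrative round numbers chosen to match a 72% point estimate, not Eastbound's actual panel data:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial rate such as a mention rate."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Same 72% point estimate, very different evidence bases (illustrative counts):
print(wilson_interval(90, 125))   # ≈ (0.636, 0.791) — a probe-sized panel
print(wilson_interval(389, 540))  # ≈ (0.681, 0.757) — a full panel
```

The two intervals overlap but are not interchangeable: the probe is compatible with anything from "barely two-thirds" to "nearly four-fifths", which is why a 125-call probe and a 540-call panel are reported as different evidence bases.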

Recommendation labelling — measured / hypothesis / intervention

Every public recommendation Eastbound makes is labelled as one of three states. We do not collapse the three because the evidence cost of each is materially different:

  1. Measured evidence. We observed this in our own panel. We state n, panel structure, and the engines tested. We disclose limitations (single-LLM probe vs multi-LLM, descriptive vs causal, category coverage).
  2. Prior-knowledge hypothesis. Consistent with published research (Aggarwal et al. KDD 2024, Zhang Kai & Yao Jingang arXiv 2604.25707v1, the geo-citation-lab dataset, Tw93's practitioner article, etc.) but Eastbound has not measured it directly. Cited with attribution; framed as hypothesis.
  3. Planned intervention test. We expect the change to help but the only evidence that proves it is before/after measurement on your own brand. We design the test, set the measurement date, and report the result honestly — including null and negative outcomes.

When a vendor claims "AI visibility lift in 7 days" or "guaranteed mentions", they are almost always conflating these three categories. Marketing pressure tends to convert cited research into "we proved this", and untested intervention into "this works". Eastbound's edge is the discipline of refusing those conversions.

What we do not claim

For completeness, the things we deliberately do not claim:

  1. Guaranteed mentions, or a visibility lift on any fixed timeline.
  2. That findings transfer across engines, categories or price tiers without separate measurement.
  3. That an untested intervention works before before/after measurement on your own brand proves it.

Why this matters operationally. Anti-overclaim is not a brand voice — it is a methodology requirement. Once a measurement framework starts conflating measured / hypothesis / intervention, the numbers stop being useful internally. Clients should pressure Eastbound to label every recommendation; we welcome the pressure.

Run the audit

The free Eastbound audit applies the methodology described above on a smaller prompt panel for your specific URL, across DeepSeek + Qwen + Doubao. It returns the per-stage selection / absorption / mention scores plus the highest-leverage fixes.

Run free audit

Or read the China AI visibility pillar, our research index, or the free AI visibility audit.