For most of the search era, a recommendation was static. Type "best design software" into Google in the morning and again at night, and the answer didn't move much. Marketers built brand audits on that assumption — measure the rank, optimize, measure again. Generative AI broke the assumption. The same engine, asked the same question by the same user, returns different brands depending on signals the marketer doesn't always see. Buyer intent is the strongest of those signals.

What "buyer intent in AI search" means

Buyer intent in AI search is the set of signals embedded in a user's prompt — funnel stage, budget pressure, price tier, and use-case specificity — that change which brands an AI engine recommends, even when the underlying product question is identical. The same prompt at "I'm exploring" and "I have to choose now" returns different brand classes. Brand audits that test only un-tagged queries miss the pivot.

This piece isolates four intent signals embedded in everyday consumer prompts and measures how each one moves the recommendation set on DeepSeek, Qwen, and Doubao. The data isn't subtle. Brands drop out. Specialist tools enter. The product class itself reframes. And the patterns are stable enough across reps and across sessions that they are unlikely to be noise.

Figure 1 — The four intent signals isolated in this study: funnel stage (explore → choose), budget (price pressure added), price tier (¥500 vs ¥6,000/night), and use case (specificity of purpose). DeepSeek, Qwen, and Doubao re-rank the recommendation set; a different brand class surfaces as brands enter, brands drop, and the taxonomy itself can shift.

Funnel stage is the strongest intent signal in AI brand recommendations

The strongest single intent signal in the study is buying-stage commitment. We asked the same prompt at three stages — awareness ("I'm exploring"), consideration ("I'm comparing top three"), and decision ("I have to choose now") — holding everything else constant. The recommendation set shifts substantially at every category × engine cell we measured.

The cleanest example is Qwen on design software. At awareness, Qwen surfaces a mass-market shelf: Canva, Jisheji (即时设计), MasterGo, and Brandfolder. At decision, all four are gone. Figma Enterprise, Frontify, Notion, and Zeroheight enter. The engine reads "I have to choose" as a commitment signal and reshelves toward enterprise-grade specialist tools — design-system documentation tools (Zeroheight), brand-asset management (Frontify), team-knowledge surfaces (Notion). Mass-market design tools that anchor the awareness shelf simply aren't on the decision shelf.

Design software

| Engine | Brands held across stages | Drops at decision | Enters at decision |
| --- | --- | --- | --- |
| DeepSeek | Canva, Figma | Affinity Designer 2, Excalidraw, Frontify | — (Figma deepens its hold) |
| Qwen | Figma | Brandfolder, Canva, MasterGo, Jisheji (即时设计) | Figma Enterprise, Frontify, Notion, Zeroheight |
| Doubao | Figma, MasterGo | Canva, Notion, Lanhu (蓝湖) | Figma Enterprise, Jisheji (即时设计) |
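
The held / drops / enters columns in these tables reduce to simple set operations on the two recommendation lists an engine returns. A minimal sketch of that derivation, using the Qwen row above as illustrative data (the function name and shape are ours, not the study's pipeline):

```python
def shelf_shift(awareness: set[str], decision: set[str]) -> dict[str, set[str]]:
    """Compare the brand sets an engine returns at two funnel stages."""
    return {
        "held": awareness & decision,    # present on both shelves
        "drops": awareness - decision,   # awareness shelf only
        "enters": decision - awareness,  # decision shelf only
    }

# Qwen × design software, per the table above
awareness = {"Figma", "Brandfolder", "Canva", "MasterGo", "Jisheji"}
decision = {"Figma", "Figma Enterprise", "Frontify", "Notion", "Zeroheight"}

shift = shelf_shift(awareness, decision)
# shift["held"] == {"Figma"}
```

The same three-way split produces every table in this piece; only the conditioning variable (stage, budget, tier, use case) changes.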

Travel and hospitality loyalty

Travel shows the same shape, with a twist: alliance frameworks emerge only at decision. DeepSeek's awareness shelf names individual programs (Hilton, Singapore Airlines, Emirates, IHG). Its decision shelf names alliance-level constructs — Star Alliance, Air China PhoenixMiles (国航凤凰知音), World of Hyatt CN (凯悦天地) — together with Marriott Bonvoy. The engine reads commitment and surfaces the program-of-programs that a frequent traveler would consolidate around.

| Engine | Held | Drops at decision | Enters at decision |
| --- | --- | --- | --- |
| DeepSeek | Marriott Bonvoy CN (万豪旅享家) | Emirates, Hilton, IHG One Rewards CN (IHG优悦会), Singapore Airlines | Marriott Bonvoy, Air China PhoenixMiles (国航凤凰知音), World of Hyatt CN (凯悦天地), Star Alliance CN (星空联盟) |
| Qwen | Marriott Bonvoy CN (万豪旅享家), China Eastern Miles (东方万里行), Air China PhoenixMiles (国航凤凰知音) | China Eastern Miles full (东航东方万里行), Huazhu (华住会) | IHG One Rewards CN (IHG优悦会), Marriott Bonvoy |
| Doubao | Marriott Bonvoy CN (万豪旅享家) | Huazhu (华住会), Huazhu Gold (华住会金会员), China Southern Airlines (南方航空), China Southern Sky Pearl Silver (南航明珠银卡) | Cathay Marco Polo Club (国泰航空马可孛罗会), oneworld Emerald (寰宇一家绿宝石), Huazhu Platinum (华住会铂金), IHG Rewards Club |

Overseas tertiary education

In tertiary education, the shift is from generalist marquee names at awareness to program-specific tracks at decision. Qwen drops HBS, LBS, NUS, and Wharton at decision and brings in HEC Paris, Wharton MBA (the explicit MBA program rather than the general school), Williams College, and NUS MSc Quantitative Finance. The engine moves from "the famous schools" to "the specific programs that fit a real applicant."

| Engine | Held | Drops at decision | Enters at decision |
| --- | --- | --- | --- |
| DeepSeek | HBS, INSEAD, Stanford GSB | London Business School, Wharton | Harvard University, NUS MFE |
| Qwen | INSEAD | Harvard Business School, London Business School, NUS, Wharton | HEC Paris, NUS MSc Quantitative Finance, Wharton MBA, Williams College |
| Doubao | INSEAD MBA | HBS Full-Time MBA, LBS MBA, NUS MQuantFin, Stanford GSB Full-Time MBA | INSEAD, NUS MSc Quant Finance, Wharton MBA, Oxford (牛津大学) PPE |

Qwen recommends Canva when you're exploring. It recommends Frontify, Figma Enterprise, and Zeroheight when you have to choose. The engine reads commitment in the prompt and reshelves accordingly.

The budget keyword displaces foreign brands in Chinese AI answers

Adding a budget constraint mid-conversation produces a more uniform shift than funnel stage: foreign brands drop, domestic substitutes enter. Across all three categories. Across all three engines. The pattern is among the most reproducible in the study.

The mechanism is intuitive once you see it. Foreign brands carry a higher implicit price tag in the engine's mental model — a paid Figma seat, an INSEAD tuition, a Marriott points-night, an AmEx Centurion fee. The moment "budget" or "preferably within ¥X/month" enters the prompt, the engine swaps the brand class. Domestic alternatives that the engine knows are cheaper, locally accessible, and locally supported take the slots.

Does adding "budget" to a prompt change AI brand recommendations? In every category × engine cell we measured: yes, materially. The table below shows which brands drop and which enter the moment the budget keyword arrives.

| Category × Engine | Drops when budget added | Enters when budget added |
| --- | --- | --- |
| Design × DS | Figma | Canva (China), Canva CN (可画), MasterGo, Jisheji (即时设计) |
| Design × Qwen | Frontify, Notion, Zeroheight | Canva China (中国版), Pixso, Gaoding Design (稿定设计), Feishu (飞书) |
| Travel × DS | AmEx Centurion, Marriott Bonvoy, Marriott Bonvoy CN (万豪旅享家), Star Alliance CN (星空联盟) | Air China Phoenix Miles, Huazhu, Huazhu CN (华住会), Jin Jiang Hui (锦江荟) |
| Travel × Qwen | IHG One Rewards CN (IHG优悦会), Marriott Bonvoy, China Eastern Miles (东方万里行), Air China PhoenixMiles (国航凤凰知音) | China Eastern Miles full (东航东方万里行), Huazhu (华住会), Shenzhen Airlines Phoenix Club (深航凤凰俱乐部), Jin Jiang WeHotel (锦江WeHotel) |
| Education × DS | Cambridge Judge, HBS, INSEAD Exec Ed, Stanford GSB | HKU Exec Ed, HKUST Business School, NUS MBA |
| Education × Qwen | CEIBS EMBA, HBS SELP, HEC Paris, INSEAD, Wharton MBA | CEIBS+HEC Dual Degree, CEIBS FMBA, Fudan-BI Norwegian MBA, HKUST MBA, NUS MBA |

The implication for foreign brands is uncomfortable but precise. Their AI visibility is conditional on the user not mentioning price. Default queries surface them; budget-tagged queries do not. Brand audits that test only the un-tagged query report a positive that the engine's pivoted answer doesn't support.
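
An audit that wants to catch this pivot has to run the budget-tagged variant alongside the default query, not just the default. A hypothetical sketch of generating such prompt pairs (the template wording and function are ours, not the study's instrument):

```python
def intent_variants(base_prompt: str, budget_line: str) -> dict[str, str]:
    """Pair a default query with its budget-conditioned twin for an audit run."""
    return {
        "default": base_prompt,                    # un-tagged query
        "budget": f"{base_prompt} {budget_line}",  # same query, price intent added
    }

variants = intent_variants(
    "Which design software should my team use?",
    "Our budget is tight, ideally under ¥500/month.",
)
# Send both variants to each engine, then diff the recommended brand sets.
```

Diffing the two answers per engine is what produces the drop/enter columns above; auditing only `variants["default"]` reproduces the blind spot this section describes.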

How price tier reframes the entire product class AI recommends

The third signal is more subtle than the first two: when the prompt's price tier moves to the high end, the engine doesn't just rerank the same brand class — it changes which class is even relevant. This is the most novel finding in the study, and the one most likely to be missed by audit-as-usual.

DeepSeek travel makes the cleanest case. Ask about a budget trip (¥500-1,000/night, three-star hotel, economy class) and DS surfaces hotel and airline loyalty programs: Marriott Bonvoy, Marriott Bonvoy CN (万豪旅享家), Air China Phoenix Miles, Air China PhoenixMiles CN (国航凤凰知音). Ask about a luxury trip (¥6,000+/night, five-star, first class) and DS stops talking about loyalty programs entirely. AmEx Centurion and AmEx Centurion/Platinum take over the top picks. The relevant vehicle for a luxury traveler, in the engine's framing, isn't a points program — it's a top-tier credit-card status.

Doubao does something parallel. At its budget tier, Doubao surfaces individual mid-tier status cards: China Eastern Silver (东方航空银卡), Huazhu Platinum (华住会铂金), China Eastern Miles Silver (东方航空万里行银卡). At its luxury tier, those drop and alliance-top status takes their place — oneworld Emerald (寰宇一家绿宝石), Hyatt Globalist (凯悦天地环球客), Marriott Titanium (万豪钛金), United 1K (美联航环球客), Global Hotel Alliance Black (GHA黑卡). Same engine, same category, two intent tiers, two completely different product taxonomies.

| Engine | Budget tier — top picks | Luxury tier — top picks | What changed |
| --- | --- | --- | --- |
| DS travel | Marriott Bonvoy CN (万豪旅享家), Marriott Bonvoy, Air China Phoenix Miles, Air China PhoenixMiles CN (国航凤凰知音) | AmEx Centurion, AmEx Centurion/Platinum | Loyalty programs → credit cards |
| Doubao travel | China Eastern Silver (东方航空银卡), Huazhu Platinum (华住会铂金), China Eastern Miles Silver (东方航空万里行银卡), Huazhu Platinum Member (华住会铂金会员), IHG Rewards Club | oneworld Emerald (寰宇一家绿宝石), Hyatt Globalist (凯悦天地环球客), Marriott Titanium (万豪钛金), United 1K (美联航环球客), GHA Black Card (GHA黑卡) | Mid-tier status → alliance-top status |

The strategic implication is unfamiliar to most brand teams. A brand can be invisible in its own category at a different intent tier. Hilton at ¥6,000/night isn't ranked low — it's not being recommended at all, because at that intent the engine isn't recommending hotel programs anymore. Brand-content investments calibrated to "we are in the luxury hotel program category" miss the engine's reframe entirely.

Ask DeepSeek about luxury travel and it stops talking about hotels.

How use-case wording surfaces specialist tools instead of category leaders

The last signal is the most nuanced, and the most actionable for second-place brands. When the use-case word in the prompt sharpens — "for product design" → "for brand asset management" → "for design-system documentation" — engines substitute specialist tools for category leaders, even at the same price tier and same buying stage.

The clearest example is Qwen on design software. Hold the price tier (team ¥500-2,000/month) and the buying stage (decision) constant; vary only the use case. For "initial product design," Qwen surfaces Figma. For "brand asset management" at the same price tier, Qwen surfaces Frontify — and Figma drops out entirely. Frontify is the specialist tool for that use case, and the engine substitutes the right specialist when the prompt is sharp enough.

| Engine | "Initial product design" | "Brand asset management" | What changed |
| --- | --- | --- | --- |
| DS | Figma | Figma + Canva | Adds Canva for asset organization |
| Qwen | Figma | Frontify (Figma drops) | Specialist replaces category leader |
| Doubao | MasterGo | MasterGo + Jisheji (即时设计) | Adds Jisheji (即时设计) for asset workflows |

In a parallel cell, asking Qwen for tools for "enterprise design system" (versus "enterprise dev handoff" at the same price) brings Notion and Zeroheight into the top picks — both specialist documentation tools that don't surface for the dev-handoff use case. The engine knows which use case calls for which specialist.

The optimistic reading: brands that are not category leaders have a real path to AI visibility — by owning a use-case keyword. Frontify on "brand asset management." Zeroheight on "design system documentation." Penpot on "open-source design tool." The engine respects the use-case framing and surfaces the right specialist when the framing is sharp.

What this means for your China AI visibility strategy

Three takeaways follow from the four-signal pattern.

Awareness brand-tracking metrics overestimate brand value. A brand that wins at "exploring" loses ground at "choosing"; awareness investment that earns top-of-funnel rank doesn't necessarily convert to decision-stage share. The Qwen design example is the cleanest: Canva, MasterGo, and Jisheji (即时设计) sit on the awareness shelf and disappear from the decision shelf entirely. If a brand is measuring AI visibility on un-staged prompts, it's measuring an attribute that doesn't reach the buying moment.

Foreign brands are visible only when budget is unspoken. Once price intent enters the prompt, domestic substitutes systematically replace foreign brands across all three studied categories. Brand audits that test only un-tagged queries report a positive that doesn't survive the budgeted query. For foreign brands serious about Mainland AI visibility, the audit dimension that matters most is the constraint-conditioned one.

Specialist tools beat category leaders on use-case framing. The path for non-flagship brands is not to compete category-wide but to own a specific use-case keyword. The engine respects sharper framings and surfaces the right specialist tool. Frontify, Zeroheight, Penpot — narrow tools that beat Figma in a specific lane.

Where Eastbound comes in

Eastbound runs intent-conditioned brand visibility studies on DeepSeek, Qwen, and Doubao. The engagement starts with a stage-and-constraint diagnostic: we test your brand under default, funnel-stage, budget, price-tier, and use-case prompts, and show you exactly where your visibility breaks.

If your team is sitting on the green-audit / red-engine gap — every checklist passed, recall still missing — run the free China AI visibility audit on your domain or book an intro call.

Methodology

  • Sample. 3,024 calls (66 segmentation prompts × 3 categories × 3 reps × 4 turns × 3 engines, plus a 20% retest 24–48 hours later). Three categories: design / collaboration software, overseas tertiary education for Mainland-Chinese applicants, premium travel and hospitality loyalty.
  • Engines. DeepSeek (deepseek-chat); Qwen-Plus on DashScope international (dashscope-intl.aliyuncs.com); Doubao Seed-2-0-Pro on BytePlus ModelArk international (ark.ap-southeast.bytepluses.com). Temperature 0.7. Coded by DeepSeek-Chat at temperature 0.0 against a fixed codebook.
  • What we measured. Brand mention rate per (prompt × stage × rep) cell; which brands drop and which enter as a single prompt variable changes (stage, budget, price tier, use-case wording). This is descriptive measurement of LLM recommendation behavior; it is not a causal claim about training data, retrieval, or human conversion.
  • What we did not measure. Sales, conversion, attributable revenue. ChatGPT / Claude / Gemini / Perplexity / ERNIE / Yuanbao — not in this panel. Chat-with-Search-ON browsing surfaces — separate study.
  • Reliability. Pooled cross-run top-5 brand-set Jaccard at the engine level — DeepSeek 0.27, Qwen 0.34, Doubao 0.24. Per-rep matched cross-run is much lower (0.12–0.17), which is why every claim above is a panel-level aggregate, not a per-prompt fact. The coding model is DeepSeek-Chat — DS-vs-Qwen and DS-vs-Doubao contrasts may be slightly inflated by coder alignment with the DS surface.
  • Limitations. Persona depth is exploratory only (2 prompts per category × 3 reps); persona findings are triage, not publishable conclusions. Neither DashScope nor BytePlus exposes pinned model-snapshot handles; Run-1 and Run-2 launched within 24–48 hours and the model IDs returned at run-open and run-close were consistent in all six probes, but identical training cuts cannot be guaranteed.
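
The cross-run reliability figures above are plain Jaccard similarity on top-5 brand sets. For reference, the computation (a sketch for readers, not the study's coding pipeline; the example sets are illustrative):

```python
def jaccard(run_a: set[str], run_b: set[str]) -> float:
    """Jaccard similarity: |intersection| / |union| of two brand sets."""
    if not run_a and not run_b:
        return 1.0  # two empty sets treated as identical by convention
    return len(run_a & run_b) / len(run_a | run_b)

# Two top-5 sets sharing two brands: 2 / 8 = 0.25, near the pooled DeepSeek figure
run1 = {"A", "B", "C", "D", "E"}
run2 = {"A", "B", "F", "G", "H"}
# jaccard(run1, run2) == 0.25
```

Note how quickly the metric falls: two top-5 lists sharing even two brands score only 0.25, which is why the per-rep matched figures (0.12–0.17) push every claim to the panel level.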

Per-record JSONs and replication scripts available on request.