AI visibility score
Definition
The AI visibility score is a single composite number — typically on a 0–100 scale — that summarizes a brand's standing across AI assistants (ChatGPT, Claude, Gemini, Perplexity, Grok, Google AI Overviews) by blending mention frequency, citation rate, ranking position, sentiment, and AI-referred traffic. It is the executive-friendly headline metric for Generative Engine Optimization (GEO) programs.
How it works
An AI visibility score blends five weighted sub-metrics into a single 0–100 number:
- Mention frequency (typically 25% weight): how often your brand appears in AI-generated responses across a tracked prompt set.
- Citation rate (25%): how often your content is cited as a source in those responses.
- Ranking position (20%): where your brand or content sits in source lists or inline mentions.
- Sentiment (15%): polarity and intensity of how the model describes your brand.
- AI-referred traffic (15%): real visits clicked through from AI assistant citations.
Each sub-metric is normalized to 0–100 against your tracked prompt set and tracked competitors, then weighted and summed. Weights are configurable — agencies often emphasize mention and citation more heavily; in-house brand teams often weight sentiment higher.
The score is computed per platform and rolled up into a composite. Per-platform scores reveal which AI assistant to prioritize next; the composite is the executive headline.
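The weighting and rollup described above can be sketched in a few lines. This is a minimal illustration under assumptions taken from this article's defaults — the metric names, the weight values, and the simple-average rollup across platforms are illustrative, not a reference implementation.

```python
# Minimal sketch of a composite AI visibility score.
# Assumes each sub-metric has already been normalized to 0-100
# against the tracked prompt set and competitors.

DEFAULT_WEIGHTS = {
    "mentions": 0.25,   # mention frequency
    "citations": 0.25,  # citation rate
    "ranking": 0.20,    # ranking position
    "sentiment": 0.15,  # sentiment (already mapped to 0-100)
    "traffic": 0.15,    # AI-referred traffic
}

def composite_score(sub_metrics: dict[str, float],
                    weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted sum of normalized 0-100 sub-metrics -> one 0-100 composite."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return round(sum(sub_metrics[k] * w for k, w in weights.items()), 1)

def rollup(per_platform: dict[str, dict[str, float]]) -> float:
    """Average per-platform composites into the executive headline number."""
    scores = [composite_score(m) for m in per_platform.values()]
    return round(sum(scores) / len(scores), 1)
```

With sub-metrics of 70 (mentions), 40 (citations), 55 (ranking), 80 (sentiment), and 30 (traffic) under the default weights, the composite works out to 55.0.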
Composite score vs sub-metrics
The composite is for executives. The sub-metrics are for operators.
An AI visibility score of 62 tells the CMO whether the program is on track. It does not tell the SEO team what to fix. For that, the breakout matters: 62 with high mentions and low citations means an on-page extraction gap; 62 with high citations and low ranking means a first-mention prominence gap; 62 with high everything and low traffic means a click-through gap (poor titles, weak intent match).
Reporting only the composite hides the levers. Reporting only the sub-metrics overwhelms executives. A mature program publishes both, with the composite as the headline and a one-line interpretation of which sub-metric drove the change.
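The breakout reading above can be mechanized as a rough triage rule. A hypothetical sketch — the 60/40 high/low thresholds and the gap labels are illustrative assumptions, not part of any standard scoring framework.

```python
def diagnose(sub: dict[str, float]) -> str:
    """Map a sub-metric breakout to the operational lever it points at.

    `sub` holds 0-100 normalized values for mentions, citations,
    ranking, and traffic. Thresholds are illustrative.
    """
    high, low = 60, 40
    if sub["mentions"] >= high and sub["citations"] <= low:
        return "extraction gap: improve on-page structure so answers cite you"
    if sub["citations"] >= high and sub["ranking"] <= low:
        return "prominence gap: push for earlier, first-mention placement"
    if min(sub["mentions"], sub["citations"], sub["ranking"]) >= high and sub["traffic"] <= low:
        return "click-through gap: sharpen titles and intent match"
    return "no single dominant gap: review sub-metrics individually"
```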
- 0–100: standard AI visibility score scale (Indexly framework)
- 5: sub-metrics that compose the score (mentions, citations, ranking, sentiment, AI-referred traffic) (Indexly framework)
- Weekly: recommended computation cadence, reported on rolling 30-day windows (Indexly best practice)
Why it matters
AI visibility score gives boards, executives, and cross-functional partners a single number to track over time without losing the operational nuance underneath. It makes the results of a GEO program legible to people who don't read citation logs.
It also forces consistency. A composite score requires a defined prompt set, a defined platform list, defined sub-metric weights, and a defined refresh cadence. The discipline of computing the score reliably is half the operational value — programs that compute it weekly tend to ship the underlying improvements weekly too.
How to calculate it
Five steps to compute a credible AI visibility score:
1. Lock the prompt set. 100–500 buyer-language prompts that don't change between measurement periods. Adding prompts mid-quarter invalidates trend comparisons.
2. Run prompts on every tracked AI platform on a recurring schedule. Weekly is typical; daily for high-volume brands.
3. Compute each sub-metric per platform. Mentions (count), citations (rate), ranking (mean position), sentiment (−100 to +100, normalized to 0–100), AI-referred traffic (sessions over the period).
4. Apply weights and aggregate. Default weights: mentions 25%, citations 25%, ranking 20%, sentiment 15%, traffic 15%. Adjust deliberately, document the change, and restate prior periods.
5. Report on rolling 30-day windows. Single-day snapshots are too noisy; a 30-day rolling window makes the trend legible without overreacting to short-term variance.
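Two of the steps above — the sentiment normalization in step 3 and the rolling-window reporting in step 5 — can be sketched as follows. The function names and the expanding-then-rolling window handling are illustrative assumptions.

```python
from collections import deque
from statistics import mean

def normalize_sentiment(polarity: float) -> float:
    """Map a -100..+100 sentiment polarity onto the 0-100 score scale."""
    return (polarity + 100) / 2

def rolling_scores(daily_scores: list[float], window: int = 30) -> list[float]:
    """Rolling mean of daily composites over the last `window` days.

    Early entries average over however many days exist so far.
    """
    buf: deque[float] = deque(maxlen=window)
    out = []
    for score in daily_scores:
        buf.append(score)
        out.append(round(mean(buf), 1))
    return out
```

For example, a polarity of +50 normalizes to 75.0 on the 0–100 scale.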
Frequently asked questions
What's a good AI visibility score to target?
Depends on category and tracked prompt set. Category leaders in mature B2B segments often hit 60–80; emerging brands typically start at 10–25. The trend matters more than the absolute number — a 6-month climb from 18 to 32 is a stronger signal than a static 60.
How is AI visibility score different from share of model?
Share of model is one input — the percentage of relevant prompts where your brand appears at all. AI visibility score blends share of model with citation rate, ranking position, sentiment, and AI-referred traffic into a single composite. Share of model is one widget; the visibility score is the dashboard.
Should the score weight every sub-metric equally?
Default weights (25/25/20/15/15) work for most B2B programs. Brands prioritizing demand generation often tilt weight toward AI-referred traffic; brands managing a sensitive narrative often tilt weight toward sentiment. Document weight choices and restate prior periods if weights change.
Can I compare my AI visibility score to a competitor's?
Only if the prompt set and weights match. Two scores computed on different prompt sets are not comparable — one of them is implicitly easier. Most programs compute competitor scores on their own tracked prompt set to preserve apples-to-apples comparability.
How often does the score change?
AI responses are stochastic, so daily score deltas are mostly noise. Weekly computation reported on 30-day rolling windows surfaces genuine shifts. Material score changes (5+ points in 30 days) almost always have a diagnosable cause — a competitor launch, a Wikipedia update, a content refresh, a sentiment swing.
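The materiality rule in this answer reduces to a one-line check. The 5-point threshold is the heuristic stated above, not a fixed standard, and the function name is illustrative.

```python
def material_change(score_30_days_ago: float, score_today: float,
                    threshold: float = 5.0) -> bool:
    """Flag a 30-day score delta large enough to warrant a root-cause look."""
    return abs(score_today - score_30_days_ago) >= threshold
```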
AI search visibility
AI search visibility is the umbrella metric capturing how often, how prominently, and how favorably your brand appears across AI assistants — ChatGPT, Claude, Perplexity, Gemini, Grok, and Google AI Overviews. It bundles mentions, citations, ranking position, sentiment, and AI-referred traffic into the executive-level read of a brand's standing in AI search.
AI share of voice
AI share of voice is your brand's proportion of mentions in AI-generated responses relative to competitors, measured across a defined set of prompts and platforms. It adapts the traditional share of voice metric for AI search — where visibility lives inside chat answers and AI Overviews rather than ranked links or media impressions.
Share of model
Share of model is the percentage of relevant AI-generated answers in which your brand appears, measured across a defined set of prompts and platforms. It is the AI-search equivalent of share of voice and the headline metric for tracking GEO performance.
AI brand mentions
AI brand mentions are the instances of your brand name appearing inside responses generated by AI assistants — ChatGPT, Claude, Gemini, Perplexity, Grok, and Google AI Overviews. Unlike traditional brand monitoring across social and press, AI mentions surface inside the answer a buyer is reading, making them a high-leverage demand signal for Generative Engine Optimization (GEO).
Citation probability
Citation probability is the likelihood that an AI system will cite a specific URL when generating a response to a target prompt. Unlike share of model, which measures brand visibility across a prompt set, citation probability is a per-URL metric — it tells you how strong an individual page is at earning citations.
Sentiment monitoring
Sentiment monitoring is the practice of continuously analyzing the tone AI assistants use when describing your brand — positive, neutral, or negative — across ChatGPT, Claude, Gemini, Perplexity, and Grok. Unlike social-media sentiment, the audience is the AI model itself, and a negative skew can shape how millions of buyers hear your brand described before they ever visit your site.