How to Measure Brand Visibility in AI Search

Written by Rod Russell, Managing Partner, ADMATIC | May 7, 2026

Your AI search ranking isn’t fixed. Learn how to measure brand visibility, citation rate, and AI authority the right way.

You check your AI search ranking on Monday morning. Your brand appears third. You check again Tuesday. Fifth. Wednesday, you're not there at all. Thursday, you're back at second.

This isn't a bug. This is how AI works.

AI language models are non-deterministic. Ask ChatGPT the same question twice in a row and you'll often get two different answers. Different brands. Different order. Sometimes you appear, sometimes you don't. There's less than a 1 in 100 chance that ChatGPT will return the same list of brands across 100 identical queries.

So when a marketing manager checks "do we appear when someone asks for the best accountants in Auckland" and the answer is yes, that tells them almost nothing. They might not appear in the next ten responses.

The measurement instinct is right. The method is wrong.

You can't take a snapshot. You have to take an average. One reading gives you noise. Fifty readings over a month starts to give you signal.
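The snapshot-versus-average point can be made concrete with a quick simulation. This is a hypothetical illustration, not real data: assume your brand's "true" appearance probability for a query is 60%, and compare one reading against fifty.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

TRUE_MENTION_RATE = 0.6  # hypothetical underlying probability of appearing

def single_reading():
    """One query: the brand either appears (1) or doesn't (0)."""
    return 1 if random.random() < TRUE_MENTION_RATE else 0

# A single snapshot is all-or-nothing: it reads as 0% or 100% visibility.
print("Monday's snapshot:", single_reading())

# Fifty readings averaged land near the true 60% rate.
readings = [single_reading() for _ in range(50)]
print("Average of 50 readings:", sum(readings) / len(readings))
```

The single reading is pure coin-flip noise; the average is the signal the rest of this framework is built on.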

Traditional ranking positions don't work in AI search because the responses vary every time. A single measurement gives you false certainty. What actually works is tracking three numbers consistently over time.

Mention Rate tracks how frequently your brand appears across multiple AI responses to the same query set. If you run 20 prompts and your brand appears in 14 responses, your mention rate is 70%. This is your baseline visibility number.
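The arithmetic is straightforward. A minimal sketch, using hypothetical brand names and response text:

```python
def mention_rate(responses, brand):
    """Share of AI responses in which the brand appears at all."""
    hits = sum(1 for text in responses if brand.lower() in text.lower())
    return hits / len(responses)

# Hypothetical logged responses to the same prompt
responses = [
    "For small-business accounting, consider Acme Advisory or Beta Books.",
    "Popular options include Beta Books and Gamma Group.",
    "Acme Advisory is frequently recommended for construction firms.",
    "Gamma Group and Delta Partners both handle this well.",
]
print(mention_rate(responses, "Acme Advisory"))  # 2 of 4 responses -> 0.5
```

A simple case-insensitive substring match is enough to start; brands with ambiguous names may need stricter matching.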

Citation Rate measures whether AI attributes information to your sources when mentioning your brand. A mention without a URL builds brand familiarity. A citation drives traffic. They're measuring different things. You track this through GA4 referral traffic from chatgpt.com, perplexity.ai, and gemini.google.com.
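One way to pull the referral side out of GA4 is to classify sources from an exported traffic-acquisition report. The rows below are hypothetical; the referrer domains are the ones named above:

```python
# Hypothetical rows from a GA4 traffic-acquisition export
sessions = [
    {"source": "chatgpt.com", "sessions": 42},
    {"source": "google", "sessions": 910},
    {"source": "perplexity.ai", "sessions": 17},
    {"source": "gemini.google.com", "sessions": 8},
    {"source": "(direct)", "sessions": 300},
]

AI_REFERRERS = {"chatgpt.com", "perplexity.ai", "gemini.google.com"}

# Total sessions arriving from AI answer engines
ai_sessions = sum(r["sessions"] for r in sessions if r["source"] in AI_REFERRERS)
print(f"AI referral sessions: {ai_sessions}")  # 42 + 17 + 8 = 67
```

Tracked monthly, that one number is the downstream half of citation rate: mentions tell you AI knows you, this tells you AI is sending people to you.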

Answer Position tracks where your brand appears within AI responses. First mention converts differently to fifth mention. When AI leads with your brand, it's signalling how the model weights you relative to competitors. First mention tends to correlate with the brands AI considers most authoritative for that specific query.

Pick 15 to 25 prompts that represent the questions your customers are actually typing into AI. Not brand queries like "tell me about [your company]" but category and problem queries. "Best accountant for a small construction business in Wellington." "Who should I use for employment law advice?" These are the moments where you either appear or you don't.

Run those prompts weekly across ChatGPT, Perplexity, Gemini, and Copilot. Log three things for each response: did your brand appear at all, did the response link back to your website, and if you appeared, where in the answer did you show up (first mention, third, fifth).
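If you'd rather script the log than type it into a spreadsheet, a minimal sketch that appends each observation to a CSV (file name, model labels, and prompt text are all hypothetical):

```python
import csv
from datetime import date

FIELDS = ["date", "model", "prompt", "appeared", "cited_url", "position"]

def log_response(path, model, prompt, appeared, cited_url, position):
    """Append one observed AI response to a running CSV log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # brand-new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "model": model,
            "prompt": prompt,
            "appeared": appeared,    # did the brand appear at all?
            "cited_url": cited_url,  # did the response link your website?
            "position": position,    # first, second, fifth mention...
        })

log_response("ai_visibility_log.csv", "ChatGPT",
             "Best accountant for a small construction business in Wellington",
             True, False, 2)
```

The CSV opens directly in any spreadsheet tool, so this and the manual approach described above end up in the same place.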

A simple spreadsheet handles all of this. The value isn't in any single week. It's in watching those numbers move over time. If your mention rate climbs from 40% to 65% over three months, something is working. If citation rate stays flat whilst mention rate grows, AI knows who you are but isn't pointing people to your site. That tells you something specific to act on.
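Rolling the log up into monthly rates is a one-pass aggregation. The rows below are hypothetical, chosen to show the pattern described above: mention rate climbing while citation rate stays flat.

```python
from collections import defaultdict

# Hypothetical log rows: (month, appeared, cited)
log = [
    ("2026-03", True, True),  ("2026-03", False, False), ("2026-03", False, False),
    ("2026-04", True, False), ("2026-04", True, True),   ("2026-04", False, False),
    ("2026-05", True, True),  ("2026-05", True, False),  ("2026-05", True, False),
]

totals = defaultdict(lambda: [0, 0, 0])  # month -> [responses, mentions, citations]
for month, appeared, cited in log:
    totals[month][0] += 1
    totals[month][1] += appeared
    totals[month][2] += cited

for month in sorted(totals):
    n, mentions, citations = totals[month]
    print(f"{month}: mention rate {mentions/n:.0%}, citation rate {citations/n:.0%}")
```

Here mention rate moves 33% to 67% to 100% across three months while citation rate sits at 33% throughout, which is exactly the "AI knows you but isn't pointing people to your site" pattern.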

The gap between mention rate and citation rate usually comes down to where AI is finding information about you versus whether it trusts your own site enough to send people there.

AI systems pull from trade publications, review sites, industry forums, Reddit, third-party directories. If your brand is being mentioned across those sources but your own website content is thin, outdated, or poorly structured, the AI has enough confidence to name you but not enough to cite you. It's saying "I've heard of these people" without being willing to recommend your domain specifically.

The fix is two-pronged.

First, make sure your own site gives AI something worth citing. That means content that directly answers the questions your customers are asking, structured clearly, updated regularly, with genuine factual density rather than marketing language. AI citations tend to go to pages that answer questions, not pages that describe your services in vague terms.

The page AI won't cite says "we're a passionate team of experts delivering innovative solutions tailored to your unique needs." There's nothing in that sentence an AI can extract and use as a fact in an answer.

The page AI will cite has specifics. "We work with New Zealand businesses turning over between two and ten million dollars, primarily in construction and professional services, and our clients typically reduce their compliance time by around a third in the first year." Now there's something there. A sector. A size range. An outcome. A timeframe.

Second, look at whether your third-party presence is doing the heavy lifting your own site should be sharing. If you're appearing because of a Stuff article from 2023 or a listing in an industry directory, that's fragile. You want AI to be drawing from multiple owned and earned sources simultaneously. Your site, recent press coverage, industry mentions, customer reviews. That cross-web consistency is what builds citation confidence over time.

The traditional SEO work that translates well is anything that makes your content clear, factually rich, and widely referenced. Good site structure. Well-organised pages. Content that directly answers questions rather than dances around them. All of that helps AI just as much as it helps Google.

Backlinks are where it gets more nuanced. In traditional SEO, a backlink from a high-authority domain moves your rankings in a fairly direct way. In AI search, an unlinked mention on a trusted platform carries almost as much weight as an actual link. AI isn't crawling the web and counting links the way Google's algorithm does. It's absorbing the broader context of who talks about you, where, and in what terms.

A mention in a Stuff article, a reference in an industry newsletter, a recommendation in a Reddit thread. These all contribute to the cross-web consensus that makes AI confident enough to cite you. The PR and earned media work that many SEO practitioners treat as a nice-to-have is actually central to AI visibility.

Where it diverges most is in the technical optimisation work. Canonical tags. Crawl budget management. Redirect chains. That whole layer of traditional SEO has very little bearing on whether AI cites you. The AI has already processed most of that content. What it's weighing now is authority and relevance, not technical hygiene.

There's no industry-wide definition of what a good mention rate looks like yet. You can't buy a tool that spits out a single number the way Semrush spits out a domain authority score.

Focus on two things instead. Your trend and your competitors.

Your own trend is the most honest benchmark you have right now. If your mention rate was 30% in month one and it's 55% in month three, something is working. The direction and the rate of change tell you more than the number itself at this stage of the market.

The competitor comparison is where it gets more useful. Run the same prompt set and log where your competitors appear alongside you. If you're appearing in 55% of responses and your main competitor is appearing in 80%, that gap is your real benchmark. You're not trying to hit an abstract target. You're trying to close a specific distance from the brand AI currently considers more authoritative in your category.
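Once competitors are logged against the same prompt set, the gap falls out of the same counts. A minimal sketch with hypothetical brands and a 20-prompt set:

```python
# Hypothetical appearance counts across the same 20-prompt set
results = {
    "Your Brand": 11,    # appeared in 11 of 20 responses
    "Competitor A": 16,
    "Competitor B": 9,
}
N_PROMPTS = 20

rates = {brand: hits / N_PROMPTS for brand, hits in results.items()}
leader = max(rates, key=rates.get)
gap = rates[leader] - rates["Your Brand"]
print(f"Leader: {leader} at {rates[leader]:.0%}; gap to close: {gap:.0%}")
```

In this example you're at 55%, the leader is at 80%, and the 25-point gap is the benchmark, rather than any abstract target.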

In a smaller New Zealand market with fewer established players, the competitive threshold for entering the consideration set is lower than in global categories. That's an advantage for local businesses willing to put the work in early.

The single biggest implementation mistake is doing it once and drawing conclusions.

Marketing managers run the prompts, see they appear in 60% of responses, put it in a slide, and move on. Three months later someone asks how AI visibility is tracking and there's nothing to compare it against because the baseline was a one-off.

The whole framework only works as a recurring practice. The non-deterministic nature of AI that makes single readings unreliable doesn't go away once you've decided to start measuring. It means every reading is unreliable. You need the volume and the consistency to average it out into something meaningful.

Fifteen to twenty-five prompts weekly is the minimum that starts to produce a reliable signal. Managers who test with five or six prompts are essentially still taking a snapshot and calling it data.
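The statistics behind that minimum can be sketched with the standard normal-approximation margin of error for a proportion (an approximation, and rough at small n, but it makes the point):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p over n trials."""
    return z * math.sqrt(p * (1 - p) / n)

# How precisely does one week's run pin down a true 60% mention rate?
for n in (5, 20, 25):
    moe = margin_of_error(0.6, n)
    print(f"n={n:2d}: 60% plus or minus {moe:.0%}")
```

With five prompts the uncertainty is roughly plus or minus 43 percentage points, which is why a five-prompt test is still a snapshot; at 20 to 25 prompts it tightens to around 20 points, and averaging weekly runs across a month narrows it further.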

Treat this like any other recurring marketing metric from day one. Build it into a weekly or monthly reporting rhythm before you have anything interesting to report. The discipline of the cadence is what makes the data trustworthy later.

The brands that will have the most defensible AI visibility data in two years are the ones that started logging consistently in 2025 and 2026, when it felt too early.

Traditional SEO rankings were always somewhat transactional. You could outspend a competitor on link building, publish more content, fix your technical issues, and move up the rankings in a matter of months. Rankings shifted, sometimes quickly.

AI visibility is stickier for a more structural reason. The models that power AI search have a training layer and a retrieval layer, and they work on different timescales. The retrieval layer can pick up new content relatively quickly, within weeks in some cases. But the training layer, which shapes how confidently a model associates your brand with a category or topic, absorbs changes over months and years, not days.

Getting that association established early means you're baked into how the model thinks about your category, not just appearing in a recent piece of content it happened to crawl.

There's also a compounding effect. When AI consistently recommends a brand, that brand gets more coverage, more mentions, more third-party validation, which in turn reinforces the cross-web consensus that made AI confident about recommending it in the first place.

Brands that dominated category conversations in their market for years built a mental availability that was genuinely hard for a newcomer to dislodge, even with a significant budget. AI search is building a similar dynamic, just faster and with less transparency about how the associations are being formed.

Three months in, your mention rate is climbing. Your CEO asks "what's the ROI on this?"

The honest answer is that you're measuring a leading indicator, not a lagging one. Brand awareness never had a direct ROI calculation either. Neither did share of voice, or PR coverage, or sponsorship. That didn't make them worthless. It made them harder to justify to people who wanted a revenue number.

AI visibility sits in the same category for now, with one important difference. The measurement is getting more tractable, not less. Citation rate connects directly to GA4 referral traffic, which connects to sessions, which connects to conversions if your attribution is set up properly. That thread exists even if it's not yet clean enough to put in a board deck as a cost-per-acquisition number.

AI search influences the research phase of the customer journey, before someone has identified you as a vendor, before they've visited your site, before they've converted. That influence is real and it compounds over time, but it doesn't show up neatly in a last-click attribution model.

What you can say honestly is this. Your mention rate is climbing, which means more potential customers are encountering your brand during the research phase than three months ago. Your citation rate tells you how many of those encounters are turning into site visits. The brands that built this presence early are the ones that will be hardest to displace when the measurement catches up with the channel.

Frame these metrics as leading indicators sitting alongside the existing dashboard, not replacing it. Your SEO metrics tell you what happened after someone searched. These metrics tell you something about what's happening earlier, during the research phase before the comparison search even begins.

If you're not tracking AI visibility, you're not absent from AI search. You just don't know how you're performing in it. Given that AI search usage is growing and the brands building consideration set presence now are the ones that will be hardest to displace later, choosing not to measure isn't a conservative position.

For more context on how AI search ranking variability works, see Why Your AI Search Ranking Changes Every Time You Check. For definitions of key terms used in this framework, visit the AEO and GEO Glossary for Marketing Managers.