
Most businesses tracking their AI search performance are measuring the wrong thing.
They’re watching their citations climb. Peak dashboards, Profound reports, more and more prompts where their site shows up as a source. Numbers moving in the right direction. Phone still not ringing.
This happens more than you’d think. And the fix isn’t complicated once you understand what’s actually going on.
Being cited and being recommended are two entirely different outcomes. One means your content helped an AI assemble its response. The other means your brand ended up as the answer a human actually sees. Only one of those drives revenue.
What AI Models Actually Do With Your Content
Here’s the technical bit, because it matters.
When someone types a prompt into ChatGPT, Perplexity, or Google’s AI Mode, the model doesn’t just reach into its memory and write a reply. It runs a process called query fan-out: it breaks your prompt into several separate sub-searches, pulls candidate pages from its index, reads through potentially hundreds of sources, and then synthesises a response.
The pages it reads = citations.
The brand it names as the best option = the recommendation.
Those are not the same list.
Your website can appear in the first group consistently without ever making it into the second. If you’re helping the AI understand a topic but not ending up as its suggested solution, you’re essentially writing research material that benefits whichever competitor has better-positioned content for the final step.
Query fan-out in practice:
| What the AI does | What it produces | What matters for revenue |
|---|---|---|
| Runs 3–6 sub-searches from original prompt | Cited sources (what it read) | No direct commercial value |
| Evaluates pages for decision-stage signals | Named recommendations | Drives clicks, leads, bookings |
| Extracts verifiable facts and credibility signals | Quoted claims in response | Only works if brand is explicitly positioned |
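If the fan-out process feels abstract, here's a toy Python sketch of it. Everything in it is an illustrative assumption (the hard-coded sub-searches, the naive keyword matching, the `positions_brand` flag); real systems are far more sophisticated, but the cited-versus-recommended split falls out the same way.

```python
# Toy sketch of the fan-out -> cite -> recommend pipeline described above.
# All data and matching rules are illustrative assumptions, not how any real model works.

def fan_out(prompt: str) -> list[str]:
    # A real model generates sub-searches with an LLM; hard-coded here for illustration.
    return [f"{prompt} comparison", f"{prompt} cost", f"{prompt} reviews"]

def retrieve(index: list[dict], sub_query: str) -> list[dict]:
    # Naive keyword overlap stands in for the model's retrieval step.
    terms = set(sub_query.lower().split())
    return [p for p in index if terms & set(p["text"].lower().split())]

def synthesise(prompt: str, index: list[dict]) -> dict:
    cited = []
    for sq in fan_out(prompt):
        for page in retrieve(index, sq):
            if page not in cited:
                cited.append(page)  # every page read is a citation
    # Only pages that explicitly position a brand as a solution can be recommended.
    recommended = [p["brand"] for p in cited if p.get("positions_brand")]
    return {"citations": [p["url"] for p in cited], "recommendations": recommended}

index = [
    {"url": "a.com/guide", "brand": "A",
     "text": "safari cost comparison guide", "positions_brand": False},
    {"url": "b.com/why-us", "brand": "B",
     "text": "safari cost and reviews why choose B", "positions_brand": True},
]

result = synthesise("tanzania safari", index)
```

Both pages end up cited; only the one that positions its brand gets recommended. That's the gap the rest of this piece is about.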
Citation volume is a vanity metric. Consistent named recommendations for decision-stage prompts are the outcome worth building toward.
The Ceiling on Citation-Chasing (and Why Most Agencies Won’t Tell You This)
Okay, so the standard advice right now is: find the pages being cited for your target prompts, get your brand mentioned on those pages, and you’ll start showing up in AI responses.
Not wrong. But limited.
You reach out to the top 20 articles being cited in your niche. Some respond. Most don’t. Response rates hover around 2–5% in our experience, and the easy placements (Reddit comments, directory listings, industry blogs with a contact page) fill up fast. More importantly, even when you land them, a lot of those placements don’t contain a recommendation. They contain a mention. There’s a difference.
A cited article explaining how to evaluate hospitality operators doesn’t automatically put your lodge in the answer. It means your content helped shape the AI’s understanding of the topic. Someone else’s brand still ended up in the actual recommendation.
For businesses that exist to generate bookings, leads, or sales, that gap is where revenue disappears.

So What Actually Gets You Recommended?
Three things need to line up:
- The AI has to encounter your brand during its fan-out search (not just in training data)
- Your content has to explicitly position you as a solution, not just a relevant source
- The prompt triggering the search has to be decision-stage, not informational
That third one trips people up constantly.
If someone asks “what is a Ngorongoro Crater safari”, that’s an informational prompt. Getting cited there doesn’t move anything commercially. But “best safari operators for families in Tanzania” or “cost of a private Serengeti safari 2025” — those are decision-stage prompts. Someone’s about to book something. The AI is looking for a specific, nameable answer, not a general overview.
Build content targeting those prompts. Lead with your brand. Give the model something concrete to extract and relay. That’s when AI SEO actually starts producing results beyond impressions.
Q: What Does “Something Concrete to Extract” Actually Mean?
Good question. This is the fact density piece.
AI models are drawn to content packed with specific, verifiable information. Numbers. Dates. Awards. Pricing ranges. Operational details. Years in business. Quantified outcomes.
A page that says a safari operator has run expeditions for over 20 years across Serengeti, Ngorongoro, and Tarangire, won Tanzania Tourist Board recognition, and maintains a particular guide-to-guest ratio — that gives the model something it can anchor a recommendation to. It can extract those details and relay them confidently.
A page that says the same company delivers “exceptional, world-class experiences” gives the model nothing. It reads as marketing copy. Either ignored entirely or used as background filler, never as the basis for a named recommendation.
This is genuinely one of the more counterintuitive things about AI search compared to traditional SEO. Specific, factual self-promotion is a technical asset. The models reward specificity. Vague positioning, even beautifully written vague positioning, doesn’t register.
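If you want to sanity-check your own copy, a crude script can surface the difference. To be clear: no model actually scores pages this way, and the patterns below are our own rough assumptions. But it makes the vague-versus-specific contrast measurable.

```python
# Rough, illustrative "fact density" comparison between two versions of copy.
# The pattern list is an assumption, not a real ranking signal.
import re

FACT_PATTERNS = [
    r"\b\d{4}\b",                                    # years
    r"\b\d+(\.\d+)?%",                               # percentages
    r"\$\d[\d,]*",                                   # dollar amounts
    r"\b\d+\+?\s*(years|guests|guides|clients)\b",   # quantified claims
]

def fact_density(text: str) -> float:
    # Verifiable-fact matches per word of copy.
    words = text.split()
    hits = sum(len(re.findall(p, text, re.IGNORECASE)) for p in FACT_PATTERNS)
    return hits / max(len(words), 1)

vague = "We deliver exceptional, world-class safari experiences."
specific = ("Running expeditions since 2003, with 20+ years across Serengeti, "
            "Ngorongoro and Tarangire, and Tanzania Tourist Board recognition in 2022.")
```

The vague version scores zero. The specific version scores well above it, which is exactly the contrast the model responds to.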
Real Example of How the Gap Shows Up
Picture this. A digital marketing agency in Nairobi has solid content. Good topical coverage. Their site gets cited reasonably often when someone asks AI about choosing a marketing agency.
But the AI response says something like: “When choosing a digital marketing agency, consider local market expertise, industry experience, and transparency in reporting.”
The agency’s content contributed to building that answer. Their name is nowhere in it.
Down the road, a competitor with a structured FAQ page, a few concrete case study summaries, and an awards/recognition page gets the response: “For businesses in East Africa, [Competitor] has worked with hospitality and tourism operators across Kenya and Tanzania, with particular strength in organic search and generative engine optimisation.”
Same prompts. Very different outcomes.
Closing that gap is the actual job of a serious AI SEO strategy.

FAQ: Things People Ask About This Before Hiring Us
**Does this mean I should stop caring about citations?** No. Citations still matter, especially for informational prompts that build brand familiarity. But they shouldn’t be your primary success metric if you’re trying to drive commercial outcomes.
**How do I know if I’m getting recommended or just cited?** Tools like Peak AI and Profound track this. They let you run specific prompts across ChatGPT, Google AI Mode, and other platforms and see whether your brand gets named in the response, not just whether your content was in the source list. Different data. Very different implications.
**What content actually triggers recommendations?** Decision-stage content: cost pages, comparison pieces, awards and recognition pages, case studies with specific outcomes, FAQ content that explicitly positions your brand as the answer. Informational blog content helps with citation volume but rarely drives named recommendations on its own.
**Is this different from what worked in traditional SEO?** Yes and no. The underlying content quality principles are similar. What’s changed is the specificity of content needed and the fact that you’re now optimising for what an AI model will extract and repeat, not for what a human skimming your page will respond to. The model doesn’t care about your headline. It cares about the verifiable claim in paragraph three.
What to Do With This
If you’re running AI search tracking already, pull up your prompt data and look at the responses where your brand appears. Count how many times you’re cited as a source versus how many times you’re named as the recommended solution. That ratio tells you almost everything about whether your current strategy is actually working.
If the citation count is high but recommendations are low, you have a content positioning problem, not a visibility problem. You’re in the room. You’re just not getting called on.
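If your tracking tool lets you export prompt-level results, that count is a few lines of code. The record fields below (`cited`, `recommended`) are assumed names for illustration, not any tool’s real export schema.

```python
# Sketch: tally cited vs recommended appearances from exported AI-search tracking data.
# Field names are assumptions; map them to whatever your tool actually exports.

def recommendation_stats(records: list[dict], brand: str) -> dict:
    cited = sum(1 for r in records if brand in r["cited"])
    recommended = sum(1 for r in records if brand in r["recommended"])
    return {
        "cited": cited,
        "recommended": recommended,
        # High citations with a low ratio = positioning problem, not visibility problem.
        "ratio": recommended / cited if cited else 0.0,
    }

records = [
    {"prompt": "best safari operators tanzania",
     "cited": ["YourBrand", "Rival"], "recommended": ["Rival"]},
    {"prompt": "serengeti safari cost 2025",
     "cited": ["YourBrand"], "recommended": []},
    {"prompt": "family safari tanzania",
     "cited": ["YourBrand"], "recommended": ["YourBrand"]},
]

stats = recommendation_stats(records, "YourBrand")
```

Here the brand is cited in all three responses but recommended in only one. That’s the ratio worth tracking month over month.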
The brands building recommendation-focused content right now are building something that will be extremely difficult to displace. As Google’s AI Mode continues evolving and generative search becomes the default experience, whoever owns the recommendation slot for high-intent prompts in their category will have a meaningful structural advantage. That’s not a prediction; it’s already happening.
We build AI SEO strategies for hospitality operators, tourism companies, and service businesses across East Africa — with a specific focus on getting you recommended, not just cited. See how Nairobi Marketing approaches it here.