AI Visibility: The Challenge of Measuring the Unmeasurable

Antoine Frankart · Product Management, Local SEO, and Esports Consultant

In my last article, I discussed how ChatGPT is challenging Google on Local SEO with structured results and maps. It’s clear that AI is becoming a search engine in its own right.

Naturally, the market is reacting. We are seeing the emergence of "Generative Engine Optimization" (GEO) techniques and tools promising to track your rankings inside these AI models.

But before we rush to optimize for these platforms, we need to understand how they actually work. A fascinating piece of research from SparkToro just highlighted a technical reality: unlike traditional search engines, LLMs are not designed to be consistent.

The Stability Problem

In traditional SEO, we are used to stability. If you search for "Best Product Management Consultant in Paris" on Google, the algorithm runs a complex but largely deterministic calculation. Repeat the search five minutes later and the result is usually identical.

LLMs (Large Language Models) like Gemini or ChatGPT function differently: they are probabilistic. Each answer is sampled, word by word, from a probability distribution, so the same prompt can legitimately produce different outputs.

The research shows that when asked for a list of brands or products, AI recommendations can vary wildly:

  • Prompt Sensitivity: A slight variation in phrasing can reshuffle the entire list.
  • Temporal Volatility: The same prompt asked at different times generates different results.
  • Ranking Flux: A brand can be #1 in the morning and unlisted in the afternoon.

This isn't a bug; it's a feature of the technology. These models are designed to generate creative, human-like responses, not to retrieve static database rows.
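
To make this concrete, here is a minimal, self-contained sketch of the mechanism (the brands and scores below are invented for illustration). An LLM assigns a score to each candidate token, then samples from the resulting probability distribution; with any non-zero "temperature", the top candidate wins most of the time, but never every time:

```python
import math
import random
from collections import Counter

# Invented next-token scores (logits) a model might assign when asked
# to name a "best consultant". Higher score = more likely.
logits = {"Brand A": 2.1, "Brand B": 1.8, "Brand C": 0.9}

def sample(logits, temperature=0.8):
    """Softmax sampling with temperature: how LLMs pick the next token."""
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits.keys()), weights=weights)[0]

# Ask the same "question" 1,000 times.
picks = Counter(sample(logits) for _ in range(1000))
print(picks)
# Roughly 52% Brand A, 36% Brand B, 12% Brand C: Brand A is the most
# *probable* answer, not the guaranteed one.
```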

Deterministic vs. Probabilistic: A Product Shift

This distinction is crucial for our strategies.

For the last 20 years, we have optimized for deterministic algorithms (Google). The goal was to reverse-engineer the rules to climb the ladder.

Today, we are facing probabilistic models. Trying to "track" a daily ranking on ChatGPT is like trying to measure the shape of a cloud. It changes constantly.

So, are the new "AI Tracking" tools selling us a dream? Not necessarily, but they are trying to apply an old metric (Rank Tracking) to a new paradigm where it doesn't quite fit.
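
What a probabilistic system can honestly report is a distribution, not a position. A more robust metric is a "citation rate": ask the same question many times and count how often your brand appears at all. Here is a minimal sketch, assuming a hypothetical ask_llm() wrapper around whichever provider's API you use:

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around your LLM provider's API
    (OpenAI, Gemini, etc.) that returns the raw text answer."""
    raise NotImplementedError("plug in your provider's client here")

def citation_rate(prompt: str, brands: list[str], runs: int = 50) -> Counter:
    """Ask the same question `runs` times and count how often each
    brand is mentioned anywhere in the answer."""
    mentions = Counter()
    for _ in range(runs):
        answer = ask_llm(prompt).lower()
        for brand in brands:
            if brand.lower() in answer:
                mentions[brand] += 1
    return mentions

# A brand cited in 45 of 50 runs is genuinely visible; one cited in
# 3 of 50 just got lucky that morning.
# citation_rate("Best PM consultant in Paris?", ["Brand A", "Brand B"])
```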

How to Adapt Your Strategy?

Does this mean we should ignore AI visibility? Absolutely not. But we need to change what we measure.

Instead of obsessing over a volatile "Top 3" ranking, we should focus on the signals that feed the AI's confidence over the long term:

  • Brand Authority: AIs are trained on the web. The more consistently your brand is associated with specific keywords across press, reviews, and directories, the higher the probability of being cited.
  • Information Consistency: As I applied during the travel planner redesign, clear, structured data helps machines understand who you are, reducing the "hallucination" rate (see the sketch after this list).
  • Real User Trust: Reviews and ratings remain a strong signal that cuts through the noise.
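
On the Information Consistency point, the most concrete lever is schema.org structured data. Here is a minimal sketch (every business detail below is invented) that generates the JSON-LD you would embed in a page's <head>:

```python
import json

# Invented example business: replace every field with your real,
# consistent NAP (Name, Address, Phone) data.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Consulting Paris",
    "url": "https://www.example.com",
    "telephone": "+33 1 23 45 67 89",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Rue de l'Exemple",
        "addressLocality": "Paris",
        "postalCode": "75001",
        "addressCountry": "FR",
    },
}

# Embed the output in a <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2, ensure_ascii=False))
```

The markup itself is trivial; the value is in keeping these facts identical everywhere a model might read them: your site, your Google Business Profile, and every directory that cites you.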

Conclusion

We are moving from an era of "hacking the algorithm" to an era of "building brand consensus."

The tools will eventually mature, but for now, take specific "AI Ranking" figures with a grain of salt. The goal isn't to be #1 in a random dice roll; it's to be the obvious answer that the AI naturally selects most of the time.

Product · Local SEO · Esports

Need a consultant to help you?

With 18+ years of experience, I support founders and teams in building high-performance digital projects. Whether you need to structure your product, boost your local SEO, or launch an esports project, let's discuss your needs.