Your brand might be invisible to ChatGPT even if it dominates Google. Most marketers assume AI search works like traditional search, so they never check whether AI systems can actually read their site. The fix starts with a 30-second diagnostic that surfaces the specific reasons AI models skip over your content.
Why AI Search Is a Different Game
AI search is not a ranking problem; it is a readability problem. ChatGPT, Perplexity, and Google AI Overviews do not rank your pages against competitors. They read your site, decide if they understand it, and either cite you or move on.
If an AI system cannot cleanly extract what your brand does, who it serves, and why it matters, it will recommend a competitor instead.
Traditional search gives you a second chance. If you rank on page two, you still exist in the results. AI search does not work that way. When a user asks ChatGPT "what are the best marketing automation platforms?", the model constructs an answer from a handful of sources it trusts. You are either in that answer or you are invisible.
This is the shift most AI marketers are underestimating. According to Semrush's analysis of AI citations across 230,000 prompts, LLMs draw from a surprisingly narrow pool of sources, and the sources they cite are not always the ones that rank highest on Google. A site can hold the top Google position for its category and still never appear in a single AI-generated answer.
The reason is structural. AI models do not read sites the way search engines do. They need content that is machine-readable, clearly classified, and consistently described. Most sites fail on at least one of those three fronts, and marketers never find out because their traffic dashboards do not show the citations they are losing inside ChatGPT.
[IMAGE BRIEF: concept: AI search vs Google search comparison / format: Split layout / title text: Two Discovery Engines, Two Different Rules / key elements: left column "Google" with ranking icon, right column "AI Search" with citation icon, arrows showing different evaluation criteria / accent colour: orange on the AI Search column / takeaway line: Ranking well does not mean being cited / style: NotebookLM minimal infographic]
The AI Visibility Checker: What It Actually Tests
To give marketing teams a concrete starting point, I built a free tool that checks the ten technical factors that determine whether AI systems can read, classify, and cite a website. You enter a URL and get a score out of 100 in under 30 seconds.
You can run it here: victoriaolsina.com/is-your-website-invisible-to-ai
The score is not an SEO audit. It is a machine-readability diagnostic. It tells you whether the raw HTML of your site gives AI models enough to work with, and whether the signals you are sending are consistent enough to produce a confident classification.
The factors it checks fall into four groups: access, structure, content, and signal consistency. Each one maps to a specific reason AI systems hedge or ignore brands. Here is what each group actually means for a marketer.
Group 1: Can AI Access Your Site at All?
Before any AI system can cite your content, it needs permission and direction to find it. Three small files control this, and most sites have at least one of them configured wrong.

robots.txt is the file that tells AI crawlers whether they are allowed to read your site. Most sites have it. Fewer have checked what it actually allows. The common failure is blocking AI-specific crawlers like GPTBot, PerplexityBot, or ClaudeBot while leaving Googlebot open. When that happens, you are visible on Google and invisible everywhere else.
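For illustration, here is what a permissive robots.txt looks like when it explicitly allows the AI crawlers named above alongside Googlebot. The domain is a placeholder; the user-agent tokens are the ones these crawlers publish:

```txt
# Allow Google and the major AI crawlers to read the whole site
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Default rule for everything else
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```

Before editing yours, check what is already there: a single `Disallow: /` under a blanket `User-agent: *`, or under any of the AI-specific tokens, is enough to shut those crawlers out entirely.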
sitemap.xml is the map that tells crawlers which pages exist. A missing or incomplete sitemap forces AI systems to discover your pages by following links, so blog posts, documentation, and product pages can be missed entirely.
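A minimal sitemap is short enough to sanity-check by hand. A sketch with placeholder URLs, following the standard sitemaps.org format:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2026-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/product</loc>
  </url>
  <url>
    <loc>https://www.example.com/blog/ai-search-guide</loc>
  </url>
</urlset>
```

The quick audit: open yours in a browser and check that your most important pages actually appear in it.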
llms.txt is the newest signal on the list. It is a plain text file at the root of your domain that acts as a table of contents for AI crawlers, guiding them to your most important content. Adoption is low right now, which makes it a real differentiation opportunity for brands that implement it first.
llms.txt is the fastest-growing AI visibility signal of 2026 and most brands have not implemented it yet.
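The emerging llms.txt convention is plain markdown: an H1 with the site name, a one-line summary in a blockquote, then linked sections pointing at your key pages. A minimal sketch for a hypothetical brand, with placeholder URLs:

```txt
# Example Brand

> Example Brand is a marketing automation platform for mid-sized B2B teams.

## Product
- [Platform overview](https://www.example.com/product): What the platform does and who it serves
- [Pricing](https://www.example.com/pricing): Plans, limits, and billing

## Resources
- [Blog](https://www.example.com/blog): Guides on AI search and marketing automation
- [Docs](https://www.example.com/docs): Setup and integration documentation
```

Upload it to the root of your domain so it is reachable at yourdomain.com/llms.txt.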
Group 2: Is Your Site Readable Once AI Gets There?
Once an AI crawler accesses your site, it reads the raw HTML. Not what a user sees in a browser. Raw HTML. This is where most modern marketing sites fall apart.
The biggest problem is JavaScript rendering. If your site is built in React, Vue, or Framer and content loads client-side, that content does not exist in the HTML that AI crawlers receive. The page looks perfect to a human visitor. To ChatGPT, it looks empty.
If your content is rendered by JavaScript, it does not exist to AI search systems. They read HTML; they do not execute scripts.

The checker measures this in two ways. The script tag ratio compares how much of your page is scripts versus actual content. A ratio above 50% means your page is mostly code with very little extractable information. The crawlable word count measures how many words actually appear in the raw HTML. Under 300 words on a homepage is usually too little for an AI model to form a confident understanding of what your brand does.
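The checker's exact scoring method is its own, but both measurements are easy to approximate yourself. The following Python sketch, using only the standard library, estimates a script-versus-text ratio and a crawlable word count from raw HTML; treat the numbers as rough diagnostics, not the checker's real algorithm:

```python
from html.parser import HTMLParser

class CrawlStats(HTMLParser):
    """Rough diagnostic: how much of the raw HTML is script code vs. readable text."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.script_chars = 0
        self.text_chars = 0
        self.words = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if self.in_script:
            # Inline script source counts toward the "code" side of the ratio
            self.script_chars += len(data)
        else:
            stripped = data.strip()
            self.text_chars += len(stripped)
            self.words += len(stripped.split())

def diagnose(raw_html: str) -> dict:
    parser = CrawlStats()
    parser.feed(raw_html)
    total = parser.script_chars + parser.text_chars
    ratio = parser.script_chars / total if total else 0.0
    return {"script_ratio": ratio, "crawlable_words": parser.words}

html = ("<html><head><script>var x = 1;</script></head>"
        "<body><h1>Acme CRM</h1><p>Sales automation for small teams.</p></body></html>")
print(diagnose(html))
```

Run it against the raw HTML of your homepage (what `curl` returns, not what the browser renders) and compare the output against the thresholds above: a ratio over 0.5 or fewer than 300 words flags a problem.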
The fix is server-side rendering or static generation for your most important pages. This is a developer task, but it is not a large one, and it is usually the single highest-impact change a marketing team can request.
Group 3: Does Your Site Structure Actually Mean Something?
This is where marketing fundamentals intersect with AI readability. Three factors sit in this group and all of them relate to how clearly your pages communicate what they are about.

Title tags are the first thing an AI system sees when deciding if a page is relevant. A title like "Home | Brand" or "Platform | Brand" communicates nothing. AI models skip pages they cannot classify from the title alone. The rule is simple: every important page title should state the category, the primary function, and ideally the audience, in under 60 characters.
| Weak title | Strong title |
| --- | --- |
| Home \| Gnosis | Gnosis: Decentralised Financial Infrastructure for Web3 Teams |
| Platform \| HubSpot | HubSpot: Marketing, Sales, and Service CRM for Growing Companies |
| Welcome to Stripe | Stripe: Payment Processing and Financial Infrastructure for Businesses |
Meta descriptions function as the page's pitch to AI models. A specific, factual, constraint-aware description significantly increases the chance of accurate citations. A generic tagline does the opposite. Aim for 130 to 155 characters that state what the page covers in plain language.
H1 tag structure is where modern sites frequently break. Every page should have exactly one H1 tag that tells machines what this specific page is about. Multiple H1s create ambiguity, and the model cannot determine which one represents the page's actual topic. The common failure pattern in modern marketing sites is animation frameworks like Framer that split a headline into multiple elements, each tagged as H1. The site might have nine H1 tags on the homepage, each containing a single word. To an AI model reading raw HTML, that looks like nine competing page topics with no coherent meaning.
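To make that failure concrete, here is the broken pattern next to the fix. The `animate-word` class name is hypothetical; how the animation hooks in depends on your framework:

```html
<!-- Broken: the animation framework emits one H1 per word -->
<h1>Decentralised</h1>
<h1>Financial</h1>
<h1>Infrastructure</h1>

<!-- Fixed: one H1, animation applied to spans inside it -->
<h1>
  <span class="animate-word">Decentralised</span>
  <span class="animate-word">Financial</span>
  <span class="animate-word">Infrastructure</span>
</h1>
```

The second version reads as a single coherent topic in raw HTML while still giving the animation something to target.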

Group 4: Are Your Signals Consistent?
The final group is about trust. AI models build confidence in a brand when multiple signals point to the same conclusion. They lose confidence when signals conflict.

Schema markup is the highest-impact factor in this group. Schema is a vocabulary of HTML tags that labels your content for machines. It tells an AI system explicitly: this is an organisation, this is a product, this is an article, this is an FAQ. Without schema, the model infers everything from unstructured text, which leads to misclassification and hedged language in AI-generated answers.
The four non-negotiables for most brands are Organisation schema on the homepage, Product schema on product pages, FAQPage schema on any Q&A content, and Article schema on blog posts. A fifth, Person schema on author pages, significantly increases trust signals in sectors where credibility matters, which is most of them in 2026.
Schema markup converts AI guesswork into confirmed classification. It is the single highest-impact one-time technical fix for AI visibility.
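As a sketch of what Organisation schema looks like in practice, here is a minimal JSON-LD block for a hypothetical brand. Note that schema.org spells the type `Organization`, regardless of how your copy spells it, and the URLs here are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "description": "Marketing automation platform for mid-sized B2B teams.",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://x.com/examplebrand"
  ]
}
</script>
```

The block goes in the head of the homepage. The `sameAs` links to your official profiles help AI models confirm they have identified the right entity.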
Metadata alignment is the last factor and the most overlooked. Your title tag, your H1, and your meta description should all describe the same thing using consistent language. When they tell three different stories, AI models receive conflicting signals and cannot settle on a confident classification. The result is reduced citation frequency, even if each individual element is well-written.
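As a sketch of what alignment looks like for a hypothetical brand, here are the three elements telling one consistent story:

```html
<head>
  <title>Example Brand: Marketing Automation for Mid-Sized B2B Teams</title>
  <meta name="description"
        content="Example Brand automates email campaigns, lead scoring, and CRM
                 sync for mid-sized B2B marketing teams.">
</head>
<body>
  <h1>Marketing Automation for Mid-Sized B2B Teams</h1>
</body>
```

Each element uses the same category and audience language, so a model extracting any one of them reaches the same classification.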
What Your Score Actually Means
The checker returns a score out of 100 along with a list of the specific factors that are failing. Here is how to read the result.
Below 50 means fundamental structural problems that prevent AI systems from reading and classifying your site. These are eligibility failures. The site is not in the game yet. Fix these before investing anywhere else in AI visibility.
Between 50 and 70 means partially readable with significant gaps. AI systems can find some content but will hedge, misdescribe, or underrepresent the brand in generated answers. This is the most common range, and the gap between 55 and 80 is usually three or four focused technical fixes.
Above 70 means the technical foundation is solid. At this point, the work shifts from technical fixes to content, authority, and reinforcement, which is where most brands generate sustained AI visibility growth.
The technical layer is Layer 1 of a four-layer framework I cover in full in Mastering AI Search for Crypto & Web3 Brands. While the book focuses on Web3, the technical fundamentals apply to any brand that needs to be discoverable in AI search. Fixing the technical layer is necessary. It is not sufficient on its own, but nothing above it works without it.
Why This Matters for AI Marketers Specifically
If you are leading AI marketing at a company, you are probably already thinking about how to use AI tools internally. Personalisation, content generation, workflow automation, all of it. The question this post is asking is the mirror image: can AI models see your brand clearly enough to represent you to the audiences they now influence?
That question is becoming harder to avoid. Users increasingly start research inside ChatGPT or Perplexity rather than Google. B2B buyers use AI agents to build vendor shortlists before a human ever visits a website. If your brand is not in the initial shortlist an AI model constructs, you never get the chance to compete on the features and pricing that would otherwise win the deal.
The checker gives you a concrete starting point. Run it, look at the specific factors that are failing, and hand them to your development team with a clear priority order. Most of the fixes are one-time technical changes, not ongoing work. The difference between a score of 45 and a score of 80 is usually a week or two of focused implementation.
What to Do Next
Run the free AI Visibility Checker at victoriaolsina.com/is-your-website-invisible-to-ai and get your score. Look at the specific factors that are failing and prioritise fixes in this order: access, structure, content, signal consistency.
Frequently Asked Questions
What is an AI visibility score and how is it calculated?
An AI visibility score measures how well a website is structured for AI readability. It covers technical factors like crawl access, schema markup, JavaScript rendering, H1 structure, and metadata alignment. Each factor is weighted by its impact on whether AI systems can extract, classify, and cite the site's content. A score of 100 means no structural barriers to AI readability. Most brands score between 40 and 70 on their first run.
An AI visibility score measures machine readability, not search ranking. A site can rank on Google and score under 50 on AI visibility.
Is AI search really replacing Google for my audience?
AI search is not replacing Google entirely, but it is rapidly becoming the dominant starting point for certain types of queries, particularly research, comparison, and recommendation queries. A brand optimised only for Google rankings is leaving AI-driven discovery to competitors, and this matters because AI search is where high-intent research now happens.
AI search is not replacing Google, it is fragmenting discovery across multiple AI-powered interfaces that most brands are not structured for.
Why does JavaScript affect my visibility in ChatGPT?
AI systems do not execute JavaScript. They read raw HTML. If your content is rendered by React, Vue, or Framer components, it does not exist in the HTML that AI crawlers receive, regardless of how the page looks in a browser. Server-side rendering or static generation for key pages solves this by ensuring content appears in the HTML before any scripts run.
JavaScript-rendered content is invisible to AI models. Without server-side rendering, your content does not exist to AI search systems.
What is llms.txt and should my brand have one?
llms.txt is a plain text file at your domain root that guides AI crawlers to your most important content, functioning like a table of contents for AI systems. Adoption is still low, which makes it a meaningful differentiation opportunity right now. The file takes under an hour to create and upload, and most competitors have not implemented it yet.
llms.txt is one of the highest-leverage, lowest-effort AI visibility signals available in 2026.
How many H1 tags should a page have?
Exactly one. The H1 tells machines what a specific page is about. Multiple H1 tags, a common result of modern animation frameworks like Framer, create conflicting signals that AI models cannot resolve. The fix is a single descriptive H1 containing the page's core topic, with animations handled via CSS and span elements rather than repeated heading tags.
One H1 per page, describing what the page is, never a brand tagline, never an animated headline split across multiple tags.
Which schema types should every brand implement?
The four non-negotiables are Organisation schema on the homepage, Product schema on product pages, FAQPage schema on any Q&A content, and Article schema on blog posts. A fifth, Person schema on author pages, adds significant trust signals in sectors where credibility matters, particularly finance, healthcare, and B2B technology. Without schema, AI models infer your category and function from unstructured text, which produces imprecise descriptions in generated answers.
Schema markup turns AI guesswork into confirmed classification and is the single highest-impact one-time technical fix for AI visibility.