LovedByAI
Diagnostic Guide

10 Ways to Test Whether Your Website Is Invisible to AI Search

Run these 10 tests in ChatGPT, Perplexity, and Google AI to find out whether AI tools know your business exists — and what to do when they do not.

12 min read
By Jenny Beasley
Test Your AI Search Visibility

Most business owners discover their AI search problem by accident. They type their business name into ChatGPT, and the answer is wrong, vague, or the business simply does not appear when it should.

The tests in this guide let you find out exactly where you stand before a potential client finds out for you. All 10 tests use live AI tools — ChatGPT, Perplexity, Gemini — that you can open in your browser right now. Eight are conversational prompts you can run in two minutes each. Two are quick technical checks that require no expertise.

Roughly 53% of all web traffic now happens in channels that are increasingly difficult to attribute — a category SparkToro has called "dark traffic" — and AI referrals are one of the fastest-growing parts of that category. The businesses getting that traffic are not necessarily better. They have just given AI tools something to work with.

Here is how to find out what the tools are saying about you.


How to run each test

For each test below:

  1. Open ChatGPT, Perplexity, or Gemini — whichever is listed
  2. Paste the prompt exactly as written, with your actual business details filled in
  3. Read the response and compare to the "passing" and "failing" descriptions
  4. Note which tests you fail — there is a scoring guide at the end

Test 1: Does ChatGPT know your business exists?

Tool: ChatGPT (GPT-4o or better)

Prompt:

"What can you tell me about [your exact business name] in [your city]?"

What passing looks like: ChatGPT returns a coherent paragraph describing your business — your category, what you do, where you are located, and ideally some specific detail (years in business, specialty, a service or product you are known for). It should get the basic facts right.

What failing looks like: The response says it cannot find information about that business, hedges heavily ("I'm not sure if this is the right business"), describes a different business with a similar name, or produces a generic description with no specific details.

What this tells you: AI tools build entity profiles from cross-referenced web data. A business with no entity profile — one that has only a website but little else — will either be missing or described with low confidence. This is the most direct test of whether AI tools have enough data about you to say anything reliable.


Test 2: Can ChatGPT describe what you actually do?

Tool: ChatGPT

Prompt:

"What services does [your business name] offer? What kind of clients do they work with?"

What passing looks like: ChatGPT lists your actual services or products with some specificity, and describes the type of client or situation you serve. If you are a family law attorney, it should mention family law, not just "legal services." If you are a roof replacement contractor, it should say roof replacement, not just "construction."

What failing looks like: The response is generic ("they provide professional services to businesses and individuals"), wrong (lists services you do not offer), or incomplete to the point of being useless. The AI is synthesizing your signal — not reading your website in real time — so a weak response means your service descriptions have not been crawled and indexed with enough specificity.

What this tells you: Generic website copy produces generic AI responses. AI tools cite and summarize specific language — and they tend to cite what appears near the top of a page, not what is buried after three paragraphs of background. If your homepage says "we help businesses grow" but your service pages say "we run Google Ads campaigns for e-commerce stores with a monthly spend between $10k–$50k," the specific version is what gets cited.

Better still: service pages that open with a direct answer ("We handle storm damage roof replacement in [city] — here's what the process looks like") and include explicit Q&A sections ("How long does a roof replacement take? Typically 1–3 days for a standard residential roof") give AI tools citation-ready passages they can pull verbatim. If specific service page content is missing from AI answers, those pages are either too generic, not indexed by AI crawlers, or buried too deep in your site for crawlers to prioritize.


Test 3: Do you appear when someone asks for "[your type of business] in [your city]"?

Tool: Perplexity

Prompt:

"What are the best [your business category] in [your city]? Give me 3–5 specific recommendations."

What passing looks like: Your business name appears in the list, with a description that is accurate and specific. Bonus points if Perplexity cites a source next to your name — that citation icon means the information came from a directory or review platform, which is the best possible signal.

What failing looks like: You do not appear. Your competitor appears instead. Or Perplexity returns an answer but says it cannot provide specific business recommendations in your area.

What this tells you: This is the most direct simulation of a potential client query. If you do not appear here, a real person asking that exact question about your service category is not finding you. Check who does appear — looking at why a competitor appears is often the fastest diagnostic for what is missing in your own profile.


Test 4: Does every AI tool agree on your basic facts?

Tool: ChatGPT, then Perplexity, then Gemini

Prompt (run in all three):

"What is the address and phone number for [your business name] in [your city]?"

What passing looks like: All three tools return the same address and phone number — and both are correct.

What failing looks like: The tools return different phone numbers, different addresses, or one says it cannot find contact information. Any discrepancy between tools is a NAP consistency failure — the tools are drawing from different sources that contradict each other, which erodes their confidence in your business.

What this tells you: NAP inconsistency (name, address, phone number) is the most common foundational GEO problem. It typically happens when a business moves, changes phone numbers, or is listed differently across Google, Yelp, industry directories, and the website. AI tools lower their confidence in businesses they cannot corroborate. Fix the mismatches first.
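Before correcting listings, it helps to compare them systematically — phone numbers in particular are often the same number written in different formats, which is not a real mismatch. This is a minimal sketch of that comparison; the listing values are hypothetical examples, not real data:

```python
import re

def normalize_phone(raw):
    # Strip everything except digits, then keep the last 10
    # (drops a leading country code like +1 for US numbers).
    digits = re.sub(r"\D", "", raw)
    return digits[-10:]

# Hypothetical phone numbers pulled from each listing by hand.
listings = {
    "Google": "(555) 010-0123",
    "Yelp": "555.010.0123",      # same number, different format — fine
    "Website": "+1 555 010 0199" # genuinely different — a real NAP mismatch
}

normalized = {source: normalize_phone(p) for source, p in listings.items()}
reference = normalized["Google"]
mismatches = sorted(s for s, n in normalized.items() if n != reference)
print(mismatches)
```

Here only the website's number is a true inconsistency; the Google and Yelp entries differ in formatting alone. The same normalize-then-compare approach works for addresses (lowercase, expand abbreviations like "St" to "Street") before you start editing listings.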


Test 5: Does Perplexity cite any trusted third-party source for your business?

Tool: Perplexity

Prompt:

"Is [your business name] in [your city] reputable? What do reviews say about them?"

What passing looks like: Perplexity cites at least one external source — Google reviews, Yelp, an industry directory, a news mention — and the citations appear as numbered references. The response summarizes what those sources say about your reputation.

What failing looks like: Perplexity says it cannot find review information, or it describes your business without any citations. No citations means AI tools have nothing to corroborate their description of you with — they are synthesizing from your own website at best, which carries lower weight than independent third-party sources.

What this tells you: AI tools trust third-party citations more than your own website. A business that only exists on its own website is much weaker in AI recommendations than one that is mentioned, reviewed, and described across directories, review platforms, and external articles. Citations from Yelp, Google reviews, industry directories, and press mentions all contribute.


Test 6: Does an industry-specific AI query find you?

Tool: ChatGPT

Prompt (customize for your industry):

For attorneys: "Who is a reliable estate planning attorney in [your city] who handles [specific service]?"
For contractors: "What roofing company in [your city] handles insurance claims for storm damage?"
For healthcare: "Is there a physical therapist in [your city] who specializes in post-surgical recovery?"
For accountants: "Who in [your city] does tax preparation for freelancers and self-employed people?"

What passing looks like: Your business appears by name, with a description that references the specific service mentioned in the prompt. The response mentions your location accurately.

What failing looks like: A competitor appears instead, or ChatGPT says it recommends searching Google or a directory rather than naming a specific business.

What this tells you: General category queries ("best accountant in [city]") are the hardest to win — lots of competition for a broad term. Specific service queries are much more winnable, especially for businesses with service-specific pages. If you fail this test, the fastest fix is creating or rewriting the specific service page that matches the query.


Test 7: Does ChatGPT give a coherent answer about your reputation?

Tool: ChatGPT

Prompt:

"What do people say about working with [your business name]? Are they trustworthy?"

What passing looks like: ChatGPT provides a balanced, specific answer — references what the business is known for, mentions the general sentiment from reviews across platforms, and does not hedge or refuse.

What failing looks like: The response says it cannot evaluate trustworthiness, gives an extremely vague answer ("they have been in business for a while and have some customer reviews"), or describes a different business.

What this tells you: AI tools weight sentiment across review platforms heavily when asked about trust. If your Google reviews are strong but you have nothing on Yelp, Trustpilot, or an industry directory, the AI's picture of your reputation is thinner than it should be. Multi-platform review presence (even a modest one) produces much richer, more confident AI responses.


Test 8: Can Gemini find your business through a conversational referral query?

Tool: Gemini

Prompt:

"A friend recommended [your business name] but I cannot remember what they do. What kind of business is it and what do they help with?"

What passing looks like: Gemini accurately identifies your business category, describes what you do in plain language, and confirms your location. It should feel like a knowledgeable local answering the question.

What failing looks like: Gemini says it does not have information about that specific business, returns very generic information, or asks for clarification. This means no clear entity profile exists for your business in Google's knowledge layer — which also affects how Gemini and Google's AI Overviews handle your brand.

What this tells you: Gemini draws more heavily from Google's entity knowledge layer than other AI tools. Failing this test while passing ChatGPT and Perplexity tests often means your Google-side signals need work — specifically Google Business Profile completeness and your Knowledge Panel visibility.


Test 9: Check whether AI crawlers are allowed on your website

Tool: Your browser (no AI tool needed)

What to do:
  1. Type yourdomain.com/robots.txt into your browser address bar (replace with your actual domain)
  2. Look at what appears

What passing looks like: The file is missing or empty (both are fine — crawlers treat that as full access), it says Allow: / for all bots (or for GPTBot, PerplexityBot, and ClaudeBot specifically), or it simply contains no Disallow: / rule that would block all robots.

What failing looks like: You see Disallow: / for User-agent: *, or you see specific Disallow entries for GPTBot, ClaudeBot, or PerplexityBot. These entries block AI tools from reading your website entirely — all the other work you do on signals is invisible to them.

What this tells you: AI crawler blocking often happens accidentally — a security plugin that added broad bot-blocking rules, or a past developer decision to block all bots during a site redesign that was never reversed. This is the most consequential technical issue on the list, and it is a five-minute fix once you have found it.
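If you want to check the rules programmatically rather than read them by eye, Python's standard library can parse a robots.txt and answer "can this bot fetch this page?" directly. The robots.txt content and URLs below are hypothetical examples for illustration — substitute your own file:

```python
from urllib import robotparser

# Example robots.txt content — the kind of accidental block described above.
sample_robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(sample_robots.splitlines())

# GPTBot is explicitly blocked; other AI crawlers fall through to the * rule.
print(rp.can_fetch("GPTBot", "https://example.com/services"))
print(rp.can_fetch("PerplexityBot", "https://example.com/services"))
```

In this example the first check returns False and the second True: ChatGPT's crawler is locked out while Perplexity's is not, which is exactly the kind of partial block that is easy to miss when skimming the file. To test your live file, replace the sample with `rp.set_url("https://yourdomain.com/robots.txt")` followed by `rp.read()`.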


Test 10: Check whether your website has schema markup

Tool: Schema.org Validator (free, no account required)

What to do:
  1. Go to validator.schema.org
  2. Paste your homepage URL into the URL field and run the test
  3. Look at what schema types it detects

What passing looks like: The validator returns at least a LocalBusiness (or a more specific type like LegalService, MedicalBusiness, HomeAndConstructionBusiness) with your name, address, phone number, and business category. Ideally it also returns FAQPage schema if you have FAQ sections, and Person schema for individual practitioners.

What failing looks like: The validator returns nothing, or returns only a very thin WebPage type with no business information. No schema means AI tools have to infer everything about your business from unstructured text — which they do less reliably than when you tell them directly.

What this tells you: Schema markup is the machine-readable layer of your website. It tells AI tools exactly what your business is, what it does, and where it is located, in a structured format that requires no interpretation. A business with accurate, complete schema markup consistently receives more specific, confident AI descriptions than one without it.
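As a concrete illustration, here is a minimal LocalBusiness structure generated with Python's json module. All the business details are placeholder examples — swap in your real name, phone, and address, and paste the output into a `<script type="application/ld+json">` tag in your page's `<head>`:

```python
import json

# Hypothetical example values — replace every field with your real details.
schema = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",  # or a more specific type, e.g. LegalService
    "name": "Example Roofing Co.",
    "telephone": "+1-555-010-0123",
    "url": "https://example.com",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
}

# This JSON is what validator.schema.org reads and reports back as types.
print(json.dumps(schema, indent=2))
```

Running your page through validator.schema.org after adding a block like this should show LocalBusiness (or your specific subtype) with the name, address, and phone populated — the passing state described above.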


Scoring your results

Count the tests you passed and failed:

AI Visibility Score — what your test results mean and which fixes to prioritize

9–10 passing: Strong AI visibility. Focus on maintaining and deepening your directory and review presence.

7–8 passing: Functional AI visibility with gaps. Fix the failing tests in order of impact (Tests 9, 10 first; then 4, 3, 6).

5–6 passing: Moderate gaps. You are appearing in some AI answers but missing from higher-intent queries. The fixes at this level typically take 3–6 hours of work.

3–4 passing: Significant visibility gaps. You are likely invisible to potential clients researching your service category in AI tools. Prioritize foundational fixes: bot access, schema, NAP consistency, directory presence.

0–2 passing: Your business effectively does not exist in AI search. Start with Test 9 (bot access) and Test 10 (schema), then work through the directory and review signals before revisiting the conversational tests.

What to fix first

The order matters. Technical blockers (Tests 9 and 10) prevent everything else from working. Fix those before spending time on content or directories.

After the technical baseline:

  1. Fix NAP inconsistencies (Test 4 failures) across Google Business Profile, Yelp, and your primary industry directory
  2. Improve directory presence and citations (Test 5 and 6 failures)
  3. Rewrite service pages that are producing generic AI responses (Test 2 failures)
  4. Build review presence across at least two platforms (Test 7 failures)
  5. Check Google Business Profile completeness for Gemini-specific gaps (Test 8 failures)

Most business owners who run through this list find 3–5 failing tests. The good news is that the highest-impact fixes — bot access, schema, NAP accuracy, Avvo/Yelp profile completion — are all achievable without a developer and without an ongoing content budget.


If you want a complete picture before you start fixing, the LovedByAI GEO checker runs an automated scan across these signals — bot access, schema completeness, NAP consistency across directories — and returns a prioritized fix list specific to your site.

Jenny Beasley

Jenny Beasley is Head of GEO at LovedByAI. With 7+ years as SEO Director at Salesforce and 3 years pioneering LLM optimization, she developed the GEO framework delivering a 200% median increase in AI citations within 60 days.

Frequently asked questions

How do I find out whether AI tools can see my business?

Run the prompts in this guide. Start with Test 1 and Test 3 — if ChatGPT can describe your business accurately and include you in a 'best in [city]' query, your AI visibility is functional. If the answers are vague, wrong, or you are absent, you have specific gaps to fix.

How common are AI visibility problems for small businesses?

Very common, but not inevitable. SparkToro estimates that 53% of web traffic now happens in channels that analytics tools cannot measure — 'dark traffic' that includes AI referrals. Most small businesses have significant AI visibility gaps simply because no one has checked. The businesses that appear consistently are usually not better businesses — they have just given AI tools more to work with.

Do I need technical skills to run these tests?

No. The first eight tests are conversational — you paste a prompt into ChatGPT or Perplexity and read the response. Tests 9 and 10 require opening your website's robots.txt file and pasting a URL into a free validation tool. Neither requires any technical knowledge.

What should I fix first if I fail several tests?

Most business owners find 3–5 failing tests. Prioritize Tests 9 and 10 first if they fail — bot access and schema are foundational. Then work through conversation tests with poor results: NAP accuracy (Test 4), Avvo/directory citation (Test 6), and service detail depth (Test 2) are the highest-leverage fixes after the technical baseline.

How often should I rerun these tests?

After you make fixes, wait 4–6 weeks and run the full battery again — AI tools typically take that long to recrawl and update their knowledge base. Run the full battery every 3–6 months as part of routine business maintenance.

Ready to optimize your site for AI search?

Discover how AI engines see your website and get actionable recommendations to improve your visibility.

Free · Instant results

Check GEO Score