Ten years ago, a buyer researching a new vendor would open Google, type in a question, and scan ten blue links. The winners were the brands that showed up on page one. The rest were invisible.
Today, the same buyer opens ChatGPT and types the same question. They get one answer, three names, and a paragraph of reasoning. If your brand is one of those three, you get considered. If you're not, you never enter the set.
The shortlist happens before you hear about it
The number we can't stop thinking about: over 400 million people ask AI questions about products and vendors every week. Most buyers now narrow their options in AI before they ever land on a marketing site. By the time a demo request hits your inbox, the shortlist is already written.
That shortlist is sticky. Once a brand is named consistently in a category, it keeps getting named — because the models are leaning on the same patterns they were trained on and the same sources they've learned to trust. Getting in early compounds. Missing it for a year is a year of pipeline you can't easily reclaim.
It's not one model. It's five.
ChatGPT is the one most people talk about, but it isn't the whole story. Claude is excellent at reasoning and gets cited heavily by technical buyers. Perplexity leans on public sources and comparison content. Gemini pulls from Google's index in specific ways. Copilot is increasingly embedded in the tools your buyers already use at work.
Each of them has its own preferences — which sources it trusts, which formats it cites, where it goes when it's unsure. Brands that want to be the answer can't optimize for one and ignore the rest. We scan all five, every week, for every client.
What moves the numbers
There are four things we see move the needle, in roughly this order:
- Content in the formats AI cites most — comparison pages, long-form answers, specific how-tos, help docs.
- Presence on the sources models reach for — Reddit, G2, Capterra, YouTube, industry publications.
- Clean, well-structured metadata and schema so AI crawlers can read your site without guessing.
- Third-party authority — podcasts, expert roundups, citations in the places AI has learned to trust.
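The third item, clean schema, is the most mechanical of the four. One widely supported option is schema.org FAQPage markup embedded as JSON-LD. Here's a minimal sketch of generating it; the vocabulary (`@context`, `FAQPage`, `Question`, `acceptedAnswer`) is real schema.org, but the brand name and Q&A pair are made up for illustration:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

# Hypothetical Q&A pair for illustration only
snippet = faq_jsonld([
    ("What does Acme do?",
     "Acme tracks how often a brand is named in AI answers across five models."),
])
print(snippet)
```

The output goes in a `<script type="application/ld+json">` tag on the page, so a crawler can read your Q&A without parsing your layout.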
Why SEO habits don't fully transfer
A lot of this looks like SEO from a distance, and some of it is. Good content helps both. But the winning formats aren't the same. Google rewards pages built to rank for a keyword; AI rewards content that answers a question cleanly, attributes its claims, and is easy to quote.
Same raw ingredients, different recipe. Teams that try to win AI answers with their existing SEO playbook usually see some lift, then stall. The missing layer is writing for extraction, not for clicks.
What to do this quarter
- Pick 20 questions your best buyers actually type into AI. Not keywords — questions.
- Scan your current mention rate across ChatGPT, Claude, Perplexity, Gemini, and Copilot.
- Ship one comparison page, one long-form answer, and one help-doc answer per month, minimum.
- Track how often you're named every week. Kill anything that isn't moving the number.
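The second and fourth steps come down to one metric: of the answers you collect for your question set, what fraction name you, per model? A minimal sketch of that calculation, with model names and answer texts invented for illustration (a real pipeline would also catch aliases and misspellings):

```python
import re

def mention_rate(answers, brand):
    """
    answers: {model_name: [answer_text, ...]} collected for your question set.
    Returns {model_name: fraction of answers that name the brand},
    using a whole-word, case-insensitive match.
    """
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    rates = {}
    for model, texts in answers.items():
        hits = sum(1 for t in texts if pattern.search(t))
        rates[model] = hits / len(texts) if texts else 0.0
    return rates

# Hypothetical answers pasted from two models for the same three questions
sample = {
    "chatgpt": ["Top picks: Acme, Globex, Initech.",
                "Consider Globex.",
                "Acme is a strong option."],
    "perplexity": ["Globex leads the category.",
                   "Initech and Globex.",
                   "No clear leader."],
}
print(mention_rate(sample, "Acme"))  # chatgpt names Acme in 2 of 3 answers; perplexity in 0
```

Run the same question set weekly and the trend line tells you which content to kill and which to double down on.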
This is exactly the loop we run for our clients. We're happy to walk you through your version of it on a demo call — you get a full audit either way.
See where your brand actually stands in AI answers.
We'll run a full custom audit before the call. You keep the report regardless.