A new entrant just appeared in auction insights, your CPC jumped 30 percent, and the client wants a competitive teardown before Friday. Most teardowns end up as a list of who else runs ads on the same keywords. This one goes deeper: what competitors actually promise, where their message match breaks, and the gaps you can credibly own.
Every competitor signal falls into one of four buckets by adoption rate. Only the bottom two are actionable, and only after they pass the ownability test.
| Adoption | Classification | What it means |
|---|---|---|
| Over 60% | Table stakes | Have it, cannot win on it |
| 30 to 60% | Emerging standard | First-mover advantage still available |
| Under 30% | Potential differentiator | Investigate, run ownability test |
| 0% | White space | Investigate, run ownability test |
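The thresholds above are mechanical enough to sketch in code. A minimal classifier, assuming adoption rate arrives as a fraction of competitors using the signal (the function name is illustrative, not a tool from this kit):

```python
def classify_signal(adoption_rate: float) -> str:
    """Bucket a competitor signal by the share of competitors using it.

    adoption_rate is a fraction in [0, 1]; cutoffs follow the table above.
    """
    if adoption_rate > 0.60:
        return "table stakes"               # have it, cannot win on it
    if adoption_rate >= 0.30:
        return "emerging standard"          # first-mover advantage still available
    if adoption_rate > 0:
        return "potential differentiator"   # investigate, run ownability test
    return "white space"                    # investigate, run ownability test
```

Note the boundary choices: exactly 60 percent lands in "emerging standard" and exactly 30 percent does too, matching the "30 to 60%" row.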
White space is only valuable if it passes all three gates of the ownability test. Fail any gate and it is empty space, not opportunity.

Next, tier each competitor by auction insights overlap rate:
| Overlap rate | Threat level | Action |
|---|---|---|
| Over 50% | Primary | Full deep analysis (competitor-landscape-analysis, competitor-messaging-analysis) |
| 20 to 50% | Watch list | Monitor via competitor-alert-agent and competitor-auction-insights-export |
| Under 20% | Secondary | Quarterly check-in |
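The overlap-rate tiers translate to a simple lookup. A sketch, assuming overlap rate is expressed as a fraction (the function name and action strings are illustrative summaries of the table above):

```python
def tier_competitor(overlap_rate: float) -> tuple[str, str]:
    """Map an auction insights overlap rate (fraction) to a threat tier and action."""
    if overlap_rate > 0.50:
        return ("primary", "full deep analysis")
    if overlap_rate >= 0.20:
        return ("watch list", "monitor via alerts and auction insights exports")
    return ("secondary", "quarterly check-in")
```

A competitor at exactly 50 percent overlap stays on the watch list; only "over 50%" triggers the full deep analysis.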
Then score each competitor's ad-to-landing-page message match:

| Score | Definition | Implication |
|---|---|---|
| HIGH | Ad promise reflected in landing page hero and CTA | Formidable. Differentiate elsewhere. |
| MEDIUM | Related but imperfect | Vulnerability. Outflank on consistency. |
| LOW | Clear misalignment | Capture their disillusioned shoppers. |
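The message-match rubric can be encoded so reviewers apply it consistently. A sketch, assuming the three inputs are manual judgments made while comparing the ad to the landing page (the function and parameter names are hypothetical):

```python
def message_match(promise_in_hero: bool, promise_in_cta: bool, related: bool) -> str:
    """Score ad-to-landing-page message match per the rubric above.

    promise_in_hero / promise_in_cta: the ad's core promise is restated in the
    landing page hero and in the CTA. related: the page is topically related
    even if the promise is not restated. All three are human judgments.
    """
    if promise_in_hero and promise_in_cta:
        return "HIGH"    # formidable; differentiate elsewhere
    if related:
        return "MEDIUM"  # vulnerability; outflank on consistency
    return "LOW"         # capture their disillusioned shoppers
```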
Most competitive analyses catalog what each competitor says, produce a 30-page table, and never tell the client what to do about it. This framework replaces cataloging with diagnosis: map the relationships between positions instead of the positions themselves, quantify adoption rates so you can tell table stakes from differentiators, and pressure-test every white space opportunity before recommending it. The most valuable finding is never what competitors are doing. It is what they are not doing that buyers care about and that you can credibly own.
Competitive intelligence is ecosystem mapping. When one competitor drops prices, everyone above them suddenly looks more credibly premium. When every competitor claims “award-winning service,” the award becomes table stakes and stops differentiating anyone. White space is only valuable if buyers care and you can deliver, and that has to be tested with three explicit gates, not assumed. Adoption rate thresholds replace subjective observation: above 60 percent adoption is table stakes (have it, cannot win on it), 30-60 percent is an emerging standard with first-mover advantage, below 30 percent is a potential differentiator, and 0 percent is white space. And every finding has to answer “so what does this mean for us” or it is surveillance, not strategy.
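Computing adoption rates from observed messaging is the step that makes the thresholds objective. A minimal sketch, assuming you have recorded each competitor's claims as a set of normalized strings (the data shape and function name are assumptions, not a prescribed format):

```python
from collections import Counter

def adoption_rates(claims_by_competitor: dict[str, set[str]]) -> dict[str, float]:
    """Fraction of competitors making each claim, e.g. 'award-winning service'."""
    n = len(claims_by_competitor)
    counts = Counter(
        claim for claims in claims_by_competitor.values() for claim in claims
    )
    return {claim: count / n for claim, count in counts.items()}
```

If every competitor in the set claims "award-winning service", its rate comes back as 1.0 — past the 60 percent cutoff, so it is table stakes and differentiates no one, exactly as described above.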
Use this before launching a new account, when positioning feels stuck, when CPCs spike without an internal cause, when impression share drops and you need to know if a new entrant explains it, and as the foundation for any ad copy or landing page redesign that needs differentiation.

It calibrates by vertical:

- Ecommerce drops to product-level analysis: titles, price points, Shopping feed quality.
- Lead gen centers on trust signal density and conversion mechanism comparison.
- B2B SaaS spans 20-50 alternatives globally and weights pricing model, integration ecosystem, and content authority.
- Local services is geo-bounded and reputation-driven: reviews, response time, Google Business Profile presence.
- High-value verticals add compliance as a baseline dimension.

It does not apply when no competitors are visible in the auction. That is usually low search demand, not green field. Verify keyword volume and analyze indirect competitors and non-purchase alternatives ("we'll just keep using spreadsheets") because those are often the real competition in B2B.