
The State of AEO and GEO in 2026

A field report on Answer Engine Optimization and Generative Engine Optimization in 2026: a literature review of the AEO/GEO discourse, ForIntel data across 15 verticals, and the gap between vendor consensus and what independent data actually supports.

By ForIntel · Published 2026-04-26 · 43 min read

Abstract

Answer Engine Optimization and Generative Engine Optimization — AEO and GEO — have moved in roughly twenty-four months from an academic research paper to a saturated vendor category with a dominant reference case study, a four-acronym taxonomy dispute, and a widening gap between what the thought-leadership content asserts and what independent data supports. This report surveys the state of that landscape as of April 2026.

The analysis proceeds in three parts. Part 1 is a literature review of what the current AEO/GEO discourse is actually saying, built on a corpus of 111 YouTube transcripts captured from 176 videos across 125 channels surfaced by eight seed searches, and triangulated with public web content including the HubSpot April 2026 case study that has become the category's default benchmark. Part 2 presents findings from ForIntel research conducted in April 2026 across 15 commercially distinct verticals. Part 3 synthesizes the findings into a strategic framework usable by practitioners, and names the measurement practices a serious operator should adopt.

Four results shape the report's argument. First, buyer-side AEO and GEO search terminology is essentially absent from US Google search as of April 2026: five of seven tested AEO/GEO/LLM-visibility search stubs returned no detectable keyword completions, indicating the discourse is vendor-led and consultant-led, not buyer-led. Second, tool-vendor displacement of vertical benchmark SERPs is heterogeneous rather than universal: 12 of 15 tested vertical queries had zero tool vendors in the top 10 organic results. Third, AI Overview prevalence is near-total on informational queries but the information payload of these Overviews is frequently empty, creating the optical impression of competitive saturation without the substance. Fourth, the most widely cited "AEO lever" — domain authority and branded mentions — is supported by independent data at very large effect size, corroborated by Ahrefs' August 2025 study of 75,000 brands, but the causal pathway remains ambiguous: both signals may be proxies for the same upstream variable rather than independent levers.

The report's central conclusion is directional rather than triumphant. The shift toward answer-engine visibility is real. The consensus playbook is partially correct. The measurement infrastructure for verifying either is still twelve to eighteen months immature. Independent research, not vendor-produced case studies, is the appropriate evidentiary standard for strategic decisions in this space. This report is offered in that spirit.


Key Statistics at a Glance

  • YouTube corpus: 176 videos surfaced across 8 AEO/GEO/SEO-disruption search terms, 111 transcripts captured, 125 unique channels, 348,144 total words analyzed
  • Discourse convergence: 0.272 mean cosine similarity across the corpus — interpretable as "shared vocabulary, not shared conclusions"
  • Dominant discourse cluster: the "ai + content + search" topic appeared in 38 of 111 documents (34%)
  • Buyer-side AEO search terminology: 5 of 7 tested AEO/GEO/LLM-visibility search stubs returned no detectable keyword completions in US Google search (April 2026)
  • Tool-vendor displacement: 12 of 15 tested vertical benchmark SERPs showed zero tool-vendor domains in the top 10 organic positions
  • AI Overview prevalence on informational queries: 55% on one 20-query sample (11 of 20), 53% on a 15-query sample (8 of 15), and 87.5–100% on a small verification sample of consultant help-intent queries
  • LLM citation predictor (independent research): domain rating shows a large effect size (Cohen's d = 1.12) as a correlate of LLM citation, while raw referring-domain count shows a negligible effect (d = 0.09) — the contrast suggests the AEO signal is brand authority embedded in training, not the link economy itself
  • Ahrefs independent corroboration (Aug 2025): branded mentions show the strongest correlation with AI Overview visibility across a 75,000-brand sample, with a follow-up December 2025 study extending the analysis to ChatGPT and AI Mode
  • HubSpot's reported results (April 2026 case study): 1,850% lift in qualified leads, 3x conversion vs traditional search, 27% year-over-year organic decline, 92% of industry-solutions pages cited by answer engines
  • Survey counter-signal: 79.8% of Americans preferred traditional search over AI search in February 2025; the same study's August 2025 follow-up wave shows the figure fell to 66.9%, daily AI tool usage doubled, and ChatGPT's share of general searches tripled (HigherVisibility, two-wave 2025 study)
  • Vendor category maturity: named entrants XFunnel (HubSpot's partner), Profound, Conductor, Semrush AI Visibility Toolkit, and AthenaHQ; no peer-reviewed methodology validation is available in the sources reviewed for this report

Methodology

The report combines three evidence streams.

Literature scan. A YouTube research scan on April 20, 2026, pulled 176 videos across eight seed search terms: "Traditional SEO is dead," "AEO," "GEO," "LLMs," "Answer Engine Optimization," "Generative Engine Optimization," "Search Engine optimization is dead," and "SEO is dead." Transcripts were successfully captured for 111 of the 176 videos (63%), drawn from 125 unique channels. The captured corpus totals 348,144 words. Analysis included LDA topic modeling, frequent-phrase extraction, cosine-similarity convergence measurement, and temporal lexical analysis. Individual videos were then reviewed in the original transcript for consensus claims, quantitative assertions, and counter-voices. Public web content — the HubSpot April 2026 AEO product page and Spring 2026 Spotlight launch announcement, the Aggarwal et al. paper on GEO (arXiv:2311.09735; Princeton University, IIT Delhi, and independent researchers) as the earliest formal academic framing, the Ahrefs August 2025 75,000-brand study, and representative posts from Conductor, Semrush, Frase, Enrich Labs, and CXL — was triangulated with the video corpus.
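For readers who want to reproduce the convergence measurement on a corpus of their own, a minimal sketch of the metric is below: TF-IDF vectorization followed by mean pairwise cosine similarity. The preprocessing choices shown (English stop-word removal, minimum document frequency) are illustrative assumptions, not the exact ForIntel pipeline, and the LDA and phrase-extraction steps are omitted.

```python
# Minimal sketch: mean pairwise cosine similarity across a transcript corpus.
# Preprocessing parameters are assumptions; this shows the metric, not the full pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(transcripts: list[str]) -> float:
    vectors = TfidfVectorizer(stop_words="english", min_df=2).fit_transform(transcripts)
    sims = cosine_similarity(vectors)               # n x n similarity matrix
    upper = sims[np.triu_indices_from(sims, k=1)]   # unique document pairs only
    return float(upper.mean())

# A value near 0.27, as in this corpus, reads as shared vocabulary
# without strong document-level agreement.
```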

ForIntel research evidence. A research program conducted during April 18–19, 2026 produced 42 queries across 15 commercially distinct verticals, drawing on search-demand, SERP composition, AI Overview presence, and related intelligence surfaces. Each reported finding carries its sample size and its counter-signals.

Prior LLM citation research. Referenced in Part 2, Finding E: research conducted earlier in 2026 testing 720 query-model probes across four leading answer engines (ChatGPT, Claude, Gemini, Perplexity) — 60 queries × 4 models × 3 repeats — of which 199 produced threshold-passing citations to seven distinct cited domains. Probes were issued via APIs covering LLM response capture, SERP analysis, and backlink summary data. Domain-level effect sizes for citation correlates were computed against a non-cited comparison pool of 458 domains drawn from parallel SERP research across the same 60 queries. Effect sizes use standard Cohen's d classification and are reported only where the underlying comparison passes accepted statistical-validity thresholds for descriptive (Tier 2) analysis. Logistic-regression-based inference was not attempted — the cited-domain count is below the events-per-variable threshold for that level of analysis.
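The effect sizes cited throughout this report are standard Cohen's d values computed with a pooled standard deviation. A minimal sketch of that computation follows; the arrays are hypothetical illustrations, not the study's domain-level data.

```python
# Minimal sketch: Cohen's d with pooled standard deviation, as used for the
# descriptive (Tier 2) domain-level comparisons. Input values are illustrative only.
import numpy as np

def cohens_d(cited: np.ndarray, non_cited: np.ndarray) -> float:
    n1, n2 = len(cited), len(non_cited)
    pooled_var = ((n1 - 1) * cited.var(ddof=1) +
                  (n2 - 1) * non_cited.var(ddof=1)) / (n1 + n2 - 2)
    return float((cited.mean() - non_cited.mean()) / np.sqrt(pooled_var))

# Hypothetical domain-rating values for 7 cited domains vs a 458-domain comparison pool:
cited_dr = np.array([78, 85, 91, 72, 88, 69, 81], dtype=float)
non_cited_dr = np.random.default_rng(0).normal(55, 18, 458).clip(0, 100)
print(round(cohens_d(cited_dr, non_cited_dr), 2))
# With only 7 cited domains the result is descriptive, not an inferential test.
```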

All findings are tied to their source evidence. Effect sizes use Cohen's d classification. Findings that are directional rather than inferentially validated are labeled as such. Counter-signals, alternative interpretations, and data gaps accompany every finding with equal structural weight.


Part 1 — The AEO/GEO Discourse: What the Literature Actually Says


1.1 Taxonomic Inventory

The field's first characteristic is its acronym proliferation. The YouTube corpus includes videos titled "SEO vs AIO vs GEO vs AEO — What's the Real Difference? (Terms Explained)" and "A Complete Guide to AI SEO in 2026 (AEO, GEO, LLMO)." The abundance of terms is itself a signal: the discipline does not yet have settled vocabulary.

The taxonomy as it is currently used across the 111-transcript corpus sorts into three operational definitions, plus two less-settled ones.

AEO (Answer Engine Optimization) is used to mean optimizing content so it is cited inside the answers produced by ChatGPT, Claude, Gemini, Perplexity, Google's AI Overview, and related surfaces. The unit of success is being named — shown as a citation in the generated answer — not being clicked through to.

GEO (Generative Engine Optimization) is used in two incompatible ways. The first meaning, originating in the Aggarwal et al. (2024) paper and preserved by Ahrefs ("The New SEO Playbook for AI Search: Top GEO Ranking Factors," November 2025), treats GEO as a specific subset of techniques for improving visibility in generative engines — effectively a technical research methodology. The second meaning, used by Exposure Ninja, Semrush, Hostinger Academy, Vendasta, SMA Marketing, and most of the mid-discourse YouTube content, treats GEO as effectively synonymous with AEO, or with a broader "optimize for the generative search era" umbrella. The first use is narrower and older; the second is broader and dominant in 2025–2026 marketing discourse.

LLMO (Large Language Model Optimization) is used by a minority cluster in the corpus — including "A Complete Guide to AI SEO in 2026 (AEO, GEO, LLMO)" — to refer to direct optimization for inclusion in LLM training and retrieval-augmented-generation contexts. It is the least-used term of the four.

AIO (AI Optimization) appears occasionally as an umbrella term covering all the above.

SEO continues to refer to click-earning organic search visibility on traditional ranked-results pages, and is invoked throughout the corpus both in contrast to AEO/GEO and in synthesis with them.

A useful framing that emerges from the most convergent documents in the corpus — the HubSpot AEO Playbook talk (November 2025, 81,499 views), Lenny's Podcast interview with Ethan Smith of Graphite (September 2025, 154,505 views), Marketing Against the Grain's "SEO vs. AEO: The New Rules" episode (December 2025, 20,756 views), and the Surfer Academy "How to Dominate AI Search Results in 2026" tutorial (May 2025, 121,815 views) — is that SEO, AEO, and GEO are increasingly distinguished not by the content practices they prescribe, which overlap substantially, but by the measurement surface they optimize toward. SEO measures clicks from ranked results. AEO measures citations in generated answers. GEO, in its broader marketing use, measures both plus share-of-voice across AI surfaces.

This is the taxonomy the rest of the report uses. It is not an assertion about which definition is correct. It is a description of the most workable consensus available as of April 2026.

1.2 Points of Consensus

The corpus shows moderate vocabulary convergence — mean cosine similarity of 0.272 across the 111 documents — with a cluster of documents in the 0.38–0.41 range that define what might be called the core AEO/GEO consensus. Those documents are "SEO vs. AEO: The New Rules for Winning on Google and AI Models" (Marketing Against the Grain), "How Answer Engine Optimization (AEO) Works + AEO Playbook with Bernard Huang" (Leveling Up with Eric Siu), "AEO for Agencies: How to Price, Sell & Deliver AI Visibility Audits and AEO Services," "SEO vs AIO vs GEO vs AEO — What's the Real Difference?" and "Generative Engine Optimization (GEO) Explained Like You're 5" (Vendasta).

Across these and the broader corpus, five points of consensus emerge with high frequency.

Consensus 1 — Traditional ranked-results visibility is no longer sufficient. Every reviewed source except a small skeptical cluster agrees on this point. The specific metric cited by Neil Patel in "The New Rules of SEO (2026)" (July 2025) — that Google handles roughly 13.7 billion searches per day but accounts for only 27% of total search activity across the internet — is representative of the framing. The contested question is not whether the shift is real but how fast it is moving and what to do about it.

Consensus 2 — Citation, not click-through, is the new first-order metric for AEO. Ethan Smith, interviewed by Lenny Rachitsky, makes the point crisply: in a ranked-results world the winner is the site that ranks first; in an answer-engine world, "the LM is summarizing many citations and so you need to get mentioned as many times as possible." HubSpot's Asia Forest reports similar framing, and the language propagates through Conductor, Semrush, and the agency-vertical content.

Consensus 3 — The content practices that serve AEO heavily overlap with the content practices that serve traditional SEO. Matt Canyon of Surfer Academy states this directly: "ranking in AI search pretty much just boils down to doing good SEO with a few important nuances that can make it easier for tools like ChatGPT to index and present your website." The convergent-document cluster endorses versions of this position. The nuances most commonly cited are FAQ schema markup, machine-parseable heading structure, fact-density in the first 100–200 words, explicit answer formatting, and claim attribution with citations.

Consensus 4 — Third-party and community mentions matter more than they did under pure SEO. The Ahrefs August 2025 study of 75,000 brands is the most widely cited empirical source for this claim: branded mentions across credible sites show the strongest correlation with AI Overview visibility (Spearman 0.664), stronger than backlinks (0.218), referring domains, or domain rating. A follow-up Ahrefs study in December 2025 extended the analysis across ChatGPT, AI Mode, and AI Overviews and reported the same directional result, with YouTube mentions emerging as an even stronger correlate (~0.737). HubSpot's "three-pillar AEO strategy" cites community engagement and Reddit/forum mentions as one of the three core pillars, and the Graphite/Webflow-style case studies in Lenny's Podcast interview emphasize the same point.

Consensus 5 — LLM-referred traffic, when it exists, tends to convert at higher rates than traditional search traffic. HubSpot reports 3x conversion versus traditional search from AI visitors. Ethan Smith reports Webflow seeing a 6x conversion rate difference between LLM traffic and Google search traffic. Conductor's 2026 Benchmarks Report states the same directional result at lower magnitude. This is cited as the economic justification for AEO investment, even when LLM-referred traffic volumes remain small.

These five points describe the center of gravity in the AEO/GEO discourse as of April 2026. They are not universally held — Part 1.3 treats the disagreements — but they are the consensus that a practitioner reading the space will encounter as default framing.

1.3 Points of Divergence

Beneath the consensus surface, the corpus disagrees on four material questions.

Divergence 1 — Is traditional SEO "dead"? The "SEO is dead" framing is common — the corpus was seeded with it precisely because the phrase is a prominent search term — but the actual position of the most-watched and most-convergent videos is more measured. Neil Patel's 259,854-view "The New Rules of SEO (2026)" argues that Google-ranking-centric strategy is obsolete but that search is expanding, not dying. Surfer Academy's 121,815-view tutorial argues "traditional SEO is not dead, but in fact is more important than ever" because the same fundamentals now feed AEO surfaces. The HubSpot talk argues that Google remains the dominant traffic source today but projects that ChatGPT will overtake it "by 2028." CXL and a subset of SEO practitioners argue the "SEO is dead" framing is perennial clickbait that recurs every few years with a new acronym. The honest literature-review conclusion: the phrase is a rhetorical frame, not an analytical consensus. The underlying question — at what rate does attention migrate from ranked results to answer engines, and at what point does the economics invert — remains genuinely open.

Divergence 2 — Does schema markup matter for AEO? A substantial subset of the consensus playbook recommends FAQ schema, Article schema, and Organization schema as core AEO practices. Google's own documentation, however, explicitly states that no special schema is required for AI Overview inclusion, and that the same structured-data practices that serve traditional search are sufficient. Ahrefs' Xarumei experiment, discussed in the corpus, found that fabricated Reddit and Medium narratives about a made-up brand outperformed schema-rich commercial pages in generating AI Overview mentions. The honest read: schema markup is recommended by many and rigorously measured by few. It is not harmful; its claimed causal role is not well supported by independent evidence.

Divergence 3 — Does LLM referral traffic matter in absolute terms? Conductor's 2026 Benchmarks Report measures AI referral traffic at roughly 1.08% of total web visits, growing at approximately 1% per month. Similarweb's 2026 GenAI Brand Visibility Index (published March 3, 2026, and reported via EMARKETER) finds that major publishers including Reuters and The Guardian receive less than 1% of referral traffic from AI platforms despite being frequently cited. HubSpot's own head of content Aja Frost has publicly cautioned operators not to chase LLM referral traffic. Against this, the HubSpot case study and the Ethan Smith / Webflow case cite sharp conversion-rate premia on the small absolute traffic volumes. The honest synthesis: citation and referral are separate currencies, and the category's most common analytical mistake is conflating them.

Divergence 4 — Is this a buyer-led shift? Here the discourse is least evidenced. The consensus voices imply that buyers — purchasers of SaaS tools, services, and products — are rapidly moving their evaluation behavior to AI surfaces. HubSpot cites an internal January 2026 statistic that 42% of CRM software buyers now use AI search as part of their evaluation process. Ethan Smith repeats versions of this framing. But these statistics come from the vendors who stand to benefit from the framing being true, and independent survey data tells a more mixed and rapidly evolving story: the HigherVisibility 2025 search behavior study found 79.8% of Americans preferring traditional search engines for general information in its February 2025 wave; the August 2025 follow-up wave of the same study reported the figure had fallen to 66.9%, with daily AI tool usage more than doubling (14% to 29.2%) and ChatGPT's share of general information searches tripling (4.1% to 12.5%) over those six months. The honest summary is that buyers are shifting, the shift is accelerating, and "buyers have already shifted" still overstates the picture even if "buyers haven't shifted" is no longer defensible either.

Zero-click behavior is also more nuanced than headline figures suggest: standard US Google zero-click sits around 58–65% (SparkToro 2024 clickstream study), rises to 80–83% on queries triggering AI Overviews (Semrush AI Overviews Study, December 2025), and reaches approximately 93% specifically within Google's dedicated AI Mode conversational interface (Semrush, September 2025). The 93% figure is often quoted without that context. The claim that buyers have already shifted may be directionally correct but is not empirically settled.

1.4 The HubSpot Case Study, Examined

Because it is the most prominent single reference in the 2026 AEO landscape, the HubSpot April 2026 case study deserves direct treatment. It is cited across the convergent-document cluster, anchors the HubSpot AEO product page ($50/mo standalone, launched April 2026), and is the source for several of the most quoted statistics in the corpus.

The headline claims, as published by HubSpot in April 2026 and reiterated in the "How to Show Up in ChatGPT" talk by Asia Forest at Grow Europe (November 2025):

  • 1,850% increase in qualified leads attributed to AEO strategy
  • 42% of CRM software buyers now use AI search as part of their evaluation (HubSpot internal, January 2026)
  • 58% of marketers say visitors referred by AI tools convert at higher rates than traditional organic
  • 92% of industry-solutions pages cited by answer engines
  • 642% increase in citations for software-comparison posts
  • 49% lift in AI visibility from industry-solutions content
  • Organic traffic for HubSpot customers down 27% year over year
  • AI visitors convert at 3x the rate of traditional search visitors
  • XFunnel is the measurement partner used to produce these figures

Each of these is plausible. None has been verified by a third party. The 1,850% qualified-lead figure, in particular, is the kind of percentage-of-small-base claim that obscures more than it reveals: a move from 100 leads to 1,950 leads is a 1,850% increase, as is a move from 10,000 to 195,000. The case study does not disclose which.

More importantly, every measurement in the list is an attribution claim. Attributing a lead, a conversion, or a citation to "AEO strategy" requires a model that separates the AEO contribution from everything else HubSpot did during the measurement window — including product launches, paid media investment, partnership marketing, and brand-compounding effects that predate the AEO push. The case study does not expose that attribution model. The measurement tool, XFunnel, is HubSpot's partner; no peer-reviewed methodology validation is available in the sources reviewed for this report.

The honest treatment of the HubSpot case study is therefore: it is the category's dominant reference point, its directional claims are probably correct, its specific headline figures should be treated as vendor-produced internal attribution numbers rather than independently verified empirical findings, and any practitioner using them as a benchmark for their own investment decisions is implicitly trusting HubSpot's attribution methodology without being able to audit it.

This is not a unique failing of HubSpot. It is the common condition of marketing case studies. The correction is not to dismiss the study but to calibrate confidence in its specifics against the strength of the evidentiary standard — which is marketing, not research.

1.5 What the Literature Leaves Unverified

Five claims recur throughout the discourse without rigorous empirical backing.

First, the claim that "citation share" or "share of voice" in answer engines is a stable, measurable quantity. The category's named vendors — XFunnel, Profound, Conductor, Semrush AI Visibility Toolkit, and AthenaHQ — have defined the terms as product features. None of these vendors publishes peer-reviewed methodology validation in the sources reviewed for this report. "Citation share" measured by one vendor and "citation share" measured by another may differ materially in sampling, query selection, and model version — and no standard exists for reconciliation.

Second, the claim that AEO investment produces repeatable return. The handful of published case studies — HubSpot, Webflow, several agency self-reports — are vendor-produced and cover vendors with exceptional existing brand equity and content depth. Whether a mid-market vertical operator with a $50,000 content budget sees the same returns is unknown.

Third, the claim that a given tactic — schema markup, FAQ structure, Reddit seeding, press mentions — produces a measurable visibility lift. The tactics have plausible mechanisms, and the Ahrefs Xarumei experiment provides preliminary evidence that brand mentions and third-party placements affect AI Overview inclusion. But causal inference in the AEO space is rare. Most tactics are recommended on the basis of pattern-matching to observed correlates, not controlled experiments.

Fourth, the claim that 2026 is an inflection year. The HubSpot talk projects ChatGPT overtaking Google for demand-gen traffic by 2028. The Ethan Smith / Graphite framing implies the inflection is immediate. Neither projection is grounded in the public data available as of April 2026; both are directional extrapolations of early-category growth curves.

Fifth, the implicit claim that the vendor category itself is trustworthy. The tools that measure AEO performance are sold by companies that benefit commercially when operators conclude AEO matters. This is not an accusation; it is a structural condition of the category's current maturity. The honest practitioner posture is to triangulate vendor-produced measurement against independent sources — SERP-level data, web-analytics, first-party conversion data — rather than to treat a single vendor's visibility dashboard as ground truth.


Part 2 — Proprietary Evidence: What the Data Shows

Part 1 describes the discourse. Part 2 reports what ForIntel's research has found across six completed query runs, April 18–19, 2026, and what that evidence does to the consensus claims in Part 1.

Every finding below is cited to its source evidence and includes the sample size. Findings are accompanied by their counter-signals, alternative interpretations, and data gaps, each with the same structural weight as the finding itself.

2.1 How the Evidence Was Produced (High-Level)

The ForIntel research program uses a multi-stage analysis process: query composition, data collection across search-demand, SERP, citation, and related intelligence surfaces, followed by structurally separate interpretation and independent review before any finding is synthesized. The independent review is produced without visibility into the primary interpretation, and the synthesis step preserves counter-signals and alternative interpretations with equal weight to the findings themselves. Sample sizes, data gaps, and the limits of each finding are reported alongside the finding itself rather than segregated into endnotes.

The methodology's design goal is structural resistance to confirmation bias — research output that surfaces what disconfirms a favored hypothesis as readily as what confirms it. This report's treatment of its own findings reflects that discipline. Readers who want to stress-test any finding will find the counter-signals, alternative interpretations, and data gaps for each result immediately below its primary statement.

2.2 Finding A — Buyer-Side AEO/GEO Terminology Is Essentially Null

Source: ForIntel discovery research, April 18, 2026. Seven buyer-side AEO/GEO/LLM-visibility search stubs were tested against US Google search-completion data. Five of seven stubs returned no detectable keyword completions.

In an April 2026 discovery run testing buyer-side search vocabulary for AEO/GEO/LLM-visibility concepts, five of seven tested stubs returned zero keyword suggestions from Google's US search-data surface. The failing stubs were: "AI Overview visibility for," "LLM citation for," "ChatGPT visibility for," "AEO benchmarks for," and "AI search ranking for."

This is the report's single strongest finding because it is a null result that contradicts the implicit claim of buyer-led demand. The consensus discourse in Part 1 presupposes an audience of buyers actively searching for AEO and GEO services, tools, and guidance. The keyword-suggestions endpoint, which enumerates the actual search completions Google observes in its US surface, returns nothing for these phrasings. If buyers were searching in these terms at any meaningful volume, the endpoint would return them.
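The discovery run used ForIntel's own search-data surface. A rough public approximation of the same probe can be made against Google's unofficial autocomplete endpoint; the URL, parameters, and response handling below are assumptions about that public endpoint, not the instrument behind Finding A, and results will not match ForIntel's data exactly.

```python
# Rough public approximation of the Finding A stub probe using Google's
# unofficial autocomplete endpoint (illustrative; not the ForIntel data surface).
import requests

STUBS = [
    "AI Overview visibility for",
    "LLM citation for",
    "ChatGPT visibility for",
    "AEO benchmarks for",
    "AI search ranking for",
]

def completions(stub: str) -> list[str]:
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": stub, "hl": "en"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[1]  # response shape: [query, [suggestions]]

for stub in STUBS:
    print(f"{stub!r}: {len(completions(stub))} completions")
```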

Counter-signal (high severity). The null result does not prove buyers aren't searching. It proves buyers aren't searching in this vocabulary. Alternative phrasings — "get cited by ChatGPT," "show up in AI answers," "rank in AI Overview" — may capture the same intent with different language and were not tested in this run. Absence of evidence is not evidence of absence. The independent review of this finding explicitly noted that treating null-completion results as evidence of absent demand likely overestimates market readiness for sophisticated AI SEO terminology.

Alternative interpretation. The discourse itself may be ahead of buyer vocabulary. Practitioners and vendors adopt new terminology faster than search behavior migrates — an established pattern in every SEO sub-category. The absence of buyer-side AEO terminology in April 2026 may reflect lag rather than absence of interest.

Data gap. We did not test colloquial or plain-language phrasings of the same intent. A follow-up testing "how do I get mentioned by ChatGPT," "why isn't my site showing up in AI search," and similar natural-language queries would distinguish "buyer vocabulary lag" from "buyer demand absence."

What the finding supports, given the caveats. The AEO/GEO vendor discourse in April 2026 is running ahead of buyer-side search demand in US Google search. Operators planning content strategies around keyword volume should expect near-zero direct traffic from AEO/GEO-labeled queries in the current window. Operators planning brand-mention and citation strategies should treat the category as pre-mainstream rather than established.

2.3 Finding B — Tool-Vendor Displacement Is Heterogeneous, Not Universal

Sources: ForIntel SERP composition research, April 18, 2026. Fifteen neutral vertical-benchmark queries across five verticals were tested for tool-vendor presence in the top 10 organic results; 12 of 15 queries returned zero tool-vendor domains. A separate run across five sophisticated B2B verticals (sample size 110) found tool-vendor displacement present in Software/QA but absent in Healthcare, Finance, Higher Education, and Nonprofit. A third run across five local-service verticals found that "SEO benchmarks" queries were measurably more vendor-crowded than "marketing statistics" queries at equivalent intent.

A pervasive background assumption in the SEO discourse is that major tool vendors — Semrush, Ahrefs, Moz, HubSpot, Hootsuite, Buffer — have come to dominate the organic top-10 for vertical-adjacent commercial queries. The vertical-content playbook is commonly framed as a response: because tool vendors own the top of the funnel, vertical operators must go elsewhere.

The data does not support this as a uniform pattern. Across 15 tested vertical benchmark queries spanning five verticals, 12 of 15 returned zero tool-vendor domains in the top 10 organic results. Across a separate run of five sophisticated B2B verticals, the displacement pattern varied: Software/QA showed measurable tool-vendor presence (three or more tool vendors in the top 10), but Healthcare, Finance, Higher Education, and Nonprofit did not. Across a third run of five local-service verticals, "SEO benchmarks" queries were more vendor-crowded than "marketing statistics" queries — a sub-query distinction that substantially changes the competitive posture.

The generalized "tool vendors have saturated everything" framing is directionally wrong. Displacement is real in specific vertical × query-pattern combinations but absent in most others.

Counter-signal (medium severity). The displacement measurement uses a threshold-based classification — three or more tool-vendor domains in the top 10 counts as displaced. This is a blunt instrument. A vertical with one tool-vendor domain ranking #1, #2, or #3 may be more effectively displaced than a vertical with four tool-vendor domains ranking #7–#10. Ranking position matters; the current classification collapses position into a count.
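Both the count-based rule used in the run and a position-weighted alternative that addresses this counter-signal are simple to express. The sketch below is illustrative: the vendor-domain list is incomplete, and the reciprocal-rank weighting is one reasonable choice among several, not the classification ForIntel used.

```python
# Sketch of two displacement classifications for a top-10 organic result set.
# Vendor list and weighting are illustrative assumptions; the ForIntel run used
# the simple count-based rule (3 or more vendor domains in the top 10).
TOOL_VENDORS = {"semrush.com", "ahrefs.com", "moz.com", "hubspot.com",
                "hootsuite.com", "buffer.com"}

def count_displaced(top10_domains: list[str], threshold: int = 3) -> bool:
    return sum(d in TOOL_VENDORS for d in top10_domains) >= threshold

def weighted_displacement(top10_domains: list[str]) -> float:
    # Reciprocal-rank weighting: one vendor at #1 contributes about 0.34,
    # while four vendors at #7-#10 contribute about 0.16 combined.
    weights = [1 / rank for rank in range(1, 11)]
    vendor_weight = sum(w for w, d in zip(weights, top10_domains) if d in TOOL_VENDORS)
    return vendor_weight / sum(weights)
```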

Alternative interpretation. Tool vendors may be moving toward content formats that the current benchmark queries don't capture. The measurement surface is "vendor displacement on specific vertical benchmark queries," not "vendor visibility in the vertical category broadly." A vendor could be absent from benchmark queries while owning the top of the funnel on educational queries in the same vertical. Adjacent evidence across the same research window indicates that vertical-specific B2B software and service vendors are increasingly using guide-format content as top-of-funnel anchors — a form of displacement that benchmark-specific measurements underweight.

Data gap. Historical SERP trending is not in the current dataset. Whether tool-vendor presence in these verticals is stable, rising, or falling cannot be determined from the April 2026 snapshot alone.

What the finding supports. The "tool vendors own everything" framing, commonly used to justify consulting services and "how to compete with the big tools" content, is largely an artifact of the categories where it is most frequently observed. In most verticals, in most query patterns tested, the top 10 is not tool-vendor-dominated. The strategic implication is the inverse of the consensus framing: there is room to rank in most verticals, and the barrier to entry is lower than tool-vendor-dominance narratives suggest.

2.4 Finding C — AI Overview Prevalence Is High, Payload Is Often Empty

Sources: ForIntel SERP composition research, April 18–19, 2026. Two informational-query runs (samples of 20 and 15 respectively) plus a two-query sanity check on franchise-adjacent SERPs (samples of 23 and 13 top-10 position inspections).

AI Overviews are now a default presence on informational and commercial queries in US Google search. Across the informational samples, 11 of 20 queries (55%) and 8 of 15 queries (53%) showed an AI Overview at the top of the page. On consultant help-intent queries in a prior verification sample, AI Overview prevalence reached 87.5–100%.

A less-discussed finding emerged from the franchise-adjacent sanity check. On "multi-location seo," the AI Overview returned a full, substantive answer — the kind of summary that displaces clicks to the top-ranking organic result. On "franchise seo agency," the AI Overview slot was present at position 1 but its content was opaque or empty. This is not a single-case oddity. Across the wider research window, AI Overviews on commercially adjacent queries return thin, partial, or inconsistent content at measurably higher rates than on informational queries.

The optical impression — AI Overview everywhere — and the competitive implication — AI Overview takes the traffic — diverge. On informational queries, the Overview is substantive and does displace clicks. On commercial queries adjacent to purchase intent, the Overview is often a vestigial UI element that returns no useful content, effectively ceding the top of the page back to organic ranked results.
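A minimal sketch of the three-way classification this finding rests on (Overview absent, present but effectively empty, present with substantive payload) is below. The SERP record structure and the 40-word substantive-payload cutoff are illustrative assumptions, not ForIntel's exact coding rules.

```python
# Sketch of the three-way AI Overview payload classification behind Finding C.
# The serp dict schema and the 40-word threshold are illustrative assumptions.
from collections import Counter

def classify_overview(serp: dict) -> str:
    overview = serp.get("ai_overview")   # assumed None when no Overview slot renders
    if overview is None:
        return "absent"
    text = (overview.get("text") or "").strip()
    return "substantive" if len(text.split()) >= 40 else "present_empty"

def prevalence(serps: list[dict]) -> Counter:
    return Counter(classify_overview(s) for s in serps)
```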

Counter-signal (medium severity). This report's sample is a single window in time across US Google search. Google's AI Overview rendering is known to be dynamic: the same query may produce different Overview payloads across minutes, user sessions, and regional variations. The "empty payload on commercial queries" finding is a pattern across the sample but not a stable law. A sample taken two months later may show the pattern reversed.

Alternative interpretation. Empty AI Overviews may be a transitional state: Google is rendering the Overview shell while its retrieval-augmented-generation system has not yet learned to populate commercial queries confidently. As RAG methods improve, the empty slots may fill. The current advantage to organic rankers on commercial queries may be temporary.

Data gap. We do not have query-level click-through-rate data segmented by AI-Overview-present-with-content vs AI-Overview-present-empty. Without CTR data, "AI Overview displaces clicks" is a reasonable inference on informational queries with substantive Overviews, and a weaker inference on commercial queries with thin Overviews.

What the finding supports. The "optimize for the AI Overview" practice, widely recommended across the corpus, is less universally applicable than the practice suggests. On informational queries, the AI Overview is a first-order visibility surface and appearing in its citations is a legitimate goal. On commercial queries adjacent to purchase intent, the Overview is often effectively absent even when present, and optimization for classical SERP ranking retains first-order importance. The implication: AEO investment allocation should be query-type-aware rather than applied uniformly.

2.5 Finding D — Search Volume for Vertical Marketing Intelligence Keywords Is Largely Null

Sources: ForIntel keyword-demand research, April 18–19, 2026. Three runs testing vertical-marketing-intelligence keyword patterns returned very high null rates: 11 of 15 (73%), 11 of 13 (85%), and 16 of 17 (94%) of exact-match keywords returned null search-volume data.

Tested keywords of the form "{vertical} marketing statistics," "{vertical} competitor analysis," "{vertical} SEO audit," and similar vertical-marketing-intelligence phrasings return null search-volume data from Google Ads for the overwhelming majority of cases. Across the three runs, between 73% and 94% of tested keywords returned null.

Two interpretations are possible. The null result may indicate genuinely low demand — buyers in these verticals are not searching for vertical-specific marketing intelligence in these phrasings. Or the null result may be a Google Ads data artifact: for low-volume queries, Google Ads returns null rather than estimated volumes, and null values can include any query below roughly 10 monthly searches.

In practice, both are partly true. The vertical-marketing-intelligence category has real but small demand, and the category's demand is split across many specific vertical × intent combinations, each too low to generate reliable volume signal.

Counter-signal (high severity). As the independent review of this finding explicitly noted, when the overwhelming majority of tested keywords return null volume data, the resulting "weak signal" conclusion is itself statistically fragile: absence of evidence is not evidence of absence. The null-return pattern reflects a measurement-instrument behavior (Google Ads returns null rather than an estimate for low-volume queries), not a demand measurement.

Alternative interpretation. Buyers in these verticals may search using different vocabulary — plain-language questions rather than the tightly-structured commercial phrasings this research tested. The same pattern identified in Finding A (the AEO/GEO vocabulary absence) may apply more broadly to vertical marketing intelligence.

Data gap. Conversion data for the low-volume keywords that did return data is not in the current dataset. Whether low search volume correlates with high conversion (specialized intent, narrow audience) or low conversion (noise, poor fit) cannot be determined from the research alone.

What the finding supports. Buyers in most verticals are not searching for marketing intelligence in the structured commercial keywords that vendors use to describe the category. The implication for content strategy is to lead with problem-language rather than product-language. The implication for demand measurement is to triangulate Google Ads volumes with other evidence — community signal, practitioner surveys, qualitative inspection — rather than treating them as the primary demand signal in this category.

2.6 Finding E — Domain Authority and Branded Mentions: Contested Proxies

Sources: Prior ForIntel research on LLM citation, completed earlier in 2026. The research tested 720 query-model probes across four leading answer engines (ChatGPT, Claude, Gemini, and Perplexity) — 60 queries × 4 models × 3 repeats — of which 199 produced threshold-passing citations to seven distinct cited domains. Domain-level effect sizes for citation correlates were computed against a non-cited comparison pool of 458 domains drawn from parallel SERP-derived research. Triangulated with Ahrefs' August 2025 75,000-brand study and its December 2025 follow-up extending the analysis to ChatGPT and AI Mode.

Our prior research identified domain rating as the strongest measurable correlate of LLM citation across the comparison, with a large effect size (Cohen's d = 1.12). Raw referring-domain count, by contrast, showed a negligible effect (d = 0.09). The contrast is itself the more informative finding: the AEO signal that separates cited from non-cited domains is concentrated in domain authority as a composite signal, not in the underlying link-economy variable that traditional SEO operators most directly control.

Ahrefs' August 2025 study — drawn from a much larger sample of 75,000 brands — reports that branded mentions across credible sites show the strongest correlation with AI Overview visibility (0.664), stronger than backlinks (0.218), referring domains, and domain rating. The December 2025 follow-up extended the analysis to ChatGPT and AI Mode and found YouTube mentions to be an even stronger correlate (~0.737) across all three AI surfaces.

The two findings are not contradictory. They are likely views of the same underlying phenomenon. Domain rating measures link authority; branded mentions measure named-entity frequency across the web; both are proxies for "this brand is well-known and widely referenced," which may be the true upstream driver of both organic ranking and AI citation. The measurement difference matters, but the strategic implication is similar: brand compounding, built over time through press coverage, third-party writeups, community mentions, and backlinks from authoritative sources, is the most durable AEO and GEO investment available to a commercial operator.

Counter-signal (high severity). Correlation is not causation. Large effect sizes describe how strongly domain rating co-occurs with LLM citation; they do not prove domain rating causes citation. The signal may be a proxy for a latent variable — brand recognition already embedded in the LLM's training data — and the practical implication for an operator may be closer to "the LLM already knows your brand from its training" than to "building domain rating will cause future citations."

Counter-signal (high severity, sample-structural). The cited-domain pool in our research consists of seven distinct domains. The Cohen's d figures are computed at the domain level against a non-cited comparison pool of 458 domains. Seven is a small enough cited count that no inferential analysis (logistic regression, hypothesis testing) is appropriate — the events-per-variable threshold required for that level of analysis is not met. The d = 1.12 figure is a descriptive effect size, not a tested predictor. A reader who treats it as a predictor strength misreads the analysis.

Alternative interpretation. The Ahrefs Xarumei experiment, widely discussed in the corpus, found that fabricated Reddit and Medium content about a made-up brand produced AI Overview mentions, while schema-rich commercial pages did not. If the experiment generalizes, then the causal driver is not domain rating itself but the presence of brand-name tokens in specific content surfaces the LLMs sample during retrieval — a pathway that rewards new brands more readily than domain-rating-based analyses suggest.

Data gap. Neither our research nor the Ahrefs study includes longitudinal data — whether domain rating improvements cause citation improvements in a measurable window. Without temporal data, the causal direction remains underdetermined.

What the finding supports. Domain authority and branded mentions matter for LLM citation; they are the strongest observed correlates across two independently produced studies. They are probably not directly controllable levers in the short term — building domain rating and brand mention density both take many months — but they are the signals most worth optimizing for in medium-to-long-term AEO strategy. The practitioner who cannot influence these signals in the current quarter should treat content-structure levers (FAQ schema, heading hierarchy, comprehensive coverage) as useful but second-order.

2.7 What the Evidence Does Not Show

The research reported above establishes certain patterns with reasonable confidence. It also leaves substantial gaps. The honest summary of what this report's proprietary evidence does not cover:

  • Geographic generalization. All reported evidence is drawn from US-market Google search. Patterns in the UK, Canada, Australia, non-English markets, and region-specific search systems are not tested.
  • Longitudinal trending. Most findings are drawn from April 2026 snapshots. Whether AI Overview prevalence, tool-vendor displacement, or keyword demand is rising, falling, or stable requires repeated measurement across time.
  • Conversion metrics. The research measures demand signal, SERP composition, and citation patterns; it does not measure what any of these convert into for operators. The relationship between AEO visibility and revenue-relevant conversion is assumed in the literature and not confirmed in our data.
  • Cross-platform coverage. The LLM citation research covered ChatGPT, Claude, Gemini, and Perplexity. It did not cover You.com, Copilot, Grok, or smaller AI-search products. Citation dynamics may differ materially across platforms not tested, and relative weights of the correlates found for the four tested platforms may not transfer to others.
  • Query representativity. Research queries were constructed to probe specific hypotheses. The coverage is deep on the tested questions but not exhaustive. Important patterns in the category may not be surfaced because the queries that would reveal them were not run.

These limitations do not invalidate the findings. They bound what the findings support. A reader making strategic decisions on the basis of this report should treat the findings as directionally supported within those bounds and invest in triangulating them against first-party evidence in their own context before committing substantial resources.


Part 3 — A Strategic Framework

Parts 1 and 2 describe the landscape. Part 3 is the synthesis: given what the literature says and what the data shows, how should a practitioner think about AEO, GEO, and SEO in 2026, and what measurement practices best survive the category's current immaturity?

The framework is organized around three distinctions.

3.1 Three Disciplines, Not One

SEO, AEO, and GEO are often used as nested terms — SEO is the umbrella, AEO and GEO are the new frontiers. The consensus discourse sometimes treats them as effectively interchangeable. The proprietary evidence and the rigorous literature both suggest a more useful framing: three measurably distinct disciplines that share content practices but optimize toward different surfaces.

SEO is the click-earning discipline. It optimizes content so that it ranks in ranked organic results, earns clicks, and produces traffic that can be measured, attributed, and converted. The unit of success is position and click-through rate. The economic model is sessions-to-conversion. The infrastructure is mature: ranking is measurable, attribution is well-understood (within limits), and the category has four decades of accumulated methodology.

AEO is the answer-extraction discipline. It optimizes content so that it is cited inside answers produced by ChatGPT, Claude, Gemini, Perplexity, Google AI Overview, and equivalent surfaces. The unit of success is named citation, not click-through. The economic model is brand compounding through citation — a practitioner whose brand is named in 1,000 AI answers per day accrues brand value even if only a small fraction of those citations converts to clicks. The infrastructure is new: citation measurement is vendor-produced, methodology is unstandardized, and the attribution of citation to downstream revenue is underdeveloped.

GEO is the synthesis-and-recommendation discipline. In its narrower, research-origin sense, it refers to specific techniques for improving visibility in generative engines as identified in the Aggarwal et al. academic work. In its broader, marketing sense, it refers to entity-level recommendation optimization — the discipline of being the brand that an AI assistant recommends when asked an open question about a category. The unit of success is share-of-recommendation across the AI surfaces. The economic model is closer to category leadership than to direct-response marketing. The infrastructure is even newer than AEO's; almost no published peer-reviewed work measures GEO outcomes.

The three overlap substantially in their content practices — all three benefit from comprehensive coverage, structured formatting, citations, and authoritative sourcing — but they do not overlap in what they measure or what they optimize toward. A practitioner who attempts to "do all three" without distinguishing them will default to measuring whichever surface is easiest (traditional SEO position) while claiming progress on the others.

A useful diagnostic: for each content asset you produce, ask which of the three disciplines it is primarily optimizing for. Assets optimized for SEO will be built around target keywords and measured by rank and click-through. Assets optimized for AEO will be built around question-patterns and measured by citation presence across LLM surfaces. Assets optimized for GEO will be built around entity-level positioning and measured by recommendation frequency in open-ended AI conversation. The same asset can serve more than one, but intent should be explicit.

3.2 Three Currencies, Not One

The AEO/GEO discourse conflates three distinct measurement currencies that behave independently of one another. The confusion is the category's most common analytical error and the most expensive.

Citation currency. Named mention in AI-produced answers. Measured by querying the LLM surfaces repeatedly across a defined query set and counting brand name appearances. Produced by: brand authority, content presence in sources the LLM samples, third-party mention frequency. Does not directly produce traffic. Accrues value through brand-compounding mechanisms over time.

Referral currency. Clicks from AI-produced answers to the origin site. Measured by web-analytics-traffic segmentation — identifying sessions whose referrer is an AI surface. Produced by: citation with a link, high-relevance match between the AI answer's context and the destination page, user willingness to click through rather than accept the AI answer. Produces traffic. Is empirically small in absolute terms (~1% of total traffic in 2026, per Conductor).

Revenue currency. Conversion, retention, and revenue attributable to AI-origin visitors. Measured by attribution models that connect AI-referred traffic to conversion events. Produced by: referral currency (the visitors who actually arrive) combined with offer-market fit, conversion-rate optimization, and the compositional quality of the AI-referred audience. Per HubSpot and Webflow case studies, may convert at 3x–6x the rate of traditional-search traffic — though these multiplier figures are vendor-reported and not independently validated.
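Of the three, referral currency is the one a first-party analytics stack can segment directly today. A minimal sketch of referrer-based segmentation follows; the hostname list is illustrative and deliberately incomplete, surfaces change hostnames over time, and some AI surfaces send no referrer at all, so this approach undercounts.

```python
# Sketch of AI-referrer segmentation for the referral-currency view.
# Hostname list is an illustrative assumption and will drift; treat as a floor.
from urllib.parse import urlparse

AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def ai_surface(referrer_url: str) -> str | None:
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRERS.get(host)

def segment_sessions(sessions: list[dict]) -> dict[str, int]:
    counts: dict[str, int] = {}
    for s in sessions:                              # each session assumed to carry a "referrer" field
        surface = ai_surface(s.get("referrer", ""))
        if surface:
            counts[surface] = counts.get(surface, 0) + 1
    return counts
```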

The three are independent in an important sense. A brand can accrue heavy citation currency (ubiquitous citation) while producing minimal referral currency (users consume the answer without clicking through). A brand can produce meaningful referral currency while producing no revenue currency (AI-referred visitors bounce because the destination page does not match AI-search context). A brand can produce revenue currency on a tiny referral currency base because the small traffic volume converts at rates that justify the disproportionate investment. Each combination is observed in the corpus.

The operational implication is that serious AEO measurement requires tracking all three independently, reporting them separately, and resisting the pull toward a single composite visibility score. Vendor dashboards increasingly offer composite scores. Composite scores collapse three underlying independent currencies into one number, which hides whether a given campaign moved the currency that matters for the operator's economics.

A practical framing for an operator's dashboard:

  • Citation currency dashboard: queries tracked, citation presence by surface (ChatGPT, Claude, Gemini, Perplexity, AI Overview), share-of-voice by surface, trend over time.
  • Referral currency dashboard: AI-segment traffic by surface, landing page distribution, time-on-page and bounce rate.
  • Revenue currency dashboard: conversion rates on AI-referred sessions, revenue per visitor by surface, retention and repeat-behavior of AI-origin customers.

Each dashboard reports its own metric in its native unit. Composite scores across the three are reported as derived views, not as primary metrics. This discipline is uncommon in April 2026. It is the most valuable single measurement practice a serious operator can adopt.
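As a concrete illustration of that discipline, the sketch below keeps the three currencies as separate first-class fields and exposes any composite only as a derived view. The field names and the share-based derivation are illustrative assumptions, not a recommended scoring formula.

```python
# Sketch: report the three currencies separately; composites are derived views only.
# Field names and the derivation are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CurrencySnapshot:
    citation_share: float      # share of tracked queries with a brand citation
    ai_referral_sessions: int  # sessions with an AI-surface referrer
    ai_revenue: float          # revenue attributed to AI-origin sessions

    def composite_view(self, total_sessions: int, total_revenue: float) -> dict:
        # Derived view only; never reported as the primary metric.
        return {
            "citation_share": self.citation_share,
            "referral_share": self.ai_referral_sessions / max(total_sessions, 1),
            "revenue_share": self.ai_revenue / max(total_revenue, 1.0),
        }
```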

3.3 Three Horizons of Action

Not all AEO investment takes effect on the same timeline. Across the literature and the research reported in Part 2, three horizons emerge.

Tactical horizon (weeks to 90 days). Content-structure interventions — FAQ schema, heading hierarchy refactoring, fact-density increases, comprehensive-coverage revisions on existing content. These interventions produce measurable changes in AI Overview citation within 30–90 days for existing pages. They are the most controllable and the fastest-moving AEO lever. They are also, per Part 1.3, the lever whose causal role is most contested: schema markup in particular is widely recommended and rigorously measured by few.

Authority horizon (6–12 months). Domain-rating compounding, referring-domain diversification, citation-density improvement. These interventions produce measurable changes in LLM citation across 6–12 month windows and require sustained investment in publishing, earned media, and link economy. Per Finding E, they are the strongest single correlate of AI citation but they are also the most resource-intensive and the least immediately visible. Operators who cannot sustain the investment should plan around the tactical horizon rather than attempt a partial authority strategy.

Category horizon (12–36 months). Brand-mention-density accrual across the surfaces LLMs sample — Reddit, Medium, industry press, podcast mentions, YouTube channel presence. This is the lever the Ahrefs Xarumei experiment highlights: brand-token presence in the content the LLMs retrieve from. It accrues slowly but compounds; a brand that has spent 18 months building mention density across ten credible surfaces outranks a brand of equivalent product quality that has not, and the gap is hard to close quickly.

Allocation across the three horizons is the category's under-asked question. Vendor playbooks default to the tactical horizon because it is most sellable and produces the fastest visible progress. Serious operators should treat the tactical horizon as necessary maintenance, the authority horizon as the default content investment, and the category horizon as the strategic differentiator that separates operators who win in 2028 from operators who win in 2026 and then lose ground.

3.4 What This Implies for Practitioners

Four recommendations follow from the framework above. Each is grounded in Parts 1 and 2 rather than asserted as general wisdom.

Recommendation 1 — Diagnose before prescribing. Before committing to an AEO, GEO, or integrated content strategy, measure which of the three disciplines your current content is actually optimizing for. Most operators measure SEO because SEO is easiest to measure, then claim AEO progress without measuring citation currency independently. A 30-day diagnostic — 20–40 tracked queries probed across at least three LLM surfaces, segmented referral-traffic analysis, revenue-attribution audit — produces the baseline against which subsequent investment can be evaluated.

Recommendation 2 — Allocate across the three horizons, not just the tactical one. The default AEO engagement, from vendors and agencies, is a content refactor for schema markup and heading structure. This is the tactical horizon. It is worth doing, and it should not be the whole strategy. As a starting heuristic — calibrated to the operator's capital position and adjusted as measurement data accrues — an allocation split of roughly 40% tactical / 40% authority / 20% category is a more balanced posture than the tactical-heavy engagement vendors most commonly offer.

Recommendation 3 — Triangulate vendor-produced measurement against independent sources. The vendor category — XFunnel, Profound, Conductor, Semrush AI Visibility Toolkit, AthenaHQ — is 12–18 months immature. Methodologies are proprietary, comparability between vendors is limited, and every vendor's economic incentive is to show the buyer that visibility has improved. Serious measurement triangulates vendor-produced dashboards against first-party web-analytics segmentation, manual probe sweeps across LLM surfaces, and qualitative inspection of citation context. Where the three disagree, it is informative. Where they agree, the measurement is more trustworthy than any one source alone.

Recommendation 4 — Treat 2026 claims with calibrated skepticism. The category is growing fast and the evidence base is thin. Headline claims — 1,850% lead lifts, 3x conversion rates, 642% citation increases — are directionally informative but specifically unverifiable. A practitioner who treats them as aspirational signals rather than as operational targets will invest more prudently than one who treats them as benchmarks. The cost of delayed investment in a growing category is moderate; the cost of misallocated investment in an immature measurement category can be severe.

3.5 The Knowledge Gap, and How Independent Research Closes It

The discourse in Part 1 and the proprietary evidence in Part 2 share a common shortcoming: most of the published material in this category comes from parties with a commercial stake in specific conclusions. HubSpot produces case studies that validate HubSpot's AEO product. Tool vendors produce benchmarks that validate their own measurement frameworks. Agency case studies, even when honestly reported, cover engagements the agency is paid to make visible. This is not corruption; it is the structural condition of an early commercial category. The correction is not accusation but standards: the category needs more independent research, more published methodology, more replicable findings, and more openness about what the data does and does not show.

Foragentis is a research-and-products company. ForIntel, our research program, exists to produce findings of the kind this report demonstrates — findings constrained by sample-size discipline, accompanied by counter-signals, and written to the same standards we would apply to academic work in the organizational epistemology and applied-AI fields where the company's research foundation was built. The program is built around structural resistance to confirmation bias: every finding is reviewed independently before synthesis, and no finding is published without its counter-signals, alternative interpretations, and data gaps carrying equal weight. This is how an independent research practice differs from a vendor dashboard.

The gap in the market is for research that is competently executed, evidentially careful, independent of the products being evaluated, and useful to operators making real decisions. That gap is what ForIntel is designed to fill.


Limitations

This report has five meaningful limitations worth naming.

First, the YouTube corpus is a convenience sample weighted by YouTube's own recommendation and search algorithms. Videos surfaced are those that rank well for the seed terms used; the corpus over-represents high-visibility voices in the AEO/GEO discourse and under-represents quieter practitioners, contrarian perspectives with limited reach, and non-English-language discussion.

Second, the proprietary evidence reported in Part 2 is US-market-only. The demand patterns, SERP compositions, and AI Overview prevalences may differ materially in UK, Canadian, Australian, and non-English markets.

Third, the LLM citation research referenced in Finding E covers ChatGPT, Claude, Gemini, and Perplexity. Citation dynamics in Copilot, Grok, You.com, and smaller LLM-driven surfaces may differ, and the relative weights of the correlates identified may not transfer across platforms not tested.

Fourth, the HubSpot case study and related vendor-published statistics are cited in this report as the dominant reference points in the category. We have not independently verified any specific headline claim from those sources. Their treatment here is as objects of analysis, not as confirmed empirical findings.

Fifth, the "directional" label attached to several findings (notably B, C, D) indicates findings supported by descriptive patterns across adequate samples but not inferentially validated in the same way Finding E's Cohen's d figures are. Directional findings may change with additional data, alternative sampling, or re-measurement across time.

These limitations do not invalidate the findings. They bound the claims the findings support.


Future Research

Four extensions of this report are worth planning.

Longitudinal re-measurement. Repeating the research at 90, 180, and 365 days would produce trend lines on AI Overview prevalence, tool-vendor displacement, and keyword demand that a single snapshot cannot provide. The report's directional findings would migrate toward inferential support as the time series accumulates.

Cross-platform research expansion. Expanding the LLM citation research to Copilot, Grok, You.com, and smaller platforms would test whether the Cohen's d = 1.12 domain-rating finding transfers beyond the four currently tested engines (ChatGPT, Claude, Gemini, Perplexity), or whether platform-specific weights differ materially. Expanding the cited-domain sample beyond seven distinct domains is also a direct prerequisite for any inferential (Tier 3) analysis of the finding.

Conversion attribution. Integrating first-party web-analytics and conversion data from operators willing to share it would close the gap between citation measurement and revenue measurement — the most important unresolved question in the category's current measurement infrastructure.

Non-US and non-English-language coverage. Extending the research methodology to UK, Canadian, Australian, French-language, and Spanish-language search markets would test generalization beyond the US English-language baseline this report was produced against.


Get Your Own Vertical Intelligence Report

This report surveys the AEO and GEO landscape as a cross-vertical field study. The ForIntel methodology that produced it is also designed to produce vertical-specific intelligence reports for any B2B or B2C category where search demand, content competition, and AI citation patterns need to be understood before committing resources.

A custom ForIntel Vertical Intelligence Report includes: search-demand analysis across vertical-specific keyword sets; SERP competitive landscape including backlink profiles and content patterns; AI Overview citation mapping across major LLMs; content-gap identification; buyer-archetype profiling grounded in search language and community register; and a prioritized distribution plan calibrated to the vertical's specific channels.

Custom Vertical Report: from $1,300 per vertical.

With AEO/GEO chapter (LLM citation analysis + backlinks analysis): $1,500 per vertical.

For agencies, fractional CMOs, and consultants serving multiple client verticals, ForIntel also offers quarterly intelligence subscriptions — fresh data every 90 days across your tracked verticals, with trend analysis and change-detection across AI Overview citation patterns and competitive positioning.

Quarterly Intelligence Subscription: from $2,000 per quarter.

[Request a Custom Report →] [Schedule a Consultation →]


About ForIntel

ForIntel is the intelligence research layer produced by Foragentis, a Sacramento-based AI research and product company. Foragentis operates ForaPost — an AI-powered social media management platform serving small and medium businesses across more than fifty verticals and eight major platforms — and ForIntel, the research system that produced this report.

The methodology combines programmatic data collection across search, SERP, backlink, LLM-citation, and related intelligence surfaces with an analysis process that separates interpretation from independent review: primary findings and counter-signals are produced independently and synthesized only afterward, so counter-signals are structurally preserved rather than written as appendices. Sample-size discipline and anti-sycophancy safeguards govern every output. Every finding in this report is traceable to its underlying data, and claims that did not meet statistical or sample-size thresholds are labeled as directional rather than inferentially validated.

ForIntel's methodology draws on a research foundation in organizational epistemology, equitable systems design, and applied AI, and on lessons learned from running the pipeline end-to-end across multiple vertical inquiries in early 2026.

For questions about methodology or findings, contact forintel@foragentis.com.


© 2026 Foragentis. This report may be cited with attribution. Redistribution requires permission.

The findings in this report were produced April 18–20, 2026. Because this is a time-bounded snapshot of a rapidly evolving category, specific quantitative claims are expected to shift with re-measurement. Directional conclusions are more durable than specific figures. Readers making time-sensitive strategic decisions are encouraged to validate against current data before committing significant resources.
