
I’ve launched a short series of posts to accompany a webinar series I'm hosting for JAKALA North America: "From SEO to GEO: Competing in an AI-Shaped Search Economy."
The core idea is simple: in 2026, search success is no longer defined by whether someone clicks through to your website. It’s defined by whether your brand shows up in AI-generated answers, summaries, and comparisons that shape how people make decisions.
This webinar panel discussion will help teams reset expectations and find a clear starting point. We’ll focus less on rankings and more on what prerequisites now matter most for being surfaced and trusted by generative engines.
Joining me are three colleagues from JAKALA North America:
- Matt Dho, Director of Go to Market Operations, focused on how data, AI, and marketing technology reshape acquisition, engagement, and measurement
- Mandee Englert, Head of Higher Ed, Not-for-Profit, Sports, and Entertainment, working with organizations where visibility has high stakes and governance is real
- Danielle Barthelemy, Senior Account Manager for Industrial and B2B, partnering with manufacturers navigating long buying cycles, complex channels, and shifting buyer research behaviors
I'll continue to publish additional mental models to help you get the most value from the discussion and your own follow-on work.
Mental Model #1: Stop treating clicks as the primary KPI
Clicks still matter, but they are no longer the default proof that search is working.
In an AI-shaped discovery model, the first question is not “did they visit us?” It’s:
Did we appear in the moment the decision got shaped, and were we represented accurately?
That changes what “good” looks like. You should start thinking in four outcome buckets:
- Presence - Do we appear in AI answers for our highest-value questions, or are we absent?
- Role - When we appear, are we the recommendation, one option in a shortlist, a cited source, or a footnote?
- Framing - Is the summary accurate, complete, and aligned with how we want to be understood, or is it generic, incomplete, or wrong?
- Downstream impact - When people do click, call, apply, donate, or request a quote, are they better qualified because the AI layer pre-filtered them?
If your team only measures performance through sessions and click-through rate, you'll under-invest in the work that determines whether you get included in the answer at all.
A practical way to start:
- Pick 10–20 decision-driving questions (not keywords): the ones that genuinely determine pipeline, enrollment, giving, or selection.
- Test them on the AI surfaces your audience uses.
- Track whether your organization is present/absent, its role, and framing.
That becomes your baseline. From there, you can prioritize what to fix with less guesswork.
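If you want to make that baseline concrete, the three steps above can be sketched as a simple tracking structure. This is a hypothetical illustration, not a standard schema; the field names, role labels, surfaces, and sample questions are all assumptions:

```python
from dataclasses import dataclass
from collections import Counter

# Role labels mirror the "Role" bucket: recommendation, shortlist item,
# cited source, or footnote. "absent" covers the Presence bucket.
ROLES = ("absent", "recommendation", "shortlist", "cited_source", "footnote")

@dataclass
class AnswerCheck:
    """One manual test of one decision-driving question on one AI surface."""
    question: str                # a question, not a keyword
    surface: str                 # e.g. "ChatGPT", "Gemini", "AI Overviews"
    role: str                    # one of ROLES; "absent" = we did not appear
    framing_accurate: bool = False  # Framing bucket; ignored when absent

def baseline_summary(checks: list[AnswerCheck]) -> dict:
    """Roll raw checks up into the presence/role/framing baseline."""
    total = len(checks)
    present = [c for c in checks if c.role != "absent"]
    return {
        "presence_rate": len(present) / total if total else 0.0,
        "roles": dict(Counter(c.role for c in present)),
        "framing_ok_rate": (
            sum(1 for c in present if c.framing_accurate) / len(present)
            if present else 0.0
        ),
    }

# Sample checks (fabricated questions, for illustration only)
checks = [
    AnswerCheck("best CRM for mid-size manufacturers", "ChatGPT", "shortlist", True),
    AnswerCheck("best CRM for mid-size manufacturers", "Gemini", "absent"),
    AnswerCheck("top MBA programs for working parents", "AI Overviews", "cited_source", False),
]
print(baseline_summary(checks))
```

Even a spreadsheet with these columns works; the point is that presence, role, and framing each get recorded per question per surface, so re-running the same checks next quarter shows movement.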
Next post: the second mental model, and why GEO is not "new SEO"; it's SEO plus entity discipline.
Photo: Are those headlights? Suffolk County, NY


