2026-04-28 · 7 min

Why we built onpage.app

What we learned running AEO audits by hand for agency clients, and why existing tools didn't solve the operator's problem.

Rank Friendly is an SEO agency in Jerusalem. We've been running technical audits, content strategies, and organic growth campaigns for B2B and e-commerce clients for years. When LLMs started showing up in search results and client conversations, we had to figure out how to respond.

The spreadsheet era

Our first approach was manual. We'd open ChatGPT, Gemini, and Perplexity, type in the queries our clients cared about, and screenshot the results. We'd paste the screenshots into a Google Doc, highlight where the client's brand appeared (or didn't), and note which competitors showed up instead.

It worked. It was also slow, inconsistent, and embarrassing to present to clients who expected something more professional than annotated screenshots.

We tried automating it with API calls and a Python script. That was better — we could run queries programmatically and parse the responses. But we still had to manually analyze the results and write recommendations. The parsing was the easy part. The "what should the client do about it?" was the hard part.
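For the curious, here's a minimal sketch of what that script looked like in spirit. It assumes the OpenAI Python client; the queries, brand names, and the naive substring match are placeholders for illustration, not our production code.

```python
# Rough sketch of our early automation: run client queries through one
# LLM and flag which brands show up in the answer. Assumes the OpenAI
# Python client; queries, brand names, and the naive substring match
# are placeholders, not our production code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUERIES = [
    "best project management software for agencies",  # hypothetical
    "asana vs monday for small marketing teams",      # hypothetical
]
BRANDS = ["ExampleClient", "CompetitorA", "CompetitorB"]  # hypothetical

def run_query(query: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

for query in QUERIES:
    answer = run_query(query)
    # Naive parse: case-insensitive substring match per brand.
    mentioned = [b for b in BRANDS if b.lower() in answer.lower()]
    print(f"{query!r}: mentioned {mentioned or 'nobody'}")
```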

The tool landscape

We evaluated every AEO tool on the market. Profound, Otterly, Peec AI, AthenaHQ, and a dozen smaller ones. Here's what we found:

They all do monitoring well. You get visibility scores, trend charts, competitor mentions. The dashboards look great in client presentations. The data is useful for tracking progress over time.

None of them bridge the gap to action. After you see the dashboard, you still have to do the analytical work yourself. Which queries are most important? What content should we create? Which competitor positions are vulnerable? What should we ship this week?

For our team, this was the whole problem. We didn't need help measuring. We needed help prioritizing and planning.

What we built

onpage.app runs AEO audits the way our team was already running them — query LLMs, parse responses, compute visibility scores — and then does the part we were doing manually: generate specific content recommendations.

Each audit ends with 5 recommendations. Not "improve your content" but specific page types, specific query targets, specific competitor positions to counter. The kind of output you can copy into a content brief and hand to a writer.
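To make "specific" concrete, here's a hypothetical shape for one recommendation. The field names and values are our illustration of the idea, not onpage.app's actual schema.

```python
# A hypothetical shape for one audit recommendation, to show what
# "specific" means here. Not onpage.app's actual schema.
from dataclasses import dataclass

@dataclass
class Recommendation:
    page_type: str          # e.g. "comparison page", "FAQ page"
    query_target: str       # the query the new page should answer
    competitor: str | None  # position to counter, if any
    rationale: str          # the responses that triggered it

rec = Recommendation(
    page_type="comparison page",
    query_target="ExampleClient vs CompetitorA pricing",  # placeholder names
    competitor="CompetitorA",
    rationale="CompetitorA cited in 4 of 5 responses to pricing queries",
)
```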

We show the raw LLM responses because we think operators should see the data behind the scores. If you disagree with a recommendation, you can trace it back to the exact response that triggered it.

The patterns we noticed

Running AEO audits for agency clients taught us a few things:

Comparison content is massively underweighted. Most brands have zero "[brand] vs [competitor]" pages on their site. The LLMs build these comparisons anyway, using whatever sources they find. If you own the comparison narrative, LLMs are more likely to use your framing.

FAQ and problem-solution content outperforms thought leadership. LLMs are trying to answer questions. Content that directly answers questions gets cited. Content that shares opinions about industry trends doesn't.

Model consistency matters more than any single model. A brand that appears in two of three models has a stronger position than one that appears only in ChatGPT, even with a high mention rate there. Cross-model visibility is a better metric than single-model visibility; a rough sketch of one such metric follows below.

Scores change slowly. Unlike SERP positions, which can shift with an algorithm update, LLM mention patterns are relatively stable. Monthly audits are usually sufficient. Weekly audits add noise without insight.
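Here's one way a cross-model visibility metric could be computed. The formula and the numbers are illustrative assumptions, not a documented scoring method from any tool.

```python
# One illustrative cross-model visibility metric: the share of models in
# which a brand appears at all. Numbers are made up for this example.
mentions = {
    # brand -> per-model mention rate across an audit's queries
    "BrandA": {"chatgpt": 0.4, "gemini": 0.5, "perplexity": 0.0},
    "BrandB": {"chatgpt": 0.9, "gemini": 0.0, "perplexity": 0.0},
}

def cross_model_visibility(per_model: dict[str, float]) -> float:
    # Fraction of models where the brand shows up at all.
    return sum(rate > 0 for rate in per_model.values()) / len(per_model)

for brand, per_model in mentions.items():
    print(brand, round(cross_model_visibility(per_model), 2))
# BrandA scores 0.67 (2 of 3 models); BrandB scores 0.33 despite its
# strong single-model showing in ChatGPT.
```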

Who it's for

We built this for operators: in-house SEOs, agency teams, content strategists. People who need to turn data into content plans, not reports into slide decks.

The free tier runs 3 audits per day against one model. Enough to evaluate the tool on your own brand. Paid tiers add more models, history, and export options. Pricing is on the website because we think you should be able to evaluate the tool and its cost without scheduling a demo call.