What AEO is
Answer Engine Optimization is the practice of making your brand appear — accurately, positively, and prominently — in LLM-generated responses. When someone asks ChatGPT, Gemini, or Perplexity "what's the best [your category]?", AEO is what determines whether you show up.
It's not a new discipline. It's the same content strategy you've been doing, validated against a different answer surface. The inputs are the same: authoritative content, structured data, external citations. The validation layer is different: instead of checking your SERP position, you're checking your LLM mention rate.
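To make "checking your LLM mention rate" concrete, here is a minimal sketch. It assumes you have already collected LLM responses for your tracked queries (by whatever means); the brand name and sample responses are invented for illustration.

```python
import re

def mention_rate(brand: str, responses: list[str]) -> float:
    """Fraction of LLM responses that mention the brand (word-boundary match)."""
    if not responses:
        return 0.0
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    hits = sum(1 for text in responses if pattern.search(text))
    return hits / len(responses)

# Hypothetical responses collected for "best [category]" style queries
responses = [
    "Top picks include Acme Tracker and TaskFlow.",
    "Many teams prefer TaskFlow for its pricing.",
    "Acme Tracker is a solid choice for small teams.",
]
print(mention_rate("Acme Tracker", responses))  # mentioned in 2 of 3 responses
```

Tracked over time, this one number plays the role that SERP position plays in traditional SEO.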
What AEO is not
AEO is not "SEO for AI." That framing implies a parallel effort with its own team, its own budget, its own tools. The reality is simpler: if your content is authoritative enough to rank well in traditional search, it's probably authoritative enough to appear in LLM responses. The gap, when it exists, is usually structural (missing schema, no FAQ markup, thin comparison content) rather than strategic.
How LLMs choose which brands to mention
LLMs don't "decide" to recommend brands. They generate responses based on patterns in training data and, for RAG-augmented models, retrieved context. The brands that appear most consistently tend to share these traits:
- Referenced across multiple authoritative sources. Not just your own site. Reviews, comparison articles, industry publications, forums. The more independent sources mention your brand in relevant contexts, the more likely an LLM is to include you.
- Structured data that makes relationships explicit. Schema markup, FAQ pages, clear product categorization. LLMs trained on web data use structural signals to build entity relationships.
- Comparison and category content on your own site. If you don't have a page about "[your brand] vs [competitor]," you're leaving that narrative to everyone else.
- Problem-solution content that matches query patterns. People ask LLMs questions. If your content answers those specific questions with your product as part of the answer, you're more likely to be cited.
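The structured-data point above is the most mechanical to act on. As one example, schema.org's FAQPage markup makes question/answer relationships explicit in a page's HTML. A small sketch of generating that JSON-LD (the questions and answers here are invented placeholders):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Render question/answer pairs as schema.org FAQPage JSON-LD."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(doc, indent=2)

print(faq_jsonld([
    ("What is Acme Tracker?", "A project tracker built for small teams."),
]))
```

The output goes in a `<script type="application/ld+json">` tag on the page it describes.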
The four query types that matter
Not all queries are equal for AEO. Focus your tracking on these four:
- "Best [product category] for [use case]" — the highest-intent query type. If you're not in this list, you're invisible at the point of decision.
- "[Your brand] vs [competitor]" — controls your narrative in head-to-head evaluation. If you don't have this content, the LLM writes it for you from whatever sources it finds.
- "How to [solve the problem your product addresses]" — captures users before they've decided to buy anything. Being the recommended solution in the answer is the goal.
- "Is [your brand] good?" or "[your brand] reviews" — monitors how LLMs summarize your reputation. Negative sentiment here is a red flag worth investigating.
How to use audit results
An AEO audit tells you two things: where you stand (the visibility score) and what to do (the recommendations). Here's how to prioritize:
- Fix category queries first. If you're not mentioned when someone asks "best [your category]," nothing else matters.
- Create comparison content for every competitor that appears instead of you. Own the narrative.
- Build problem-solution content that matches the exact phrasing of queries where you're absent.
- Monitor reputation queries monthly. Sentiment shifts slowly, but when it shifts, it compounds.
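The priority order above can be encoded directly, so audit findings always surface in the order you should act on them. A sketch, assuming findings are tagged with one of the four query types (the finding records here are invented examples):

```python
# Priority order from the guidance above: category gaps first, then
# comparison gaps, then problem-solution gaps, then reputation issues.
PRIORITY = {"category": 0, "comparison": 1, "problem_solution": 2, "reputation": 3}

def prioritize(findings: list[dict]) -> list[dict]:
    """Sort audit findings by query type, most urgent first."""
    return sorted(findings, key=lambda f: PRIORITY.get(f["query_type"], len(PRIORITY)))

findings = [
    {"query_type": "reputation", "query": "is Acme Tracker good?"},
    {"query_type": "category", "query": "best project tracker for small teams"},
    {"query_type": "comparison", "query": "Acme Tracker vs TaskFlow"},
]
for finding in prioritize(findings):
    print(finding["query_type"], "-", finding["query"])
```

Unknown query types sort last rather than raising, so the list stays usable as you add new query categories.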
Tools
You can do this manually with a spreadsheet and raw LLM queries. We did, for months, before building onpage.app. The tool automates the querying, parsing, scoring, and recommendation generation. The free tier runs 3 audits per day against one model. Paid tiers add more models, history, and export options.