Proprietary methodology

SRO Methodology: The 5 Layers of AI Positioning

Relevante.IA applies a proprietary 5-layer system called Semantic Retrieval Optimization (SRO). Each layer addresses a critical aspect that determines whether AI models retrieve, select, and cite your content.

01

Macrosemantics

Architecture of meaning

Layer 1 defines the global semantic structure of your digital presence through the Macro-Seed-Node model. The Macro is the business category you dominate. A Seed is a pillar page optimized as an entry point for AI models. Nodes are supporting pages that cover subtopics and specific queries, each bidirectionally linked to its Seed. This architecture enables language models to understand the hierarchical relationship between your content and assign you topical authority in your category.

Concrete example: a dental clinic in Barcelona defines its Macro as "dentistry in Barcelona". Its Seeds are pillar pages like "dental implants", "invisible orthodontics" and "pediatric dentistry". Each Seed has 8-12 Nodes answering specific queries ("how much does a dental implant cost", "difference between Invisalign and braces"). Nodes link to their Seed, Seeds link to the Macro and to at least 2 sibling Seeds.
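The linking rules in this example can be expressed as a small consistency check. This is an illustrative sketch only: the site map, page slugs, and the `seed_is_valid` helper are hypothetical, not part of the methodology's tooling.

```python
# Hypothetical site map for the dental clinic example: one Macro,
# three Seeds, and each Seed's Nodes and outgoing links.
site = {
    "macro": "dentistry-in-barcelona",
    "seeds": {
        "dental-implants": {
            "links_to": ["dentistry-in-barcelona",
                         "invisible-orthodontics", "pediatric-dentistry"],
            "nodes": ["implant-cost", "implant-vs-bridge"],
        },
        "invisible-orthodontics": {
            "links_to": ["dentistry-in-barcelona",
                         "dental-implants", "pediatric-dentistry"],
            "nodes": ["invisalign-vs-braces", "invisalign-duration"],
        },
        "pediatric-dentistry": {
            "links_to": ["dentistry-in-barcelona",
                         "dental-implants", "invisible-orthodontics"],
            "nodes": ["first-dental-visit-age"],
        },
    },
}

def seed_is_valid(name: str, site: dict) -> bool:
    """Check the rule above: a Seed links to the Macro
    and to at least 2 sibling Seeds."""
    links = set(site["seeds"][name]["links_to"])
    siblings = set(site["seeds"]) - {name}
    return site["macro"] in links and len(links & siblings) >= 2
```

Running `seed_is_valid` over every Seed verifies that the whole structure satisfies the linking rule before publishing.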

Why it matters: without a defined Macro, AI perceives your site as a scattered collection of pages without a dominant theme. This reduces retrieval probability when a user asks queries within your category. Macrosemantics is what turns your site into a recognizable topical authority — the precondition for AI to prefer your passages over those of random competitors.

02

Microsemantics

Passage-level optimization

Layer 2 works at the text fragment level — the level at which AI models actually process information. Each 100-to-200-word passage becomes a self-contained, citable, and retrievable unit. We apply 46 documented tactics: from semantic density (information-carrying words per passage) to bridge phrases connecting entities, to direct assertion patterns that facilitate extraction by RAG systems. The goal is for each fragment of your website to function autonomously as an answer to a user query.

Concrete example: a paragraph with poor microsemantics starts with "Our company has years of experience...". The microsemantic rewrite: "Since 2025, Relevante.IA has applied a 5-layer SRO methodology combining semantic architecture, passage optimization, technical eligibility, E-E-A-T calibration, and query framework alignment. The goal: that ChatGPT, Gemini, and Perplexity cite your business when users ask." The entity appears first, the data are verifiable, and the predicate is clear.
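The checks behind that rewrite can be sketched as a quick passage audit. The stopword list, the five-word entity window, and the density metric here are simplified assumptions for illustration, not the 46 documented tactics themselves.

```python
# Simplified passage audit: word count (target 100-200 words),
# semantic density (share of information-carrying words), and
# whether the named entity appears at the start of the passage.
STOPWORDS = {"the", "a", "an", "of", "and", "or", "our", "has", "have",
             "is", "are", "to", "in", "that", "with", "for", "it"}

def passage_report(text: str, entity: str) -> dict:
    words = text.split()
    content = [w for w in words if w.lower().strip(".,:;") not in STOPWORDS]
    return {
        "word_count": len(words),
        "in_target_range": 100 <= len(words) <= 200,
        "semantic_density": round(len(content) / max(len(words), 1), 2),
        "entity_leads": entity.lower() in " ".join(words[:5]).lower(),
    }
```

Run against the vague opener above, `entity_leads` is false; against the rewrite, it's true, making the difference measurable rather than a matter of taste.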

Why it matters: AI retrieves passages, not full pages. If your fragments aren't self-contained, the model discards them even if your page ranks well in Google. Microsemantics is the difference between content that's read and content that's cited.

03

Technical eligibility

The gateway

Layer 3 ensures your content is technically accessible to AI crawlers. Without technical eligibility, the previous layers are invisible. We optimize three fundamental pillars: LCP (Largest Contentful Paint) below 2.5 seconds so content loads fast, clean and semantic HTML5 DOM that AI parsers can process unambiguously, and minimal JavaScript that doesn't block main content rendering. Additionally, we implement complete schema markup (Organization, Service, FAQ, BreadcrumbList, Article) providing AI models with structured metadata about your business.

Concrete example: a site built with client-side rendering (CSR) alone can rank well in Google because Googlebot executes JavaScript, but ChatGPT and Perplexity crawlers are less tolerant. After migrating to SSR/SSG (Next.js, Astro), content is served pre-rendered as HTML, reducing LCP from 4.5s to 1.8s and making it extractable by all AI bots. Schema markup goes from absent or basic to 6 validated schemas (Organization, Service, FAQPage, BreadcrumbList, Article, Person).
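One of the schemas listed above (Organization) could be serialized as JSON-LD along these lines. The business name and URLs are placeholders, and this is a minimal sketch rather than the full markup a real deployment would carry.

```python
import json

# Hypothetical Organization schema, serialized as JSON-LD for embedding
# in the page head inside <script type="application/ld+json">.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Dental Clinic",  # placeholder business name
    "url": "https://example.com",     # placeholder URL
    "sameAs": [
        "https://www.linkedin.com/company/example",  # placeholder profile
    ],
}

jsonld = json.dumps(organization, indent=2)
print(jsonld)
```

The same pattern extends to the Service, FAQPage, BreadcrumbList, and Article types mentioned above, each validated before going live.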

Why it matters: if your content doesn't load, doesn't load fast, or requires JavaScript the bot doesn't execute, the model never gets to evaluate your semantic quality. It's the least sexy lever but the most structural — without it, layers 1, 2, 4, and 5 produce no effect.

04

E-E-A-T

E-E-A-T for the AI era

Layer 4 establishes the trust signals AI models use to decide whether your content deserves to be cited. We implement the four E-E-A-T pillars adapted for semantic retrieval: Experience demonstrated with real cases and verifiable metrics, Expertise with detailed author bios and linked credentials, Authority with external mentions, backlinks from recognized sources and Wikidata presence, and Trustworthiness with responded reviews, structured testimonials, and citations to authoritative sources. Each signal reinforces the probability that an AI model selects you as a reliable source.

Concrete example: a technical blog without visible author, dates, or external citations is interpreted as orphan or promotional content. E-E-A-T refactor: each post includes AuthorBio with name, role, credentials, and LinkedIn link; visible datePublished and dateModified in schema; 3-5 citations to recognized external sources within the body; verifiable testimonials with properly marked Review schema. Result: the model finds the four E-E-A-T signals and raises citation probability.
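The refactored post described above could carry its E-E-A-T signals in an Article schema shaped roughly like this. All names, dates, and URLs are placeholders; the point is which properties make the author, dates, and citations machine-readable.

```python
import json

# Hypothetical Article schema exposing the four signals described above:
# a named author with credentials, visible publication dates, and
# citations to external sources.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example post title",          # placeholder
    "datePublished": "2025-01-15",             # placeholder date
    "dateModified": "2025-03-02",              # placeholder date
    "author": {
        "@type": "Person",
        "name": "Jane Doe",                    # placeholder author
        "jobTitle": "Lead Consultant",
        "sameAs": ["https://www.linkedin.com/in/janedoe"],
    },
    "citation": [
        "https://example.org/authoritative-source-1",
        "https://example.org/authoritative-source-2",
    ],
}

print(json.dumps(article, indent=2))
```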

Why it matters: in queries with multiple valid candidates, the model prefers sources with more accumulated trust. E-E-A-T is the lever that breaks ties against competitors with similar content, and ties are the most common case.

05

Query semantics

Intent alignment

Layer 5 ensures your content is aligned with the ways users actually ask AI. We work with 11 query frameworks: procedural, comparison, risk, mechanism, causality, definition, evaluative, result, use case, decision, and instructional. For each framework, we create specific content that directly responds to the query pattern, maximizing the probability that your business appears in the generated response.

Concrete example: a tax consultancy with a generic blog about "tax services" doesn't appear in queries like "how to file income tax if I work from home" (instructional), "difference between freelancer and LLC" (comparative), or "when to change tax regime" (decision). Applying query frameworks, the consultancy creates specific content for each pattern, multiplying semantic coverage and appearing in responses it didn't capture before.
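A first-pass mapping from a query to its framework can be sketched with keyword heuristics. The patterns below are simplified assumptions covering 4 of the 11 frameworks; real classification is more involved.

```python
import re

# Illustrative keyword patterns for a few of the query frameworks
# named above. Each pattern is a simplified assumption, not an
# exhaustive rule.
PATTERNS = {
    "instructional": r"^how to\b",
    "comparison": r"\b(difference between|vs\.?)\b",
    "decision": r"^(when|should)\b",
    "definition": r"^what is\b",
}

def classify_query(query: str) -> str:
    """Return the first framework whose pattern matches the query."""
    q = query.lower()
    for framework, pattern in PATTERNS.items():
        if re.search(pattern, q):
            return framework
    return "unclassified"
```

Applied to the consultancy's example queries, "how to file income tax if I work from home" lands in instructional, "difference between freelancer and LLC" in comparison, and "when to change tax regime" in decision, showing which content patterns the generic blog was missing.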

Why it matters: AI retrieval fan-out expands each user query into dozens of variants. If your content doesn't fit any recognizable pattern (instructional, comparative, etc.), it doesn't enter the evaluation phase. It's the initial filter that discards content even if it's high quality.

How the 5 layers connect

The layers aren't sequential steps: they function as an integrated system. Macrosemantics defines the structure, microsemantics optimizes each fragment, technical eligibility opens the door, E-E-A-T determines if you're selected, and query semantics ensures you appear for the right questions.

Layer 1: Macrosemantics
Layer 2: Microsemantics
Layer 3: Technical eligibility
Layer 4: E-E-A-T
Layer 5: Query semantics

Apply the 5 layers to your business

Start with a free audit and discover which layers your website needs to become visible in ChatGPT, Gemini, and Perplexity.