### How to Use the Universal Ranking Equation

There are four things that influence your search engine rankings:

- What you do with your Web content
- What other people do with their Web content
- What the search engines do with their data
- What people search for

If you could quantify those things, you’d have a formula that outlines how search engines determine rankings. You would not be able to plug in all the right numbers, but you could certainly plug in your numbers. With some decent competitive research, you could also plug in other people’s numbers. With some acceptably sparse search engine analysis, you could plug in “working” numbers for each search engine.

The problem with that model of analysis, however, is that most SEOs are inclined to use the wrong kinds of numbers. The *right* kinds of numbers for an equation are difficult to pin down, so I’m willing to cut the SEO community some slack. Any evaluation of formulaic SEO is undoubtedly based on personal opinion and bias.

But what do you do with your content? You create it, but the act of creation is not quantifiable. You “optimize” it, but we don’t all agree on what constitutes “optimization” and how do you quantify optimization anyway? You also promote your content, hopefully earning links to help improve your visibility and rankings.

We can quantify these factors in an abstract way, using Boolean values. You always get a 1 for content you have created. Consider that a free point. Since we can’t agree on what constitutes “on-page optimization”, everyone who makes an effort gets a 1 for trying. And if you know you got at least 1 inbound link you don’t control, give yourself another point.

So most SEOs score at least a 3 for effort in what we could call the *Yp* value.
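That Boolean scoring is trivial to express in code. Here’s a minimal sketch (the `yp_score` helper and its parameter names are mine, not part of any published formula):

```python
def yp_score(has_content: bool, attempted_optimization: bool, has_inbound_link: bool) -> int:
    """Boolean Yp term: 1 point each for creating content, making an
    on-page optimization effort, and earning at least one inbound
    link you don't control. Maximum score is 3."""
    return int(has_content) + int(attempted_optimization) + int(has_inbound_link)

print(yp_score(True, True, True))  # 3 -- the typical "effort" score
```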

Do you know who your competition is? Don’t go searching on your favorite keywords. Ask yourself: “With whom do I compete for customers/visitors?” You should see these sites across multiple keywords. Can you name 9 of them? Any site that sits in the top 10 results for 2 or more of your (unrelated to each other) keywords is a competitor. So, if you and Joe’s Jewelry both rank for “diamond rings” and “moissanite jewelry”, you’re competitors. And you don’t have to limit yourself to the top 10. You can look at the top 100 results for each keyword. Maybe some of your competitors are struggling as much as you are.

However, increasing the scope of the results increases the complexity of the term we’re trying to define. You need to quantify the number of sites you compete with. That number increases dramatically (although not exponentially) as you increase the number of keywords and the number of results per keyword. Combine these three factors and you get the term Mc = Known Competitors * Competitive Keywords * Results per keyword.
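Expressed as code (the function name is mine), the *Mc* term is a straight product of those three counts:

```python
def mc(known_competitors: int, competitive_keywords: int, results_per_keyword: int) -> int:
    """Mc = Known Competitors * Competitive Keywords * Results per keyword."""
    return known_competitors * competitive_keywords * results_per_keyword

# Hypothetical field: 9 known competitors, 10 keywords, top 20 results each.
print(mc(9, 10, 20))  # 1800
```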

The *Mc* term has nothing to do with Einstein’s relativity equation. But it gives you a working number to plug into the second term of our developing equation. Given a competitive field defined as *Mc*, you have to know whether they optimize their Web sites, whether they have inbound links, and whether they have content. It might seem like they all have content, but unlike you some people get good rankings without content.

Ideally, you would take the average values for your competitors, but let’s simplify things for the sake of illustration and say that the term for “what other people do with their pages” is defined as Op = Mc(Avg. Optimized) + Mc(Avg. Linked) + Mc(Avg. Content). The maximum value for Op = 3Mc.
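A sketch of that simplified *Op* term, with each average expressed as a fraction between 0 and 1 (names are mine):

```python
def op_score(mc_value: float, avg_optimized: float, avg_linked: float, avg_content: float) -> float:
    """Op = Mc(Avg. Optimized) + Mc(Avg. Linked) + Mc(Avg. Content).
    Each average is a fraction in [0, 1], so Op tops out at 3 * Mc."""
    return mc_value * (avg_optimized + avg_linked + avg_content)

print(op_score(100, 1.0, 1.0, 1.0))  # 300.0 -- the 3Mc maximum
```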

So now we have two terms, Yp and Op, that can be combined to determine the Competitive Field: Cf = Yp + Op, or Cf = Yp + 3Mc. Assuming the best you can hope for, Cf = 3 + 0Mc. The most competitive value is defined as Cf = 0 + 3Mc (this is, by the way, a meaningful equation that describes any competitive field you have not yet entered). But going forward let’s say that your numbers look like Cf = 3 + 3Mc.
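The Competitive Field itself is just a sum; a quick sketch shows the best-case and worst-case values described above (the *Mc* value is hypothetical):

```python
def cf(yp: float, op: float) -> float:
    """Competitive Field: Cf = Yp + Op."""
    return yp + op

mc_value = 40  # hypothetical Mc for illustration
print(cf(3, 0))             # 3 -- best case: full effort, zero competition
print(cf(0, 3 * mc_value))  # 120 -- a field you haven't yet entered
```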

The *Cf* value at least gives you some indication of what you may be dealing with, but you have to arbitrarily define the scope of that field to give yourself a frame of reference. I would recommend a maximum of 10 keywords and a maximum of 20 results per keyword. Scope of Field: Sf = Max(targeted keywords) * Max(results per targeted keyword).
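With those recommended caps, the Scope of Field reduces to a single constant (the cap values below are the ones suggested above):

```python
MAX_KEYWORDS = 10             # recommended cap on targeted keywords
MAX_RESULTS_PER_KEYWORD = 20  # recommended cap on results per keyword

sf = MAX_KEYWORDS * MAX_RESULTS_PER_KEYWORD  # Scope of Field
print(sf)  # 200
```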

The *Sf* value sets an upper limit on your research, but it also defines the boundaries of your normalized equations.

Let’s skip the 3rd term in our equation and look at the last term. What do people search for? Basic keyword research should give you an approximate idea of how many related expressions people used in the past 30 days or so to find content related to each of your targeted keywords. So the scope of the Query Space looks like this: Qs = Max(targeted keywords) * Max(queries per targeted keyword).

That second term is a bit tricky. Let’s say people use 20 expressions to search for “moissanite jewelry” and 60 expressions to search for “diamond ring”. You define *Qs* as Max(targeted keywords) * 60.

Why? Because that is the worst-case scenario. You have to assume that people search for your content in ways you cannot guess. Your best guess is most likely to represent the largest number of variations on a keyword that people use. You could, however, reduce the scope of your Query Space by taking the average of the values from your research.
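Here is how the worst-case rule and the averaged alternative compare, using the “moissanite jewelry”/“diamond ring” numbers above (function names are mine):

```python
def qs_worst_case(targeted_keywords: int, queries_per_keyword: list) -> int:
    """Worst-case Query Space: assume every keyword attracts as many
    query variations as the busiest one you researched."""
    return targeted_keywords * max(queries_per_keyword)

def qs_average(targeted_keywords: int, queries_per_keyword: list) -> float:
    """Reduced-scope alternative: use the average variation count."""
    return targeted_keywords * (sum(queries_per_keyword) / len(queries_per_keyword))

variations = [20, 60]  # moissanite jewelry, diamond ring
print(qs_worst_case(2, variations))  # 120
print(qs_average(2, variations))     # 80.0
```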

So now we have three terms to work with: Yp, Op, and Qs. We know that Yp + Op = the Competitive Field. Qs represents the interest that people have in that field. You can use the Query Space to modify the Competitive Field in a number of ways, but we’re looking for why pages rank the way they do.

Intuitively, we know that the larger the Query Space, the less likely any one site will dominate the Query Space. So let’s multiply the Competitive Field by the Inverse of the Query Space: Cf * 1/Qs = Cf / Qs. This ratio tells you how challenging a Query Space is. The closer to 0 this value becomes, the less challenging it is.
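As a sketch, using the hypothetical *Cf* and *Qs* values from earlier:

```python
def cl(cf_value: float, qs_value: float) -> float:
    """Competitive Factor of the Long Tail: Cl = Cf / Qs.
    The closer this ratio is to 0, the less challenging the space."""
    return cf_value / qs_value

print(cl(3, 120))  # 0.025 -- a relatively unchallenging query space
```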

Intuitively, what we’re describing here has been called “the long tail of search” by many people. That is, the more queries you evaluate, the more likely you can achieve high rankings for at least one query (and hopefully more).

So let’s call that expression the Competitive Factor of the Long Tail, or *Cl*. We know that a lower *Cl* value is better for us and a higher *Cl* value is worse for us.

So now we’ve defined three of our four terms; that leaves only “what the search engines do with their data”. How do you quantify that? And what does such quantification mean?

We’re looking for a number that gives us a probability of ranking well in a competitive query space. The search engine algorithms become the wild card. You have to plug in some sort of value for the search engine term or the expression becomes meaningless. But keep in mind that whatever value you use will always be variable.

Call that variable *Sa* for Search Algorithm. The Search Algorithm term becomes more complex as you factor multiple search engines into it. Each search engine has its own algorithm, but they all rank on the basis of Relevance + Value. Relevance carries the most weight in search results rankings, so we can weight these terms something like this: Sa = Weighted Relevance + Weighted Value.

But what weights do search engines apply? That’s the million-dollar question. However, let’s give some ground to the link lovers here and say that the weightings are nearly equal. So we’ll go with .51 for Relevance and .49 for Value. If we use *Sr* for Search Relevance and *Sv* for Search Value, *Sa* can be defined as: Sa = .51Sr + .49Sv.
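Those weights translate directly to code (and remember, they are assumptions, not measured values):

```python
def sa(relevance: float, value: float) -> float:
    """Sa = .51 * Sr + .49 * Sv -- the weights are assumed, not measured."""
    return 0.51 * relevance + 0.49 * value
```

With both terms normalized to a 0–1 scale, a perfectly relevant, perfectly valuable result scores 1.0.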

So here is the real problem: what can we plug in for *Sr* and *Sv* that is actually meaningful? One solution is to look at how relevant results for queries are and how valuable the ranking sites for those queries tend to be. Holy Data Collection Calamities, Batman! Do we really have to look at all those sites?

Yeah, that pretty much is the only way you can complete the formula (I never said this would be quick and easy). But given that we cannot quantify the algorithms of the various search engines (we don’t know what those algorithms are), the best we can do is quantify the results they show for our defined Query Space. And then we have to take into consideration how many search engines we want to evaluate.

You can apply the formula (which I have not yet fully defined for you) to 1 search engine or several. It’s scalable. The *Sa* value simply becomes the average of the *Sa* values across your search engines — divided by the number of search engines (because the likelihood of any page having the same ranking on all search engines is lower than the likelihood that it will be deemed relevant for the same query on all search engines).

So let’s use the following term *WA(Sa)* (Weighted Average of *Sa*) in our equation: WA(Sa) = Avg(Sa) / Max(search engines), or WA(Sa) = Avg (.51Sr + .49Sv) / Max(search engines).
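In code form (names mine), averaging and then dividing by the engine count looks like this:

```python
def wa_sa(sa_values: list) -> float:
    """WA(Sa) = Avg(Sa) / Max(search engines): average the per-engine
    Sa values, then divide by the number of engines evaluated."""
    return (sum(sa_values) / len(sa_values)) / len(sa_values)

print(wa_sa([0.9]))       # 0.9 -- one engine: no dampening
print(wa_sa([0.9, 0.9]))  # 0.45 -- two engines halve the weighted average
```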

So now our formula looks like this: URE = WA(Sa) * (Cf / Qs).

The Weighted Average of the Search Algorithms multiplied by the ratio of the Competitive Field to the Query Space determines how likely your pages are to rank for any given query.
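Putting the pieces together with some hypothetical numbers (every input below is invented for illustration):

```python
def ure(wa_sa_value: float, cf_value: float, qs_value: float) -> float:
    """Universal Ranking Equation: URE = WA(Sa) * (Cf / Qs)."""
    return wa_sa_value * (cf_value / qs_value)

# Hypothetical inputs: Yp = 3, Mc = 40 (so Op = 3 * Mc = 120),
# Qs = 120 queries, and a single engine with Sa = 0.9 (so WA(Sa) = 0.9).
cf_value = 3 + 3 * 40  # 123
score = ure(0.9, cf_value, 120)
print(round(score, 4))  # 0.9225
```

The absolute number matters less than the comparison: the metric earns its keep when you contrast URE values across different query spaces.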

It’s an estimation or approximation, not a prognosticating tool. If you take the time to do the research and quantify it this way, you have a crude tool to help you determine how likely it is that your page will be successful across a query space. It’s crude because we lack precision, but it’s a metric that reveals a great deal of information about how competitive any collection of search results actually is.

You could, in fact, pretty easily code a tool to calculate URE values for various query spaces (assuming the tool can get by with some heavy search results scraping). A URE calculator would be a better SEO comparative analysis tool than anything out there right now (most of them are pretty cheesy, and the few that do help are the ones that let you look at on-page factors without assigning any arbitrary values to anything).

I wouldn’t look for a *Universal Ranking Equation* tool any time soon, but it does establish a metric that rises above opinion. There are other metrics that can rise above opinion as well. In SEO theory, you want unbiased metrics because they are not as misleading as the biased paradigms favored by so many SEOs.

All we have to do is keep looking and eventually we’ll find them. That is, after all, what the **Search Engine Optimization Method** tells us: Experiment. Evaluate. Adjust.
