Ranking in the Semantic Web No Matter the Year
For those of you who want to know what it takes to rank in the Semantic Web no matter what year or decade we’re in, I’m going to give you an overview of what’s working in SEO and what’s likely to keep working for years to come.
First of all, let me begin by asking if you even know what’s ahead?
In case you don’t know, here are some mind-boggling statistical forecasts that should get the gears inside your head spinning out of control.
This is all due to take place in less than 4 years since we are already halfway through 2016.
What are you doing right now to prepare for what’s ahead?
- Are you still waiting for the next shiny new object to help you rank?
- Are you waiting for the next so-called ninja and/or guru to come along and show you the way – and take your money in the process?
- Are you waiting for super secret tactics that nobody else knows about but you can have for only $7.77?
- Are you waiting for another Ninja or Guru to sell you another secret formula for success for only $10k?
- Do you still think that the more expensive the product, the better it is?
- Do you really think that there’s a magical, mysterious recipe that can teach you things nobody else knows about SEO and ranking?
If this is you, then you’re in for a rude awakening. There are no magic pills, formulas, recipes or tactics.
You’re better off investing in a good old-fashioned con game like Three Card Monte. This way you’ll know ahead of time you’re going to get taken for whatever you’re willing to spend.
And if you think you can win at that game, there are also very efficient and unscrupulous marketers who know there are many people like you, waiting for that “next best, greatest, secret, what-have-you”. As long as there’s a willing buyer, there will always be sellers.
Back to reality – Anything you do on the web requires lots of work and a substantial time investment.
But What Does It Really Take to Rank for the Semantic Web or Web 3.0?
We need to start by clearing up misconceptions and misperceptions. If you listened to the webinar I did on Monday, June 6th, then this should be familiar territory for you.
Let’s talk DA & PA:
There’s been a lot of talk about DA/PA over the last few years. At first it was treated as the be-all and end-all as far as rankings are concerned, until everyone found out it wasn’t. I will offer a caveat: your DA has to be above that magical 80 score for your web property to be almost guaranteed to be left alone.
In fact, when we used to look for expired domains, we used to concentrate on anything with 40+ DA. Those domains usually provided a really good ranking boost to our money and/or client sites. But what we didn’t realize is that there was actually something else that was juicing up those 40+ DA domains.
You live and learn. You test and retest. You do it again. You look at the results. You keep testing, and so on, until you get to the point where you know why something works or doesn’t.
Now let’s look at TF/CF/TTF:
The initials above the image stand for TrustFlow/CitationFlow/Topical TrustFlow for those who are unfamiliar with the metric. Yes, I know you know, but it’s for those who don’t know what you know.
This is actually a more accurate approximation of the trust rank and ranking score sections of Google’s algorithm. It provides a way to quantify whether you’re sending Google the right types of signals – except that Majestic’s numbers fluctuate wildly from one update cycle to the next. So you never really know what’s what.
Please don’t take this the wrong way. They’re all metrics which should be watched closely rather than ignored.
However, they are and probably will remain private, third-party metrics. Just like Google, Majestic will not reveal its algorithm. Therefore, the metrics cannot be relied upon to accurately reflect anything that’s going on in Google’s algo at any given point. In fact, they can’t even be relied upon to reflect anything that’s going on with your own website as far as metrics are concerned.
Time to Speak Semantic Web:
We need to stop thinking of the web as if it’s multi-tiered like some web giants would like you to believe. When speaking of the web (Web 3.0 or the Semantic Web), it is simply an information system. Big shock?
It shouldn’t be. From the very beginning, it provided a way for users to retrieve or search for stored data using hypertext links (get used to this word).
Guess what? It hasn’t changed all that much at its core since Sir Tim Berners-Lee conceived it back in 1989!
It’s a space for information stored by way of ones and zeroes. We use the internet to access the web even though we tend to use the terms interchangeably.
If you want to consider the social aspect of the web, it is a way for people to interact with one another using stored data.
Yes, I understand that this is a rather simplistic explanation. However, this is all the Web is at its very core.
No matter how big or small the website, it’s still providing stored data so that others can access it through hyperlinks.
So what’s the real difference?
This is something I went over during the June 6th Semantic Web webinar. As I mentioned during the presentation, it’s still all about links.
Think about it: The web exists for users to retrieve or search for stored data using hypertext links. That being the case, and as you probably already know, it is now all about link quality.
Said quality is measured by the authority and trust established by the website linking to you and the authority and trust established by the website to which you are linking. The higher the authority and trust, the higher quality the link. You don’t have to be Magna Cum Laude to figure this out. In addition, and stating the obvious, if you’re looking to rank in Google, then you have to rely on Google’s algorithmic determination of what that authority and trust is.
Isn’t there always a catch?
In this case, there are actually a couple of catches. First of all, Google isn’t about to share the algorithm that determines authority and trust. It didn’t share the code when it developed PageRank. And it’s certainly not about to reveal the code now that it’s implementing a RankingScore.
Since Google won’t share the code, we have to rely on third-party metrics. But these metrics (MOZ and Majestic for the most part) are algorithms which try to mimic what Google is measuring. You can see how this is, for all intents and purposes, an impossible task.
Yes, there’s more:
As if the above weren’t enough, Google has determined that its RankingScore will be based on how far a website is from a seed site or seed set. I have mentioned this before. The further you are along a link chain from a trusted site or trusted set of sites, the less “power” or trust that link will pass along to your web property.
As I always say, don’t take my word for it. Go to the patent and read it for yourself. It starts off by saying, “Producing a ranking for pages using distances in a web-link graph.”
Google will only pass trust through Follow links (or DoFollow links, if you prefer).
Since most seed sites and seed sets NoFollow their outgoing links by default, Google figures this metric is much more difficult to manipulate.
Going back to the patent, it states:
“PageRank scores are computed based on the web link-graph structure, wherein the web pages are the nodes of the link-graph which are interconnected with hyperlinks. In this model, PageRank R for a given web page p can be computed as:
∀ p ∈ P:  R(p) = (1 − d)/|P| + d · Σ_{q ∈ P, q → p} R(q)/|q|_out  (Equation 1), wherein P is the set of all the web pages, |q|_out is the out-degree of a specific page q in the set P, and 0 ≤ d ≤ 1 is a damping factor.
However, the simple formulation of Equation (1) for computing the PageRank is vulnerable to manipulations. Some web pages (called “spam pages”) can be designed to use various techniques to obtain artificially inflated PageRanks, for example, by forming “link farms” or creating ‘loops.’”
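To make the quoted formula concrete, here’s a minimal sketch of that classic PageRank iteration in Python. The toy graph, page names, damping factor, and iteration count are all illustrative assumptions, not anything from the patent; the point is just to show how score flows along links.

```python
# Minimal PageRank sketch based on the patent's Equation (1).
# The toy graph and parameter values below are illustrative assumptions.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    r = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        new_r = {}
        for p in pages:
            # Sum R(q) / |q|_out over every page q that links to p
            inbound = sum(r[q] / len(links[q]) for q in pages if p in links[q])
            new_r[p] = (1 - d) / n + d * inbound
        r = new_r
    return r

# A tiny toy web: A links to B and C, B links to C, C links back to A.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
scores = pagerank(toy_web)
print(scores)  # C ends up with the highest score: it has the most inbound links
```

You can see why “link farms” and “loops” game this: adding pages whose only job is to link to a target inflates the target’s inbound sum, which is exactly the manipulation problem the patent goes on to address.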
Since it’s so easy to manipulate PR, they came up with the following:
The patent further states, “Hence, what is needed is a method and an apparatus for producing a ranking for pages on the web using a large number of diversified seed pages without the problems of the above-described techniques.”
The New Method:
I’ve already mentioned that the new method is called a RankingScore. And it’s easy enough to understand.
Google will determine how valuable your outbound and inbound links are according to how far your link is from a seed site or seed set.
Let’s look at this image one more time to get a good visual in our mind. Seed sites are inside the rectangle labeled “Set of Seed Pages 102.” They link to one another and link out to the sites on the rectangle labeled “Set of Non-Seed Pages 104.”
Page 124, for example, is 3 hops away from the closest seed site and 4 hops away from seed 110. Pages 112 and 114 have direct links from all three seed sites in the seed set.
According to the patent (and the algorithm, of course), pages 112 and 114 will benefit directly from the links provided by the seed set. Pages 124 and 110 will get very little, if any, love from the algorithm because of how far away they are from the seed set.
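The hop-counting described above is just a shortest-path computation over the link graph. Here’s a hedged sketch in Python: the graph below loosely mirrors the figure (the exact page IDs and link structure are my assumptions, not the patent’s), with a breadth-first search measuring each page’s distance from the nearest seed, and a made-up decay factor to show how “power” might fade with every hop.

```python
from collections import deque

def distance_from_seeds(links, seeds):
    """Breadth-first search: each page's hop count from the nearest seed."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in dist:  # first visit = shortest distance
                dist[target] = dist[page] + 1
                queue.append(target)
    return dist

# Toy link graph loosely modeled on the patent figure (IDs are illustrative).
links = {
    "seed_106": ["112"],
    "seed_108": ["112", "114"],
    "seed_110": ["114"],
    "112": ["116"],
    "114": ["118"],
    "116": ["124"],
    "118": ["124"],
}
seeds = ["seed_106", "seed_108", "seed_110"]
dist = distance_from_seeds(links, seeds)

# An assumed model: trust halves with every hop away from the seed set.
trust = {page: 0.5 ** d for page, d in dist.items()}
print(dist["112"], dist["124"])  # prints 1 3: one hop out vs. three hops out
```

Under that assumed decay, page 112 keeps half the seed trust while page 124, three hops out, keeps only an eighth, which is the intuition behind “the further along the link chain, the less power passes through.”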
What to do with this Knowledge?
The answer is obvious: Get as close as you can to the seed set, which gives rise to the following question:
If most seed sites and seed sets provide NoFollow links by default, how can you benefit from links from these seeds? It all depends on how much trust and authority the website providing the link has built up with Google.
I don’t want this post to get pitchy, but I will say that products and services provided by the Team at Semantic Mastery accomplish exactly what the algorithm requires. The entire premise behind our SEO products and services is to “communicate” directly with the sections of the algorithm that can positively affect your rankings. We call it the “Google Tickle”. While others worry about triggering penalties, we simply concentrate on triggering “Google love”.
Back to the business at hand.
Difficult Challenges Present Great Opportunities:
Whenever a challenge such as the one currently being presented by the algorithm arises, there’s an even greater opportunity to make it work in your favor. You just have to figure out how to make it all work for you.
Did you ever wonder why a link from Wikipedia is so powerful even though it’s NoFollow? It’s the inherent authority and trust which Wikipedia has built up. Google draws great amounts of data from Wikipedia for its Knowledge Graph.
If Google trusts Wikipedia data, then it stands to reason it would also trust any information it draws from Wikipedia about you or your online property, including NoFollow links, and you would benefit from that inherent trust.
Seed Sets Can be Likened to Trust Networks
If you’ve been paying close attention to this post, you’ll know the best thing you can do to rank in Web 3.0 is create a seed set. You can accomplish this by letting the Team at Semantic Mastery do it for you – or you can spend hours on end learning how it’s done and then implementing the knowledge and strategies.
It depends on how you think your time and resources are better spent. Should you be working on your business or in your business?
Whichever you choose, make sure that the network you build includes websites that have built up trust with Google. This way you can piggyback off their inherent trust and authority. And make sure that they are customized properly and interlinked for maximum benefit.
Once you get familiar with the process, it will take less time. The end result is that you will have a way to establish seed sets in a particular niche. And the currency of the Semantic Web of Things is Trust.
There are many other tactics which have carried over from web 2.0 and are still working in web 3.0. The question is, “how long will these tactics last?”
Rather than focusing your energy on tactics that may or may not work moving forward, why not use tried and true tactics that directly “communicate” with sections of the algorithm that will positively affect your rankings and traffic?
The kicker in all of this is that you will generate traffic from other sources, not just search engines. You will have a branded, trusted, authoritative presence in websites that are trusted by search engines – “Guilty by association” if you will.