SERP Splicing: How Google Might Divide Search Results

by Ross Hudgens on October 11, 2010 | posted in SEO Theory

One of the biggest enemies of SEOs in the past has been search demographics. Google users would have one of three pre-determined intents when making a search – navigational, informational, or transactional.

Navigational means the query has only one right answer – for example, when I search for “SEOMoz” or “Google”, my aim is to find that website. Informational means I want to learn more about a subject – such as “SEOMoz CEO” or “Google’s Automated Car” – without any intent to purchase. That brings us to transactional queries – such as “cheap flights” or “buy shoes online” – where the aim is to purchase a product suggested by the query’s characteristics.

This has been a pain for SEOs because it meant that certain keywords, although high in volume, had a blended monetization potential. For example, “life insurance” might draw a 50/50 blend of people who want to purchase life insurance and people who simply want to learn about it.

This means that although keyword data might show immense search volume for this query, only half or so of searchers were actually looking to purchase – which is, in the end, what every SEO and webmaster wants them to do. A naive CEO or internet marketer will sometimes look at raw search data and imagine a glut of traffic, conversions and cash – an unfortunate, and potentially costly, misreading of the metric.
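To put rough numbers on that misreading, here is a back-of-the-envelope sketch in Python. The figures are illustrative assumptions, not measurements – the 50/50 split echoes the example above, and the clickthrough rate is invented:

```python
# Illustrative numbers only: what a keyword's raw volume translates to once
# the intent blend and a realistic clickthrough rate are factored in.
monthly_volume = 100_000     # reported search volume for "life insurance"
transactional_share = 0.5    # assumed 50/50 informational/transactional split
ctr_at_position = 0.08       # assumed CTR for a mid-first-page ranking

expected_buyer_clicks = monthly_volume * transactional_share * ctr_at_position
print(f"Expected transactional clicks per month: {expected_buyer_clicks:,.0f}")
# ~4,000 -- a far cry from what a naive reading of 100,000 searches suggests
```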

The Second Level of Sophistication – Understanding User Intent

Once we as SEOs understand this, we’ve advanced to a second level of sophistication. We understand user intent and expected search volume, and how to properly weigh these numbers to set business expectations. It also helps us eliminate or de-emphasize keywords that lack the search intent we want.

The problem with this second level of sophistication is that we sometimes assume the search volume is still ours to have – that even though the keyword volume might be 100,000, with half wanting transactional results and half wanting informational ones, all, or a good portion, will click through to our site. And from there, perhaps we can turn informational surfers into transactional ones. This is a largely naïve viewpoint; in reality, surfers can usually assess your title tag, meta description, and URL to determine whether or not your “life insurance” page satisfies their informational query.

They will then pass you over for other results – ones that hopefully better fulfill their original intent.

This is where Google comes into play. Google’s job is to return the best results possible for their users. As such, they understand all of the aforementioned details – that some keyword queries carry multiple kinds of intent – and because of this, it is in their best interest to return results for each.

The Third Level of Sophistication – Understanding Algorithmic Adjustment

When we understand how the link economy, SEO and e-commerce work, it makes sense that it would be difficult to return a blended SERP for these mixed intents based on competition alone. Businesses fight tooth and nail for SERP positions where monetization can mean hundreds of thousands of dollars – making laser-sharp on-page SEO and constant link building a must. This makes pages with transactional intent far more likely to rank high on the SERPs – because their backers see the benefit of constantly throwing money at ranking improvements through link building.

Informational pages, on the other hand, don’t have that backing – there is certainly some value in complementary product offerings or advertising, but for the most competitive terms, it doesn’t make sense to fight and allocate the same resources for an ROI that just isn’t as high.

However, rankings still happen. We still see Wikipedia monopolize search results. In fact, many people believe Wikipedia gets its own special ranking boosts in the SERPs – something that especially irks more than one greedy SEO.

It is my thesis that what many people think is a special “boost” exclusively for Wikipedia isn’t what’s happening at all – instead, what’s occurring is a splicing of search results. Google is identifying that some queries carry multiple user intents and, based on the volume of each type of intent, appropriately modifying the algorithm to return the best results for all users.

If Google, or any other search engine, stuck with a traditional algorithm to return results for “life insurance”, there would be no informational results. It would be entirely websites that want you to fill out their form, add items to your cart and hand over your money. But that’s not what Google wants. They don’t want the user to have to input multiple queries or scroll through multiple pages of results. Google wants the first page of the SERPs to have great results for every query intent.

To support my thesis, let’s look at the aforementioned query, “life insurance”.

Dave Ramsey ranks 2nd for the article “The Truth about Life Insurance”. And he doesn’t even have life insurance at the front of the title tag – or near the front of the URL! According to OSE and Yahoo! Site Explorer, he has only 318 links pointing to the page at the time of this posting – with only 2 unique domains linking to it with the exact anchor text “life insurance”! Meanwhile, a near-exact match domain, “Life Insurance Agency”, with Life Insurance at the front of the title tag and 9,410 page links – and 1,009 unique domains linking to it with the exact anchor “life insurance” (according to OSE) – ranks SIXTH!?

You see CNN and Wikipedia at 3rd and 4th, respectively – both informational results. When I look at more Dave Ramsey-centric pages of this kind – on bankruptcy and debt management – he ranks in similar positions with similar metrics. And he’s not Wikipedia. And neither is CNN.

So what gives? What is Google doing? Google undoubtedly has data about user intent, CTR and numerous other search metrics – metrics that help inform a query-by-query algorithm. For “life insurance”-type searches, they face an almost 50-50 transactional-to-informational split – by my rough estimate – making it genuinely difficult to return the best results for everyone.

My thought, then, is that Google is using a weighted sort to account for this problem. They undoubtedly use numerous metrics to determine which intent your page targets – such as anchor text, title tag, URL, whether or not a form or cart exists on the page, and semantic indicators like salesy language. They can then use something like latent semantic indexing to confirm your page is about life insurance, as the surrounding data suggests – and sort it into a “navigational”, “informational”, or “transactional” bucket.
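To make that bucketing step concrete, here is a minimal sketch of what such an intent classifier could look like. Every cue list, weight and threshold below is invented for illustration – Google’s actual features and model are unknown:

```python
# Hypothetical intent classifier. Cue lists and weights are invented for
# illustration; nothing here reflects Google's real signals.

TRANSACTIONAL_CUES = {"buy", "quote", "price", "cheap", "order", "checkout"}
INFORMATIONAL_CUES = {"what is", "guide", "truth about", "how", "history"}

def classify_intent(title, url, body, has_form_or_cart):
    """Bucket a page as 'transactional' or 'informational' from on-page signals."""
    text = " ".join([title, url, body]).lower()
    score = 2.0 if has_form_or_cart else 0.0  # a form or cart implies selling
    score += sum(1 for cue in TRANSACTIONAL_CUES if cue in text)
    score -= sum(1 for cue in INFORMATIONAL_CUES if cue in text)
    return "transactional" if score > 0 else "informational"

# The Dave Ramsey page from above would land on the informational side:
print(classify_intent("The Truth about Life Insurance",
                      "daveramsey.com/articles/", "term vs. whole life", False))
# -> informational
```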

From there, I believe Google places strong weight on domain strength for these kinds of informational queries – at least when blended – much more than they do for purely transactional ones. By doing this, Google both makes up for the competitive differential between intents and maintains validity in the SERPs by pulling results from trusted domains. This would explain the Wikipedia argument, and also show why a website like Dave Ramsey’s can rank, seemingly out of nowhere, for these very competitive keywords. Ramsey has an extremely strong domain – PageRank 6, with 105,000 backlinks. Wikipedia has the strongest “informational” domain on the internet – PageRank 9, 97 million backlinks – showing how it can jump “transactional” pages like it’s nothing on many SERPs.
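Here is a minimal sketch of what that intent-aware weighting might look like. The blending rule and every number in it are my assumptions, meant only to illustrate the thesis:

```python
# Hypothetical weighted sort. The weighting rule and all numbers are invented
# to illustrate the thesis, not reverse-engineered from Google.

def page_score(page_relevance, domain_strength, page_intent, informational_share):
    """Score a result, letting domain strength count for more on
    informational pages when the query's intent is blended."""
    if page_intent == "informational":
        domain_weight = 0.2 + 0.6 * informational_share  # grows with the blend
    else:
        domain_weight = 0.2  # transactional pages lean on page-level signals
    return (1 - domain_weight) * page_relevance + domain_weight * domain_strength

# On a 50/50 query, a strong domain with modest page-level links can edge out
# a heavily linked transactional page (toy numbers):
informational_page = page_score(0.40, 0.95, "informational", 0.5)  # 0.675
transactional_page = page_score(0.70, 0.50, "transactional", 0.5)  # 0.66
print(informational_page > transactional_page)  # True
```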

Google has also shown that they can break this down even further – on purely informational queries, they know that searchers are looking for varying depths of information, from purely definitional answers to white-paper-level treatments of a subject. For the subject of this post – and the likely emphasis of most SEOs – we will stick to the splicing that occurs at the higher, more competitive, informational-versus-transactional level. But it’s interesting to brood over how Google may break that high-level division down into smaller meta-intents to best match results to every user.

SERP Splicing Has an Impact Comparable to the 7-Pack

I’m not going to pretend I understand the intricacies of the algorithm, but my guess is that this adapts on a query-by-query basis – the more informational users have proven a query to be, the more Google increases the weight of those kinds of results, though only to a point, so that they don’t monopolize the SERP. Google can easily determine this from SERP clickthrough data and the like, but undoubtedly, they want enough results of each type for every query and every user – and that means splicing the results. Although your website might be capable of edging out an informational page, it will be far more difficult than traditional metrics suggest – in essence, comparable to fighting an exact-match domain or one whose brand strength is driven by television commercials.

When we think about what this does to search result pages – how it can split a results page that formerly offered 10 slots to your intent down to five or fewer – it is easy to see how this “splicing” has an influence comparable to the impact the 7-pack has on local search results. That is, it compresses much of the search volume into five or so transactional results, and also “bleeds” it, as users land on informational pages, don’t find what they’re looking for, and search again.
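Mechanically, that split might look something like the sketch below: the ten first-page slots get divided between intent buckets in proportion to the query’s estimated intent mix. The allocation rule is my invention, purely for illustration:

```python
# Hypothetical SERP splicer: carve the first page's slots between intent
# buckets in proportion to the estimated intent mix. Purely illustrative.

def splice_serp(transactional, informational, informational_share, page_size=10):
    """Merge two ranked result lists so each intent gets a share of the page."""
    info_slots = round(page_size * informational_share)
    picked_trans = transactional[:page_size - info_slots]
    picked_info = informational[:info_slots]
    # A real blender would interleave by score; simple alternation is enough
    # to show the "compression" to ~5 transactional results on a 50/50 query.
    merged = []
    while (picked_trans or picked_info) and len(merged) < page_size:
        if picked_trans:
            merged.append(picked_trans.pop(0))
        if picked_info and len(merged) < page_size:
            merged.append(picked_info.pop(0))
    return merged

shops = [f"shop-{i}" for i in range(1, 11)]        # ranked transactional results
articles = [f"article-{i}" for i in range(1, 11)]  # ranked informational results
print(splice_serp(shops, articles, informational_share=0.5))
# -> ['shop-1', 'article-1', 'shop-2', ..., 'shop-5', 'article-5']
```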

This automatic splicing means that where you could once hope that, even with a good number of informational searchers on your term, you would at least get a strong portion of the traffic, you are now, for a term like life insurance, essentially fighting for five positions – positions at the bottom of the page. Looking at that query – and surely many more – transactional results are limited to five or so spots on the first page. And when you consider that MetLife is number one, then – if you don’t have an exact-match domain or a willingness to invest thousands into off-internet branding – ranking on the first page is nearly impossible.

There’s hope, though, that because users are split and have fewer results to sort through, they’re more likely to go to the second page. But that’s an optimist’s mindset. Most users are lazy and will settle for one of the five transactional matches, misclick on one of the informational ones, or just give up and try another query. Fighting for eleventh place is not something you want to model your business or SEO strategy around – you won’t be feeding your kids for long on it.

Similarly, if you run a website that subsists on an advertising model and is largely informational in nature, you should probably focus more on improving domain strength than on page-by-page link building. And as such, I am extremely jealous – that kind of model is much more conducive to easy, streamlined, natural link building – although it requires a greater mass of links.

SERP Splicing – A New Kind of Oligopoly

Sure, ranking in the life insurance vertical is possible. But I suggest you aim for those purely transactional queries – ones without Dave Ramsey or Wikipedia lurking in the results. In life insurance, that would be queries like “life insurance quotes” or “life insurance rates”. There, you have more hope to rank – and maybe, after you’ve done SEO for five years and MetLife is out of business and you’re somewhere on the second page, you can start to think about vying for 5% of the clickthroughs for “life insurance”.

Then, you’ll have a better chance.

Image credit goes to TigerPixel.

  • http://www.canuckseo.com Jim Rudnick

    Umm…great piece Ross! Lots to think about and as an SEO practitioner involved for the most part up here in google.ca land….I find the same query results here as you posted for google.com!

    I wonder if Dave Ramsey cares however in getting to be the #1 result for “life insurance” or if the clickthrus he gets from his placement is surpassing his expectations already…

    :-)

    Jim

    • Ross Hudgens

      I’m sure he enjoys it.. but I’m pretty certain his original expectation was not to do that. And it might not even be now! He could improve his ranking (seemingly), by making a few tweaks, but chooses not to – or goes by the “if it ain’t broke, don’t fix it” mantra.

  • http://localsearchbranding.com Domenick

    Another good read Ross, you’re quickly becoming one of my favorites. So after reading this it would be smart to organize your site for the entire buying process, Awareness, Research & Purchase which could possibly cover the Navigational, Informational & Transactional queries?

    • Ross Hudgens

      Domenick, I don’t know enough to give a good recommendation on that. It’s hard to say if they sort websites into categories such as informational and transactional, or they break it down on a page by page basis. My assumption would be page by page, but to give a recommendation on it would be a misstep on my part.
