As of late, the stirrings for Google’s head have again increased. Murmurs have vaulted upwards, to the point where Google, in an aim to maintain positive PR, has even pushed significant algorithm changes live to quell the uprising. People continue to complain about webspam, others issue blanket public statements in disgust, and a few others, unwavering, continue decidedly as they were.
Although some of these claims, such as Stack Overflow’s duplicate-content scrapers outranking them, are legitimate, many of the problems at the root of these uprisings, and many of the complaints that continuously patter across the internet, are not.
It is my belief that Google is doing a damn good job fighting webspam. I can’t recall a disturbance in the force – repeated, angered queries being required to find what I was looking for. I’m sure I’ve had to tweak a word here or there, but the difference between a query or two is five seconds. It never disturbs my thought process, my productivity cycle, or anything else – it simply is. I find what I want, every time. And I am a power user, one with much-higher-than-average expectations for the potency of my search results.
And I, an SEO, am extremely satisfied.
So, then, what makes my holier-than-thou attitude worth any more salt than that of the many other respected people who continue to bitch and moan about the incongruities of the search results? Well, I think I have the perspective to understand exactly what’s taking place.
The Google search-results environment takes on characteristics similar to many other competitive situations, such as athletics – environments where intermediaries, the refs, make subjective calls based on a defined set of rules. Outside these rules, the refs, and the system, make occasional judgments based on the outside behavior of others to streamline their craft – policing things like illegal nutritional supplementation, such as steroids. Rule procedures are put in place, but it is nearly impossible to weed out the whole crop – at least in a way that’s ethical and cost-efficient.
Competitive environments that are subjective or ambiguous to any degree fall victim to these fallibilities, and as such come under constant criticism from those who participate in them – mostly due to the participants’ own failures and cognitive biases. To any clear-minded observer, it’s obvious that nowhere near 50% of the calls coaches dispute with refs are wrong – in actuality, the percentage lies much closer to 10, or even less than that, as can be backed up by the rigorous evaluation each professional sports league performs of its officiating crews ex post facto.
Google’s clear parallels to these competitive, subjectively judged environments are obvious:
- Somewhat ambiguous, not clearly understood rules – By choice, Google does not open-source its algorithm. This causes a lot of uncertainty for the webmasters who compete under it. In the NFL, coaches often don’t bother to evaluate the intricacies of the rulebook, either. How often do you see a coach challenge a call in the NFL when it’s not challengeable at all? A lot. You’d think, after getting paid millions of dollars to do their jobs, that they’d clearly understand absolutely defined rules. Well, this is obviously not the case. With Google, the rules are only defined in abstract terms – terms that, if spelled out precisely, would break the internet by handing manipulators a blueprint – and the result is a similarly frustrating environment for those running their websites, here the equivalent of NFL teams.
- Human, extremely competitive participants – Many humans, especially men, are instinctively very competitive beings. Tech is dominated by males, so, unsurprisingly, many of the people doing battle on the interweb are those with Y chromosomes, making things – and reactions – hypercompetitive. These kinds of instinctual pulls make coming in 2nd place unacceptable, causing claims like “Google is wrong”, “my coaches are terrible”, and “my competitors are cheating/on steroids” to pervade quite frequently – because it is in our nature to never surrender to “they were simply better”, even though it is often the case. Even in environments with static rules and no subjective refs, we find ourselves coming up with excuses such as “that nerd played chess 20 hours a day while I was out having a life” or “his family was rich, I started with nothing” to justify our own failures.
- Participants who understand that their competitive environment is partially determined by subjective measures – In a more clearly defined environment, such as chess, there is almost no criticism of the rules – because they are much more obvious, and it is impossible to bend them when playing against an even moderately aware opponent. However, in an environment where a human ref is necessary, we believe that subjective opinion is often incorrect (to satiate our own insecurities about performance), and, because we psychologically believe we can sway the decision maker, we – and the coaches – will often ping the refs more than we actually believe the call to be wrong. This occurs on the SERPs because we know the algorithm often falls back to manual review, and because we believe we can uproot somewhat shady competitors with a spam report – even if their link profile has only one or two spammy links.
- A competitive framework that is impossible to judge perfectly based on subjective characteristics – In sports, these subjective calls will never be perfect because the referees are human, and our physical and mental fallibilities make it an impossibility that there will ever be a “perfectly” judged event. In Google, the subjective characteristics are much different, and among the most misunderstood mechanics of the entire web. Since paid links can never be absolutely detectable, and non-paid links can often be more manipulative than the paid ones, there will always be incongruities and gray area when it comes to judging the web.
My thesis, given the above details, is that complaints will always be present, and that they are often misguided, made mostly by one of a few participant types – those with high levels of personal bias; those sophisticated enough to understand the environment well enough to manipulate it (such as coaches who complain just to complain); and, on a much, much smaller level, those with legitimate complaints about how the “game” is being officiated.
Given finite time and resources, it is in Google’s best interest to completely ignore complaints below a certain threshold relative to its approximate userbase size. I would be shooting blindly in the dark to come up with a figure, but it’s safe to say that Google, or any competitive organization, could come up with an approximation of how many complaints in this “competitively subjective” environment would be worth completely ignoring. When complaints reach a certain percentage of the potential userbase, there is a greater likelihood that actual fallibilities in the officiating process are occurring, and a hole worth addressing has likely arisen.
Until that time, minor blips in the larger userbase are likely occurrences of the above bullet points – or otherwise, not significant enough to waste resources on.
The Power of Voice and Statistical Exceptions
In a communication vacuum, my above recommendation would remain static. All of the above would remain true, and every competitive league would hold it in its best interest to adhere to it. However, with the rise of social media – and, before it in sports, talk radio and television – this was no longer the case, and statistical one-offs triggered “availability bias”, in which people predict the frequency of an event, or a proportion within a population, based on how easily an example can be brought to mind. This meant that, in sports, when an officiating failure occurred at a critical moment or for a critical event, it brought immense light to the event, and oftentimes a referee was condemned or fired for the failure – and, more notably, the rule was changed, even though there had been no previous complaints of its failings.
More obvious and relevant examples of this are your friend who had one really turbulent ride on a plane and was scared of planes for the rest of their life – even though planes are statistically safer than cars – or the earthquake that caused earthquake insurance purchases to rise even though the statistical probability of another quake never changed, before or after the fact.
This occurs with Google, too, when people make them look stupid – causing one-off changes. What makes them “look stupid”, of course, is when people or websites with a voice note their displeasure – and, in doing so, start a rallying cry among those who follow them. The most obvious recent example of this is Stack Overflow prompting a devaluation of scraper sites (crazy, I know), driven by an influential tech userbase and an equally influential CEO, Jeff Atwood.
Surely, this had been occurring before, but never had a website had such an influential, tech-centric userbase as Stack Overflow’s, making the uproar started by Atwood (and many of the site’s users) capable of echoing further than any previous webspam stir.
These kinds of PR uproars make Google – and other “subjectively competitive” environments – take notice, because they often create a greater stir than they’re worth, and can also create pain points in the form of stock drops or reputation falloffs for the company.
The hope is that these kinds of people – influential ones – are more sophisticated than the average user, and that their impact and influence, based around legitimate concerns, actually bring about real algorithm improvements that would otherwise have been ignored. The potential problem, of course, is that a loud voice may cause a short-term patching of one problem that actually opens up deficiencies in other parts of the algorithm – deficiencies that don’t get immediately noticed because they aren’t as widely applicable, or as instantly identifiable, as something like exact-match domains.
The palpable and alarming realization behind such a jarring and immediate adjustment is almost worth fretting over – if Google had the ability to do this before, why hadn’t they? Or was the solution they offered a patch on a low tire – one that will inevitably blow later? When a search engine rolls out an algorithm change that modifies 2% of searches extremely quickly after an outpouring of criticism, it makes you wonder what suffered – and what parts were done under duress – because of the public business stressors these loud voices create.
One-Offs and Complaint Hula Hoops
This isn’t to say Atwood didn’t have a legitimate complaint. What makes his complaint significant and worthwhile, as opposed to Joe Webmaster’s, is the kind of platform he runs at Stack Overflow – a Q&A site that produces thousands of pages of content, something so rare on the internet that a complaint of this kind becomes a legitimate rarity capable of uncovering algorithmic weakness. The failure of the algorithm, multiplied by the size of the site ([site:http://stackoverflow.com] produces 18 million pages – and that is only one site, the biggest, on the Stack Exchange network), meant the deficiencies were clearly visible to a userbase, and its CEO, at scale.
Beyond Atwood, though, is an increasing stir of people complaining about this, that, or the other thing. What makes them not worthy of immediate change is that they can’t instantly point to a 40-million-page problem – but, together, they can make a buzzing large enough to warrant earmuffs.
Complaint Thresholds in Competitive, Subjectively Judged Environments
Given the above determinants, it’s safe to assume that complaints, to a point, will never die. They will exist simply because of the environment they’re based in. But the point to derive from that isn’t that complaints should never be listened to simply because they always occur – the point is to understand when they’re worth listening to, at what intersections, and with what veracity.
- If a complaint is based on an algorithmic (or sports-rule) rarity, most likely ignore it unless significant press backlash comes with a failure to address it – that is, if the resources to fix the problem can be more appropriately allocated elsewhere.
- Judge complaints not on a per-user basis, but rather on a percentage-of-the-web-index basis. If a user with a four-page site complains, view them as a statistically less worthwhile source than, say, a user with a four-million-page site. Apply similar values to domain- and PageRank-style metrics – size of the website matters, but size does not directly translate to influence on a per-complaint basis, in case factors like being really rich, annoying, and bitter come into play (see: Dallas Mavericks owner Mark Cuban).
- Maintain a constant complaint threshold based on percentage of the web index, search volume, and other appropriate datasets. In something like the NBA or NFL, this could be measured in complaints to the press, or *significant* appeals over the failures of the judging staff – not just pinpricks in the side every time someone gets fouled. For Google, which has to deal with heaps more data, this could be measured by more sophisticated means like sentiment analysis, webspam submissions, or some other hybridized algorithm – I’d have to think they could draft something up. When complaints stay within a certain threshold, maintain current efforts on internal projects. If complaints break a certain threshold, assume something is actually wrong and address it.
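To make the bullets above concrete, here is a minimal sketch of what an index-weighted complaint threshold could look like. Everything here is invented for illustration – the function names, the weighting scheme, the threshold value, and the index size are all hypothetical assumptions, not anything Google actually does:

```python
# Hypothetical sketch of an index-weighted complaint threshold.
# All names, numbers, and thresholds are invented for illustration.

def complaint_weight(pages_indexed: int, total_index_size: int) -> float:
    """Weight a complaint by the complainer's share of the web index,
    so a four-million-page site counts far more than a four-page site."""
    return pages_indexed / total_index_size

def should_investigate(complainer_sizes, total_index_size: int,
                       threshold: float = 0.001) -> bool:
    """Sum the index-weighted complaints; only when the total breaks
    the threshold do we assume something is actually wrong."""
    total = sum(complaint_weight(p, total_index_size)
                for p in complainer_sizes)
    return total > threshold

# Complainers represented by their page counts (hypothetical figures).
index_size = 50_000_000_000          # assumed total pages in the index
small_sites = [4, 40, 400]           # minor blips: stay the course
big_sites = [18_000_000, 40_000_000] # Stack Overflow-scale complaints

print(should_investigate(small_sites, index_size))  # False
print(should_investigate(big_sites, index_size))    # True
```

Under this toy model, the three small sites together carry a weight of roughly 9e-9 – statistical noise – while the two large sites cross the 0.001 threshold and trigger an investigation, mirroring the "maintain efforts below the threshold, act above it" rule in the last bullet.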
The longer I’m an SEO, the more I see the older sect resort to complaining and browbeating to justify their defeats in the SERPs. More likely, though, is that they are falling behind, becoming mid- to late-stage adopters – as older people have historically proven to be – and thus their complaints about the algorithm are, indirectly, complaints about their own personal failures – although they aren’t cognitively aware, or unbiased, enough to realize it.
Understand, then – understand, now – that if you’re going to complain, you’re just going to be a little blip on a large, large radar. Google does not have a 911 line, and, in all probability, you have as much appeal as the third assistant coach on the Milwaukee Bucks bench – that is, Google all but burns your webspam complaint.
Algorithmic failures exist, yes. Google likely knows this, yes. But also understand that much of what you consider the failure of the algorithm is actually, undoubtedly, a failure of your own. And understand that many of the failures of Google’s algorithm are also rooted in the volatile, difficult-to-define conditions it operates in – and because of that, it will never be completely “fixed”.
This thing, the Google algorithm, is very, very good – so don’t cry to us – or Google – if you aren’t. This is a competitive, subjective environment, yes – but few whistles are needed for those not officiating.