
Google Releases the Full Version of Their Search Quality Rating Guidelines - gere
http://searchengineland.com/google-releases-the-full-version-of-their-search-quality-rating-guidelines-236572
======
brudgers
Document:
[http://static.googleusercontent.com/media/www.google.com/en/...](http://static.googleusercontent.com/media/www.google.com/en//insidesearch/howsearchworks/assets/searchqualityevaluatorguidelines.pdf)

Announcement:
[http://googlewebmastercentral.blogspot.com/2015/11/updating-...](http://googlewebmastercentral.blogspot.com/2015/11/updating-our-search-quality-rating.html)

~~~
lemevi
Maybe dang can replace the link to the actual announcement and remove the blog
spam.

~~~
novaleaf
Not downvoting you, but searchengineland is actually a very good, reputable
site. It is entirely believable that the original submitter found this out
through them (i.e., this isn't spam).

~~~
lemevi
What value does the link add? Which URL has more information? Should all of HN
go through this blog to read the actual information? No. It's blog spam.

~~~
brudgers
The value, in my opinion, comes from the fact that _SearchEngineLand_ broke
the story. The original Google post is extremely low-key, as one might expect
given that previous versions of the document were leaked rather than released
with an accompanying announcement.

As best I could tell, no other outlet had covered the document release before
_SearchEngineLand_ nor had anyone submitted the original to HN.

As an aside, what else can one expect visually from a site devoted to SEO?

YMMV.

------
hopfog
I used to work as a Search Quality Rater for Google. It was pretty
interesting. It's worth noting that raters are not Google employees but
contracted through third parties and working from home. Many of my colleagues
were stay-at-home moms making a few extra bucks. This was a few years ago
though so things may have changed.

~~~
lmz
I did the same for a few months earlier this year. It was theoretically a
decent sum of money here (third-world SE Asia) if you could do the minimum
number of hours they required.

------
chdir
OT, the top rated comment on the above article published on
googlewebmastercentral.blogspot.com is :

 _Google just made public their latest version of the Quality Raters
Guidelines (PDF linked from the blog post)_

Which just summarizes what the blog post is about (probably a G+ share that
got plus'd a lot). Nothing against the commenter; it's rather their
ranking/voting system that needs to be fixed.

------
dhimes
The "your money or your life" web page stuff is interesting. They define
these YMYL pages as anything that can have a large impact on the user:
financial info pages, legal info pages, medical info pages, and so on. They
say they have very high quality-rating standards for these pages. OK, but how?

Later in the document they talk about doing reputation research on the site.
They say, "[F]or Page Quality rating, you must also look for outside,
independent reputation information about the website."

So they _are_ evaluating content. They are encouraging their reviewers to make
a judgement. This strikes me as unreliable, maybe even a slippery slope for
them.

~~~
dazc
From the announcement linked to below:
([https://news.ycombinator.com/item?id=10601232](https://news.ycombinator.com/item?id=10601232))

Ratings from evaluators do not determine individual site rankings, but are
used to help us understand our experiments. The evaluators base their ratings
on guidelines we give them; the guidelines reflect what Google thinks search
users want.

~~~
dhimes
Ah, so they probably have a specific set of sites that they check against.
That makes sense. Thanks.

------
jacquesm
Surprising to see so much human input. Essentially Google took a leaf out of
Yahoo!'s playbook after first crushing them using just algorithms. Is this
proof that beyond a certain degree of algorithmic extraction you need a human
hand to advance, or is it evidence of a return to the old days of a more
curated index? (Or are there other possibilities?)

~~~
snowwrestler
Google has been upfront about greater use of learning AI (as opposed to hand-
tuned algorithms like PageRank) to index and rank their search results. In
order to return a good result, a learning AI system must first be trained on
what a good result looks like. Since search returns results to humans, humans
have to provide the guidance on what is a good or bad result.

Often an AI can be trained on a corpus, like training a spell corrector by
feeding it the text of a dictionary. But the data that Google works with is
the entire web, which changes all the time, so they need constant feedback on
what is good or bad.

Google also personalizes results now, so the AI needs to learn not just what
is a good result, but what is a good result for that particular person, given
everything else the AI knows about that person. That would require inputs from
a wide variety of people.
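
The training loop described here can be sketched as a toy pointwise model:
human quality ratings supervise a classifier over page features. Everything
below (the features, the data, the model choice) is a hypothetical
illustration of the idea, not Google's actual system.

```python
import math

def train_pointwise_ranker(examples, lr=0.1, epochs=200):
    """Tiny logistic regression: page features -> P(page is 'good'),
    trained on human good/bad ratings via SGD on the log-loss."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - label  # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score(w, b, x):
    """Probability that a page with features x is a good result."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: (query-term match, link authority)
ratings = [
    ((0.9, 0.8), 1),   # rater marked "high quality"
    ((0.8, 0.9), 1),
    ((0.2, 0.1), 0),   # rater marked "low quality"
    ((0.1, 0.3), 0),
]
w, b = train_pointwise_ranker(ratings)
```

The raters never set rankings directly; their labels only tell the model what
"good" looks like, which matches the announcement quoted elsewhere in the
thread.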

------
bhartzer
If you know it's going to get leaked anyway, just release a sanitized version
of it.

------
astazangasta
Why this? Why not measure empirically based on how often people follow your
first result?

~~~
whistlerbrk
That approach is prone to creating a positive feedback loop through position
bias: whatever ranks first gets the most clicks simply because it is first.

~~~
elsewhen
That's why they track what users do post-click. If the user quickly bounces
back to the search results page, then the result probably wasn't a very good
fit for that particular keyword.
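
A toy version of that bounce signal (the threshold, log format, and names are
my assumptions, not Google's actual pipeline): a click that returns to the
results page within a few seconds counts against the result.

```python
BOUNCE_THRESHOLD_S = 10  # assumed cutoff for a "quick" bounce

def bounce_rate(click_log):
    """click_log: list of (url, dwell_seconds) for one query's clicks.
    Returns the fraction of clicks on each URL that bounced quickly."""
    stats = {}  # url -> (non_bounce_clicks, total_clicks)
    for url, dwell in click_log:
        good, total = stats.get(url, (0, 0))
        stats[url] = (good + (dwell >= BOUNCE_THRESHOLD_S), total + 1)
    return {url: 1 - good / total for url, (good, total) in stats.items()}

log = [("a.com", 3), ("a.com", 5), ("b.com", 120), ("b.com", 4)]
rates = bounce_rate(log)
# a.com bounced on both clicks; b.com on one of two
```

A rate like this is only one signal among many; on its own it inherits the
position bias mentioned above, since first-ranked results get most of the
clicks in the first place.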

