Thousands of humans rank Google search results manually

For a long time, Google's results were believed to be solely the product of fancy algorithms and clever engineering, but The Register's recent look at Google's 160-page guidebook, essentially a reference manual for its human "raters", has proved otherwise.

So the question on everyone's mind is: who are the humans entrusted with this most noble job?

As it turns out, Google outsources the job to two crowdsourcing agencies, Leapforce and Lionbridge, which in turn employ home workers to do the rating. According to one Leapforce job ad, there are around 1,500 raters.

The work is flexible but demanding. Raters must pass an examination and are continually evaluated by Google. For example, each rater is given a "TTR" (Time to Rate) score, which measures how quickly they make their rating decisions.

The 160-page manual gives detailed advice for raters on subjects such as relevance, spamminess, and – more controversially – the elusive “quality”.

For relevance, raters are advised to assign one of the following ratings: "Vital", "Useful", "Relevant", "Slightly Relevant", "Off-Topic or Useless" or "Unratable".

Raters may also be asked to give a spam rating: “Not Spam”, “Maybe Spam”, “Spam”, “Porn” and “Malicious”.

Raters are also asked to second-guess "user intent", i.e. what the user was trying to accomplish when typing the query. Intentions are classified into three categories: "action intent" or "do queries", where the user wants to "accomplish a goal or engage in an activity"; "information intent" or "know queries", where the user wants to find information; and "navigation intent" or "go queries", where the user wants to reach a particular website.

Websites with content more than four months old are never rated as "Vital".
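To make the rating scheme more concrete, here is a minimal, hypothetical Python sketch of the scales and intent categories described above. The enum names, the check_relevance helper and the "downgrade to Useful" fallback are illustrative assumptions, not anything from Google's manual or tooling; the guidebook only says such content is never rated "Vital".

from datetime import date, timedelta
from enum import Enum

# Labels taken from the rating scales described in the manual;
# the structure below is purely illustrative.
class Relevance(Enum):
    VITAL = "Vital"
    USEFUL = "Useful"
    RELEVANT = "Relevant"
    SLIGHTLY_RELEVANT = "Slightly Relevant"
    OFF_TOPIC = "Off-Topic or Useless"
    UNRATABLE = "Unratable"

class SpamRating(Enum):
    NOT_SPAM = "Not Spam"
    MAYBE_SPAM = "Maybe Spam"
    SPAM = "Spam"
    PORN = "Porn"
    MALICIOUS = "Malicious"

class Intent(Enum):
    DO = "action intent"        # accomplish a goal or engage in an activity
    KNOW = "information intent" # find information
    GO = "navigation intent"    # reach a particular website

# Rough stand-in for the "older than four months" rule.
MAX_VITAL_AGE = timedelta(days=4 * 30)

def check_relevance(rating: Relevance, content_date: date, today: date) -> Relevance:
    """Never allow a Vital rating for content older than four months."""
    if rating is Relevance.VITAL and today - content_date > MAX_VITAL_AGE:
        return Relevance.USEFUL  # hypothetical fallback; the manual only rules out "Vital"
    return rating

For example, check_relevance(Relevance.VITAL, date(2011, 1, 1), date(2011, 11, 1)) would come back as "Useful" rather than "Vital", because the content is nearly a year old.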

To read more on the other guidelines set out in the manual, take a look at The Register's original article.


Written by Regina Timothy

Editor of TechNews Report. Loves all things technology
