profits vs. search relevance?


This is hilarious.

I search Google for “fuji premium plus” glossy longevity because I want to find some info on the cheap consumer paper I’ve been using.

Out of a small set of 11 results, I look at #4 and find this page, whose summary has the right keywords, but only if you count the ads. There isn’t a cached version of the page either, which is kind of suspicious. The crawled site could have tricked Google by feeding its crawler different content, or the site could be serving visitors referred from Google different content than people who come in straight, which in fact it does, along with a couple of Google ads.

It might make sense that Google didn’t keep a cached copy of the page, since the page just looks like search results, but why wouldn’t it cache a copy at all? Maybe because of the meta http-equiv 60-second refresh?
Still, the game of serving search-shaped content stuffed with irrelevant marketing messages back to search-engine referrals has been around a while, and it’s easy to check whether straight-through-page != referrer-through-page.
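
Checking is simple enough to script: fetch the page once with no Referer at all and once pretending to arrive from a Google results page, then diff the two responses. Here is a minimal sketch, assuming the third-party Python requests library; the URL and query string are placeholders for the suspect page:

    import requests

    # Hypothetical landing page to test; substitute the suspect URL.
    URL = "http://example.com/landing-page"

    # Straight-in visit: no Referer header at all.
    direct = requests.get(URL, timeout=10)

    # Same page, but looking like a click-through from a Google search.
    via_google = requests.get(
        URL,
        headers={"Referer": "https://www.google.com/search?q=fuji+premium+plus"},
        timeout=10,
    )

    # Naive comparison; a real check would normalize rotating ads,
    # timestamps, and other dynamic noise before diffing.
    if direct.text != via_google.text:
        print("straight-through-page != referrer-through-page (likely cloaking)")
    else:
        print("same content either way")

The same fetch with a Googlebot-style User-Agent header would probe the crawler-facing version of the trick, though a careful site can also tell real Googlebot requests apart by IP.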

I wonder if the reason these sites aren’t filtered out is that they also tend to be heavy users of Google’s revenue-generating AdWords service. If so, enforcing relevance here would be an unprofitable move for Google.

