"Tribune said it asked Google to stop using Google to crawl newspaper Web sites but Google continued to do so."
Wait, really? I find this disingenuous, to say the least, coming from the same Tribune company that created a skunkworks project six months ago explicitly tasked with co-opting and "gaming" online communities of note (Digg, Twitter, Flickr, and others). They're desperate for pageviews. How many readers would they lose if Googlebot were actually kept out?
And can't they just block it with a robots.txt file?
Maybe they did ask and Google didn't respond because they figured any idiot knows about robots.txt. Of course, they weren't expecting the idiocy of the Tribune's web admin.
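For the record, shutting Googlebot out entirely is two lines in a robots.txt file at the site root (standard robots exclusion syntax; the archive path in the second variant is just a placeholder for wherever they keep old stories):

    User-agent: Googlebot
    Disallow: /

Or, if they only wanted to keep the crawler out of stale content rather than losing all that search traffic:

    # hypothetical archive directory
    User-agent: Googlebot
    Disallow: /archives/

Either way, a well-behaved crawler honors it, and Googlebot does. This isn't exotic knowledge for anyone running a website.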
So the Tribune put an undated story on their front page under "top stories" and then blamed Google for not figuring out that the story was old? How would a human even realize that, let alone a bot?
If accounts of the story are accurate, then the only party that screwed up was the Tribune, by a) using an absurdly naive "top story" algorithm and b) not dating their stories (pathetic for any site in this age, but a newspaper!?).
Apparently, we're supposed to guess article dates by reading the copy or by looking at the dates on the comments. That said, they appear to be dating all of their stories now.