Added: Looks like HN has been blocking Googlebot, so our automated systems started to think that HN was dead. I dropped an email to PG to ask what he'd like us to do.
(A couple weeks ago I banned all Google crawler IPs except one. Crawlers are disproportionately bad for HN's performance because HN is optimized to serve recent stuff, which is usually in memory.)
A site can be crawled from any number of Googlebot IP addresses, and so blocking all except one doesn't help in throttling crawling.
If you verify the site in Webmaster Tools, we have a tool you can use to set a slower crawl rate for Googlebot, regardless of which specific IP address ends up crawling the site.
Let me know if you need more help.
Edit: Detailed instructions to set a custom crawl rate:
1. Verify the site in Webmaster Tools.
2. On the site's dashboard, the left-hand side menu has an entry called Site Settings. Expand that and choose the Settings submenu.
3. The page there has a crawl rate setting (the last one). It defaults to "Let Google determine my crawl rate (recommended)". Select "Set custom crawl rate" instead.
4. That opens up a form where you can choose the desired crawl rate in crawls per second.
If there is a specific problem with Googlebot, you can reach the team as follows:
1. To the right-hand side of the Crawl Rate setting is a link called "Learn More". Click that to open a yellow box.
2. In the box is a link called "Report a problem with Googlebot" which will take you to a form you can fill out with full details.
Crawl-Delay is (in my opinion) not the best measure. We tend to talk about "hostload," which is the inverse: the number of simultaneous connections that are allowed.
A few years ago, I did pretty much the same thing myself. Thankfully the late summer was our slow season and the site recovered pretty quickly from my bone-headed move, but the split second after I realized what I'd done was bone-chilling.
I think just about everyone has thought at some point that they understood how something worked, only to have had things go pear-shaped on them.
The lesson: people are not fully knowledgeable about everything, even the smart and talented ones.
"You would not believe the sort of weird, random, ill-formed stuff that some people put up on the web: everything from tables nested to infinity and beyond, to web documents with a filetype of exe, to executables returned as text documents. In a 1996 paper titled "An Investigation of Documents from the World Wide Web," Inktomi's Eric Brewer and colleagues discovered that over 40% of web pages had at least one syntax error."
We can often figure out the intent of the site owner, but mistakes do happen.
If you're writing HTML, you should be validating it: http://validator.w3.org/
Is there any real downside to having syntax errors?
Obviously, that's not a problem if you already know exactly how different browsers will treat your code, or if you're relying on parsing errors so common that browsers must patch them up identically for the web to work. For example, on the Google homepage, they don't escape ampersands that appear in URLs (like href="http://example.com/?foo=bar&baz=qux"; the raw & should be written as &amp;). That's a syntax error, but one that maybe 80% of the web commits, so any browser that couldn't handle it wouldn't be very useful.
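To see how leniently parsers recover from exactly this error, here's a small sketch using Python's stdlib HTML parser (the URL and the LinkCollector helper are made up for illustration):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags (illustrative helper)."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.extend(v for k, v in attrs if k == "href")

# The ampersand before "baz" is not escaped as &amp;, which is
# technically a syntax error, but the parser recovers the intended
# URL because "&baz" is not a recognized character reference.
p = LinkCollector()
p.feed('<a href="http://example.com/?foo=bar&baz=qux">link</a>')
print(p.hrefs)
```

The recovery is deterministic, which is the point: every mainstream parser has to resolve this the same way, so the "error" is harmless in practice.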
Anyhow, one downside to having syntax errors might be that parsers which aren't as clever as those in web browsers, and which haven't caught up with the HTML5 parser standard, might choke on your page. This means that crawlers and other software that might try to extract semantic information (like microformat/microdata parsers) might not be able to parse your page. Google probably doesn't need to worry about this too much; there's no real benefit they get from having anyone crawl or extract information from their home page, and there is significant benefit from reducing the number of bytes as much as possible while still remaining compatible with all common web browsers.
I really wish that HTML5 would stop calling many of these problems "errors." They are really more like warnings in any other compiler. There is well-defined, sensible behavior for them specified in the standard. There is no real guesswork being made on the part of the parser, in which the user's intentions are unclear and the parser just needs to make an arbitrary choice and keep going (except for the unclosed center tag, because unclosed tags for anything but the few valid ones can indicate that someone made a mistake in authoring). Many of the "errors" are stylistic warnings, saying that you should use CSS instead of the older presentational attributes, but all of the presentational attributes are still defined and still will be indefinitely, as no one can remove support for them without breaking the web.
There is no reason to allow most of these errors other than coding sloppiness.
The web would have died in stillbirth and it would never have grown to where it is now.
"Be generous in what you accept" (part of Postel's Law) is a cornerstone of what made the internet great.
XHTML had a "die upon failure" mode, and it has died. Why do you think XHTML was abandoned and lots of people are using HTML5 now?
The irony of that statement on hacker news is pretty amazing. Have you looked at how the threads are rendered on this page? It's tables all the way down.
Maybe instead that hostload could be parsed from robots.txt? It sure seems like the better mechanic to tweak for load issues (while traffic/bandwidth issues are still unresolved).
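For comparison, the closest existing knob in robots.txt is Crawl-delay, a nonstandard directive that some crawlers honor (notably not Googlebot); a hostload directive like the one proposed here does not exist, so the second directive below is purely hypothetical:

```text
User-agent: *
# Nonstandard but widely recognized by some crawlers:
# wait N seconds between successive requests
Crawl-delay: 10

# Hypothetical hostload directive (not part of any spec):
# allow at most 2 simultaneous connections to this host
# Host-load: 2
```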
Another thing that might help google is for them to announce and support some meta tag that would allow site owners (or web app devs) to declare how likely a page is to change in the future. Google could store that with the page metadata and when crawling a site for updates, particularly when rate limited via webmaster tools, it could first crawl those pages most likely to have changed. Forum/discussion sites could add the meta tags to older threads (particularly once they're no longer open for comments) announcing to google that those thread pages are unlikely to change in the future. For sites with lots of old threads (or lots of pages generated from data stored in a DB and not all of which can be cached), that sort of feature would help the site during google crawls and would help google keep more recent pages up to date without crawling entire sites.
I believe you can do that using a sitemap.xml
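The sitemap protocol already defines changefreq and lastmod fields for exactly this purpose; a minimal sketch, with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- front page: changes constantly -->
  <url>
    <loc>http://example.com/</loc>
    <changefreq>always</changefreq>
  </url>
  <!-- an old, closed thread: effectively frozen -->
  <url>
    <loc>http://example.com/item?id=12345</loc>
    <lastmod>2011-01-01</lastmod>
    <changefreq>never</changefreq>
  </url>
</urlset>
```

Note that changefreq is treated as a hint, not a command; crawlers are free to ignore it.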
There's an example of doing such with nginx here:
With that you'd just have to send out the HTTP header from the arc app saying that current articles expire immediately, and old ones don't.
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
(I used to telnet to port 80 for testing, and type GET / HTTP/1.0 <enter> <enter>, and that should be LF on Linux & Mac)
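A sketch of how the app side could pick those headers based on article age (the function name and the two-week cutoff are assumptions, not anything the arc app actually does):

```python
def cache_headers(age_hours, frozen_after_hours=24 * 14):
    """Pick a Cache-Control value based on how old the article is.

    Assumption (hypothetical cutoff): threads older than two weeks
    are closed for comments and effectively never change.
    """
    if age_hours > frozen_after_hours:
        # Old thread: let caches keep it for a day
        return {"Cache-Control": "public, max-age=86400"}
    # Fresh thread: force revalidation on every request
    return {"Cache-Control": "no-cache"}

print(cache_headers(1))        # fresh article
print(cache_headers(24 * 30))  # month-old article
```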
Do you ignore whether your HTML is valid just because the browser rendered it correctly?
I've got real work to do. Making a validator happy is fake work.
So, do you want to pay the price upfront when you can plan for it or afterwards when the fix must be done immediately because customers are complaining?
I'd much rather pay the exact price later, than an inflated price now.
Then, working at a startup taught me that it's not black and white. Several quotes come to mind, but Voltaire's is my favorite:
"The perfect is the enemy of the good."
Cowboys may get things "done" quickly, but that doesn't help when things are subtly broken, have interoperability problems, or are nearly impossible to extend without breaking.
If the cache has a copy of an article that is a few hours old, it will just give that version to Googlebot, while if it thinks a human is requesting the page, it will go to the backend and fetch the latest version.
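A rough sketch of that idea; the in-memory cache, the staleness window, and the user-agent check are all simplified assumptions (real crawler verification should use reverse DNS, not the User-Agent string):

```python
import time

CACHE = {}  # url -> (timestamp, body); hypothetical in-memory cache
MAX_BOT_STALENESS = 3 * 3600  # serve crawlers copies up to a few hours old

def is_crawler(user_agent):
    # Crude check for illustration only
    return "Googlebot" in user_agent

def serve(url, user_agent, fetch_backend):
    cached = CACHE.get(url)
    if cached and is_crawler(user_agent):
        ts, body = cached
        if time.time() - ts < MAX_BOT_STALENESS:
            return body  # stale-but-cheap copy for crawlers
    # Humans (and cache misses) hit the backend for the latest version
    body = fetch_backend(url)
    CACHE[url] = (time.time(), body)
    return body
```

The win is that crawler traffic never touches the backend for recently cached pages, while human readers always see the current version.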
 15k reqs/sec on a moderate box
If you have an audience, you have PR power.
In the lower-right corner of Google Maps there is a tiny link that says, "Edit in Google Map Maker". Click this link and you can edit Google Maps. Your edits get sent to Google and they'll approve/deny it in typically a few days.
The listing only shows up if you type the exact name of the hospital into the search bar, which is useless.
I don't mean the ranking but other aspects, like you guys blacklisting some domains which produce low-quality content wholesale. (I don't know if the algorithm was tweaked to detect and filter such sources or if it was a manual thing.)
Webmaster Tools has a crawl rate slider which operates on a site-by-site basis, and that's existed for quite a while now.
If you're asking if they can manually boost a site's ranking, hopefully that isn't what's being suggested.
Just realized that this could be a problem for lots of sites, and I'm curious as to what the best solution is, since not everyone has Matt Cutts reading their site and helping out.
Seriously, one thing about Google is that they seem to really like ensuring people are logged on, preferably at all times. Fortunately, recent changes to Google Apps (promoting apps user accounts to full Google accounts) have made this more complex on my side and probably degraded the level of actionable info they can get out of it.
It's good to see this stuff sometimes. Thanks, Matt!
And then reading some of the other threads on this topic is a bit...something.
Guys, can you calm the conspiracy theory nonsense a bit? Please?
If you're not on this site very much, you might not realize that Matt pops into almost every thread where google is doing something strange regardless of who they're doing it to, and tries to help figure out what is happening. This isn't HN getting some sort of preferential treatment, this is just the effect of having a userbase full of hackers.
You'd see the same type of thing on /. years ago if you frequented it enough.
This is nothing new. This is what a good community looks like. Everybody relax.
Honestly if you read the things that Matt and Pierre have said, they just looked at "freshness" (I believe that is what it is called), and inferred that PG had blocked their crawlers.
This is all stuff you can get from within google webmaster tools (which isn't some secret whoooo insider google thing. It's something they offer to everybody, and it's just like analytics.)
OH! Wait! I mean (hold on, let me spin up my google conspiracy theory generator): thehackernews.com has more ads on it so google is intentionally tweaking their algo to serve that page at a higher point than the real HN because of ads!
C'mon, guys, look at their user pages. They're both just active users of the site trying to help out.
Of course it's preferential treatment. And if you scan the last month or two of Matt's comments they are general in nature and not specific as in:
"I think I know what the problem is; we're detecting HN as a dead page. It's unclear whether this happened on the HN side or on Google's side, but I'm pinging the right people to ask whether we can get this fixed pretty quickly."
You don't think "pinging the right people" and "get this fixed pretty quickly" is preferential treatment?
Answering people's individual questions doesn't scale to the entire Internet, so Google really has no choice but to address problems on a case-by-case basis. In this case, Matt reads HN and personally wants to solve the problem. That's the only way Google could possibly work, so that's how they do it.
Yet here, a website owner is purposely blocking the crawler and they jump with solutions to try to fix the problem. Sigh.
I remember when I first started visiting HN I saw all these smart people and the tight community and I was amazed that something that felt so close-knit and exclusive yet was still open could still exist these days.
I was a lurker for a long time before I actually signed up and participated because I honestly felt like I wasn't entitled to be part of "the group" and I should somehow earn my wings. Then in late 2010 I signed up but didn't submit for a bit and didn't join discussions. I still felt like I didn't have enough to offer. I now feel like I've somehow earned the right to be part of this community, though in hindsight I'm quite embarrassed of my first few submissions.
So this story does have a point that I'm about to get to. I first heard of HN through an article in GQ and then forgot the link. I couldn't find the site again after searching Google for "Hacker News" as easily as I thought. This frustrated me slightly back then but now I think it's a good thing.
As the size of a community gets larger the quality of comments and submissions usually decreases. Letting people join HN freely and openly is a great thing but I fear that if it became a huge sensation then we'd be inundated by garbage submissions and comments way more frequently. I know about the post on how newbies often say HN is becoming Reddit and all that so I do try to remember that.
So the point is that not everyone respects communities like this, or is as thoughtful about joining and interacting with them as I was, and I feel like maybe it's okay if Google isn't giving us the best ranking for certain terms. I mean, HN is still easy to find, just not that easy to stumble over.
( http://www.webmasterworld.com/profilev4.cgi?action=view&member=GoogleGuy )
I'm aware of webmaster tools, but it seems not all webmasters are.
But people do choose to remove their sites, so we can't always tell between a mistake vs. someone who genuinely prefers not to be in Google's index.
I am sure there are many folks here as well who have had similar problems; why not help us all out? ))
So which part is laughable? The part where they're able to crawl the vast expanses of the web and return relevant results for the majority of their users? Or is it the part where they came out of nowhere to dominate search because they did it better than the rest?
Come on now, you can't be all things to all people. Google is far from perfect, but for a lot of us it's much closer to perfect than the competition, and they're constantly trying to improve it. Why don't you go ask Matt Cutts to fix whichever parts of it you think are laughable to your liking? He's been hanging around here and he doesn't seem shy about answering people's questions and concerns. I do doubt he'd give the time of day to a one-sentence remark that adds nothing of value whatsoever to the larger discussion or any of its offshoots.
It now only consists of ads, a twitter feed, and a "Abba-da-dabba-da-dabba-dabba Dat’s all folks!" line.
So what if Googlebot thought HN was dead? Why would it opt to show a "more dead" page in place of it?
I think you're fibbing.
and it didn't turn out quite well...