Let's say that 127 million people don't like the Mona Lisa. What does that tell you about the quality of the painting? Nothing. That's why downmodding is a bad idea.
That's why I didn't vote for that option myself. People keep asking about it though. One option would be to let users flag content as not appropriate for HN, and then give each user a per-user setting to filter out content the community doesn't find appropriate.
I'm all for flagging the content. It'd be especially nice if it'd throw up a warning in front of the summary if enough people flag it.
Digg does pretty well in this regard. If enough people bury the story as inaccurate, you see a big "Warning! The contents of this article may be inaccurate!". That plus scanning a few comments are helpful for deciding not to waste my time on an article.
Suggested flags include: 'inaccurate', 'nsfw', 'not suitable for HN', etc.
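For what flags plus a per-user filter could look like, here's a minimal Python sketch. The flag names come from the suggestions above; the threshold, data layout, and function name are assumptions of mine, not anything HN actually does.

    # Flags a story can carry, taken from the suggestions above.
    FLAGS = {"inaccurate", "nsfw", "not suitable for HN"}
    FLAG_THRESHOLD = 5  # assumed: number of flags before the community's verdict counts

    def visible_items(items, user_filters):
        """Return items the user has not opted to hide, attaching any warnings.

        items        -- list of dicts like {"id": 1, "title": "...", "flags": {"nsfw": 7}}
        user_filters -- the subset of FLAGS this user wants filtered out, e.g. {"nsfw"}
        """
        result = []
        for item in items:
            flagged = {name for name, count in item.get("flags", {}).items()
                       if name in FLAGS and count >= FLAG_THRESHOLD}
            if flagged & user_filters:
                continue                      # community flagged it, user chose to hide it
            result.append(dict(item, warnings=sorted(flagged)))  # show a warning instead
        return result

    # Example: a user filtering nsfw sees only the first story, with its warning attached.
    stories = [
        {"id": 1, "title": "Startup post", "flags": {"inaccurate": 6}},
        {"id": 2, "title": "Off-topic post", "flags": {"nsfw": 9}},
    ]
    print(visible_items(stories, {"nsfw"}))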
Let's say that 127 million people LIKE the Mona Lisa. What does that tell you about the quality of the painting?
For your argument to be sound, it has to apply just as much to upmodding as to downmodding. So we might as well get rid of modding entirely and just let all the crap come in unfiltered?
It is true that if a lot of people downmod something, this does not necessarily mean that thing sucks.
But it is also true that A LOT OF THE TIME when a lot of people downmod something, that thing does in fact suck.
How about we add downmodding and also tags and a way for each user to specify his/her own set of filter parameters?
Right. Also, people are much more inclined to use down arrows to try to mold the front page into their idea of what the site should be than they are on any given comment page.
But you are not going to spend forever writing a comment, are you? Wouldn't 12 hours be OK, for example?
It's about the memory allocated for the hash table. If they add some RAM and change the timeout to 12 hours, you won't even notice what's going on under the hood.
And closures are nice from a programming perspective. Should I say it's almost definitely the future of server-side development... ;)
HN is using URLs for passing those hash keys (if they are hash keys of course), but they need to store them on the server as well. So URLs themselves don't solve the problem in this case.
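For readers unfamiliar with the mechanism being described: the server stashes a closure in an in-memory hash table under a random key, embeds the key in the URL, and throws the entry away after a timeout. Here's a rough Python sketch under that assumption; the names (FNID_TTL, make_link, handle_request) and the 12-hour figure are illustrative, not HN's actual code.

    import secrets
    import time

    FNID_TTL = 12 * 60 * 60          # assumed 12-hour timeout, per the suggestion above
    _fnids = {}                      # key -> (created_at, closure), kept only in RAM

    def make_link(closure):
        """Store a closure and return the key to embed in a URL like /x?fnid=KEY."""
        key = secrets.token_urlsafe(8)
        _fnids[key] = (time.time(), closure)
        return key

    def handle_request(key):
        """Look up the closure for a key; expired or unknown keys have 'timed out'."""
        entry = _fnids.get(key)
        if entry is None or time.time() - entry[0] > FNID_TTL:
            _fnids.pop(key, None)
            return "Unknown or expired link."
        return entry[1]()

    # Example: a reply link that remembers which comment it replies to.
    key = make_link(lambda: "reply form for comment 42")
    print(handle_request(key))       # works until the entry expires or memory is reclaimed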
It's actually that HN doesn't use a database, where it could keep session-specific data much longer.
The reply link I used to add this comment had timed out, which seems wrong.
Have continuations just moved the complexity -- making some aspects more elegant, while resulting in undesired side effects like expiring reply links?
I find that often happens when I'm designing abstractions - complexity moves somewhere else, and I later get unexpectedly bitten.
All HN posts show up in Google unbelievably, even suspiciously quickly. Not sure if it's a hack on Google's side or just HN's popularity, but a link to Google with "site:news.ycombinator.com" in the request would do perfectly.
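As a concrete illustration of that workaround, here's a tiny Python sketch that builds such a link; the helper name is mine, and the URL is Google's standard /search?q= form.

    from urllib.parse import urlencode

    def hn_google_search_url(query):
        """Build a Google search URL restricted to news.ycombinator.com."""
        return "https://www.google.com/search?" + urlencode(
            {"q": f"site:news.ycombinator.com {query}"})

    print(hn_google_search_url("feature requests"))
    # https://www.google.com/search?q=site%3Anews.ycombinator.com+feature+requests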
Google does a good job of indexing HN quickly. There are several problems, however, since Google's ranking algorithms aren't necessarily the best at ranking content on HN. SearchYC addresses some of these problems and I like their solution, but I would prefer not having to leave the site to search. Given that search dominates the top of the list in "Feature Requests", I'm assuming a good number of people feel the same way.
I opted for the simple "get rid of Scribd or make it optional" choice, since the overwhelming majority in your poll wanted some way of getting the original PDF.