
Yeah, you may be right. I might try this. Does anyone have any opinions about whether this is a good idea, and if so how best to do it?



Something about this bothers me. Can't put my finger on it.

I think the assumption is that there are articles so bad that it should be obvious that they are off-topic. In the past, I've submitted things on the border. I imagine many others have too. I've always counted on the groupthink to correct any errors in submission I might make. This seems to assume that there are hard guidelines. Watching the board over the years, I'm not sure that assumption is accurate.

Put differently, if you had a cache of really bad articles, shouldn't we see them? That way we'd know not to submit. But if you already know they're bad, then what's the point of voting or flagging?

Perhaps I'm just mentally adrift here. Honeypots make sense to me when we are talking about boolean things: a website visitor is either harmful or not. An email sender is either a spammer or not. But I'm not sure at all that this concept applies to something like an essay. Seems like if it would, you could just use the flagging behavior mentioned to rank the articles and dump everybody else's votes. Right? This is like verifying the voting behavior by setting up some completely different system to rank quality detectors. But if you could rank quality detectors, why keep the old system? And if not, how would you separate which parts of which system are useful and which are not?


I think it is becoming increasingly clear that Google's approach is the best way: your ideal front page has to be personalized (when you are logged in, Google personalizes your results with location, previous searches, +1s, and even who you follow on Twitter).

Trying to be everything to everybody means there will be people left with sub-optimal results.


This becomes suboptimal on a news site. I don't want to see only content I already know is interesting to me; I want to read things I would never otherwise have come across. Of course, it is also impossible to read everything. This is why the voting system works so well: if many users find an article interesting, chances are it will be interesting to everyone. This dynamic is amplified substantially in a specific group like Hacker News. This is why I don't like personalised pages; they are only really feasible for things like Google+ and certain types of searches.


For those downvoting: Please share your thoughts. If you disagree, I'd love to know why.


It's not you; it's that pg has said many times (and I agree) that fragmenting the front page is a bad idea. We must all see the same front page to judge the same quality.


Interesting. However, many people already filter the front page, be it by points or via the Twitter accounts:

http://twitter.com/#!/newsyc20

http://twitter.com/#!/newsyc50

http://twitter.com/#!/newsyc100

http://twitter.com/#!/newsyc150

And from each filter, people auto-select things that interest them. Sometimes I only see a story when it is retweeted to me.

And this you can't prevent. It stems from the fact that different people have different definitions of what "quality" means.

Which is the core problem highlighted by the linked blog post.


The front-page of HN is a filtered list in and of itself. And I for one actually like the result.

Some people would like to see a different filter and thus create one, because they can. That is indeed not something you can, or should want to, prevent.

But the fact that some people create their own filters is not a motivation to not tweak the HN front-page filter in such a way that the front-page matches the intended goal (pg's goal in this case, presumably adopted by the majority of HN readers, more-or-less codified in the guidelines) as closely as possible.


"The core problem ... lies on the fact that different people have different definitions of what "quality" means."

The user is not the problem. The user is the solution. http://news.ycombinator.com/item?id=3170840


I agree with this. However, I don't think existing communities can easily adapt to it. Instead, I think it has to be part of the initial design of the platform, so that it's tightly integrated into the service. You could go further than Google and deeply integrate a social graph, with feed weighting based on who you follow and what they vote on, as well as your previous voting record and maybe some shortcut to topic, such as tags.


Why aren't we taking the Bayesian approach?


Can you be more explicit?


The concept of a 'honeypot' article is a little ridiculous.

The best you can hope for is "this preferred population of people overwhelmingly dislikes this article". The problem at hand is that you want to foster the interests and tastes of that one group of people.

So instead of crusading against lame newbs with a labour-intensive system of 'silly articles' (who picks which articles ought to be downvoted?), you could compare their weighted voting history against your population of 'good users', yadda yadda yadda.


I'll bet he wants to complicate the formula to look something more like

"P(H|Q)=P(Q|H)P(H)/P(Q)" where

H = "User upvoted a honeypot article" and

Q = "User's votes are not a good signal for article quality"

And a similar adjustment for flagging.
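For concreteness, here's a toy version of that update with invented probabilities (a sketch; none of these numbers come from real data):

  # Toy illustration of the Bayes update above, with invented numbers.
  # H = "user upvoted a honeypot article"
  # Q = "user's votes are not a good signal for article quality"

  def bayes(p_b_given_a, p_a, p_b):
      # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
      return p_b_given_a * p_a / p_b

  # Hypothetical inputs: 10% of users are poor voters (P(Q)), 40% of
  # poor voters upvote a given honeypot (P(H|Q)), 5% of all users do (P(H)).
  p_q, p_h_given_q, p_h = 0.10, 0.40, 0.05

  # P(Q|H): how likely a user is a poor voter, given a honeypot upvote.
  print(bayes(p_h_given_q, p_q, p_h))  # 0.8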


A Bayesian approach could be nice if it could be used to increase diversity.


Sounds like a machine learning problem. Showing people the cache of really bad articles would lead to overfitting.

Making each user's vote a vector whose strength is determined by the value of the honeypot formula, with an article's displayed total being the floor of the sum of its weighted votes, would be pretty cool, but might be too computationally expensive--certainly more so than the user-categorization approach.
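A minimal sketch of that weighting, taking the post's h(u) as the vote strength (names invented):

  import math

  def article_score(votes):
      # votes: iterable of (direction, h_score) pairs, direction = +1/-1;
      # the displayed total is the floor of the weighted sum.
      return math.floor(sum(direction * h for direction, h in votes))

  print(article_score([(+1, 0.9), (+1, 0.4), (-1, 0.2)]))  # 1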


I don't see any problem. The article is talking about rating upvoters and flaggers, not submitters, so you could continue with your submissions and not be penalized.


I don't know Arc, but I'm happy to write out the idea in simpler instructions than the blog post if it helps.

The implicit approach is very practical. You basically just need to bootstrap it and then it will run itself.

To bootstrap:

- Track if each user has seen an article or not.

- Track how many times each user flags any article; call this any_flagged

- Add an admin-only "honeypot" button (or track articles flagged by admins)

- When an admin marks an article as a honeypot:

  1) Increment the honeypot_seen counter of anyone who sees (or has seen) the article.

  2) Increment the honeypot_flagged counter of anyone who flags the article.

  3) Increment the honeypot_upvoted counter of anyone who upvotes the article.

- Repeat until you are happy with the number of honeypots and the coverage of the community. Intuition says if you focus on front-page articles, then you should be fine after 30 or so flagged articles.

Then calculate your super flaggers:

- Apply the h-formula to each user, h(u) = (honeypot_flagged - honeypot_upvoted) / (honeypot_seen * any_flagged)

- Select the top N% to be super flaggers. Again, intuition would say 5-10% is reasonable, but that depends on the way the data looks.

Set it to implicit mode. Now each article's super flags are tracked, and when its super-flag threshold (a percentage of super flaggers) is crossed, you declare it a honeypot and run a process analogous to the one in the bootstrapping phase.
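Here's a rough Python sketch of the bookkeeping (I don't know what HN's Arc internals look like, so all names are illustrative):

  from dataclasses import dataclass

  # Bootstrap counters and super-flagger selection described above.

  @dataclass
  class UserStats:
      honeypot_seen: int = 0     # honeypots the user has viewed
      honeypot_flagged: int = 0  # honeypots the user flagged
      honeypot_upvoted: int = 0  # honeypots the user upvoted
      any_flagged: int = 0       # total flags the user has ever cast

  def h(u):
      # h(u) = (honeypot_flagged - honeypot_upvoted) / (honeypot_seen * any_flagged)
      denom = u.honeypot_seen * u.any_flagged
      return (u.honeypot_flagged - u.honeypot_upvoted) / denom if denom else 0.0

  def super_flaggers(users, top_fraction=0.05):
      # Pick the top N% of users by h-score (5-10% per the intuition above).
      ranked = sorted(users, key=lambda uid: h(users[uid]), reverse=True)
      k = max(1, int(len(ranked) * top_fraction))
      return set(ranked[:k])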


Clever idea in general, a few issues:

1. Sometimes people upvote without reading the article just because the comments are good, and want others to benefit from that as well.

2. Sometimes people upvote to save a submission, since there's no separate save function. For example, I want to send it to a friend later, but not right now, so I upvote it to make it easier to find (b/c HN search is hit-or-miss).

I confess I'm guilty of these, but I doubt I'm the only one.

By way of comparison, take Reddit. Upvoting and saving are separate functions. You can save submissions you think you might want to revisit later. Reasons for doing so:

1. You're scanning the headlines quickly but don't have time to actually read everything, so you save the article and comments for later perusal (lunch break, after work, whatever).

2. Subreddits reduce the cost of upvoting. For example, every time I consider whether to upvote a Bitcoin story on HN, I consider whether it's front-page worthy. On Reddit, that's not a problem, I can just assume it won't hit the general front page b/c it's a relatively niche subject, but the upvote might help it within its own subreddit.

One more potential problem with the idea of superflaggers. If 'social media experts', or spammers, or whatever the people are who game sites like Digg and Reddit got wind of the fact that flagging honeypots could increase the weight of their flags and/or votes, mightn't they also figure out how to abuse that?

A professional, as some of them seem to be (e.g., able to spend all day every day doing this), might be able to achieve a denominator very close to 1 and a numerator close to the total number of actual honeypots.


> One more potential problem with the idea of superflaggers. If 'social media experts', or spammers, or whatever the people are who game sites like Digg and Reddit got wind of the fact that flagging honeypots could increase the weight of their flags and/or votes, mightn't they also figure out how to abuse that? A professional, as some of them seem to be (eg, able to spend all day every day doing this), might be able to achieve a denominator very close to 1, and a numerator close to the total actual honeypots.

Super flaggers receive no special powers other than the ability to contribute to the honeypot score of a given article. Their votes and flags are counted the same as a normal user's. I addressed this in more detail in another comment below, but basically the ability to game this system, and the utility of doing so, are minuscule.


>Super flaggers receive no special powers other than the ability to contribute to the honeypot score of a given article. //

No special powers other than helping to ensure their submissions are not 'honeypotted' and submissions contrary to their view are 'honeypotted'?

Wouldn't that also downgrade those who hold views contrary to theirs, since the contrarians would be more likely to upvote the stories that the gamers are helping to get marked as honeypots, thus ensuring that the gamers keep those with opposing views from gaining a position in the quality-control caucus?

I notice you're in AI, have you run some formalised tests on how such a voting system would play out?

My personal (untested) preference is towards making voting plain and all scores open and then letting users somehow create their own metric for filtering. Perhaps that won't work on the scale of a successful site though.

Are there actually upvoting and flagging cabals in operation on this site now?


> No special powers other than helping to ensure their submissions are not 'honeypotted' and submissions contrary to their view are 'honeypotted'?

Please see my detailed explanation of why this is not a problem [1].

> I notice you're in AI, have you run some formalised tests on how such a voting system would play out?

No. If there is a top-tier conference publication in it, I would be happy to do some MC runs. That being said, this is not really a publishable idea unless I can actually implement it and measure the results somehow on a real site. :)

> Are there actually upvoting and flagging cabals in operation on this site now?

Probably not. There definitely are such rings on Digg and Reddit (I know this for a fact). This is a general system, though, so it could be useful on any social news site.

[1] http://news.ycombinator.com/item?id=3166548


>Sometimes people upvote to save a submission,

Isn't this what bookmarks are for?


OT: wow, I always assumed voting on Reddit stories was per-subreddit. Does a story submitted to multiple subreddits really share the sum of the upvotes?


There's no such thing as "a story submitted on multiple subreddits". Any submission is associated with exactly one subreddit. But an upvote there also counts as an upvote towards sending it to the frontpage.


> Track how many times each user flags any article, let's call this any_flagged

This is a pretty fatal flaw in your plan. Not everyone can flag, and it seems that if you flag too much you lose the ability to flag. This restriction would have to be relaxed before your plan could be tested.


Seems like a bad idea that isn't even necessary. I went and looked at the guidelines, which I admit to not having done in some time, and they say:

Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.

Of the 30 items on the frontpage as I write this, none fit this characteristic. The closest might be the one about Google not removing police brutality videos. But that hardly seems to be a seriously negligent submission.

This is what you say is on-topic: On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

While there's a lot that may not qualify under these guidelines, IMO, I'm happy to leave it up to the voters of HN to decide. Clearly people thought these were the 30 most interesting stories right now (based on how voting is done).

I'd argue that better content needs to be written, more than I'd argue that we need honeypots. Maybe add some way to promote a really in-depth and insightful comment to a front-page submission or something.


That's a fair point. I wrote the post as a reaction to an off-topic article that I saw on the front page. That's not to say that the overall quality of HN isn't superb, but I was just particularly upset that a glaringly off-topic article was so highly upvoted.

This is a general system, however, and it certainly addresses a problem from which many other social news sites suffer. I can imagine Reddit implementing something like this, for instance. Thus, even if one believes it may be overkill for Hacker News (at the moment), I think it's an important contribution to the question of how to properly and automatically moderate social news sites.


Agreed. You should discuss this with Reddit and other social news sites.


Any suggestions on how? Is there a good subreddit to post this on and get attention?


Which article?


I haven't noticed any huge decline in the front page recently, although I do think that many stories scroll off the new page way too fast. This is still a huge problem for longer articles as well as lectures and podcasts, and basically prevents anything but short pithy content from making the front page.

As far as comments go, I think the main problem is that there are just too many comments that don't really contribute much. The other thing that bothers me is that there are a handful of commenters who contribute very little of value and yet are some of the fastest rising in terms of karma. I've always been fairly insensitive to mean-spirited comments, so whether this has gotten better or worse I have no idea.

edit: Maybe the new page could be changed to some kind of queue system for non-breaking news, where it would be limited to 30 new stories per hour.


This is how I classify articles (loosely based on a famous quote by Eleanor Roosevelt*):

- Articles on ideas: +1. Examples from the current homepage include "Why we moved off the cloud" -- and that's kind of stretching it.

- Articles about events or claims: "meh". Examples: Batch is the best photo editor, Google+ now available for Google Apps, etc.

- Articles on other people, current events, activism: flag. Examples: Google police brutality, Vimeo bans game videos, etc. This is more Slashdot material, nothing interesting.

*The E. Roosevelt quote is something like "Great minds discuss ideas, average minds discuss events, small minds discuss other people."

EDIT: I realized after editing my answer that it doesn't really apply to your comment. I meant to say that, given this method of classification, it's noticeable that the quality of articles has decreased, and I would love to see more ideas than pointless "they're taking our rights" articles.


The quote is not by Eleanor Roosevelt. It goes back to at least 1901, in the precisely titled Reminiscences of Legal and Social Life in Edinburgh and London, 1850-1900. (Discussed at http://en.wikiquote.org/wiki/Eleanor_Roosevelt.)

Also, I don't think it's true. It sounds true because of the way it is formulated. But great minds often discuss all of these things. Think of, say, David Hume. Clearly a great mind, clearly (IIRC) interested in all three.


If my upvotes/downvotes/flags may be secretly used to profile me in ways I can't know about, and my secret profile might negatively impact my experience on this site, isn't my best option to not participate at all?

In short, with this mechanism I can't trust that anything I do, be it clicking through to an article or flagging one, won't negatively classify me. I can't be sure this site isn't trying to trap me somehow.

My advice is to let all of this hand-wringing about such nebulous issues as 'quality' go. It's a fairly decent community here and that's about as good as you could expect from the Internet at large. If you're really worried about what this site has become, simply conclude the 'Arc experiment' and shut it down.


The proposed negative impact is that votes from "bad" users would be counted less. So it can't be that your best option is to not participate: if you stop voting, you already punish yourself worse than this algorithm would.


I don't see how a system like this would improve comment quality.

In my opinion, high comment quality is strongly related to the atmosphere of respect on HN. Making comment scores private helped with this.

Creating complex rules, or rules that mysteriously favor the votes of some users relative to others, leads to the perception that HN is a caste system, even if status is earned over time... and nudges the atmosphere more toward competition than friendly discussion.

The one feature that I think could be useful would be some way to merge stories that are essentially identical. There is a karma incentive for people to post lots of stories about topics that are "trending" at the moment. The more users we have, the more thorough the community will be with this, and it's both a good thing and a bad thing. The upside is deeper coverage of important events, the biggest downside is a fragmented discussion, but in addition the homepage is often filled with 4 or 5 (or more) highly similar stories.

Adding "merge" would improve the s/n ratio, reduce redundancy in the discussions, and make it easier for someone checking HN at the end of the day to get up to speed on what happened and to (perhaps) leave an insightful comment or two.


I think you could do something much simpler, like letting us downvote submissions. I only flag egregiously bad submissions, there are many other times I wish I could downvote though.


Seconded. There is so much stuff that is superfluous or just not that good, yet not bad enough to flag.


The second proposal, implicit honeypots, is similar to something I'd been thinking of as a way to address the degradation of comment quality, and could be finagled to address that larger issue.

My idea was that you would start off with a limited number of "trusted" members, and invisibly flip a bit in their profile. These would become supervoters, and their votes would confer or remove some multiple of karma for each up- or down-vote. In so doing, they would exert a proportionally larger influence on the visibility of comments, hopefully helping to highlight good ones and bury bad ones.

The supervoter bit would not be static; it could be gained or lost. A supervoter whose submitted comments received net negative votes by other supervoters would lose their bit, as would one who consistently voted against the trend established by the other supervoters. Similarly, a non-supervoter who tended to submit and upvote comments that were favoured by existing supervoters and downvote comments that were buried by supervoters (before they had been greyed out, to avoid gaming) could, after a certain threshold, have their bits flipped as well.

Ideally, this process would be entirely invisible, with no one but yourself and other similarly privileged users able to see who had the bit. Similarly, it would be best if the change went unannounced. I'm aware that HN is OSS; perhaps it would be better to leave this out of that repo, as Reddit does with anti-spam measures. The reason for the secrecy is the same as that for hiding karma scores: it reduces karmawhoring and gaming.

As for submissions, I think the problem there is both less severe and easier to solve. HN's front page is still slow enough that it can be hand-curated. More mods would likely be able to keep a handle on things. However, an algorithmic solution could work. Similar to supervoters, have superflaggers: if the ratio of submissions someone flags to those removed is high enough, flip a superflagger bit in their profile. Then, any article in the new queue that was flagged by (say) 3 or more superflaggers would be automatically removed.

The reason I like these proposals, as well as the submission's, is that they are invisible and nobody knows if they are even operational. By hiding these workings from membership at large, I believe it would be possible to have a positive effect on quality while still discouraging the kinds of behaviours that have led to a massive decrease in quality on large parts of reddit.
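For illustration, the core supervoter mechanics might look something like this sketch (the weight and thresholds are invented; nothing here is tuned):

  from dataclasses import dataclass

  @dataclass
  class User:
      is_supervoter: bool = False

  @dataclass
  class Comment:
      score: int = 0

  SUPER_WEIGHT = 3  # karma conferred per supervoter vote (made up)

  def apply_vote(comment, voter, direction):
      # direction is +1 (upvote) or -1 (downvote)
      weight = SUPER_WEIGHT if voter.is_supervoter else 1
      comment.score += direction * weight

  def update_supervoter_bit(user, net_supervotes_received, agreement_ratio):
      # Gain or lose the (invisible) bit based on alignment with the
      # existing supervoters. Thresholds here are placeholders.
      if user.is_supervoter:
          if net_supervotes_received < 0 or agreement_ratio < 0.5:
              user.is_supervoter = False
      elif agreement_ratio > 0.8 and net_supervotes_received > 0:
          user.is_supervoter = True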


I'm worried that this would lead to groupthink.


That's a legitimate concern, but I don't think HN is quite that homogenous. I also don't think we tend to downvote out of disagreement, which would be required in order for groupthink to set in. You occasionally see a thoughtful comment expressing an unpopular sentiment get downvoted, but not often and usually not for long.

For groupthink to be a serious threat, it's not simply enough to have the top-rated posts express a given view, you also have to have contrarian views be buried. I don't think that a group of people that pg would be likely to pick would a) upvote all the same things, or b) downvote posts they disagree with. As the supervoter bit would be passed to people who voted in generally the same way as a reasonable number of the superusers, it would be unlikely that people who downvoted out of disagreement would get the bit. Further, I think there's a wide enough range of views amongst the people pg would likely select to minimize the likelihood of a single viewpoint gaining dominance.

There's enough contrarianism built into the basic personality of most HNers that I think we'd be fine.


I have unfortunately seen all the things you describe; for example, see any thread about PHP, or the thread about Perl and random syntax, or about using an ORM for SQL.

I've seen it in threads about social issues as well, although none come to mind right now.


I've seen it too, and while disturbing, I don't think it's representative of the HN population as a whole, and I certainly don't think it's representative of the types of people who would initially be picked. The goal of my proposal is to counteract the people who do engage in those unwanted behaviours by giving proportionally more voting power to those who demonstrate that they don't.


Classically, honeypots are deployed to trap malicious activity and users.

My gut instinct is that applying it in this format, while extremely creative (wow, very cool thought experiment), may have undesirable consequences. The intuition comes merely from the fact that you are creating an adversarial premise for a wide-band community with varying maturity and motivation.

n.b. I have personal/professional experience wiring such efforts and am certain you know top blokes who're ninjas in the game


> you are creating an adversarial premise for a wide-band community with varying maturity and motivation.

I really have to disagree here. There is this misconception in these comments that somehow my system would let people get these massive egos and encourage them to harm others. That is simply not true, for several reasons:

1) You will never know if you are a super flagger, normal user, or ignored/penalized user.

2) Super flaggers gain no power to move things up or down for a specific article. They are merely there as a proxy for detecting if anyone is consistently upvoting improper articles. If the top 10% are super flaggers, the next 80% are normal users, and the bottom 10% are ignored, then that means the super flaggers will only account for 1/9th of the upvotes on average. And if they flag an article, it will not get removed faster than if a normal user flags it; rather, it will simply increase the chance that the article will be used as a honeypot in the future.

3) A single super flagger has little leverage, assuming you choose a large enough pool of super flaggers. One person will not do much to push the honeypot threshold over the top.

4) It's a moving target. So even if you were to ascertain that you are a super flagger and you decided to try to flag articles inappropriately (in the ever-so-small amount that you can do damage that way), you won't be a super flagger for long. Rather, you'll quickly be drowned out by your own noise and you'll fall off the super flagger list when the next update is performed.

That's not to say this system is perfect. I suppose one could manipulate it if:

1) You were somehow able to determine that you were a super flagger (non-trivial).

2) You were able to get a bunch of evil buddies together who also were super flaggers.

3) Your group is a sufficiently large portion of the total super flagger population, say 30%.

4) The admins did not include some oversight to periodically check up on what was being made into honeypots.

Then, yes, you could go to town flagging things for a while. It's certainly not foolproof, but if you had that large of a coordinated group on HN, you could wreak havoc in much more efficient and straightforward ways.


It seems to me that most of the issues are caused by too few ideal HNers watching the "New" page and up-voting quality content. Instead, primarily controversial submissions manage to garner the necessary number of up-votes to make it to the front page before falling off of the "New" page.

I don't believe that the proposed honeypot solution would address this.


Random articles from the "New" page could be occasionally inserted into the main page to see if they catch any up votes.


I was not even thinking about the mechanics of your system, which, as you've reasoned above and in the article, may work beautifully.. merely that HN starting to deploy such a methodology opens up a road that may have interesting ramifications down the track.

Am sorry if my response sounds less precise or more philosophical than you want.. but it is well intentioned.

In a democracy, one common problem is that you have to respect others who you think are voting wrongly and put up with bad content. HN as it stands now is a wide-band place.. Eternal September is always going to be a risk.

Another way to address some of these concerns would be to have sub-sections (much like a normal web-board) where people are encouraged to discuss some common subsets/topics.. or even have a special section for newer folks.


HN already does things like deadpooling people who are consistently offtopic, trolling or just nuts. I think that unless steps are taken to keep bad content under control, any forum is going to go under.


Thanks for redirecting me here.

>3) Your group is a sufficiently large portion of the total super flagger population, say 30%. //

There will be populations that aren't interested in the original mix. These populations could swamp the site under such a system.

So suppose HN becomes popular with a particular niche that isn't interested in the original mix, and they become a large proportion of the population. For example: there are many eBay sellers ("ebayers") who like the site for the occasional link they don't find elsewhere. These ebayers are usually inactive, but they start to vote against nerdy tech stuff and always upvote eBay-related articles. The site's focus will drift, and the feedback loop will attract more users who want eBay stuff and put off others from voting (as their votes get ignored because non-eBay stuff starts to become honeypot material).

...

Anyway. I would love to play with such a system and see where it goes. Like I said before, I'd love it if somehow the site could let you implement this whilst at the same time allowing me to ignore your honeypot system and just have displayed voting (+ clickthroughs and saves). That is, we'd be able to establish our own metrics. Then people could try different filter algos and choose which gives them the nicest site.


The real problem your honeypot suggestion has is that there is a diversity of opinions and expertise across the site, and they're not easily separable.

Should you discount someone's technical opinion because they are vocal about their political opinion? Because that's essentially what your system would do. And it would do so without notifying them that this was occurring.


I think the hardest part would be trying to discern intent just from someone upvoting one of these honeypot articles, given the loose definition of what's considered on-topic here. You'd be stuck between being subtle and catching false positives, and being obvious and driving people away / generating useless discussion with obviously off-topic stories on the front page. I'd argue against this for that reason, and because you'd probably alter users' voting behavior: they'd try to "avoid the honeypot" rather than simply evaluate the article.

I'd take a different approach and argue that most users are capable of, and willing to, filter articles they think fit the guidelines; however, I'd bet that most people don't leave the front page when considering what to vote on. This self-imposed filter bubble of convenience seems to create two separate areas of content: more pop-culture-leaning tech news on the front page with tons of comments and votes, and a graveyard of dead/questionable/interesting/technical articles without any feedback on new.

I would experiment with adjusting a user's per-article voting power, either silently or with feedback in the form of a voting power average similar to karma average -- however, I'd adjust it based on where the stories are and/or how much feedback they've already received when a user votes on them. I'd discourage voting on already super-popular stories and encourage voting on stories that haven't gotten much exposure (from "new"). You're also forced in the latter case to evaluate an article on its merit, before it has any comments and few points, similar to how you evaluate comments now without seeing comment karma, which seems to have helped with comment quality.

TLDR: Front-page stories with 2k votes don't need another 1k - we know it's a good article. Those 1k votes would be better served picking out gems and interesting "smaller" news from /new and other areas. Disincentivizing voting only on popular stories and incentivizing voting on new/unfiltered new ones would better serve the community than trying to catch people with honeypots.
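As a sketch of that disincentive (the decay shape and constants are invented, not a proposal detail):

  def vote_power(article_points):
      # Full weight on fresh stories, decaying as points accumulate.
      return 1.0 / (1.0 + article_points / 50.0)

  # points 0 -> 1.0, 50 -> 0.5, 2000 -> ~0.024: the 2001st vote on a
  # front-page hit barely moves it, per the TLDR above.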


> Front-page stories with 2k votes don't need another 1k - we know it's a good article.

Once an article has hit 100+ votes, displaying more does seem kinda pointless.


Seeing a bad submission might encourage others to submit those kinds of links, thus defeating the purpose and ending up in a worse situation. Additionally, submissions that are upvoted are kept in a "Saved Stories" section of one's profile. I generally use this as a bookmark for things I want to read [again] later. What if the title is potentially interesting yet the contents are trash? I might save it thinking to read it later, and it would be counted against me. Further, there is no mechanism to remove a saved story, which would be a worthwhile thing to implement.

A better approach might be to hand-select N individuals you know are solid community members and make their votes count more. Three points per upvoted story rather than one, for example. To spread this further, take the top N users each of them upvotes the most (beyond a minimum threshold, decaying over time, etc.) and give them two points per upvoted story. With N=20, that gives you a pool of 420 people who can influence the front page to a greater degree than most while still keeping it manageable. You shouldn't need to update your original pool very often and the secondary pool can be recalculated once per day.

You could also have a bad pool consisting of those users who were flagged more than once by anyone in the original or secondary pools (again, decaying over time). Their votes could count for nothing.
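A sketch of the pool construction and weighting (N=20; most_upvoted_by is a stand-in for whatever query finds the users a member upvotes most, with the threshold and decay applied elsewhere):

  N = 20
  PRIMARY_WEIGHT, SECONDARY_WEIGHT = 3, 2

  def build_pools(hand_picked, most_upvoted_by):
      primary = set(hand_picked)            # the 20 hand-selected members
      secondary = set()
      for member in primary:
          secondary.update(most_upvoted_by(member, limit=N))
      secondary -= primary                  # avoid double counting
      return primary, secondary             # up to 20 + 400 = 420 users

  def vote_weight(user, primary, secondary, bad_pool):
      if user in bad_pool:
          return 0                 # flagged by trusted members; ignored
      if user in primary:
          return PRIMARY_WEIGHT    # three points per upvoted story
      if user in secondary:
          return SECONDARY_WEIGHT  # two points per upvoted story
      return 1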

This approach would anchor the community around known, trusted members and let their actions become the drivers for the behavior you wish to encourage. If you wish you had more members like those you hand-pick to upvote stories, what you're really saying is you wish those stories were upvoted more, so giving them more votes achieves that goal. The secondary pool is then reputation-based as is the bad pool.


I'm actually quite reluctant to flag unless an article is clearly spammy. My feeling is that if the article is getting a little traction and stimulating discussion, the normal voting procedure should handle it. If I upvote a couple of honeypots because maybe I do happen to find them interesting, and perhaps for a reason the submitter didn't intend, then I'll be penalized.

I think there is room for honest disagreement. For example, yesterday someone killed an article that I was preparing a comment for, when it had reached nearly 50 karma and there was a discussion taking place. It was somewhat political (and therefore dangerous), but I find OWS interesting because it seems to be an authentic movement that's starting to self-organize.

Some 'foreign-born' (non-US) entrepreneurs I know are watching it kind of closely. So, again, honest disagreement. But if I want to keep my h(u) kosher, I'll have to flag more than I want just to make sure.


I don't like the analogy with "honeypot".

A "honeypot" exists to be a target for abusive behavior. Somebody who attacks a honeypot is making an attack -- they're guilty.

Somebody who votes for a bad story once in a while is just somebody who votes for a bad story once in a while. They're not a criminal the way a person who attacks a honeypot is, and they shouldn't be treated like one.

The big problems I see are: (1) different versions of the same story show up multiple times [extreme case: when the front page was about nothing but Steve Jobs], and (2) certain people write consistently mediocre blog articles that seem to be voted up by voting rings every day.

Other than that, the quality of hacker news is really pretty good.


> Somebody who attacks a honeypot is making an attack -- they're guilty. <

I think the underlying assumption of the article is that people who are upvoting offtopic stuff are in fact attacking the site. But is offtopic content really the biggest issue? Clustering (or lack thereof) seems to me the predominant problem, as you said. For example, right now, we have the Bill Gates thing on the front page twice. And come to think of it, the Android SDK update shouldn't be there at all.

Maybe there should just be an option to merge discussions. Just a simple function where users can vote to merge an article with an older one on the same subject.


Disclaimer - I'm a relatively new user.

Disclaimer aside, I think it'd be interesting to try - perhaps as an experiment, similar to the /classic you posted.

As for how to implement? Well, I'd imagine that the actual programming is relatively easy (disclaimer #2: I'm no expert on Arc, so I could well be wrong), so I'm assuming you mean how to handle it. As I see it, you have two options. The first is to submit link-bait and other detrimental submissions yourself, with a dummy account. The second is to allow the admins/mods to actually mark something as detrimental, and retroactively modify each user that upvoted it.


I think it's a pretty good idea. It'd be easy to come up with articles that are very clearly about politics/current events or otherwise quite off-topic, yet still 'hot topics'. Let's see: Occupy Wall Street, the Republican primaries, and drug legalization would all be ideal targets.

If nothing else, it'd be very easy to run as an experiment to see what there is to learn about how articles like those get voted up and/or flagged, and by whom, and whether the honeypot data could actually be put to good use.


Have you considered giving extra weight to submissions by first-year users, rather than just their votes?

If someone from that subset takes the time to find and submit an article to HN, I suspect that carries more signal than a simple upvote, whose effort-cost is near zero.

You might include high-karma users as well, if the first-year members aren't submitting enough.


It completely misses the mark in my view: the problem is an overall decline in quality (i.e. mediocre articles), while the presented solution fights spam. Two very different things.

Systems like this get gamed pretty heavily too. In this case a spammer could simply set up a bot that downvotes everything with <0.

I'd much prefer a system that subtly boosts the voting power of certain users, preferably based on something that is difficult to fake, e.g. seniority + karma. Assuming it is indeed true that the old members perform better than the new ones, that should improve quality. This would probably create an HN elite... but if that is what it takes to improve the quality of articles, then so be it.

Giving the guidelines a bit more prominence would help too, e.g. linking to them on the submit page. If new members are anything like me, they simply do not look at the bar at the bottom of the page and so never see the guidelines.


>> If the h-ratio of a user is less than an admin-specified threshold, we flag the user as detrimental to the overall quality of the site...

This would give me pause. I read HN more for the comments than for the articles; the comments are frequently of higher quality than the articles. I usually upvote early from the new page so that the topic will get wider discussion. That means that, not infrequently, I will upvote a mediocre article or one I disagree with to get the discussion going. If necessary I'll throw in a comment saying why I think the article is off-base.

I would hate to be ranked as doing a disservice for what I am attempting to do. Perhaps I need to flag more Arabic and self-promotion spam to build up my meta-karma, although I think the current flaggers are doing a good job.


My occasional check of the 'new' page shows that most stories fall off the end of the page and are never seen again. Seems overcomplicated to rig a page no one pays much attention to (I could be wrong, but my anecdotal experience over time backs this up).

Wouldn't a simpler metric be how many times a specific user's story posts are flagged? Combine this with a karma threshold for submitting (much like the down-vote threshold) and it seems to me to be a viable option.


I don't like this idea at all. It seems sneaky in that mean, vindictive kind of way. I think my incentive to vote at all disappears if I'm continually worried about the possibility of voting on a honeypot. To me this seems like a way of punishing potentially everyone for something a certain set of users is doing. Also the thought that what I'm voting on is being tracked is mildly disturbing.


I would rather have a system where a user's karma acts as some sort of multiplier for their submissions or comments, i.e. a vote for a comment or submission from someone with high karma counts as f(karma) times as many votes. This preserves the "legacy" of HN by weighting submissions and comments from established, validated users more heavily than those from newer ones.
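One plausible shape for f, purely as a sketch (the divisor and cap are made up, not a tested calibration):

  import math

  def vote_multiplier(karma):
      # Logarithmic, so weight grows with karma but with sharply
      # diminishing returns; capped at 3x.
      return min(3.0, 1.0 + math.log10(max(karma, 1)) / 2)

  # karma 1 -> 1.0x, karma 100 -> 2.0x, karma 10000 -> 3.0x (capped)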


But that favors longevity and undermines the "meritocracy" that many geeks seem to cherish. As a newer voice here, I think it odd that my contributions over six months or a year or whatever might be valued less (by orders of magnitude) simply because I wasn't user #5.

Based on 120-day rolling karma? Might work.


PG, what about coding up the thing, marking a few articles in the archive as "honeypot articles", then rerendering the site (locally) and seeing how it performs?


>Does anyone have any opinions about whether this is a good idea

The problem is that flagging is used as a downvote mechanism. For example, I've observed that some anti-Apple or anti-Android articles (even if otherwise legitimate) are flagged so that they drop lower on the page (you see articles with fewer points, submitted later, ranked above them, so this is evidence of flagging). This kind of (mis?)use of flagging could be detrimental to this method.




