I'm the author of #6 on the same list. It's definitely interesting to see that it has been used thousands of times on GitHub, and who knows how many more times in proprietary code. I don't think it's buggy, but I now think it could definitely be improved.

I think this shows an example of a big problem with StackOverflow compared to its initial vision. I remember listening to Jeff and Joel's podcast, and hearing the vision of applying the Wikipedia model to tech Q&A. The idea was that answers would continue to improve over time.

For the most part, they don't. I'm not quite sure if it's an issue of incentives or culture. Probably some of both. I think that having a person's name attached to their answer, along with a visible score, really gives a sense of ownership. As a result, other people don't feel empowered to come along and tweak the answer to improve it.

Then, once an answer is listed at the top, it gets more opportunities for upvotes, so other, improved answers don't bubble up. This is a larger issue with most websites that sort by ratings, including Hacker News itself: generally they rank items by the total number of votes. Instead, to measure the quality of an item, we should look at the number of votes, divided by the number of views. It may be tough to measure the number of views of an item directly, but we should be able to get a rough estimate based on its position on the page, for example.

If the top comment on an HN discussion is getting 100 views a minute and 10 upvotes, but the 10th comment down gets 20 views and 5 upvotes, the 10th comment is likely the higher-quality comment. It should be sorted above the top-ranked comment! There would still need to be some smoothing, and some promotion of new comments to get them enough views to measure their quality as well.

Such a policy on StackOverflow would also help newer, but better answers sort to the top.
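
A rough sketch of that scoring in JavaScript (the prior constants are made-up numbers, and real view counts would have to come from impression logs):

    // Quality as upvotes per view, with additive smoothing so brand-new
    // items with a handful of votes don't dominate the ranking.
    // The prior encodes an assumed baseline upvote rate of 5%.
    const PRIOR_VIEWS = 100;
    const PRIOR_VOTES = 5;

    function qualityScore(votes, views) {
      return (votes + PRIOR_VOTES) / (views + PRIOR_VIEWS);
    }

    // The 100-views/10-upvotes vs. 20-views/5-upvotes example from above:
    qualityScore(10, 100); // 0.075
    qualityScore(5, 20);   // ~0.083 -> the 10th comment ranks higher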




An idea I've had for a long time is that "the community" can vote to override an accepted answer. There are many times when the accepted answer is incorrect, or a newer answer is now more correct, but the only person who can change an accepted answer is the OP.

I think community-based changes to the accepted answer would go a long way to solving your problem too, but it requires someone to be reviewing newer answers and identifying when there's another that would be more appropriate.

It'd incentivise writing new answers to old questions and correcting accepted answers that probably weren't ideal to begin with. It could even create a new "role": users who hunt through older questions and answers looking for improvements to make.

Stack Overflow answers are supposed to be community-based, but we unfairly prioritise the will of the original questioner *forever*. I don't think that's optimal.


As a side gig I teach an intro to web development class online. Every semester I get students asking for help about why their code isn’t working. Nine times out of ten, they are trying to use some jQuery code they copied from stackoverflow because it is the accepted answer. They don’t yet know enough to recognize that it isn’t vanilla JavaScript (which they are required to use).
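
A typical instance of the mix-up (element ID hypothetical): the first snippet silently assumes jQuery is loaded, while the second is the vanilla JavaScript equivalent they're expected to write:

    // jQuery, copied from an accepted answer -- throws "$ is not defined"
    // unless jQuery is included on the page:
    $('#submit').on('click', () => alert('clicked'));

    // Vanilla JavaScript equivalent, no library needed:
    document.querySelector('#submit')
      .addEventListener('click', () => alert('clicked'));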


The best way to address these students is to ask them "Why do you think that this should work?"


What platform do you use to teach the course? I've been teaching an ML from scratch course for junior devs at my company and think it'd be useful for others.


  > but the only person who can change an accepted answer is the OP.
This system makes the person who is arguably _least qualified_ to understand the situation the sole arbiter of which answer is accepted.

Was it the most efficient? First to answer? Copied-and-pasted right in with no integration work? Written by someone with an Indian username? Got the most upvotes? Made a Simpsons reference? Written by someone with an Anime avatar?


>This system makes the person who is arguably _least qualified_ to understand the situation the sole arbiter of which answer is accepted.

Devil's advocate: If it fixed their problem adequately, it's acceptable.

Maybe separate "acceptable" and "ideal" answers would be a nice feature?


In the vast majority of cases, OP did not check (or even define) edge cases, race conditions, memory usage, network activity, etc etc etc.


Seems like the answer is to add a community accepted answer that's easier to change over time, while keeping the accepted answer feature as is.


>This system makes the person who is arguably _least qualified_ to understand the situation the sole arbiter of which answer is accepted.

What is the argument for the OP being the least qualified?


The argument is probably that while they are the best qualified to know whether it solved their issue, they're not qualified to judge whether it was the best way to solve it, since they had to ask in the first place.


Of all the people involved, he was the one who _didn't_ know how to resolve a specific issue.


Well, they do know the tech stack, the domain, the specific problem. Now they know whether the solution resolved their specific problem, if their code review/testing caught any bugs, etc, etc.

If anything, they have the most information in this context. I really don't think of them as being the least qualified.


Yeah there’s an argument for that. I think it’d hold more weight if narrow, specific, and loosely defined duplicate questions were allowed - but they aren’t.

Questions and answers belong to the community. I think the accepted answer should too - maybe after some period of time.


(or she :))


Why you always on about women, Stan?


Currently the only incentive to post a new answer to an old question is that you get a special badge. That's neat but limited. I've gone through old R questions and posted answers with a more modern syntax, and my answers rarely get much attention.

I'd be cautious about overriding an accepted answer. Imagine a situation where there's an easy-to-understand algorithm that's O(n^2) and the "Correct" algorithm that's O(n). If OP only has a dozen datapoints, the former might be the best answer for her specific problem, despite it clearly not being the right approach for most people finding the thread via Google in the future.
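
To make that concrete, a hypothetical example: checking a list for duplicates. For a dozen data points the nested loop is arguably the better answer; for the million rows a future Googler might have, only the Set version is acceptable.

    // O(n^2): easy to understand, perfectly fine for a dozen items
    function hasDuplicateNaive(items) {
      for (let i = 0; i < items.length; i++) {
        for (let j = i + 1; j < items.length; j++) {
          if (items[i] === items[j]) return true;
        }
      }
      return false;
    }

    // O(n): the "correct" answer once inputs get large
    function hasDuplicate(items) {
      return new Set(items).size !== items.length;
    }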


They actually recently added this feature - you have a "this answer is outdated" button you can press. Not sure what the reputation threshold to see it is.


I've browsed a few tags and haven't been able to see that button and my reputation is 40k+ so I'd expect to have all features enabled.

Are you able to point me to a Meta/Blog post or even just a screenshot please? I'd be keen to see it.

Actually it looks like https://meta.stackoverflow.com/questions/405302/introducing-... is the announcement for wanting to tackle the problem. Not sure if they've implemented it yet though.


"An idea I've had for a long time is that "the community" can vote to override an accepted answer."

I don't know if this is still a thing, but for some time in the past, when an answer was edited more than a certain number of times, it automatically turned into what was called a "community wiki" answer.


Or you could just edit the accepted answer if it’s wrong? I’ve seen a few posts where the top contains an “UPDATE” that, in summary, links to another answer.


One of the things that baffles me the most about SO is that I can't sort answers by _newest first_.

If I search for something related to JavaScript, for example, I know there will be a ton of answers for older versions that I am most likely not interested in. However, the only date-related sort available is oldest first.

Old answers are definitely useful a lot of the time, but the fact that there's not even an option to sort them the other way around tells me that SO somehow, at its core, considers new answers less important.

A strange decision if you ask me, considering software changes so much over time.

If anyone has a possible explanation for this I'd love to hear it.


There are three buttons that act as sort orders at the top of the answers section: "Votes," "Oldest," and "Active." The "Active" option sorts by most recently modified, which is _usually_ what you'd want instead of strictly newest (i.e. an edit updates the timestamp, giving that answer a more recent activity date).

So, I guess the answer to your question of "why can't I" is "good news! you can" :)


Well, none of those options do what I want.

More often than not, sorting by "Active", "Oldest" and "Votes" surfaces the same 2 or 3 answers, and I still need to scroll down to the bottom to find the most recently posted answer with more up-to-date info.

I don't see why I shouldn't have the choice to sort by "Reverse Oldest" if you will, when it's so useful a lot of the time.


This is why Stack Overflow has just started the "Outdated Answers project" in which users can set answers as outdated: https://meta.stackoverflow.com/questions/405302/introducing-...


I always thought they should have a language version, e.g. Python 3, PHP 7, JavaScript ES6...


Tags work to categorize by language (https://stackoverflow.com/questions/tagged/python-3.x); by having multiple languages on one site, you get a broader audience, because there are few developers who work with only one language.


> If I search for something related to javascript for example

As someone who's been learning a little JS over the last year, I quickly came to the realization that you skip over the SO links that come up in a search and go to one of the many other sites. I've had good luck with W3Schools and MDN. SO is a lost cause for JS.


I agree.

However, sometimes I am looking for an error related to a botched Node.js install, for example, or something to do with permissions being set incorrectly, and other stuff that doesn't live in MDN or other documentation sources.

For the actual language questions I do go directly to MDN instead.


> we should look at the number of votes, divided by the number of views

Closer, but still probably not quite what you want: a few stray votes can make a massive impact just from discretization effects. What you really care about is which answer is "best" by some metric, and you're trying to infer that as well as possible from the voting history. Average votes do a poor job. Check out this overview from the interblags [0].

[0] https://www.evanmiller.org/how-not-to-sort-by-average-rating...
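
For reference, the method that article lands on is the lower bound of the Wilson score interval on the upvote proportion; a quick sketch (z = 1.96 assumes a 95% interval):

    // Lower bound of the Wilson score interval for a Bernoulli proportion.
    function wilsonLowerBound(ups, downs, z = 1.96) {
      const n = ups + downs;
      if (n === 0) return 0;
      const p = ups / n;
      const z2 = z * z;
      return (p + z2 / (2 * n)
              - z * Math.sqrt((p * (1 - p) + z2 / (4 * n)) / n))
             / (1 + z2 / n);
    }

    wilsonLowerBound(10, 2);   // ~0.55
    wilsonLowerBound(100, 20); // ~0.76 -> same ratio, more evidence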


This isn't just a statistical problem, it's also a classical exploration/exploitation trade-off. You want users to notice and vote on new answers (exploration), but users only want to see the best answers (exploitation). The order you show will influence future votes (and future answers).
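
One standard way to balance the two, sketched under the assumption that we only have up/down votes: treat each answer's upvote rate as a Beta posterior and rank by a random draw from it (Thompson sampling). New answers get high-variance draws and occasionally surface; well-measured answers settle into their true rank.

    // Rank answers by a sample from Beta(ups + 1, downs + 1).
    // Beta sampler via order statistics of uniforms -- correct for integer
    // parameters, but O(n log n) per draw, so illustration only.
    function sampleBeta(a, b) {
      const u = Array.from({ length: a + b - 1 }, Math.random)
        .sort((x, y) => x - y);
      return u[a - 1];
    }

    function thompsonRank(answers) {
      // answers: [{ id, ups, downs }]
      return answers
        .map(a => ({ ...a, draw: sampleBeta(a.ups + 1, a.downs + 1) }))
        .sort((x, y) => y.draw - x.draw);
    }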

In addition, it's a social engineering problem. At least people with a Western psychology seem to respond very strongly when a score is attributed to them personally (as opposed to a group success, like in a wiki). So you'd better make the score personal, big, and visible, and not occasionally sort at random just to discover the true score.


I think that's a great example of the "smoothing" I was alluding to, though not in a format accessible to most programmers. However, it is still just a function of upvotes and downvotes. I think a true rating can be much better when you also incorporate the number of opportunities to vote, because having the opportunity to vote (by viewing an item, or purchasing it, or whatnot) and choosing not to is still a really useful piece of data about the quality of an item. Especially when you are comparing old items that have had millions of opportunities against new items with only thousands.
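
To put made-up numbers on the old-vs-new point: sorting by vote totals and sorting by votes per opportunity can disagree completely.

    // Hypothetical: an old answer with millions of opportunities vs. a
    // new one with thousands. Every view that didn't become a vote counts.
    const oldAnswer = { votes: 3000, views: 2_000_000 }; // rate 0.0015
    const newAnswer = { votes: 90,   views: 20_000 };    // rate 0.0045

    // By total votes the old answer wins 3000 to 90; per opportunity,
    // the new answer is three times better.
    const rate = a => a.votes / a.views;
    rate(newAnswer) > rate(oldAnswer); // true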


> number of opportunities

Yep, definitely. The only challenges there are that there's less literature about doing so and that if you have both up and down votes there's no longer one right way to define a single objective for scoring.


> I think this shows an example of a big problem with StackOverflow compared to its initial vision. I remember listening to Jeff and Joel's podcast, and hearing the vision of applying the Wikipedia model to tech Q&A. The idea was that answers would continue to improve over time.

Interesting. As a random visitor, this is something that never came across to me from the way SO presents itself.

> For the most part, they don't. I'm not quite sure if it's an issue of incentives or culture.

I think it's more a problem of communication and UI. SO is not really the kind of site that encourages people to answer or improve things. The overall design is also technical and strange, rather than motivating and user-friendly.

Today, for the first time, I realized that there is an edit history for answers, and an "improve" button that seems to allow me to change someone else's answer. I only saw it because I explicitly looked for it because of this thread.

In the beginning, Wikipedia was very vocal about motivating all kinds of people to help improve articles. SO never had that vibe for me. Additionally, it simply doesn't have an interface that makes this kind of thing easy. There are only those awful comments under each answer, which are not really useful for discussing an answer at length and from all sides. It might be better to replace them with a full-fledged forum with collaborative editing and some small wiki functionality, or something like that.

I remember they tried to do some kind of wiki with high-quality code snippets; what happened to that?


One of the really frustrating things about SO is that once you reach a certain rep threshold, you lose the ability to suggest edits, and instead gain the ability to just make the edits directly. I'm a lot more likely to do the former, because it helps ensure that if I actually made a mistake, it will be caught by the people voting on it. And so SO has lost out on a bunch of my suggested edits because they took away my ability to suggest edits.


What would really help with the vision here is some way to comment and associate tests against posted code. I have corrected algorithms on Wikipedia that were obviously wrong with even a cursory test. Then people can adjust the snippet, debate the test parameters, or whatever else they need to do while maintaining some sort of sanity check. If it’s good enough for random software projects used by a dozen people, it’s probably good enough for snippets used by thousands of developers and even more users.
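
Even something this small attached to a snippet would catch a lot (snippet and tests hypothetical):

    // The posted snippet: clamp a value into [min, max].
    function clamp(value, min, max) {
      return Math.min(Math.max(value, min), max);
    }

    // Community-maintained sanity checks: anyone editing the snippet
    // reruns these, and edge-case debates turn into new assertions.
    console.assert(clamp(5, 0, 10) === 5, 'in range');
    console.assert(clamp(-1, 0, 10) === 0, 'below min');
    console.assert(clamp(99, 0, 10) === 10, 'above max');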


This post made me think the same thing. It would be nice to have a StackOverflow that was actually more code focused. People could write tests or code and actually run them.


I always try to improve existing answers with edits, often just adding important context when the answer is just a line of bash, and adding links to source documentation.

There's very little gamification incentive to do so, and often the edit queue is full. Still, there are lots of times where important caveats and information are pointed out in the comments and never added to the answer.


The other day I asked a question about the C/C++ plugin for VS Code; somebody swooped in to edit it to just be C++ because “c/c++ is not a programming language”. The question wasn't answered. I wonder what the incentive is for people to do something like that.


> As a result, other people don't feel empowered to come along and tweak the answer to improve it.

It's worse than that. Edits have to go through a review process that is much more selective and often arbitrarily rejects good edits.


Only if you're a low rep user, though. And no, many more bad edits are accepted than good edits rejected. By orders of magnitude.


> Only if you're a low rep user, though.

What qualifies as "low rep"? I'm easily in the top quintile.

> And no, many more bad edits are accepted than good edits rejected. By orders of magnitude.

Do you have any data to support this?

The editing and updating process for stackoverflow is broken and as a direct result I've used the site less and less over the years. Denying the problem just hastens the demise of the site.


Editing answers is a complete waste of time. You can post a correction along with a copy and paste of the relevant section from the documentation, yet have your edit disappear without explanation.


To correctly measure the quality of an item one needs to take something like Google's PageRank algorithm and apply it to people. That is, there needs to be some measure of the reputation of the person posting. This doesn't mean that a person who was correct in the past is necessarily correct right now, but it is true that people who are often correct tend to go on being correct, and people who are often wrong tend to go on being wrong. Careful people tend to continue to be careful, and sloppy people tend to continue to be sloppy. It's important to capture that reality and use it as a weight given to any particular answer.
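
A toy sketch of the idea (not how SO actually works): let users "endorse" each other through upvotes, run the PageRank iteration over that graph, and weight each future vote by the voter's resulting reputation.

    // votes[i][j] = number of times user i upvoted user j's answers.
    function reputations(votes, damping = 0.85, iters = 50) {
      const n = votes.length;
      let rep = new Array(n).fill(1 / n);
      for (let it = 0; it < iters; it++) {
        const next = new Array(n).fill((1 - damping) / n);
        for (let i = 0; i < n; i++) {
          const out = votes[i].reduce((s, v) => s + v, 0);
          if (out === 0) continue; // voter with no votes cast spreads nothing
          for (let j = 0; j < n; j++) {
            next[j] += damping * rep[i] * (votes[i][j] / out);
          }
        }
        rep = next;
      }
      return rep; // an answer's score would weight each vote by rep[voter]
    }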


Potentially a stupid question: why is it not possible to just make a MediaWiki site explicitly for SO questions? Does it exist already?


The technical cost/effort for someone like you or me to do that is minimal. The expensive part is the ongoing social maintenance fee, aka moderation. As evidenced by the Stack Overflow drama re: Monica, it's an unsolved (non-technical) problem that you could mint money on if you were able to fix any tiny part of it.


And then we would run again into people with an inflated ego, edit wars etc.


The Monica situation is probably a bad example; that was Stack Overflow (the company) royally and unilaterally messing up. It's certainly not a usual situation for resource-curating communities.

I've written, and deleted, several essays on the matter, but a TL;DR: Monica's legitimate questions to staff about a policy got caught up in a crackdown on sealioning-type harassment of trans (etc.) mods in the mod chat, and SO management basically declared war on Monica by mistake. We don't know whether they dealt with the actual harassment (though I think they did, belatedly), because if they did, proper procedure was followed and the perps weren't named-and-shamed in the press.


Wouldn't a simple TTL (time to live) solve that problem? Of course, with an option to see the graveyard.

This would mean that the same questions would get answered again and again over the years, but I think that could also solve the website's negative reputation problem.

Two birds with one stone, or, if you're Slovenian, two flies with one swat. ^^


For anyone else that is curious like I was, the #6 answer on that list is from 12 years ago: https://stackoverflow.com/a/140861/


>> For the most part, they don't. I'm not quite sure if it's an issue of incentives or culture.

Classic example of "good is the enemy of best".



