I think this shows an example of a big problem with StackOverflow compared to its initial vision. I remember listening to Jeff and Joel's podcast, and hearing the vision of applying the Wikipedia model to tech Q&A. The idea was that answers would continue to improve over time.
For the most part, they don't. I'm not quite sure if it's an issue of incentives or culture. Probably some of both. I think that having a person's name attached to their answer, along with a visible score, really gives a sense of ownership. As a result, other people don't feel empowered to come along and tweak the answer to improve it.
Then, once an answer is listed at the top, it gets more opportunities for upvotes, so other, improved answers don't bubble up. This is a larger issue with most websites that sort by ratings, Hacker News included: they generally rank items by the total number of votes. Instead, to measure the quality of an item, we should look at the number of votes divided by the number of views. It may be tough to measure the number of views of an item directly, but we should be able to get a rough estimate from its position on the page, for example.
If the top comment on a HN discussion is getting 100 views in a minute and 10 upvotes, but the 10th comment down gets 20 views and 5 upvotes, the 10th comment is likely a better quality comment. It should be sorted above the top ranked comment! There would still need to be some smoothing and promotion of new comments to get them enough views to measure their quality as well.
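The ranking idea above can be sketched roughly like this. This is a hypothetical illustration, not how HN or SO actually rank anything; the smoothing constants (`prior_rate`, `prior_views`) are made up for the example:

```python
# Score comments by upvotes per view rather than raw upvotes, with
# additive smoothing so brand-new comments with few views aren't
# pinned to an extreme rate of 0 or 1.

def quality_score(upvotes: int, views: int,
                  prior_rate: float = 0.05, prior_views: int = 50) -> float:
    """Smoothed upvotes-per-view estimate.

    A new comment starts near an assumed site-wide average rate
    (prior_rate) and converges to its own observed rate as views
    accumulate.
    """
    return (upvotes + prior_rate * prior_views) / (views + prior_views)

# The example from the comment: 10 upvotes / 100 views for the top
# comment vs 5 upvotes / 20 views for the 10th comment.
top = quality_score(10, 100)    # ~0.083
tenth = quality_score(5, 20)    # ~0.107
assert tenth > top  # the 10th comment would outrank the current top one
```

Under this scoring, the 10th comment's higher per-view rate wins even though it has fewer total upvotes, which is exactly the behavior the comment argues for.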
Such a policy on StackOverflow would also help newer, but better answers sort to the top.
I think community-based changes to the accepted answer would go a long way to solving your problem too, but it requires someone to be reviewing newer answers and identifying when there's another that would be more appropriate.
It'd incentivise writing newer answers to older questions. Correcting accepted answers that probably weren't ideal to begin with. A new "role" where users hunt through older questions and answers looking for improvements to make.
Stack Overflow answers are supposed to be community-based, but we unfairly prioritise the will of the original questioner *forever*. I don't think that's optimal.
> but the only person who can change an accepted answer is the OP.
Was it the most efficient? First to answer? Copied-and-pasted right in with no integration work? Written by someone with an Indian username? Got the most upvotes? Made a Simpsons reference? Written by someone with an Anime avatar?
Devil's advocate: If it fixed their problem adequately, it's acceptable.
Maybe separate "acceptable" and "ideal" answers would be a nice feature?
What is the argument for the OP being the least qualified?
If anything, they have the most information in this context. I really don't think of them as being the least qualified.
Questions and answers belong to the community. I think the accepted answer should too - maybe after some period of time.
I'd be cautious about overriding an accepted answer. Imagine a situation where there's an easy-to-understand algorithm that's O(n^2) and the "Correct" algorithm that's O(n). If OP only has a dozen datapoints, the former might be the best answer for her specific problem, despite it clearly not being the right approach for most people finding the thread via Google in the future.
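To make that trade-off concrete, here's a hypothetical illustration (my own example, not from the thread): detecting duplicates in a list with an easy-to-verify O(n^2) approach versus the "correct" O(n) one:

```python
# Hypothetical illustration of the O(n^2) vs O(n) trade-off:
# detecting duplicates in a list.

def has_duplicates_quadratic(items):
    # Easy to read and verify by hand; O(n^2) comparisons.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # The "correct" O(n) approach, but it requires hashable items
    # and extra memory for the set.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

# With a dozen datapoints either version is instant, so the simpler
# accepted answer may genuinely serve the OP better, even though the
# O(n) version is the right default for future readers.
assert has_duplicates_quadratic([1, 2, 3, 2]) is True
assert has_duplicates_linear([1, 2, 3, 2]) is True
```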
Are you able to point me to a Meta/Blog post or even just a screenshot please? I'd be keen to see it.
Actually it looks like https://meta.stackoverflow.com/questions/405302/introducing-... is the announcement for wanting to tackle the problem. Not sure if they've implemented it yet though.
I don't know if this is still a thing, but for some time in the past when an answer was edited more than a certain amount of times it automatically turned into what was called a "community wiki" answer.
Old answers are definitely useful a lot of the time, but the fact that there's not even the option to sort them the other way around tells me that SO somehow, at its core, considers new answers less important.
A strange decision if you ask me, considering software changes so much over time.
If anyone has a possible explanation for this I'd love to hear it.
So, I guess the answer to your question of "why can't I" is "good news! you can" :)
More often than not, sorting by "Active", "Oldest" and "Votes" surfaces the same 2 or 3 answers, and I still need to scroll down to the bottom to find the most recently posted answer with more up-to-date info.
I don't see why I shouldn't have the choice to sort by "Reverse Oldest" if you will, when it's so useful a lot of the time.
As someone who's been learning a little JS over the last year, I quickly came to the realization that you skip over the SO links that come up in search, and you go to one of the many other sites. I've had good luck with w3schools and MDN. SO is a lost cause for JS.
However, sometimes I am looking for some error related to a botched nodejs install, for example, or something to do with permissions being set incorrectly, and other stuff that does not live in MDN and other documentation sources.
For the actual language questions I do go directly to MDN instead.
Closer, but probably still not quite what you want: a few stray votes can make a massive impact just from discretization effects. What you really care about is which answer is "best" by some metric, and you're trying to infer that as well as possible from the voting history. Average votes do a poor job. Check out this overview from the interblags.
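The usual recommendation in such overviews is to rank by the lower bound of the Wilson score confidence interval rather than the raw average, so small samples are penalized. A sketch (my own implementation of the standard formula, not SO's or HN's actual ranking):

```python
import math

def wilson_lower_bound(upvotes: int, total_votes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote
    fraction (z = 1.96 corresponds to ~95% confidence).

    Penalizes small samples: 2 upvotes out of 2 votes ranks below
    95 out of 100, even though its raw average (1.0) is higher.
    """
    if total_votes == 0:
        return 0.0
    phat = upvotes / total_votes
    z2 = z * z
    denom = 1 + z2 / total_votes
    centre = phat + z2 / (2 * total_votes)
    margin = z * math.sqrt(
        (phat * (1 - phat) + z2 / (4 * total_votes)) / total_votes
    )
    return (centre - margin) / denom

assert wilson_lower_bound(95, 100) > wilson_lower_bound(2, 2)
```

As the sibling comment notes, this handles pure up/down tallies cleanly; once you have separate up and down votes plus view counts, there's no longer one obviously right objective to optimize.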
In addition, it's a social engineering problem. People with a Western psychology, at least, seem to respond very strongly when a score is attributed to them personally (as opposed to a group success, as in a wiki). So you'd better make the score personal, big, and visible, and not occasionally sort at random just to discover the true score.
Yep, definitely. The only challenges there are that there's less literature about doing so and that if you have both up and down votes there's no longer one right way to define a single objective for scoring.
Interesting. As a random visitor, this is something that never came across to me from the way SO presents itself.
> For the most part, they don't. I'm not quite sure if it's an issue of incentives or culture.
I think it's more a problem of communication and UI. SO is not really the kind of site that encourages people to answer or improve things. The overall design is also more technical and strange, not motivating and user-friendly.
Today, for the first time, I realized that there is an edit history for answers and an "Improve" button that seems to allow me to change someone else's answer. I only saw that because I explicitly looked for it because of this thread.
Wikipedia in the beginning was very vocal about motivating and engaging all kinds of people to help improve articles. SO never had those vibes for me. Additionally, it simply doesn't have an interface that makes it simple to do this stuff. There are only those awful comments under each answer, which are not really useful for discussing an answer at length and from all sides. It might be better to change them into a full-fledged forum with some collaborative editing and some small wiki functionality, or something like that.
I remember they tried to do some kind of wiki with high-quality code snippets; what happened to that?
There's very little gamification incentive to do so, and often the edit queue is full. Still, there are lots of times when important caveats and information are pointed out in the comments and never added to the answer.
It's worse than that. Edits have to go through a review process that is much more selective and often arbitrarily rejects good edits.
What qualifies as "low rep"? I'm easily in the top quintile.
> And no, many more bad edits are accepted, than good edits being rejected. By orders of magnitude.
Do you have any data to support this?
The editing and updating process for stackoverflow is broken and as a direct result I've used the site less and less over the years. Denying the problem just hastens the demise of the site.
I've written, and deleted, several essays on the matter, but a TL;DR: Monica's legitimate questions to staff about a policy got caught up in a crackdown on sealioning-type harassment of trans (etc.) mods in the mod chat, and SO management basically declared war on Monica by mistake. We don't know whether they dealt with the actual harassment (though I think they did, belatedly), because if they did, proper procedure was followed and the perps weren't named-and-shamed in the press.
This would mean that the same questions would get answered again and again over the years, but I think that could also solve the negative reputation problem of the website.
Two birds with one stone, or if you're Slovenian, two flies with one swat. ^^
Classic example of "good is the enemy of best".