Docker for Windows won't run if Razer Synapse driver management tool is running (twitter.com/foone)
417 points by edward on Feb 18, 2020 | 185 comments

Ha, stackoverflow is a great tool, but... the more answers I see in an area I’m experienced in, the more I realize how wrong they can be. Sure, the answer usually gets the job done, but everything else about it might not be right.

If I need to use stackoverflow, these days I make sure I do a few things:

- read a number of answers, comments included. Just because an answer is accepted doesn’t mean it’s the best one

- read API docs for anything I’m unfamiliar with, and make sure I understand how the solution works

- never copy-paste; instead use the general idea or concept to rewrite a solution in a safe, future-proof manner

- if an answer works but doesn’t feel like the right solution, be prepared to do my own research to find a better approach

Hopefully this results in a better engineered product, but one of the main benefits is learning and expanding knowledge of APIs/techniques.

I'll suggest one more addition to this (and any code you take from the internet): put a comment in saying where you got the idea from (with a link). It can help the next guy out a lot. And you might be the next guy.

> put a comment in saying where you got the idea from (with a link)

StackOverflow content is licensed under CC BY-SA, so you're legally required to do that if you copy code from an answer. (Unless it's so trivial the code is not copyrightable, but it's better to be on the safe side and attribute the source correctly anyway.)

> And you might be the next guy.

It has already happened to me a couple of times that I had a problem, googled it, and found a helpful SO answer... written by myself some years ago. So this is definitely a helpful suggestion.

Especially if it's not a copy-paste job. If there are any adjustments or even a rewrite, the comment should include the reason why the changes were made. The next guy coming along might not realize that changes were needed.

It's usually better to put that information in the revision comments when you commit rather than in the code itself. Otherwise the code accumulates layers and layers of comments over years.

Personally, I've developed a strong preference for putting this type of information as close to the affected code as possible. Immediately next to the affected code is the only place these types of comments have a prayer of being kept up to date. Keeping them in another document, a commit message, or anything similar just results in the content of the message instantly going out of date.

I feel like it depends - I know I don't just start digging through the commit history of every line I edit.

If I'm trying to fix a bug, and only have to change one line for it to start working, and the tests still pass - but that line was there for a reason that wasn't commented (some edge case) - I'll likely miss it.

This is huge and has saved me a few times already when I became my own "next guy".

So much this... I tend to do the same. There are times when I need a limited subset of a library, and will just pull the two classes I need, add the comment at the top, and adjust the namespace.

Including an update to all of your copyright notices, of course

> read API docs for anything I’m unfamiliar with

One of the differences I've noticed between myself and some of the junior devs I've worked with: I would usually use the official docs as my first resource, and they would use Stack Overflow instead.

This is a hard lesson I had to learn as a junior developer.

I'm impatient, and Stack Overflow can often give me an easy solution. But just as often it can't, and I will spend 30 minutes unproductively trying things with little progress.

As I become a better developer I become more inclined to spend 30 minutes with the documentation instead.

When you're reading documentation, you also notice other things in addition to solving your particular problem. Those other things might be helpful a few minutes later, and you will actually save that time. Or maybe tomorrow, if your memory is good. Or maybe you'll read the documentation and actually find a better solution.

Of course, most of the interesting problems are not described in documentation.

Depends on how clear the documentation/usage is... and sometimes you accept that you have limited knowledge and understanding. Either way, it helps to understand what you are doing, and when in doubt look it up.

If the library documentation isn't clear then that's a clear sign that whoever created it doesn't take quality seriously and you shouldn't use it in the first place. Of course sometimes there's no alternative so you just have to deal with it.

I face a lot of anxiety from this. I'll have a task like setting up Point in Time Recovery for postgres, and I've spent days mostly just reading documentation - How postgres manages wal logs, how the internals of several popular solutions work etc.

I know it'll be worth it in the long run, but it feels like I'm wasting time because "I'm not doing anything" - I'm not committing any changes while my coworkers are busy adding new features.

I use a combination of both. I'll find something on stack overflow, and then look it up in the official documentation. In other words, treat stack overflow like a sophisticated index to a reference rather than as a reference itself.

I don't like this blanket statement ..

I would use whichever one Google's first and most relevant results point to. Also, what the issue is matters.

I don't always go to MDN for javascript issues.

But if I want to know why something is the way it is, I will.

As the parent said, I don't recommend doing this. In particular, third-party sources are unlikely to get deprecation notices... So unless this is a function you know well, it is almost always better to go to the official docs instead.

I will say that as a Ruby on Rails dev, where many things have changed over time, there are quite a lot of highly voted StackOverflow posts where they go into detail about what the answer is on the date of posting, then someone comments "this is different in Rails 4.1+!" and the responder will revisit the answer and revise it to include applicable advice for whichever version you might be on.

This is not guaranteed, but it happens a lot. Although if the docs are good, you should definitely expect to find good information about deprecation notices there, too! But it's not at all uncommon for conversations about code, on StackOverflow or wherever else, to include footnotes about what is different and what version changed it.

> I would use whichever one Google's first and most relevant results point to.

The people who made the mistake in the OP probably did exactly that.

But what if the docs suck, as is the case with a lot of .io domain libraries?

And anything MS?

Microsoft has some of the most fascinating documentation. It can be fantastically detailed while being completely useless at the same time.

Take their Azure Python API, which is clearly translated directly from C#: there are a number of functions which, according to the documentation, take a string as input. The thing is, they won't actually take any string; they will take one of three or four predefined strings. Everything else will yield no result. The one thing Microsoft doesn't care to put in the documentation is which strings are actually valid and what result you can expect from them.

This is slightly unrelated, but there was a time when, reading the Azure docs, you'd be on a page for version 1.0 while the sidebar would take you to the documentation for version 2.0. I actually got caught out writing a mismatched integration, and it took me forever to figure out why I had done that.

Microsoft's documentation for anything other than .NET stuff is absurdly bad.

I was trying to do some work with their CRM solution, and it took me a literal week to find out how to do simple OAuth -- mostly because 99% of the documentation only had .NET examples using their own framework!

Microsoft's documentation for .NET is also absurdly bad.

No, I don't want a full page reload for every overload of a method. Yes, I might want to see where these extension methods are popping up from, out of nowhere.

I've had an extremely good experience with Microsoft documentation, and I never did .NET. Even the old Win32 API docs still work, unlike many APIs I know of...

Microsoft documentation was historically the gold standard for comprehensiveness and quality. If what the other commenter says (e.g. about their Azure docs) is true, I don't know if things have changed or what. Maybe a post-Nadella culture shift?

At one point they "burned the library" and broke a lot of old MSDN urls. It's now common to find dead links to MSDN on old Stackoverflow posts.

The documentation is definitely there, but finding it can be a curse. The search is worse than Google, the site is surprisingly slow for a bunch of static text, and the ability to discover something when you don't know the right search terms is poor because Microsoft name everything in the most generic way possible.

The Microsoft Docs change from MSDN was horrible (and still is). A lot of content got mysteriously mangled for no discernible reason. Their KB articles, which used to be excellent, have mostly been "disappeared" too. You can see a vast difference in quality between the newly-written pages and what was inherited from before. I often find spelling and grammar errors in the newer pages.

A lot of MS documentation now is awful. Azure stuff especially.

Sometimes there is just no usable documentation for certain things at all. I've even seen documentation for certain Azure things where it just directs you to Stack Overflow!

I guess people's expectations might have been lower? MSDN sucks along every axis I can imagine.

The content is bad: Quick, is System.DateTime[0] timezone-aware? The examples also use (the harmful and bug-prone) DateTime.Now 5 times, while DateTime.UtcNow is only mentioned once, as a minor aside.

The structure is bad: Is something part of "Core", "Framework", "Standard", or "Platform Extensions"? It's particularly ridiculous that if you try to switch from Core to Platform Extensions while viewing a Core class (such as System.DateTime[0]), you'll get kicked over to... Framework.

The layout is bad: Look at System.DateTime's list of constructors[1]. It takes up a whole screenful to say what Python's datetime.datetime constructor[2] paragraph says in three lines.

The design is bad: Compare the method listing of .NET's IDictionary[3] to Scala's Map[4], or Rust's HashMap[5]. Which one makes the type signatures the easiest to parse? Which one helps you get where you want to the fastest?

(Hint: For me, at least, not the one that insists on making everything a uniform shade of baby blue.)

[0]: https://docs.microsoft.com/en-us/dotnet/api/system.datetime?...

[1]: https://docs.microsoft.com/en-us/dotnet/api/system.datetime?...

[2]: https://docs.python.org/3/library/datetime.html?highlight=da...

[3]: https://docs.microsoft.com/en-us/dotnet/api/system.collectio...

[4]: https://www.scala-lang.org/api/current/scala/collection/immu...

[5]: https://doc.rust-lang.org/stable/std/collections/struct.Hash...

Python's datetime appears to have one constructor with several optional parameters, whereas .NET's System.DateTime has several separate constructors each taking a different set of parameters. The MSDN documentation is accurate and correct; your critique applies to the underlying code (which is built to be backwards-compatible to .NET 1.0, which I don't think even supported optional arguments).

The signatures that are useful when developing the library might be more complex than the ones that the library consumer would be interested in. This isn't much of an excuse when you also maintain the documentation tools.

For a similar example, Scala's old collections library had a ton of machinery to ensure that `map` and co. would specialize correctly.[0] But the user never sees that, because they added a mechanism called "use cases", that allow you to override the signature shown in the docs with a simpler one.[1]

[0]: https://github.com/scala/scala/blob/v2.12.10/src/library/sca...

[1]: https://www.scala-lang.org/api/2.12.10/scala/collection/immu...

Is the .io suffix somehow related to software tech?

It is often used for the following (that I have seen - may be others):

- open source libraries
- online casual browser games
- development/staging environment counterpart to a .com

It's a common situation that the best docs are provided on some dude's gist or blog post. Devs and orgs update the code but don't bother with the forgotten docs. It happens all the time.

And the original article here is showing a real-world example of the opposite.

Namely, "some dude's gist or blogpost [or stack overflow answer]" not getting updated. Not just that they didn't but that they couldn't.

Unsurprisingly, which you should rely on depends heavily on context. It takes experience and patience to figure out which one is correct based on circumstances.

Once you have a working dev environment, external docs are the last place to look.

IDE type search and doc comments are much more likely to be useful.

I agree:

Recently I found an interesting snippet of C code which was using sprintf to write a string into a char array. Unfortunately, it contained an off-by-one error where the null terminator would be written outside the array.

I proposed to fix the answer, but for incomprehensible reasons the edit was rejected (!)

See https://meta.stackexchange.com/a/164449/192171

If an answer is factually wrong (as opposed to just poorly formatted or missing a few details), it's generally better to leave a comment rather than try to edit it.

I agree it's not ideal, but suggested edits go into a site-wide review queue, so not everyone reviewing them will necessarily be familiar with C. This makes edit reviews a poor tool for evaluating factual correctness; the comments section and voting system on answers are a better tool for that job.

"We must fix grammatical errors and must not fix technical errors" agrees with my experience on stackoverflow perfectly.

Fixing technical errors is strongly encouraged, just not via the suggested edits feature. Like I said, a comment or an alternative answer would be more appropriate.

If the fix is buried in a comment or lower answer, far fewer people will see it, especially on the ossified, highly-voted answers where it's often needed. Difficulty moderating proposed edits is a process problem, and comments aren't a good workaround.

"You can comment" doesn't help when you need a certain rep threshold just to comment.

Don't you need more to propose an edit?

SO is the worst example of "poop must be delicious - after all, billions of flies can't be wrong!" mentality. As long as the answer is upvoted by enough people, it's accepted as the right one and it takes a titanic effort to correct it, no matter how wrong it is.

> As long as the answer is upvoted by enough people, it's accepted as the right one

That's not correct; the questioner can accept any answer. The criterion is that the questioner chooses which answer helped them. There is no connection between votes and acceptance, and no claim that "accepted" is an endorsement by StackOverflow the company, by subject matter experts, or by "the community".

The accepted answer is the answer selected by the questioner as fixing their problem. The OP is often not the most knowledgeable person in the room. The real work comes from the answer: show, explain, and prove that you understand the problem before answering.

It is a good thing to test and to do more research; it is even better to add your findings to the post. Share your knowledge :)

> the more answers I see in an area I’m experienced in, the more I realize how wrong they can be

The fastest correct-looking answers on SO are usually the accepted answers. I read a post or tweet a few years back suggesting that you need the quickest draw if you want SO rep: create a bare-bones answer, with no research, so that it gets accepted first. I understand why: my answers have been on the receiving end of this - I spent way too much time and effort answering, and my answer dropped into obscurity.

That's why I no longer answer on SO.

Why not add a fifth point about contributing a better answer if you know better? It feels wrong looking down on people just trying to help.

The problem is stackoverflow seems designed to avoid "better answers".

Questions are often long abandoned, and the original questioner won't come back and choose a new correct answer. On a rarely viewed question, it would take years for a correct answer to rise up the ranks.

I've seen various answers which used cryptography dangerously, and asked if there was any way they could be fixed, and I just got the answer you gave. That shouldn't be acceptable when dangerously bad code is being shown to people.

It's worth noting that it is possible to edit other people's answers. So it is possible to correct answers with incorrect or dangerous information.

With sufficient "rep" (2000, where an upvote on one of your questions or answers is worth 10), edits don't even require approval from others; below that threshold, they must be peer reviewed before the edit takes effect.

Personally I find the "edit without peer review" too powerful, since you can do it for any question, regardless if you're supposed to be competent there or not; it would be better if it was limited for tags where you've got a bronze badge. Perhaps it's the lack of oversight which makes me a little wary. And 2000 rep isn't that hard to achieve, it just requires some persistence.

When I asked: https://meta.stackoverflow.com/questions/276496/how-to-deal-...

I was explicitly told "Inform and educate, don't censor!" -- which from context clearly means "don't edit answers just because they are wrong".

I believe the idea is that if you're editing questions early on, you learn what the expectations for edits are. I reject a lot of edits in the system for being edits that should be submitted as their own answers or as comments on the post they're editing. Ideally, once a user has gone through the system for long enough and had enough edits rejected for improper use of the system, they'll know what to do once they reach the higher rep threshold.

In practice however???


The system works well for fast and easy. A system like StackOverflow will never work well for highly specialized tasks and that's alright.

> The problem is stackoverflow seems designed to avoid "better answers".

SO is perfectly happy to accept better answers, especially for questions where the existing answers have gone stale and are perhaps no longer applicable.

If you are offering further information that can benefit the reader, it's always welcome.

It's one of the reasons a lot of the more concise answers are not always the accepted answers. People quite often ask: "Okay, but why did that fix it?"

In fact, answering without giving a proper reason for the solution is mostly frowned upon.

I've gotten multiple Tumbleweed badges for answering an old question with a better answer. You can't blame the system for the fact someone who either long ago fixed the problem or gave up doesn't come back to change the "correct" answer.

I can totally blame "the system", as it is exactly the system which doesn't make it easy for others to change the "correct" answer, when stackoverflow claims it isn't supposed to be a Q&A system for individual people, but for the community as a whole (hence why bad/repeated questions are deleted).

It suffers from a similar problem to Wikipedia, namely that the likelihood of your information being prominent is a function of how well you rules lawyer, rather than how good your information is.

This is the whole point of a community. Especially if you’re sure that the answer is slightly wrong, you can definitely fix it yourself, which is the best part of StackOverflow. There’s no point in even leaving a comment like “X should be X.Y”

Great point, this too. Also - if there’s a good answer already there but not accepted, it’s worth upvoting, so that hopefully it becomes more visible to others.

> - read a number of answers, comments included. Just because an answer is accepted doesn’t mean it’s the best one

Agreed. I find that the "accepted" answer usually gets refined/improved/made more generic after acceptance.

This usually means that the "best" solution over all similar problems is usually further down the list of answers.

Isn’t there a “law” that states the best way to get a correct answer is to post the wrong one first? I feel that should be in mind whenever reading through StackOverflow answers.

Cunningham’s Law is the one I’m thinking of: https://meta.m.wikimedia.org/wiki/Cunningham%27s_Law

That is how I feel when doing code reviews of offshored projects. Problem is, it doesn't scale, so the code does the job and stays until something like this blows up.

The most important step in my opinion is: Check the date of the question and answer.

I've spent WAY too much time banging my head on the keyboard while trying to implement a solution posted 3-4 years ago, which doesn't apply in the slightest for the latest version of whatever I'm doing.

You must feel good that you have hours to fix things then. Unlike a large number of people here.

I am a somewhat active SO user and one thing I see other contributors repeating over and over is "stackoverflow is not a code writing service". Usually this comes up in the context of some kid posting their college assignment and asking people to solve it, but I think it should extend further.

Do not copy/paste code from stackoverflow. SO is not a code writing service. It is a problem solving/question answering service. Any code you see in the answers should be taken as reference only. If you don't understand what the code does, re-read the answer, check the documentation, understand, and then write your own code. What happened here is such a stupidly trivial bug. If someone only took a second glance at this code, it would be blindingly obvious what is going on. I saw it as soon as I looked at the SO code sample, and I don't even write C#. Do people at Docker even do code reviews?

Also, when I write answers in SO, I try (not 100% of the time, but I do try) to make my code generic, so it solves the problem the user is having, but does not include any specifics from the question. I suggest you all do the same.

P.S. I know that this rant of mine is rather idealistic. Nobody is going to stop copy/pasting code and these things will keep happening. But writing this was therapeutic.

I think it would be a great addition for StackOverflow to have a sort of report button for dangerous code.

Take for example libcurl (and thus its bindings in PHP, Ruby, Go, etc.). There are sooo many posts with cURL example code where CURLOPT_SSL_VERIFYPEER is disabled. These are often answers with thousands of upvotes that have been there for years.

Inexperienced developers copy/paste this code into production without understanding what it does.

You can report the errors in comments, and develop a habit of looking in the comments for reports of dangerous code.

...if you have 50 reputation.

You mean literally everyone who's been coding for more than 3 months. You need 5 upvotes on your questions to get that much.

Unless you've never made any particular use of StackOverflow, stumble on an incorrect answer, realize you have no way to fix it, and then just leave because you don't feel like jumping through hoops just to make somebody else's day better.

This is a problem for new user growth on all reputation-based sites -- review sites like Yelp, even HN.

The difference is that those sites generally don't hard gate features behind having a certain amount of MeowMeowBeenz.

Sort of -- Yelp will hide negative reviews from brand new users, and HN enables some light moderation features and faster commenting as your karma increases.

If you've never made any particular use of StackOverflow, why would you expect them to care what you have to say about anything on the site?

It would probably take you as much effort to get 50 rep as it's taken you to post multiple sour grapes comments about how dare they have a threshold which excludes you, don't they know who you are??

> If you've never made any particular use of StackOverflow

I would have made use of it... if I didn't have to post questions I have no interest in posting to farm enough reputation to actually do anything to fix the incorrect answers I occasionally stumble across in Google searches.

> It would probably take you as much effort to get 50 rep

Again, I have zero interest in posting questions just to farm reputation. Less than zero, actually, because of the mental overhead involved in doing so.

Do you understand that from the other side, you aren't a known skilled developer submitting valid corrections, you are indistinguishable from any other internet IP address submitting spam and/or low grade buggy misunderstandings?

One way they distinguish people who care about code from complete robo-junk is with a minimal effort barrier of community upvotes, which you refuse to engage with. They don't know you're special and important enough to skip the entrance requirements - how should they know? What the system sees is that you won't put effort in to get 50 karma. You want to skip their criteria because you say you are good enough - yet everyone including spambots would say they were good enough to skip the criteria.

If the filter was "verified members of the ACM/IEEE/IETF/whatever can bypass the 50 point requirements" would that be better? Or would proving your identity and membership details still be too high effort?

What test wouldn't be too high effort for you to bother, but would still be something that could be automated by stackoverflow and would cut out a huge mass of spam?

If it's trivial to reach 50 rep by asking questions, how does it prevent unskilled developers from submitting corrections?

Unskilled developers won't find it trivial. See the new-question review queues for the kinds of questions submitted by people who can be bothered to sign up and try to participate, for evidence towards this claim. E.g., right now there's one which isn't about programming (it's about system administration), doesn't explain what they tried, and concludes with "it is not working" with no further details. This first question is not likely to get upvoted. By being a minimum-competence filter, it's not selecting in only the most skilled developers; it's selecting out huge swathes of people who don't want to do what stackoverflow wants its users to do.

People who submit terrible contributions will fall below 50 rep and lose the ability to do so.

> It would probably take you as much effort to get 50 rep as it's taken you to

Yeah? So what. I shouldn't have to farm for rep. I'd even donate to charity to prove I'm not a bot.

I don't want to spam the site with niche questions that nobody can even answer, or go hunting down popular questions to throw answers at that will actually get me points.

> sour grapes

That's not what sour grapes is.

> Yeah? So what.

So whining that stackoverflow has rules which you don't want to follow doesn't make for interesting reading. It's not unfair; you (other commenter) are not hard done by. Don't want to get 50 rep? Then don't get 50 rep. Want to contribute? Get 50 rep it's not a lifelong commitment. Still don't want to get 50 rep? No problem, move on with your life.

> I shouldn't have to farm for rep.

You don't have to farm for rep or post niche questions or hunt down popular questions or anything because you don't have to take part in any way. It's not mandatory.

> I'd even donate to charity to prove I'm not a bot.

Yeah? So what?

> That's not what sour grapes is.

It is what sour grapes is. "sour grapes: Criticism or disparagement of that which one cannot have.". "I can't have stackoverflow on my terms, so I'm going to criticise and disparage it".

> So whining that stackoverflow has rules which you don't want to follow doesn't make for interesting reading.

Most complaints, even very good ones, are uninteresting reading.

> "I can't have stackoverflow on my terms, so I'm going to criticise and disparage it".

If complaining about not being able to have something "on one's own terms" is enough to be sour grapes, then just about every complaint in the world is sour grapes. That's obviously wrong.

> It's not mandatory.

Did you honestly think I was saying it was mandatory? Come on, you can do better than that. "respond to the strongest plausible interpretation"

> Most complaints, even very good ones, are uninteresting reading.

Many people have valid complaints about StackOverflow's community building, feedback system, moderation, how it prioritises questions and answers, and the behaviour of people on the site. These are fine, interesting discussions about how useful it is, how effectively it works, how a better site might work, and how it feels to participate, and they can make for interesting reading. Many other people's complaint is that they went to stackoverflow, didn't follow the site rules, the people using stackoverflow approximately upheld the site rules, and now the poster acts in a way that shows they feel personally slighted by the fact that a site exists which rejected them. This not only makes for uninteresting reading, it's also an unfair attack on stackoverflow, which ought to be allowed to exist with any legal rules it likes, without being slighted for simply having rules of some sort or other.

> That's obviously wrong.

If what you suggest I'm saying is "obviously wrong", then come on, you can do better than that, "respond to the strongest plausible interpretation".

> Did you honestly think I was saying it was mandatory? Come on, you can do better than that. "respond to the strongest plausible interpretation"

You made up some things you "have to do" and then complained that you don't want to have to do those things to hack the site to let you do what you want. Objecting to that whole premise is my point. "You think it's mandatory" is the strong interpretation; the weak one is that you're deliberately building a strawman of things you "have to do" which don't actually exist. 1) You don't need to hack or farm; you can genuinely participate. 2) You don't need to post niche questions or hunt popular questions; you can deal with normal questions. 3) If you do this, you won't be able to jump quickly to the thing you want. This is not a problem with the site, this is part of its design. If you dislike that design then we're back to "the site is not mandatory", and if you want to be so great you don't have to go through the normal processes then we're back to "how should the site know that you're not a total spambot if you won't go through the procedure they've set up, which involves community feedback on the things you do (which paying money would not involve)?"

> didn't follow the site rules [...] rejected them

That's not what's happening here. Neither I nor crooked-v have broken any rules or gotten rejected.

> If what you suggest I'm saying is "obviously wrong", then come on, you can do better than that, "respond to the strongest plausible interpretation".

Sorry, I can't think of any other way to interpret what you said. You gave a definition of sour grapes that is different from the normal definition, but fatally flawed. Normally the "strongest plausible interpretation" would be to assume you mean the normal definition of sour grapes, but that would contradict your entire argument as I understand it, so I can't do that.

While for my argument, I was saying "you have to do X to do Y, which is dumb" and you responded to "you have to do X" in a vacuum. Obvious weakening.

> you don't need to post niche questions or hunt popular questions, you can deal with normal questions.

If it's not niche, I can nearly guarantee it's already been answered, or that I'd be responding to a duplicate where the rules-following behavior would be trying to get it closed.

> you don't need to hack or farm, you can genuinely participate

Not when I rarely find questions appropriate to respond to, and the correct way to respond is always a comment. I would have to seek out questions just for points, which is farming.

The site has a "no shoes" policy, rejecting people without shoes. You refuse to put shoes on, so can't go in. Rejected. Anticipating your next dodge, saying "I own shoes, I could put them on, they're not rejecting me, I'm rejecting them!" - they're rejecting people who won't pass the filter as well as people who can't; both look the same from the other side.

"that thing I can't have sucks": is sour grapes. "My leg hurts": not sour grapes. Not all complaints are sour grapes. "I can't comment without following their rules therefore the site sucks" is sour grapes enough for me. Anticipating your next dodge, whether you "can" have it or not, see previous paragraph. And dogdge after that, no neither of you used those exact words, yes that's what I read crooked-v's intent, supporting evidence: calling the SO score "MeowMeowBeenz" elsewhere in this thread.

> in a vacuum

This zillion comment deep subthread is not a vacuum, it's context and the reply was in all this context. Don't want to do X to do Y? No problem, don't do X or Y, simply move on with your life, nothing lost. You don't have to correct things on SO, and you don't have to be allowed to correct things on SO because it's not public property. Wanting to correct things but not on their terms doesn't change anything. "I want to offer corrections but don't agree to their rules, or don't have time or interest, so I don't" isn't sour grapes and isn't a problem for me. "You can't offer corrections without their gatekeeping meowmeowbeenz", sour grapes. Implying the site owes you a way to comment on your terms, and whining when it doesn't, is a problem.

> If it's not niche, I can nearly guarantee it's already been answered, or that I'd be responding to a duplicate where the rules-following behavior would be trying to get it closed.

A site so full that there's nothing left to ask or answer, yet you're so desperate to hurry in instantly so you can join in. You're not interested in following the site rules .. until it's convenient for you, so now suddenly you are.

> Not when I rarely find [..] I would have to seek out questions just for points, which is farming.

Again "I won't engage with the site on their terms, the only way I can engage on my terms is farming, and the site is bad for making me farm!!!!!". Yes, if you want to engage on your terms you would need to farm. Change yourself to engage on their terms because it's their site. If that means waiting patiently for "rare" questions, or if it means changing what you can answer, or if it means something else, whatever. It's not unfair that it has rules which are keeping you out because you don't want to follow them, and that doesn't make it a bad site.

I doubt that most engineers I know who browse stack overflow even have an account at all.

I've been using SO for 8 years and still don't have 50 reputation.

I am a 6-year SWE who has never contributed to StackOverflow and I have zero reputation, goodboy points, or whatever they use.

The two parts of your post ("some code samples have bad practises with known problems") and ("there should be a report button to fix it") don't seem to be connected. To whom would it report, and what would they be supposed to do about it?

Say the answer was there without that option. Next comment "It didn't work, I get 'certificate verification failed' error", next comment "I had to use this option to get around that", next comment "thanks it worked".

What if the code is dangerous, but that's because it's only intended to conceptually answer the question?

For example, part of writing safe code is adding an appropriate level of error handling, but this also dilutes it. If you're trying to understand a new concept, the code with lots of error handling will be harder to parse.

> Inexperienced developers copy/paste this code into production without understanding what it does.

Also, those with limited English ability may not understand the comments that call out code as being wrong; reading the comments may be too high-effort to happen naturally.

Why not just edit the answer yourself and add a big red warning?

Because you don't get to do that unless you either have a ton of rep (which is a pain to acquire on SO because nobody votes), or the question is marked as a wiki. Otherwise your suggested edit goes into a queue where it will likely be declined as "conflicting with the author's intent" even if it's an unambiguous improvement.

If you're going to copy-paste code from Stack Overflow and deploy it to millions of users' systems, I think you have some responsibility to understand what the code is doing.

I can totally see how this happened, and I would definitely not say that I would never make this mistake, especially under time pressure etc. But a careful programmer would a) desire to understand what the code did, and thus spend enough time reading it to spot the `GetType()`, and b) test that the "prevent more than one instance from running" behaviour worked properly, which would involve both testing that you can't run two instances of the program, and testing that if you change the assembly identifier, you can run two instances.
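The two checks in (b) can be sketched in any language. Here is a hedged Python analogue, using an exclusive-create lock file as a portable stand-in for a Windows named mutex (the helper name and app identifiers are invented for illustration):

```python
import os
import tempfile

def acquire_single_instance_lock(app_id: str):
    """Try to become the only running instance identified by app_id.

    Uses an exclusive-create lock file as a portable stand-in for a
    Windows named mutex. Returns the file descriptor on success, or
    None if another instance already holds the lock.
    """
    path = os.path.join(tempfile.gettempdir(), f"{app_id}.lock")
    try:
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None

# Check 1: a second "instance" with the same identifier is refused.
first = acquire_single_instance_lock("com.example.myapp")
second = acquire_single_instance_lock("com.example.myapp")
assert first is not None and second is None

# Check 2: changing the identifier (the analogue of changing the
# assembly GUID) must allow another program to run alongside.
other = acquire_single_instance_lock("com.example.otherapp")
assert other is not None
```

A real implementation would also handle stale lock files left behind by a crashed instance; the named-mutex approach sidesteps that because the OS releases the mutex when the owning process dies.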

How many times have you run `npm install` or used literally any programming package management system without personally reading every line of code that you're installing?

If I'm going to use some library, at a minimum I read the API docs (getting a sense of the quality of the library from how well-thought-out the API is and how well-written the docs are). That's a long way from copy-pasting a few lines of code from SO without understanding what they do.

Furthermore, it's a lot easier to take the time to understand what a few lines of code from SO does than it is to audit a dependency thoroughly.

I have thankfully avoided the world of NPM, but when I've had to use it I've found it very difficult because a lot of packages don't really have API docs.

It's a lot easier to read a few lines of code from a Stack Overflow answer than it is to read thousands of lines in libraries...

Well, if you are going to be passing something as your own work, you better understand it well.

Also, you should at least look at the docs of the packages you install. And since most SO answers are shorter than the introductory section of most docs, this means reading and understanding the entire answer.

>Also, you should look at least at the docs of the packages you install

Brb reading the docs for every micro-package installed as a dependency for what I just installed that provides functionality that could've been hand written in five minutes.

Gonna spend the next few months reading about a thousand package README's for this single package I installed and all its dependencies down the tree, be right back.

You say that like it's not a good thing to do? If you pull a library into your project, you're taking responsibility for that library, for better and worse.

I'm talking about how that's completely not feasible anymore because of how every project has dependencies upon dependencies upon dependencies because the general development attitude has shifted towards "import a single library that tells you if a number is odd, and another library to tell you if one is even"

Ultimately, if a bug in a library causes issue with your program, and your company loses $X, your superiors will not give a shit. You have the burden of responsibility for the program.

For common major libraries used worldwide in enterprise products, you are right in that there is an element of trust. They are made by people much faster/better/stronger than most of us. If all else fails, the sheer magnitude of people using it is a small insurance policy, as disastrous issues will probably be quickly apparent. That still doesn't absolve you of responsibility, but the overall risk is usually pretty low.

A segment of code on SO, or a weeny niche library? You only have your noggin to vouch for its stability and accuracy.

>They are made by people much faster/better/stronger than most of us.

lmfao no they're not

> You have the burden of responsibility for the program.

What does that mean when the developer who committed it might have left long ago? I'm assuming you mean more than "blame" which is a pretty unhelpful response for a company to have, but then what exactly is "burden of responsibility" which should have this bus-factor-of-1 attitude around it, and why?

You're just making an argument against the NPM model of installing five billion packages

No, the way it works you read the docs for the packages you directly use (import from code), and you hope that package authors did the same for their dependencies.

(this encourages one to keep the number of direct dependencies small, at least, which is a pretty good idea in general)

> Also, you should look at least at the docs of the packages you install

There was a post recently that noted that React is up to over 800,000 dependencies. Good luck with that.

If the code (you copy-pasted) works but you don't understand why then it doesn't work.

Or at least you have no idea whether it actually solves any problem besides the exact scenario you tested it for, and no idea if it's even possible to maintain it.

Every so often, someone will post that cheating in examinations is OK because the important thing is to get the answer from wherever. This shows why knowledge and understanding matter: that GetType() looks suspicious (the type of something rarely uniquely identifies it), even if you don't know much about the specific issue here.

This has been fixed in 2018!!! Why is this here? https://github.com/docker/for-win/issues/1723

The title should probably be changed but it's still an interesting twitter thread that has sparked interesting discussion here

Amusingly, one of the linked issues from that one notes that a third piece of software made the same mistake - "Thunderbolt software by Intel" ( https://github.com/docker/for-win/issues/1301#issuecomment-3... )

> homework assignment for all programmers reading this thread: Think about how you'd find this bug in your own programs. You copy/paste the code, it seems to work, and you don't realize it's broken because you don't run either of these programs which made the same mistake.

Wait... WHAT??

Who the hell copy pastes code into their program without first examining and understanding what each step is actually doing, how and why the pieces fit together as they do, and how it works as a whole to solve the problem?

>Who the hell copy pastes code into their program without first examining and understanding what each step is actually doing, how and why the pieces fit together as they do, and how it works as a whole to solve the problem?

Almost everybody at some point or another?

Say hello to `pasteoverflow`: Paste code from Stackoverflow directly into your editor.


Reminds me of this beauty: https://gkoberger.github.io/stacksort/

Please stop :(

Can it go one meme-step further and automate our jobs away? ;)

I used to work with a "senior developer" that was making well over 150K several years ago that basically just verbatim copy-pasted code off StackOverflow and hacked it together.

He got shit done and that was what mattered to his bosses... of course it was all a steaming pile of unmaintainable garbage but that didn't matter to anyone.

If he couldn't find it on Stackoverflow he would ask another developer who would tell him how to do it... then Mr Senior Developer would ask that other developer if he could just "send him the code".

Last I heard he's still at that company and still making bank doing nothing but copying and pasting other people's work.

>Who the hell copy pastes code into their program without […]

The developers of Docker for Windows and the Razor Synapse driver management tool. Among others.

Heh, I read your comment after I posted almost the same thing word for word.

> Who the hell copy pastes code into their program without first examining and understanding what each step is actually doing

The people who write the software that runs the airplane you’re on.

If you haven't seen stacksort yet: https://gkoberger.github.io/stacksort/

Well... laziness is the mother of invention or something like that.

Does anyone have about 40 lines of code with an average line width of 80 characters? I have this rather sparse region that doesn't look nice on the minimap.

But seriously, I think the reasons are mundane and developers often have an ample list of problems to solve, so easy solutions for some of those are used as much as possible. Ensuring only one process runs at a time is probably just a feature that invites negligence.

It shouldn't happen, but I am not surprised.

There seems to be a suspicious rise in acceptance of this "practice". It's very concerning. I've never found an SO answer to be directly copyable; they are usually wrong or flawed.

You usually have to dig through all the responses to get a balanced overview, then apply those concepts to the problem at hand....

I know a coworker whose cloudformation code I have had to fix multiple times because they've done just that. Googled "how to do $problem" then skimmed over the first result and copied the code from it.

It's been raised with management, and nothing has been done about it.

> "Who the hell copy pastes code into their program without first examining..."

Quite a lot of people do that. It's pretty much the reputation of Stackoverflow. And of course everybody knows it's bad, but it saves time and it works and there's deadlines so they move on and ship it to production.

How to find this particular bug is not that hard, though: you're looking for a way to ensure that your program can run only once, so you use that mutex. But you want to be sure that mutex is unique to your program and not shared with others that use the same method, so you create another program that uses the same method and test if they conflict. If they do, then your mutex is not unique to your program.
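That property, same program gives the same name while a different program gets a different one, can be checked mechanically. A minimal Python sketch, where the naming function is a stand-in for whatever derivation method you copied:

```python
import uuid

def mutex_name(app_guid: str) -> str:
    """Derive a global mutex name from this program's own GUID.

    The point of the checks below is that app_guid must come from the
    program itself (e.g. its assembly Guid attribute), not from some
    runtime type that every program shares.
    """
    # uuid5 is deterministic: same input, same output, across runs.
    return f"Global\\{uuid.uuid5(uuid.NAMESPACE_URL, app_guid)}"

# Two instances of the same program must agree on the mutex name...
assert mutex_name("my-app-guid") == mutex_name("my-app-guid")
# ...while a second program using the same *method* must not collide.
assert mutex_name("my-app-guid") != mutex_name("other-app-guid")
```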

> Quite a lot of people do that. It's pretty much the reputation of Stackoverflow. And of course everybody knows it's bad, but it saves time and it works and there's deadlines so they move on and ship it to production.

The trick of it is, it's only "bad" if you care about code quality and/or maintainability, fuzzy attributes which are seldom measured or optimized for. It's the only logical conclusion to "move fast and break things."

p.s. - the good news is, most of us are working on software that doesn't really matter terribly much, so the fact that it's bad also doesn't matter very much

> If they do, then your mutex is not unique to your program.

Unfortunately, if they don't conflict, you only know that this one test program doesn't collide with your app's mutex. Nothing more, nothing less.

> But you want to be sure that mutex is unique to your program

This one's actually impossible to test. Or to be pedantic, we know it can't be done: another app can be using your app's guid for its mutex, and in realistic cases it's just really hard to test for. There could even be a real use case for collision: independent agents walking the process tree and relying on locks based on guids to not step on each other.

It's not a true proof that it's unique, but if you have two programs use the same method to generate that GUID and end up with different GUIDs, while different instances of the same program get the same one, it's looking pretty likely that they're unique to the program. Going beyond that is only really interesting to mathematicians.

I've had exactly zero instances where I could copy and paste literally.

you forgot to add /sarcasm

>Who the hell copy pastes code into their program without first

Millions upon millions of people learning via googling

But yeah hopefully not in production & ideally sandboxed

Developers at Docker and Razer, among others.

honestly, i think it would be easier to count the people that DO read every thing line of code they copy

Lots of people, sadly

To any Microsoft/.NET devs reading this:

What do you think the chances are that you could make exact uses of the `GetType().GUID` pattern return a compile-time warning?

That could make this go away very quickly.

(For bonus points, add relevant telemetry to VS. :P)

A lot of the comments on this topic seem to be focusing on SO's nature or something like what you proposed about .NET itself.

When I first read the Twitter thread I was sort of dumbfounded that whoever did this originally didn't actually check it. That's not a SO or .NET problem that needs to be caught. It's a developer problem.

If someone is copying code into their project that they've never seen before, they should, at bare minimum, make sure it produces the expected result for the first instance they are using it. That developer should have checked to see that the GUID was actually the one from their assembly. Even if someone is copying some complex code that they aren't 100% sure how it operates, they can still test it to make sure it produces what they think it does.

The idea that someone working for Docker just copied, pasted, and then didn't check that it actually did what it was supposed to do is sort of mind-blowing to me. Not sure why no one else is talking about that. I've never used Docker before, but this doesn't instill confidence. It's one thing to have something like this in your end product - that's bad enough. It's another thing entirely when your company builds tools for other developers to use. That shit needs to be rock solid, and this is one of those things that seriously worries me when I see it, because it's about a lack of effort/testing and someone just blindly took code and didn't even bother to check it.

On the one hand, I completely agree.

I remember reading something about reasoning through a very simple (couple of divisions or something) math problem, where a deliberately (and later obviously) wrong working-out was initially defined, then corrected, to illustrate the importance of end-to-end understanding.

On the other hand,

a) Programming is equivalently complicated to "two-story-high whiteboard full of Greek letters and integration functions", but is never acknowledged by industry as such because of the associated responsibility nobody wants anything to do with

b) We live in a world where entire industries - ML-based AI - are literally!! based on the idea of not understanding the structure of the solution to the problem.

I looked at the title, then briefly glanced over the discussion here, and guessed it correctly --- "same mutex name". I didn't guess they'd get to the same name that way, however. I thought someone had copy-pasted example code from SO without changing a REPLACE_MUTEX_NAME or similar.

My preferred solution is simpler --- CreateMutex takes a mutex name of at most MAX_PATH characters. That's almost begging you to make it the path to the running executable.

    char name[MAX_PATH];
    GetModuleFileName(NULL, name, sizeof(name));
    /* Careful: mutex names may not contain backslashes (aside from a
       Global\ or Local\ prefix), so the path separators in name need
       replacing before it can be passed to CreateMutex. */
I know some of you may be thinking "but what if there are two copies of the application in different directories?" Yes, that would result in two instances, but how often is that a problem? If it is, then make it the binary name-only without the path

    pname = strrchr(name, '\\') + 1;  /* basename only: skip past the last '\\' */
and then unless you name your application something very generic like test.exe, there can only be one. I guess the lesson here is to not overthink things --- I see that often in SO answers too.
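The same path-based scheme can be sketched portably. A hedged Python version (the function name is made up), which also handles the wrinkle that Windows mutex names may not contain backslashes outside the Global\/Local\ prefix:

```python
import ntpath  # Windows-style path handling, works on any platform

def mutex_name_from_executable(path: str, name_only: bool = False) -> str:
    """Build a single-instance mutex name from the executable's path.

    Backslashes aren't allowed in mutex names (beyond the Global\\ or
    Local\\ prefix), so path separators are replaced with '/'.
    """
    if name_only:
        # Name-only variant: accepts the "test.exe" collision trade-off.
        path = ntpath.basename(path)
    return "Global\\" + path.replace("\\", "/")

# Full path: unique per install location.
assert mutex_name_from_executable("C:\\Tools\\app.exe") == "Global\\C:/Tools/app.exe"
# Name only: one instance regardless of install location.
assert mutex_name_from_executable("C:\\Tools\\app.exe", name_only=True) == "Global\\app.exe"
```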

And that just for some colored LEDs and such. I don't understand why you need Windows-only software for that.

Doesn't a hardware maker want to make it as simple as possible to use their hardware from any OS?

I used to use a Razer mouse when it had a switch on the mouse itself to toggle between different speed settings.

But I changed away from them when that switch disappeared in later iterations of their mice and instead depended on software that didn't work in Linux.

The new version of DeathAdder[1] reintroduced DPI adjustment buttons, and the LED settings persist even when set with the open-source Linux drivers.

[1]: https://www.razer.com/gaming-mice/razer-deathadder-elite

It also has macroing functionality with profile switching. You could achieve the same with something like AHK but the Razer interface is easier for simple tasks.

Docker for Windows has had a fundamental Dns[1] bug since April of last year, I've been forced to run v2.0.0.3 for months now to ensure yum repos resolve inside AmazonLinux containers. Truly a second class platform. [1]: https://github.com/docker/for-win/issues/3810

>Docker for Windows has had a fundamental Dns[1] bug since April of last year, I've been forced...

>Truly a second class platform.

Now maybe some Windows users will see how Linux users have felt for over 2 decades.

Also, Docker 2.2 broke volume binding in a way that's kind of staggering (basic operations like move and delete fail). I've had to keep everyone on Docker 2.1 because, after spending two days on it, I just concluded it can't do basic file system operations correctly. I really hope they start testing more thoroughly.

Some tangential analysis of Synapse over here: https://twitter.com/davkean/status/1144498663122558977.

Mouse/keyboard customisation software seems universally bad / lowest-bidder style. Logitech's mouse button configuration tool, for example, weighs over 400MB. Literally for a tool which requires a few dropdowns.

Yeah it's a little off-topic, but I wanted to note that for Mac users, there's this app called SteerMouse which is massively better than Razer or Logitech's offerings, works with practically everything, and doesn't have a "rad" nineties gamer-trash UI.


It's not limited to mice and keyboards. Pretty much every device or peripheral out there is infected with that, from printers to graphics cards to headphones, etc..

Nearly every hardware manufacturer's software is a giant steaming pile.

I suspect that most hardware companies' cultures treat software engineering as a second-class activity.

Oooh. What does it have in it?

A home-made .NET-to-Java transpiler that requires custom builds of the CLR and JVM?

The 300MB of random data at the end of the Windows XP install CD?

An encrypted archive containing the set of folders comprising the entire source code history (the top level of which has an alarming number of entries named like "revert_v5_bug [Final] - ACTUAL CURRENT") (because some developer has OCD about backups)?

Given the state of Modern Software Development, It's most probably Electron.

I didn't investigate. It was either the "unifying driver" or the "g hub" app for macOS, if you want to check.

I've somewhat ironically found that the performance of my VR rig improves the more keyboard/mouse drivers I shut down. The throughput required for the index controllers and headset is pretty high, and when combined with mouse and keyboard drivers some game frameworks can experience terrible stuttering.

when it forced you to log in to use it a few months back, i immediately bought a zowie instead and highly recommend others do the same

So with tools like Sonarqube we get static code analysis on our code base which shows code duplication within itself. I wonder if there is some way to look for duplication from Stack Overflow? Not sure that is feasible, since no one is going to want to post all of their proprietary code to some API, but I wonder if there is some system for locally indexing SO that would also allow for local duplication detection.

StackExchange data dumps are (at least, were[1]) freely downloadable: https://meta.stackexchange.com/questions/2677/database-schem...

[1] not sure what the status is since they "illegally" retroactively relicensed all the content a few months ago.

Yeah I thought they had something like that. Might be interesting to play with. I meant more of a question as to if anyone had actually done something like that with it. Basically a Turn It In kind of tool that universities use for plagiarism detection, but for code.
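A first prototype of that doesn't need much: normalize the code, break it into token n-grams ("shingles"), and score Jaccard similarity against snippets indexed from the SO data dump. A hedged Python sketch, with invented example inputs:

```python
def shingles(code: str, n: int = 5) -> set:
    """Token n-grams of whitespace-normalized code."""
    tokens = code.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of two snippets' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Invented snippets standing in for an indexed SO answer and local code:
so_answer = "var attribute = assembly.GetType().GUID ; return attribute .ToString ( ) ;"
local_code = "var attribute = assembly.GetType().GUID ; return attribute .ToString ( ) ;"
assert similarity(so_answer, local_code) == 1.0
assert similarity(so_answer, "completely unrelated code here with many words") == 0.0
```

A production version would tokenize properly (identifiers, operators) rather than splitting on whitespace, and use minhashing to keep the index queryable at SO scale.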

Always interesting to see code from StackOverflow in a commercial product, as always this is most likely also a licensing issue.

When I need to ensure only one copy runs, I never use a GUID; I always use some combination of project name + my name + current date + customer name/business. This way it's unique and has a personal touch. Also, you can make it as long as you want, up to MAX_PATH, not the short form a GUID gives you.

Sounds like a natural primary key.

For me, Overwatch wouldn't start if Razer Synapse was installed.

This is exactly the reason I can never understand how it is impossible for the community in SO to override the accepted answer. The argument goes that the accepted answer is the one that satisfies the asker's needs so it is always the one that gets placed higher up. But in practice things aren't so cut and dry.

- The asker can misunderstand the response.

- The asker can accept a response that is incorrect under the general public's interpretation of the question. It doesn't matter what the question meant to the asker if, beyond their control, the answer becomes the first result on Google years later for a problem that might not be exactly the same.

- Technologies change and new libraries or language features can be added since the question was asked. I've counted several times where people bunch up below the accepted answer to say this is true, and 90% of the time it's the better answer for my use case.

- The question is similar to another that is far more popular and happens to share the same SEO keywords but is completely unrelated, so another answer with 1,000 upvotes in comparison to the original answer's 100 is added. People have no choice but to put comments saying "this is the wrong answer if you're doing X, use the other one," because the comment section of the accepted answer is the highest up they can be seen. But the average user might not listen or see the smaller comment text and use the wrong answer, just because it was the accepted one.

This is because a single person's decision on what "correct" means is valued more highly than the correctness of the more important answer which thousands more people actually support, and is more critical to support because of unintended search indexing.

Sometimes SO questions become used for a purpose completely unrelated to the original questions and answers because of search engines. There should be some kind of acknowledgement of this instead of being unable to protect clueless people from themselves.

This isn't to support editorialization of old answers in general; it's a problem of unintended consequences out of anyone's control because of what gets indexed as the first search result on Google. Nobody can change what's there if nobody but the original authors can edit the answer and they happen to not do so for whatever reason, so we'd be stuck being misled forever. And as we've seen, people will copy and paste anything in the code box and not even read the description and caveats if there's a fat green checkmark next to it, like a Pavlovian reflex. I'm guilty of this myself.

You can still write your own answer and comment on the wrong answer.

Users who are careless enough to ignore such comments get what their lack of due diligence deserves.

Razer Synapse has been responsible for a variety of odd problems. Just myself I've had it mess with Bluetooth connectivity and screen autorotation.

The worst part is that they solved it in the "edge" branch (the one you need to use docker on WSL2)... because they switched to Electron

Razer went from making great hardware and drivers to making terrible drivers that require you to sign in to a Razer account and constantly run Razer software. I was a Razer die-hard for years, but with their transition from a hardware company to a cloud-based user-data-mining company I avoid them. If you want good hardware with good drivers I highly recommend SteelSeries.

Steelseries drivers are phoning home more and more now...

However - you can configure the mouse, then uninstall the drivers and the settings persist.

So I continue to use steelseries.

The author claims that this is the result of copy pasting from Stack Overflow but it could be due to auto-complete as well. GetType and GUID both start with 'G' after all. Since the snippet is so small the similarity of the rest of it could just be a coincidence.

What am I looking for in the screenshot ?

Nothing. The screenshot is meaningless. The story is in the twitter comments underneath.

> Both programs want to ensure you only run one copy of themselves. So they create a global mutex using the GUID of their .NET assembly, right?

> So what happens is that both of them are creating a global mutex to ensure only one copy runs, but instead of basing the GUID on their own code, they're both using the GUID of a part of .NET itself.

> And they're using the same one!

> Back in 2009, the user "Nathan" asked how to get the GUID of the running assembly. Twelve minutes later, "Cerebrus" answered. And that answer was wrong.

> A year and a month later, it was pointed out (by "Yoopergeek") that it gives the wrong GUID.

> Three years later, Cerebrus returns and fixes the answer. They can't delete it, because it was accepted

> That flawed stackoverflow post is here: https://stackoverflow.com/questions/502303/how-do-i-programm...

You would sadly have to find your way through the Twitter "thread" to fully understand.

Someone at Docker and someone at Razer both copy-pasted the same faulty code from Stack Overflow for preventing multiple instances of their program running at once. Unfortunately the code took the assembly identifier and called GetType() on it which returns the same value for all programs.

The Stack Overflow answer was later fixed but since the programmers no doubt both felt they'd "solved" the problem, they didn't see the fixed answer.
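The mechanism is easy to simulate outside .NET. In this hedged Python sketch (all names invented), `type(assembly)` plays the role of the `RuntimeAssembly` type that `Assembly.GetType()` returns, which is identical for every program, while the per-assembly attribute is what the fixed answer reads:

```python
import uuid

class Assembly:
    """Toy stand-in for a .NET assembly carrying a [assembly: Guid] attribute."""
    def __init__(self, guid_attribute: str):
        self.guid_attribute = guid_attribute

def buggy_guid(assembly: Assembly) -> str:
    # Analogue of assembly.GetType().GUID: keyed on the runtime *type*,
    # which every assembly shares, so every program gets the same value.
    return str(uuid.uuid5(uuid.NAMESPACE_URL, type(assembly).__qualname__))

def fixed_guid(assembly: Assembly) -> str:
    # Analogue of reading the assembly's own Guid attribute instead.
    return assembly.guid_attribute

docker = Assembly("12345678-aaaa-bbbb-cccc-000000000001")
razer = Assembly("12345678-aaaa-bbbb-cccc-000000000002")

assert buggy_guid(docker) == buggy_guid(razer)   # the collision
assert fixed_guid(docker) != fixed_guid(razer)   # the fix
```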

Which I suppose is an argument that pulling in eleventy thousand oneliners from npm is better than copying oneliners using the editor: At least you can run a batch upgrade without spending real brain cycles.

I'm sure someone has written a thingy to upgrade npm dependencies, rerun tests, and bisect to tell you about the breaking upgrade if anything breaks.

Left-pad, come back, all is forgiven.

A somewhat tongue-in-cheek comment: but I wonder if there could be a git/IDE extension to track cut-and-paste code like this (especially from Stack Overflow) so that when things are corrected or flagged as broken at the source, that change is propagated to your use of it.
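One way such an extension might record provenance, sketched in Python (the record shape, normalization, and URL are all invented for illustration): hash a whitespace-normalized form of the pasted snippet and store it with its source, so a later pass can re-check whether the source was corrected:

```python
import hashlib

def fingerprint(code: str) -> str:
    """Whitespace-insensitive hash so trivial reformatting still matches."""
    normalized = " ".join(code.split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

# Recorded at paste time by the hypothetical extension:
provenance = {}

def record_paste(code: str, source_url: str) -> None:
    provenance[fingerprint(code)] = source_url

record_paste("x = 1;\n  y = 2;", "https://example.com/so-answer")

# Later, even after the snippet is reformatted in the local codebase,
# it still maps back to its recorded source:
assert fingerprint("x = 1; y = 2;") in provenance
```

Matching would of course degrade once the pasted code is edited rather than merely reformatted; the shingle-similarity approach discussed elsewhere in the thread would handle that better.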

It also has a hard time starting if another Hyper-V VM is running.

On a related note I've found docker for windows so buggy at times.

i feel like it's better than docker for macos, but that's not a really high bar to clear tbf.

I have been using Docker on OS X for a while, haven't had any major issues, but maybe my experience is atypical.

Glad to see no one here ever has bugs in their code!
