> Can you imagine a discussion about relevant risks and priorities that uses that definition of "good enough"? I can't, especially with safety-critical code.
That's not a definition, but a description. It was pretty clearly not a description of "safety-critical code".
You've inserted a context of safety-critical vehicle control systems into a comment responding to an anecdote about writing PHP4.
You say things like:
> Poorly defined, subjective judgement belongs more to art than engineering
That only seems true if you are extremely lucky and/or early in your career. It is extremely common for software engineers to face poorly defined, nebulous problems that you simply don't have the information to solve in an objective manner. The frequency with which this happens is why the approach described by the top comment is so effective. It is a process of continual improvement where you try to avoid making unnecessary decisions until you have better information to make them with.
What changes with safety-critical code is how you gather that information (and what other processes you build to supplement developer judgment). You try to gather that information with as little risk as possible. Experimental, clunky, cobbled-together code has a place in this process, but not as part of live, uncontrolled testing. You run it against models as you prototype solutions, and then you refactor or rewrite that code to be good enough to test in riskier situations.
The quality of the assessment matters, but there is really no problem with people making an assessment of whether the code was "good enough" for its context. In fact, I would refuse to work with a developer who refused to make such assessments. Standards and outside analysis are important, even in non-safety-critical systems, but they are no substitute for a developer making careful assessments of whether code is good enough.
This is part of the point the article and top comment are making. You can assume that the person who "wrote this shit" is an idiot and mock them, but you will learn more if you try to understand the context that drove that person to make the decision, how well that decision worked out, what it cost them and what it gained them. This is how you avoid cognitive biases, not by refusing to accept code that is truly "good enough" in some quixotic pursuit of impossible-to-achieve perfection.
> That is what I was responding to: the over generalization that clunky code (in the OPs words, not mine) is "good enough"
I think you are tilting at windmills here. There is no such broad generalization. Clunky code is often not good enough, which is why it needs to be refactored "the moment that it starts becoming messy" (which is, I'm sorry to tell you, a context-dependent subjective judgement call).
But clunky code can be fine or even great. I'll take a defect free clunky code base that solves a stable problem over an elegant rewrite that adheres to the latest coding standards any day.
Just out of curiosity, what do you think standards are meant to address?
In my experience, they are meant to mitigate risk. Now maybe that risk is not credible on a particular project, which means that standard doesn't apply. But in all other cases, not adhering to standards means you are incurring additional risk, by definition.
Now maybe you're just saying, "Yeah, but those are acceptable risks," in which case I don't really think we're saying anything different. My experience working on safety-critical code involves standards that explicitly state what risk is acceptable, so there isn't much wiggle room for wishy-washy statements like "good enough". They aren't esoteric, abstract standards of practice (and maybe that's where our personal experience diverges). It becomes relatively clear, with a good testing plan that maps to said standards, whether that risk threshold was met.
It's easier to illustrate with hardware, but the same principles apply. Say there's a standard that states each critical component must have a specified reliability level. You could either install a single component that meets that reliability level or design redundancy so the overall reliability meets the standard requirement. What you can't do is install a lower-reliability component and claim it's "good enough" unless you change the definition of critical. And that's what sometimes happens in practice; people get through a design/build and realize they didn't meet the pre-defined/agreed upon standard and so they perform mental gymnastics to convince themselves and others that the component isn't reaaalllllly critical as originally defined. And that discussion shouldn't be based on subjective judgement. As the sign above my old quality manager's office said "In God we trust, but all others must bring data."
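To make the redundancy arithmetic above concrete, here is a minimal sketch with made-up numbers (nothing here comes from a real standard, and it assumes independent failures):

```python
# Hypothetical numbers for illustration only -- not from any real standard.
required = 0.999   # reliability the standard demands of the critical function
part = 0.99        # reliability of a single available component

single = part                    # one component alone
redundant = 1 - (1 - part) ** 2  # two independent components in parallel: 0.9999

print(single >= required)     # False: the single part misses the requirement
print(redundant >= required)  # True: redundancy closes the gap with data
```

Either design choice can be defended with arithmetic against the stated requirement; "good enough" by itself can't.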
>I'll take a defect free clunky code base that solves a stable problem over an elegant rewrite that adheres to the latest coding standards any day.
This might be part of where our opinions diverge. My experience in hearing "good enough" seems different than yours. It sounds like you're using it as "it solves the problem, so it's good enough". My usual experience is more along the lines of "it doesn't meet the standard, but it's good enough." The issue in the latter case is that I think there's some hubris in assuming one fully understands the problem. If you do, then you should have no problem bringing data to support that claim and we'd have no qualms. But if you can't, one thing standards are good at is making you pause to consider all the aspects of the problem you didn't think of. Part of that hubris is the assumption that it's a stable problem. Standards capture the lessons learned when people realized it's not so stable. So clunky code may be good enough to solve your conception of the problem, but that still may not be good enough if your conception of the problem diverges from reality (see: 737MAX MCAS, Uber, CST-100, etc. as already brought up).
Like I said earlier, you got fixated on your own understanding of what "good enough" means and didn't actually pay attention to what people were actually talking about. Instead of learning something, you went on a diatribe and repeatedly misrepresented what people were actually saying.
I've seen people adhere blindly to standards and I've seen people ignore standards without a good reason. Both are failure modes that can increase risks.
I also think you are grossly simplifying what caused the engineering failures you mention. They have a lot more to do with systemic pressure and misplaced priorities than they do with engineers making contextual assessments of risk beyond what is stipulated in the standards.
I don't think I was misrepresenting. I think it just boils down to we both read the OP differently. It's possible to have different takes without it being ascribed to malice or deliberate misrepresentation.
>I also think you are grossly simplifying what caused the engineering failures you mention.
I don't know how you arrived at this conclusion. I am in no way simplifying. I said those types of systems are complex to the point that subjective determination of "good enough" isn't adequate, and explained how standards help fill those gaps of understanding. I've literally worked on some of those systems and have listened to people at the highest levels of some of those organizations about the nature of those failures. I've withheld approving plans on one of them because I witnessed firsthand how external pressure corrupts what is meant by "good enough". If you know more intimate details on any of those examples, I'm all ears.
>They have a lot more to do with systemic pressure and misplaced priorities than they do with engineers making contextual assessments of risk
This is the exact point I've been making, but I think the two are intertwined. Those competing pressures make fertile ground for rationalization and cognitive bias to influence decisions to change the definition of good enough more in line with the verbiage of the OP (again, there was no discussion of risk in that post; you shoehorned that into your interpretation. There was only discussion of clunky, stupid code which was blessed as good enough). You seem to imply risk understanding occurs in an objective vacuum, and I disagree. That's why I think subjective determination of good enough falls short in some scenarios. I'm not sure if you've been so focused on being right that you've ignored that central point, or if I'm just not communicating it effectively, but it's not really worth belaboring further.
> It's possible to have different takes without it being ascribed to malice or deliberate misrepresentation.
Which I did not do. I don't think you are doing it deliberately or I would have ended the conversation long ago.
> You seem to imply risk understanding occurs in an objective vacuum and I disagree.
Not at all, where do I imply that? It is actually the opposite. I am arguing against your position that risk assessments should happen in a vacuum and be based purely on standards, with no need or room for subjective reasoning.
> again, there was no discussion of risk in that post, you shoehorned that into your interpretation
While the top comment did not explicitly mention "risk", the commenter did reply to you, saying:
>> Isn't that by definition what good enough means? That on safety critical code "good enough" is a very different level than on a throwaway-script?
> That's why I think subjective determination of good enough falls short in some scenarios.
I've repeatedly said that subjective risk assessment on its own is usually not enough:
>> Standards and outside analysis are important, even in non-safety-critical systems, but they are no substitute for a developer making careful assessments of whether code is good enough.
>I'm not sure if you've been so focused on being right that you've ignored that central point, or if I'm just not communicating it effectively
I think your communication issues are on the listening side, as you keep projecting a strawman onto people rather than actually listening to what they are saying.
But since we both appear to feel that the other one is not listening, that is probably a clue this conversation should end. I do encourage you to take some time carefully re-reading the thread to see if you can figure out why you seem to misinterpret so much of what people say.
What an odd and condescending take, even when you're trying to explain to me the failure modes of a system I actually have firsthand experience with. Doesn't that seem like something that should give you pause to self-reflect?
I understand your point. What you seem to be missing is that we're talking about two different things. I agree that decisions should be made in the context of the risk of the engineering application. That's trivially apparent, to the point where it's almost confusing that you would feel the need to bring it up (ad nauseam). It's also not particularly interesting, because just about everybody will agree with that. What I'm talking about is when people fall prey to cognitive biases to the point where they can no longer make accurate risk assessments. That's a much more interesting problem, because the engineering world is full of cases where otherwise smart engineers make terrible judgement calls, all the while telling themselves that they understand the risk. I literally brought up cognitive biases in my first post, and instead of responding to what I'm actually discussing, you just keep underscoring a trivially simple point.
I think you're reading your own interpretation into what I'm trying to say and then somehow twisting it into being a miscommunication on my part. When I'm saying subjective judgement can lead to bad decisions, I am not saying "we take all the unique and contextual facts into consideration and arrive at a reasonable subjective risk assessment for this scenario". I'm saying people's cognitive biases can lead to them discounting risk without good evidence because it results in a decision they are emotionally attached to. E.g., "I don't want to miss schedule and look bad, so let's rationalize away this risk that really wasn't mitigated." That is not an objective risk-based decision, it's a biased emotional one. They may think it's "good enough" to get the job done, until it's not (as in the cases I specifically brought up).
I already explained this, but since it didn't seem to sink in, I'll reiterate one last time:
You seem to say your definition of "good enough" is based on good, risk-based judgement. I already said if that's the case then we don't disagree. But I also said that is not the context that the term "good enough" is generally used in practice. In my experience, it's used to justify a sub-standard effort, and I've given you concrete examples of that. That point of divergence between what I'm saying and what you're interpreting seemed to fly right by you because you're more concerned with arguing, and there's some irony in you pointing out that someone else isn't listening.
> You seem to say your definition of "good enough" is based on good, risk-based judgement.
Not at all. I said that "good enough" is a contextual, subjective judgement and is a critical part of software engineering. The idiom "good enough" says nothing about quality of that subjective judgment, despite your insistence that it does.
> I'm saying people's cognitive biases can lead to them discounting risk without good evidence because it results in a decision they are emotionally attached to
Of course cognitive biases (and all sorts of other things) can degrade judgment. That doesn't mean that we should try to get by without it. We seem to be in agreement on this.
> that is not the context that the term "good enough" is generally used in practice.
Here you are simply wrong. "Good enough" means "adequate but not perfect", not "sub-standard". While it is possible you have been operating in a cultural bubble where that term is only used to mean "sub-standard", in this context the meaning of "good enough" that is being used has been clarified repeatedly, but you insist that only your experience with the term matters and thus everyone must use your definition. Instead of working to understand what people are saying, you assume that they are using your definition. Perhaps this sort of assumption explains why you somehow missed out on noticing everyone who uses that term in the normal way. Seriously, go look at some definitions and try asking people what they mean when they use the term.
> That point of divergence between what I'm saying and what you're interpreting seemed to fly right by you
Another example of you not really listening. I've repeatedly pointed out this exact divergence.
Note that you specifically left out the contextual clue: the "In my experience" that immediately followed that sentence about how the term is used. I am not making a general case, but talking specifically about safety-critical code from the very start. I am not saying everyone must use my definition in the general case. I'm saying that in a very specific subset of cases, there is an objective definition for good reason.
Let me try a different tack to see if we can get off this pedantic merry-go-round. You've agreed that cognitive biases affect decision-making. So let's say, as a developer, you are working on safety-critical code that is in danger of going over schedule and over budget. Your manager says that if the project isn't successful, your company will lose future work to a competitor, and that might leave you out of a job. But if it is delivered on time, your company will get a massive windfall in terms of future contracts and profits, likely leading to a big promotion for you. What do you do to ensure those cognitive biases do not influence you to incorrectly discount risks and ship the software early, before the risks are properly addressed?
(Btw, it's a really bad method of communicating to use absolute terms like "everyone". For one, it makes it look like you think you're smarter than you are, and more importantly, it's easily falsifiable. That type of communication belongs more on r/iamverysmart than HN)