I've been in a lot of code reviews where developers push back because their code is "good enough". You need to maintain a defined level of quality, otherwise codebases go to shit very, very fast.
I was recently told in a code review that a Cassandra read before a write (to ensure there were no duplicates) was "good enough" because a duplicate "probably wouldn't happen very often". Meanwhile, a dupe would lead to a pretty bad customer experience.
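The problem with that pattern is worth making concrete: a read followed by a separate write is a check-then-act race, so the duplicate check can pass for two writers at once. Here's a minimal Python sketch of the unlucky interleaving (the in-memory dict, the key, and the client labels are all illustrative, not from any real system):

```python
# Sketch: why a non-atomic read-before-write can still produce duplicates.
# Two clients both read, both see "no row", and both insert.
table = {}  # stands in for a Cassandra partition (illustrative only)

def read_then_write(key, value):
    """Non-atomic check-then-insert: the race window sits between
    the membership check and the actual insert."""
    exists = key in table           # step 1: the "duplicate check" read
    return exists, (key, value)     # defer the write to show the interleaving

# Both clients perform the read before either performs the write.
seen_a, write_a = read_then_write("order-1", "from client A")
seen_b, write_b = read_then_write("order-1", "from client B")

# Neither saw a duplicate, so both proceed to write.
assert not seen_a and not seen_b
table.setdefault(write_a[0], []).append(write_a[1])
table.setdefault(write_b[0], []).append(write_b[1])

print(len(table["order-1"]))  # 2 -- a duplicate, despite the check
```

The check only protects against duplicates that were already visible at read time; it says nothing about concurrent writers, which is exactly the "rare" case being argued over.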
I pushed back hard and made the developer rewrite the code entirely. Would "good enough" have been okay in this situation? My bar is much higher than this developer's, and I stand by my decision. We have the luxury of being tasked with solving customer problems, and if we only strive for "good enough" every time instead of "the best I can do within the constraints I'm given", then in my opinion our careers won't be very successful. We always have to make trade-offs around time and expense, but the best developers are the ones who come up with the best solution and the best code within a particular set of constraints.
My point was more about nitpicking line by line for perfection. What you're talking about sounds like a legitimate performance issue.
I think we're on the same page, but maybe my point wasn't clear enough. I tried to make it clear in my last point that code quality is important, but it's important not to confuse code quality with more minor things like idiosyncratic coding style.
Thanks for reading!
I probably won't like the variable names people chose, but I won't comment on that because that's "my opinion". I will comment on even the smallest bug I see, because that's what we're paid to do. So line-by-line "perfection" is what I believe we need to strive for in terms of code quality. Maybe not so much "perfection"; "best practices" might be a better way of stating it. We always need to strive for best practices so that our code is predictably easy to maintain, read, etc.
Again without appropriate context, my bet here is that you guys are using Cassandra for other important features you won't get out of a typical RDBMS and as such you made a trade off to begin with and decided that Cassandra was "good enough".
Now, the point I'm trying to illustrate (and I'm not just doing this to pick a fight, I promise) is that engineering is about trade-offs, and a big part of that is the likelihood of a problem occurring.
I think it also completely depends on the domain of the problem, the criticality of the process you're building and the outcome of a major failure of your assumptions.
So I'd just argue and say "good enough" is an entirely appropriate answer in many contexts and domains and it's important not to make a blanket assumption that it's wrong.
In fact Cassandra wasn't my first choice, but a strongly consistent database wasn't available to us. As I mentioned in another comment, making the very best decision you can given the constraints of your system is what one should strive for. Not stopping at "good enough" because of (poor) intuition that error conditions "probably won't happen".
We decided to go with LWT and eat the latency costs as a trade off to "stronger" consistency, realizing that Cassandra doesn't offer the same strong consistency as an ACID database. Not perfect, but it fit within our SLA, decreased the probability of encountering duplicate values, and if there was an error, it was easier to detect and the user could be directed to try again, vs having a completely silent error condition that would cause a small percentage of our users tremendous amounts of trouble.
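For anyone unfamiliar, the LWT path here is CQL's `INSERT ... IF NOT EXISTS`, which makes the existence check and the write a single atomic step (Cassandra coordinates this with Paxos, which is where the latency cost comes from) and reports back whether the write was applied. Here's a toy Python model of that compare-and-set semantic; the `LwtTable` class and the lock are stand-ins for illustration, not the real driver API:

```python
import threading

class LwtTable:
    """Toy model of Cassandra's INSERT ... IF NOT EXISTS: the existence
    check and the write happen atomically. Cassandra uses Paxos for this;
    a process-local lock stands in here for illustration."""

    def __init__(self):
        self._rows = {}
        self._lock = threading.Lock()

    def insert_if_not_exists(self, key, value):
        # Returns True if the write was applied, mirroring the [applied]
        # column an LWT returns to the client.
        with self._lock:
            if key in self._rows:
                return False
            self._rows[key] = value
            return True

table = LwtTable()
first = table.insert_if_not_exists("order-1", "from client A")
second = table.insert_if_not_exists("order-1", "from client B")
print(first, second)  # True False -- the duplicate is rejected, not silent
```

That rejected second write is the detectable error condition described above: the losing client learns its insert wasn't applied and can tell the user to retry, instead of silently creating a duplicate.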
Way back in the stone age, MySQL did not yet have ACID transactions. They got them about the same time they stopped bragging about how much faster they were than Oracle, but I digress. Anyway, I had to write a bunch of transactional code around it. Drove the dba and me nuts. We begged for Sybase (we both knew it well), but the startup CTO was an open source purist and hated his first contact with the Sybase sales machine.
Eventually they folded, and the point was moot.
And it's that level that we call "good enough". Or, I would say, acceptably bad.
One of the most important lessons I've learnt over my career is that there is no such thing as "good" software. Everything could always suck less — anything that takes over 0.0 seconds is bad, more than 0kB of memory is bad, more than 0 lines of code is bad. However, your level of badness for each of these metrics might be something you're willing to live with.
It's like hygiene. What you call "nice and clean" for your toilet is not clean enough that you'd cook on it, and even your "immaculate" kitchen is unacceptable for, say, an OR. Hygiene is always "bad", you're just looking for a point where it's no longer unacceptably bad for your purpose.
All software eventually gets rewritten, either in full or in parts. So "good enough" means "will this keep things going until this software, or this piece of it, is thrown away and replaced". Because that is typically much more economical than code review infighting causing 2-4 rewrites of every feature until it's perfect. Or spending 3 times more time on a feature to make it perfect. Or having to hire very expensive developers who are capable of writing to that high standard.
There are obvious exceptions in specific industries, but this holds true for 80% of cases.
It's up to the team and customers to decide on that however. Database integrity is particularly important, so with limited information, I'd say you made the right decision. Therefore the first draft of the code was not good enough.
So, we should all be in agreement now, right?
> It's up to the team and customers to decide on that however.
You said yourself:
>the consequences of a dupe would lead to a pretty bad customer experience.
I've seen many situations where a duplicate wouldn't matter to a customer. It would matter to me because, like you, I'm a perfectionist, but at the end of the day, it's both the team and the customers that decide together.
Generalizing here, but assume this rare duplicate happens 6 months later for a very important customer, so you can't just brush it off: now you really have to fix it. By then nobody remembers the code review, so you don't even know whether this duplicate is a rare one-off event or whether it is going to affect all customers. Fixing it in code review might have cost one person a few extra hours; now you've sent the whole team scrambling through weekend overtime just to find the issue and understand its implications.
I see a lot of posturing in this anecdote. What I think is missing is:
1 - an indication of how often would a customer experience the issue;
2 - how bad would his "bad experience" be;
Did you calculate the former and take the latter into account in forming your judgement?
Or, otherwise, was the "correct" solution simple and obvious enough that any non-junior developer would have picked that first without hesitation?
Sometimes, good enough is good enough. If you were able to push back hard in this case, I take it you are senior to the other guy and your decision is/was justified by the product/feature requirements. IOW, in this case, good enough was in fact not good enough.
most important thing, right there.