I've had to inform leadership that stability is a feature like any other, and that you can't expect it to happen without dedicating time to it.
One leader kind of listened. Sort of. I'm pretty sure I was lucky.
Ask them if they're into pro sports. If so (and most men outside of tech are in some way), they'll probably know the phrase "availability is the best ability".
i got lucky at my last shop. b2b place for like 2x other customer companies. eng manager person (who was also like 3x other managers :/ ) let everything get super broken and unstable.
when i took lead of eng it was quite an easy path to making it clear stability was critical. slow everything down and actually do QA. customer became super happy because basically 3x releases went out with minimal bugs/tweaks required. “users don’t want broken changes immediately, they want working changes every so often” was my spiel etc etc.
unfortunately it was impossible to convince people about that until they screwed it all up. i still struggle to let things “get bad so they can get good”, but am aware of the lesson today at least.
tl;dr sometimes you gotta let people break things so badly that they become open to another way
You put effort into writing an unnecessary tl;dr on a short post, but couldn't be bothered to properly Capitalize your sentences to ensure readability.
If a person tries to communicate, but his stylistic choice of laziness (his own admission!) gets in the way of delivering his message, that is tangibly useful information to share, so that the writing effort can be better optimized for effect.
I wasn't even demanding or telling him what to do. I simply shared my observation; it's up to him to decide if he wants to communicate better. Information and understanding are power.
Your choice. The worst thing is not knowing ("Why are posts with reasonable opinions being downvoted and not engaged with?"). Now you know (you're welcome), and it's your choice what to do with that information.
Where have you worked where this was practiced if you don’t mind sharing?
I’ve seen backends come very close to bug-free (mostly early in development). But every frontend code base ever just seems to have a long list of low-impact bugs: weird devices, a11y things, unanticipated screen widths, weird iOS Safari quirks, and so on.
Also I feel like if this was official policy, many managers would then just start classifying whatever they wanted done as a bug (and the line can be somewhat blurry anyway). So curious if that was an issue that needed dealing with.
I'm not going to share my employer, but this is exactly how we operate. Bugs first, they show up on the Jira board at the top of the list. If managers would abuse that (they don't), we'd just convert them to stories, lol.
I do agree that it's rare, this is my first workplace where they actually work like that.
Frontend bugs mostly stem from overblown frontend frameworks that try to abstract away the basics of the web too much. When you rely on browser defaults and web standards, proper semantic HTML, and sane CSS usage, the scope of things that can go wrong is limited.
It's pretty wild that this is the case now (if it indeed is), given that for a long, long time, sticking to sane, standard stuff was the exact way you'd land in a glitch/compatibility hell. Yes, thanks mostly to IE, but still.
That requires business logic to run in the frontend in the first place, though. One could argue it shouldn't. Anything that is checked in the frontend needs to be re-checked in the backend anyway, because you cannot trust the frontend: it is under the control of the browser/user.
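A minimal sketch of that duplication, with hypothetical field names and rules: even if the frontend validates the same form, the backend has to re-run the checks, since a request can bypass the UI entirely.

```python
# Hypothetical example: a signup form the frontend may already validate.
# The backend still re-checks everything, because a crafted HTTP request
# can skip the frontend entirely.

def validate_signup(data: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if "@" not in data.get("email", ""):
        errors.append("invalid email")
    if len(data.get("password", "")) < 8:
        errors.append("password too short")
    return errors

# A request that bypassed the frontend checks is still rejected here:
print(validate_signup({"email": "not-an-email", "password": "123"}))
# ['invalid email', 'password too short']
```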
Bugs have priorities associated with them, too. It's reasonable for a new feature to be more important than fixing a lower-priority bug. For example, if reading the second "page" of results for an API isn't working correctly, but nobody is actually using that functionality, then it might not be that important to fix it.
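As a toy illustration of the kind of bug I mean (hypothetical code, not from any real API): a handler that never applies the offset, so page 2 silently returns page 1's results, which nobody notices until someone actually pages past the first screen.

```python
# Hypothetical pagination bug: the offset is never applied, so every
# page returns the same first PAGE_SIZE items. Harmless until a user
# actually requests page 2.

ITEMS = list(range(10))
PAGE_SIZE = 3

def get_page_buggy(page: int) -> list[int]:
    # bug: offset is ignored, all pages look like page 1
    return ITEMS[:PAGE_SIZE]

def get_page_fixed(page: int) -> list[int]:
    # fix: 1-indexed pages, offset applied before slicing
    offset = (page - 1) * PAGE_SIZE
    return ITEMS[offset:offset + PAGE_SIZE]

print(get_page_buggy(2))  # [0, 1, 2] -- same as page 1
print(get_page_fixed(2))  # [3, 4, 5]
```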
> For example, if reading the second "page" of results for an API isn't working correctly, but nobody is actually using that functionality, then it might not be that important to fix it.
I've seen that very argument several times; on one occasion it was even in the requirements. In each instance it was incorrect: there were times when a second page was reached.
I don't think, except for a direct regression, it's even possible to define a bug in a way that isn't the same as a feature request. They're identical: someone wants the software to do X, it doesn't do X, maybe we should make it do X. (Except, again, for it used to do X but now doesn't and that wasn't intentional.)
Treating bugs as different than features and automatically pushing them to the front of the line likely leads to a non-parsimonious expenditure of effort and sets up some nasty fights with other parts of the company which will definitely figure out that something being a "bug" gets it prioritized. Obviously this can be done poorly, and why even have engineers if you aren't listening to their prioritization as well.
A bug report does not mean that someone "wants" the software to do X, but rather that they -expect- the software to do X. If that expectation is correct, it's a bug, and if it's not correct then it's a feature request.
Most software is not formally specified, so it's not technically guaranteed that we can prove whether that expectation is correct or not. But, there is usually a collective understanding, reinforced by the software's own interface (e.g. "the button says Do X but I click it and X doesn't happen"), the documentation, and/or general technological norms (e.g. "it crashed" or "when I type text sometimes it disappears and I have to start over").
There are occasional ambiguous cases, but in practice these are uncommon in a well-run organization, and generally the job of a product manager is to have the final say on such matters via consultation with relevant stakeholders, contracts, etc.
IMHO the best way to deal with that situation is to mark the bug as wontfix. Better to have a policy of always fixing bugs but be more flexible on what counts as a bug (and making sure the list of them is very small and being actively worked on).
But it's not "wontfix", because it will get fixed when there's nothing of a higher priority. And its priority could change at some point.
> Better to have a policy of always fixing bugs but be more flexible on what counts as a bug
I just disagree with this. It's entirely possible for something to not work correctly, but that fact be unimportant at the moment (or less important than something else).
The philosophy of fixing bugs first before implementing new features is not that the bugs you're fixing must be more "important" than the new features.
In fact, that's exactly the mindset that "bugs first" is designed to prevent. If you have a mindset where a bug has to be more important than a feature in order to get prioritized, then you will breed a culture in which bugs are rarely prioritized, if ever. (Especially if fixing them would be time-consuming.)
This is for the simple reason that, in isolation, any individual feature can almost always be argued to be more important than any individual bug which could've been worked on instead. Yet, in the aggregate, once you've dumped 50 individual low-priority bugs into the backlog, they all add up to a horrendous experience for the user.
It's sort of like running a restaurant. Cooking food is how we make money, but you still have to clean the floors. If you keep putting it off to get the food out faster, eventually you're going to be knee-deep in shit.
Any modern system with a sizeable userbase has thousands of bugs. Not all bugs are severe; some might be mere inconveniences affecting only a small % of customers. You usually have to balance feature work and bug fixes, and leadership almost always favours new features if the bugs aren't critical to address.
I'd love to see an actual bug-free codebase. People who state their codebase is bug-free probably just lack awareness. Even stating we 'only have x bugs' is likely not true.
> The type that claims they're going to achieve zero known and unknown bugs is also going to be the type to get mad at people for finding bugs.
This is usually EMs in my experience.
At my last job, I remember reading a codebase recently written by another developer to implement something in another project, and found a thread safety issue. When I brought this up and said we'd push the fix as part of the next release, he went on a little tirade about how proper processes weren't being followed, etc., although it was a mistake anyone could have made.
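The comment doesn't show the actual bug, but a generic illustration of a common thread-safety issue is an unsynchronized check-then-act on shared state, e.g. lazy initialization: without a lock, two threads can both see the cache as uninitialized and both build it. A sketch of the locked version (all names hypothetical):

```python
import threading

# Hypothetical illustration: lazy initialization guarded by a lock.
# Without the lock, two threads could both observe `_cache is None`
# and both construct the cache (a classic check-then-act race).

_cache = None
_lock = threading.Lock()

def get_cache() -> dict:
    global _cache
    with _lock:  # holds the lock across both the check and the assignment
        if _cache is None:
            _cache = {"built_by": threading.current_thread().name}
        return _cache

# Every thread gets the same cache object, built exactly once.
threads = [threading.Thread(target=get_cache) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```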
Many bugs have very low severity or appear only for a small minority of users under very specific conditions, like misaligned UI elements. Fixing these first might be quite a bad use of your capacity.
Critical bugs should of course be fixed immediately as a hotfix.
This is the 'Zero Defects'[1] mode of development. A Microsoft department adopted it in 1989 after their product quality dropped. (Ballmer is cc'd on the memo.)
In your experience, is there a lot of contention over whether a given issue counts as a bug fix or a feature/improvement? In the article, some of the examples were saving people a few clicks in a frequent process, or updating documentation. Naively, I expect that in an environment where bug fixes get infinite priority, those wouldn't count as bugs, so they would potentially stick around forever too.
Thing is, if you follow a process like scrum, your product owner will set priorities; if there's a bug that isn't critical, it may go down the list of priorities compared to other issues.
And there's other bugs that don't really have any measurable impact, or only affect a small percentage of people, etc.
The way I learned the trade, and usually worked, is that bug fixing always comes first!
You don't work on new features until the old ones work as they should.
This worked well for the teams I was on. Having an (AFAYK) bug-free code base is incredibly useful!!