
Because, as the quote said, it depends on the client's requirements.

"if you click anywhere in the left half of the screen during the 5th screen all the settings get deleted" might be a very severe/critical bug, but if the client says "thats fine for me don't worry, but can you fix the wrong color on screen one where the logo is purple instead of fuschia that's not acceptable for my needs" then this one has a higher priority, even though it's much less severe.

Your job is not to build the perfect/best product, it's to build the one your client wants and is willing to pay money for.

The mixup between priority and severity only comes from the modern era, when so many products get made for internal consumption, or where the user is not the buyer.




> "if you click anywhere in the left half of the screen during the 5th screen all the settings get deleted" might be a very severe/critical bug, but if the client says "thats fine for me don't worry, but can you fix the wrong color on screen one where the logo is purple instead of fuschia that's not acceptable for my needs" then this one has a higher priority, even though it's much less severe.

The problem here is the made-up definition of severity... basically, you decided that crashing or losing settings equals severe and cosmetic stuff equals not severe. Why?

I could easily give it a different definition. For example, a crash that nobody cares about is not severe. A small "cosmetic" slip-up that can cause big damage to the brand is very severe.

So I agree with the GP here; why do we need to have a definition of severity that does not align with the things that we care about when it comes to actually fixing things?


> basically, you decided that crash or losing settings equals severe and cosmetic stuff equals not severe. Why?

Because you, as a developer, are clueless regarding business needs, and thus are unaware of why "cosmetic stuff" might be far more important than the risk of deleting someone's user profile. For example, perhaps the color scheme clashes with the one used by your client's main competitor and therefore might leave him vulnerable to lawsuits.


> Because you, as a developer, are clueless regarding business needs

Then that's the problem we should fix. Instead of creating an extra database field to capture somebody's incorrect opinion, the people who understand priority should be helping developers know enough to have useful opinions.


> Then that's the problem we should fix. Instead of creating an extra database field to capture somebody's incorrect opinion, the people who understand priority should be helping developers know enough to have useful opinions.

The priority database field is how you communicate this factor, but there is a point where reasonable people can disagree and the organization needs a way to make decisions clear.

To draw out the example even more: you could have a $600K invoice riding on customer acceptance, when the person at the customer site who signs off won't do so until the customer's logo color is correct. Meanwhile, that crasher when you enter a non-numeric character in a numeric field of a new feature? "We accept the functionality meets the milestone, so we will sign off, but we don't plan to roll it out until next quarter, after we have begun training personnel on the new feature."

Sure, every good organization should want everybody, not just developers, to understand the customer's business and such, but sometimes you just need to get it shippable.


> The organization needs to make this clear

Why? What is the business value of having severity as essentially a protest field logged by people who don't understand business impact? In all your points, you are basically explaining the business value of priority, which nobody ever disagreed with, and then going "and that's why we need two fields".


> Why? What is the business value of having severity as essentially a protest field logged by people who don't understand business impact?

I don't understand where you could possibly get the "protest field" idea. Severity is an objective statement regarding the known impact of a bug as verified by a developer. It summarizes the technical impact of a software defect. Stating that bug X is high-severity because it crashes is not a protest, and just because the PM decides to give priority to other more pressing issues it doesn't mean you should throw a tantrum.


What is the 'technical impact' of a defect, and how can you divorce it from the user impact? How can it be stated objectively?

Crash bugs aren't bad because crashes are inherently bad, they're bad because they have negative user impact - if the program crashes and loses user context, or data, or takes time to restart... those are bad things. If it crashes a little untidily when it receives a shutdown event from the operating system... maybe not so much.

Same goes for performance issues, scalability problems, security flaws, even badly structured code - they don't have technical impact unconnected to their user (or business, at least) impact.


> What is the 'technical impact' of a defect, and how can you divorce it from the user impact?

TFA provides a concrete definition and also the method to classify bugs based on severity.

Severity does not divorce a bug from "the user impact". There is, however, also the problem of low-severity bugs or even tasks having low user impact but high business impact.


> low user impact but high business impact.

But that's a contradiction. Unless the users aren't important (and the business is another entity, e.g., a CxO that has clout and demands a fix for a thing that users don't care about).


It could be useful if the folks prioritizing things are dealing with non-specific complaints about the software being unreliable or not working correctly.


Databases are a very bad communications medium. So if that's the major way devs and product people are conversing about issues, it's no wonder the devs lack sufficient understanding of business context to understand what the real priorities are.

I do get that people have all sorts of adaptations to dysfunctional working conditions. So if a severity field is one of them, fine. But I don't want people to mistake that for healthy collaboration.


>>Databases are a very bad communications medium

Are they? That's how the majority of (all?) asynchronous systems work. The data to be communicated has to be persisted. I think asynchronous is a good communication method.


I am not talking about machine-to-machine API calls. I'm talking about human communication, which is clearly the topic of what I replied to.


> Then that's the problem we should fix.

There's nothing to fix. Developers assess severity but the project manager defines the priority. As the PM calls the shots, they pick which issue should be addressed first, and obviously it's expected that some low-severity issues should be given priority over some high-severity issues.

In fact, the only thing that needs fixing is any potential misunderstanding on behalf of some developers of how low-severity issues can take priority over high-severity issues.


Why does severity need to be assessed at all if we're just going to use priority instead?


Because a crash is different than a button rendering with the wrong color, and although priority assessment might change with time, a crash is still a crash.

It seems that a recurrent theme in this thread is that some developers have a hard time understanding that PMs might have solid reasons to prioritize low-severity issues over high-severity issues. It's like these developers are stuck with a mindset where the forest doesn't exist and only the trees they personally have direct contact with should be worth any consideration.


Why set a severity if you're not going to use it? A crash is still a crash if you don't set the severity and just write that it's a crash in the bug description.


So the PM can triage high-severity issues quickly, because even though they may be P2 issues, they're probably worth serious consideration.


I know some people look at me like I have three heads every time I say this. But if a project is dropping so many balls that it's hard to keep track of them all, I think the real solution is to work smaller and close feedback loops faster, so the sheer number of bugs is not overwhelming.


> Why set a severity if you're not going to use it?

Your question misses the point entirely.

The point is that severity is an element that's used to classify an issue with regards to priority. Severity does not exist in a vacuum, and priority is given depending on context. Severity is an attribute, among other attributes, that's used to determine the priority.
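
A rough sketch of that relationship, as TypeScript; every name and weighing rule here is made up for illustration, not taken from TFA:

    // Severity is one input among several; the PM weighs it against
    // business factors to produce the priority. All names hypothetical.
    type Severity = 0 | 1 | 2 | 3;  // 0 = most severe
    type Priority = 0 | 1 | 2 | 3;  // 0 = most urgent

    interface BusinessContext {
      customerEscalation: boolean;      // e.g. a demo or invoice at stake
      featureShipsThisQuarter: boolean;
    }

    function setPriority(severity: Severity, ctx: BusinessContext): Priority {
      let p: number = severity;                   // default: track severity
      if (ctx.customerEscalation) p = 0;          // business context overrides
      if (!ctx.featureShipsThisQuarter) p = Math.min(p + 1, 3);
      return p as Priority;
    }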


It's only used to determine it in a way that's divorced from the business context. If everybody understands the business context, that's no longer useful. Ditto if people are collaborating with actual discussion, rather than trying to mediate communications via a database.


> It's only used to determine it in a way that's divorced from the business context. If everybody understands the business context, that's no longer useful.

That's the point: everyone does not understand the business context. Nor are they expected to. That's the job of the PM; it's his main responsibility, and it's the reason PMs are hired to manage teams of developers.


I understand some organizations work that way. I'm saying it's bad.

The point of developers is to make software that people use. So if we want to do our jobs well, we have to understand how we are creating value. Product managers may manage that information flow, and they may get the final say in decisions. But if they are specifying software in enough detail that developers can be 100% ignorant, then developers can (and should!) be automated out of that work.


What extra context does "severity 0" give you on top of a bug title like "Site crashes on action X"?


I think this thread is interesting and kind of funny, because it reminds me of work where I maintain some systems that keep track of projects for PMs, and I originally thought my job was to make everything consistent. But there are a whole slew of ways to express the "closedness" or "openness" of a project, and the PMs have evolved conventions where they want to be inconsistent and resist all efforts to make it all make sense. You have a project status which may be in progress or closed or something else. You have a closeout milestone which may be in progress or closed or something else. And you have a time entry code which may be open or closed. But it turns out there is no simple way to make these consistent, because people use inconsistent combinations to express things...but it's hard to tell what.


You guys are missing the corollary that low-severity bugs being escalated to high-priority is the edge case.

The point is that severity is not ignored; it does inform the priority. Most of the time there may even be a direct correlation between severity and priority.

But other (real-world business) factors also inform the priority; while severity will never change in the absence of new information about the bug, those other factors may change frequently. It doesn't make sense for a PM to reread every single ticket and reassess each one's severity when adjusting priorities, when the developer can just determine that once and record it in the ticket from the start.


Perhaps it's a problem of language. Instead of severity, maybe it should be technical complexity.


Complexity sounds to me like more of an implementation-level concern.

e.g. A bug might be critical severity if it wipes the entire production database, but low complexity if the fix is to delete one line of code. And maybe its priority is P1 instead of P0 because the customer said they'll remember how to avoid triggering the behavior but they really need that logo color changed asap for an important demo.


The point I was trying to make is that the severity hardly changes the priority if its user impact is low. But then that means the severity isn't high either!

So what's the point of severity?


Where are you getting "hardly" from? In this example, it normally would have been P0 (release-blocker) but was downgraded to P1 (still the second-highest priority) because of a special consideration on the customer's end.

The point of severity is that it's an objective metric determined in isolation based on specific technical guidelines that any developer on the team can follow (such as https://www.chromium.org/developers/severity-guidelines). Whereas priority is a higher-level determination that isn't purely technical.

It's like the difference between body fat percentage and attractiveness. Any "engineer" (doctor with a DEXA scanner) can tell you your BF%, and attractiveness will typically correlate with BF%, but ultimately your "customers" (romantic partners and targets) decide how attractive you are. Not a perfect analogy (priority is still something you'd decide internally, not your customers directly), but hope that clarifies things.
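
For illustration, a minimal sketch of what such a guideline-driven rubric could look like; the fields and thresholds are hypothetical, not Chromium's actual criteria:

    // Hypothetical severity rubric in the spirit of published severity
    // guidelines. Any developer can apply it deterministically, without
    // knowing anything about the business context.
    interface TechnicalImpact {
      exploitableSecurityFlaw: boolean;
      crashesOrLosesData: boolean;
      blocksCoreWorkflow: boolean;
    }

    function classifySeverity(i: TechnicalImpact): "S0" | "S1" | "S2" | "S3" {
      if (i.exploitableSecurityFlaw) return "S0";
      if (i.crashesOrLosesData) return "S1";
      if (i.blocksCoreWorkflow) return "S2";
      return "S3";  // cosmetic or minor
    }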


Hiring good programmers who are not too far out on the Asperger/introvert scale is an issue. So you can fix it by letting programmers only worry about the tech part and letting the PM prioritise things. I think motivation will not be as high, but it is a way to get things shipped and profitable.


Speaking as someone with Asperger's who considers himself both a good programmer and capable of navigating / leading cross-functional prioritization discussions, and who likes knowing the context behind his work: maybe you should re-evaluate your assumptions about neuroatypical people.

(...and if you're indeed in a position where you're responsible for hiring decisions or performance reviews: strike "maybe" from the preceding sentence.)


Agreed. Non-neurotypical people may or may not need to approach understanding the context differently than neurotypicals. But it's not like we're incapable of understanding the context!


I work in a niche field. Not so niche that there isn't a lot of money in it, but niche enough that we need to explain to all of our new hires what we do as a company.

For us, it is absolutely 100% necessary to hire domain experts to prioritize bugs and features. It's not a question of incompetent or dense developers, it's a question of things that are not obvious to someone who doesn't have tons of experience in the field.

It's a problem that I imagine developers working on Chrome, Call of Duty, iTunes, or Outlook don't have. You can hire recent college grads and expect them to understand what the software does, have reasonably good instincts how to prioritize bugs and put together the right user experience even if the description in the feature request is sparse on details.

By the way, I heartily recommend working for such a company. My company works very, very hard to retain people. Someone who's spent ten years getting used to the weird stuff our customers expect is far more valuable than someone with half the pay who needs someone to hold their hand through every single issue. Everyone has their own office, management is extremely permissive about the shit that doesn't matter, there's never deadlines or crunch time, everyone chooses their own work/life balance. (We're hourly, and the expectation is that you work more than forty hours a week and need manager approval if you want to work more than 60 for more than three pay periods in a row) If we want more vacation, we can bank hours and spend it on supplemental vacation. Everything's great.


Playing devil's advocate: Why does the developer need to know why one bug is more important than another, if the priorities are clearly defined? I.e., if the backlog manager sets the priorities according to the customer's/business' needs, then the developer just needs to know that the cosmetic bug has a higher priority than the crash bug, but they don't need to know why the priorities are ordered that way to accomplish their tasks. And if they do want to know for their own knowledge, they can just ask someone who understands the needs, without the need for a more complex set of bug report attributes.


I agree with your overall point I think, but I find that people in general just work better when they have some context for their task and its priority/relevance. Absent that, they sometimes -- consciously or not -- decide "this is stupid" and either rush to just check it off or slow-walk it by allowing themselves to be distracted by other things.


The person you are responding to is saying, yes, "cosmetic stuff" might be far more important. So it's more important! Why have another dimension of assessment where we label it less important? Why not only have the dimension of assessment that actually matches the clients' needs?


Like I said in my other comment, because it makes the difference between controlled and uncontrolled.

Eg "the color is wrong because we specified it wrong" and "the color is wrong because the app doesn't respect what we ask it to display" both ends up with the same bug (wrong color), same priority, but not the same severity because the second case is uncontrolled.

Severity is a dev tool, priority is a business tool.


How does setting a higher severity for one bug over the other help devs?


How would you signal the difference between:

1. We have the wrong .png asset in the database

2. Our entire rendering infrastructure is suspect


That would be extremely obvious from the bug title and description, which are presumably being read by the person who sets priority.


So instead of a severity rating, you are saying severity is encoded in the language of the description? Using descriptors a potentially non-technical PM can understand unambiguously?

I'm not saying this is the wrong approach by the way, it's just interesting how people approach this differently.


If the PM doesn't have enough expertise to understand how severe a bug is, how are they supposed to accurately assess the business impact?


It's not another dimension. It's a classification. Some issues matter more from one perspective but might not justify allocating resources to address them from other perspectives. To be able to do an adequate job prioritizing issues, you need to take their classification into consideration. You're arguing about a desired outcome without paying any attention to the process that leads you to that outcome.


Because there are expressed client needs and real needs. They say they care about this cosmetic thing now, so it had better be fixed. However, you know full well that if you don't get this other thing fixed soon, internal politics at the client will mean they throw you out. Thus you fix the thing they demand be fixed now (it may only take a few minutes), but you ensure the other things are fixed in the next release, even though the client doesn't know they care yet.


Sure, but you don't need separate priority and severity scales to do that: it's just one priority scale, but you assign the priority based not entirely on the client's expressed needs but also factoring in your own assessment of their needs.


You don't need that, but you are not everybody. When you have a large organization, having a simple way to capture this type of thing and make it clear what you are talking about matters.

Of course it does add complexity. It is the call of each organization which things are important enough to be worth the extra complexity and which are not. Maybe for yours it isn't worth the extra cost - there is nothing wrong with that - but other people have different needs and so they will have different answers.

In short, you are wrong in thinking there is a universal right answer.


Yes, but what is the point of severity _and_ priority? Why not one field that's first estimated by QA and then updated by the project manager when the client's needs are known?


So that they can be tracked independently and reviewed later. As I explain in a comment elsewhere, my company uses an app to generate severity, and it may not be adjusted outside of that. We can then track the number of low or high severity bugs in a delivery regardless of how the customer perceives the impacts of the bugs, using a more-or-less objective measure. We can compare that to the customer's view of the quality of the delivery by using the number of low or high priority bugs.


Makes total sense and my team does this as well. I think calling the value completely perpendicular to fix priority is hyperbolic. Fix priority should be some combination of severity, frequency, effort and stakeholder desire.


What benefits do this measurement and comparison provide?


We have dozens of customers worldwide for our software packages, and each package is highly customised for each customer's business. The severity measure lets us compare release quality across customers using an objective measurement defined and managed by us. The priority measure lets us refine that comparison per customised package. Generally, a release with a lot of high-severity issues will have a lot of high-priority issues (since by default an S1 is a P1, an S2 is a P2, etc.), but some customers have different requirements, different custom features, and some are just more fussy.

If a base release that was customised for multiple customers has an expected number of P1, P2, P3, P4 issues for most customers but a high number of P1 and P2 issues for one particular customer, off of the same number of issues in the base release as measured by severity, then that will stand out in our measurements and we'll dive deeper into that customisation to see what's going on.

(Edited mid-stream... accidentally submitted too early.)
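
A rough sketch of the kind of comparison described above (hypothetical field names, not our actual tooling):

    // Compare a release's severity profile (our objective measure)
    // with each customer's priority profile (the customer-influenced
    // measure). A customer whose P1/P2 counts far exceed the release's
    // S1/S2 counts points at the customisation, not the base release.
    interface Bug { customer: string; severity: number; priority: number; }

    function countByLevel(bugs: Bug[], key: "severity" | "priority") {
      const counts = new Map<number, number>();
      for (const b of bugs) {
        counts.set(b[key], (counts.get(b[key]) ?? 0) + 1);
      }
      return counts;
    }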


FTA, severity reflects "a bug’s impact to the functionality of the system", while priority reflects the "client’s business and product requirements".

The point of this system is that high-severity bugs might have lower priority than low-severity bugs if you take business requirements into consideration. Yet, this does not mean that severity should be ignored.


You nailed it. Stick with priority and the right stuff will get fixed. Severity just encourages more debate about what needs to be fixed vs. deferred. Inevitably you will end up with a list of defects which will never be fixed.


Developer knowledge of business needs is rarely on the low end of the spectrum.

For example, in the teams I lead I make sure developers participate in PO + stakeholder meetings as observers.

This way when devs fix something or develop a new feature they know first-hand what the business expects.

A nice bonus is that the team often gets personal praises from our clients.


That's a strong argument for having a single priority... Even (generously, IMO) assuming severity represents a tangible thing like a threat to code quality or system stability/debugging, a developer should not be the one trying to balance those internal demands against a customer's priorities. The important thing here is that devs know what needs to be done. Distributing that arbitrarily across two fields obscures that.


They're both measures of importance, but evaluated at different times by different parties. In the article's formulation, the QA engineer sets the severity during the initial investigation based on technical criteria. This is then one of the datapoints that the PM uses during triage to set the priority, which is the controlling value from that point in the process onward.
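
A small sketch of that lifecycle; the ticket model and names are hypothetical, the timing is the point:

    // Severity is fixed by QA during the initial investigation;
    // priority is assigned later, at triage, and is the controlling
    // value from then on.
    interface Ticket {
      title: string;
      severity: "S0" | "S1" | "S2" | "S3";   // set once by QA
      priority?: "P0" | "P1" | "P2" | "P3";  // unset until PM triage
    }

    const bug: Ticket = { title: "Logo is purple, not fuchsia", severity: "S3" };
    // Later, at triage, the PM weighs severity against business needs:
    const triaged: Ticket = { ...bug, priority: "P0" };  // demo tomorrow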


I guess the underlying source of confusion is that they're both measures of severity, but priority is severity from the business viewpoint while severity is severity from the affected user's viewpoint.

A high priority bug that isn't fixed presumably has severe consequences for business (or it was prioritized wrong).

A high severity bug that isn't prioritized presumably has low consequences (severity) for business, but it probably really sucks for the unfortunate user who is affected.


Maybe this is something JIRA-workflow-specific?

Here in pivotal-tracker-land the product manager controls priority by ordering stories in the backlog. There is no need for a separate 'priority' field because that is implicit.

FWIW, there's also no explicit 'severity' field; the PM is expected to understand all the factors that make issues important and order the backlog appropriately. If you need more categorization, you can apply arbitrary labels.


Devs inform a ticket with a severity as a counter balance to bug fixing being purely complaint driven. The business benefits from having someone with a technical eye look at a bug and determine if it really is just a cosmetic issue or may allow system misuse - then the PM should take that severity into consideration along with client complaints when handing out priority.


Because one is controlled and the other is not.

If the color is indeed the one set by the dev/designer/... and it's just not the right one, that's a controlled thing; it's not severe. The app is doing what you want it to, you just gave it the wrong instructions.

The crash on the other hand is uncontrolled. That's severe, because it's a case where the app is not doing what you want it to.

Severity is something for the dev team internally, priority is something for the product manager and business people that the dev team follows.

When you start mixing the two in a single thing, the dev specific needs are always what goes out the window first.


I think it's helpful to think of how bad a cosmetic issue could be. For example, imagine that it's a big launch of a product people will be using in stores worldwide, and the cosmetic issue is that the logo now consistently looks like a sex act because of a missing letter or development art left inside the release. Also imagine that that act is relevant to a recent revelation in the news about the CEO's personal life.

Which do you think the CEO cares about more, the logo or the fact that it crashes sometimes on an infrequently accessed menu?


> For example, a crash that nobody cares about is not severe.

That's a bad example; something not severe is not severe... can't argue with that. A crash nobody cares about can be quite severe, though: plenty of crashes have been turned into some of the most important security vulnerabilities. When I tried to reproduce the Dirty CoW vulnerability on a VM in a school setting, I made the teacher's VM crash multiple times. He didn't care about the crashes either ;), yet that single vulnerability allowed me to skip 90% of the vulnerabilities he wanted us to try to find.

I think your comment points to something important that you miss too: different clients may have different priorities, and so may you. I think severity is closer to YOUR own priority. The severity is the impact this MAY have on your business if it isn't fixed. Like you said, a small "cosmetic" slip-up may be important for your brand, and thus severe for you. That "cosmetic" slip-up could still be unimportant to some of your clients, like on a one-page site made to sell, yet still be on top in both priority and severity, because it matters for the next client or the sales team, which does matter to you.

> So I agree with the GP here; why do we need to have a definition of severity that does not align with the things that we care about when it comes to actually fixing things?

You could say that about much of the information in a ticket. It is still informative, though; maybe it's no longer needed, but at one point it was relevant, and in many cases it still is.

A ticket with a higher severity is something that you need to put more time on to decide whether it deserves a higher priority. I'll go back to your crash example: why would you work on a crash that nobody cares about? Why would you see "crash" in a ticket with so many steps to reproduce and believe that it deserves a high priority when it's never going to happen? In an ideal world, sure, you would know 100% of the system, know 100% of the tickets, understand 100% of the impact of this kind of crash, and be able to fit it well into the priority order, but let's be honest, that ideal world doesn't exist. You have limited time to put into each ticket to decide its priority, you have a limited capacity to understand the impact and everything else, and you have hundreds of tickets to go through. But something you can do is see that the high severity set by the developer means you may need to put more time into that ticket to choose the right priority.

Sure, once the priority has been set, does the severity matter? That's a much bigger question that I won't try to answer (probably not), but did it matter at one point? It did. Would you remove information from a ticket because it no longer matters? I sure hope not.


> Your job is not to build the perfect/best product, it's to build the one your client wants and is willing to pay money for.

So why have severity and priority as two distinct dimensions then? If customer impact is the only relevant metric for your process, why would a bug about which the client says "that's fine for me" ever be considered severe or critical?

If the only metric you care about is customer impact, why track two of them?


Because severity is usually a general measure (eg, to use an example from application security: XSS is usually a sev:med), but priority can be subject to a bunch of forces not known to the person who found the bug. You might find eg an exploitable dangerouslySetInnerHTML, but if that’s on a separate domain (in the authn and the origin sense), in some back-office page somewhere, it may not be particularly exploitable, the impact of exploitation may not be particularly high, and so may justify a lower priority. Meanwhile, lack of CSRF protection on an endpoint that half the internet is engaged with may be a higher priority, even though CSRF is typically lower sev.

Maybe the page with the bad bug is getting shut down next month anyway and we’re just accelerating that instead. Doesn’t make the bug any less bad: definitely makes fixing it less of a priority.

(In hindsight I should have used SSRF or SQLi or RCE as examples of the former style bug and XSS as the lower severity of the two to emphasize the difference.)


Well, good, but if customer impact is the only metric that counts, what relevance does severity have?

If you find a RCE vulnerability that never affects your customer, and how much it affects your customer is the only metric by which bugs are prioritized, what relevant information does severity add?

Either it affects your customer, in which case you prioritize a bug's fix by its severity (i.e. simply by how much it affects your customer), or it doesn't affect your customer, in which case you prioritize is by... how much it affects your customer. Either way, you end up prioritizing bugs just by customer impact, which is either measured by the bug's severity (if it affects a customer), or zero (if it doesn't, regardless of severity).

Just to be clear, I am absolutely not arguing that severity shouldn't be tracked, I'm arguing against the idea that purely functional customer requirements should be the only criteria used for prioritizing bugfixes and development tasks.

It's certainly true that "your job is not to build the perfect/best product, it's to build the one your client wants and is willing to pay money for", as the parent mentioned. But it's also your responsibility to do the building in a sustainable manner.

Decoupling severity from priority is a great way to run a codebase into the ground. RCEs that are unexploitable today will be exploitable five years from now, after sufficient requirement changes accrue -- and at some point in the future, you're going to have to deal with five years' worth of previously unexploitable RCEs that you have to fix yesterday.

Edit: I've worked for several years on two codebases where customer impact has been the primary metric used for bug prioritization for 15+ years and 7+ years respectively. Both were absolute catastrophes. You could literally crash devices by blowing hot air on them (literally, there was an out-of-bounds access bug in the code that handled overheating alerts and they crashed after a few alerts). An entire team was there just to put out fires, virtually all of which were known bugs that had at one point been deferred because they had "no customer impact" -- until they did.

It's been a very useful experience. Disentangling these things is a very useful skill to have: expensive to acquire, and highly sought-after on the market.


You need both because severity and priority aren't necessarily set by the same people who have the same information. They're (of course) not completely orthogonal: generally there's a 1:1 map between severities and priorities. I think we're in violent agreement: severity is also a close proxy for how much I think you should care about something. (We don't assign severities to hardening tasks, since they're, well, hardening, not bugs--but there are plenty of hardening tasks that I think are more important than vulnerabilities and should be prioritized sooner. E.g. "unencrypted EBS volume" is a finding I will totally let you ignore, but I will also be on your case every day if you're not using aws-vault.)

So, to rephrase: you need both precisely because that's often the way you can even start having the prioritization conversation.


We're in perfectly peaceful agreement, it's not even violent :-). As I mentioned in my post above:

> Just to be clear, I am absolutely not arguing that severity shouldn't be tracked, I'm arguing against the idea that purely functional customer requirements should be the only criteria used for prioritizing bugfixes and development tasks.


On the other hand, when you have multiple different users...

One of them sometimes suffers a crash that also loses all their work, and it really pisses them off. That's got to be bad, right? I'd like to call it severe based on its impact to that user.

But then the vast majority of your users have never seen that bug, and the bug is damn difficult to track down, so maybe you can't treat it like a stopper.


Then you are back to classifying it by impact x probability... which is a different set of dimensions and makes much more sense to use, basically because filling in those 2 dimensions tends to be easier than filling in a severity dimension.

But that says nothing about why one would want 2 unrelated priority dimensions.
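
As a sketch, the classic risk score those two dimensions imply; the ranges are just an assumption:

    // Each dimension is easier to estimate on its own than one fused
    // "severity" value. Assumed ranges: impact 1-5, probability 0-1.
    const riskScore = (impact: number, probability: number): number =>
      impact * probability;

    riskScore(5, 0.01);  // severe but rare     -> 0.05
    riskScore(2, 0.9);   // mild but pervasive  -> 1.8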


That doesn't answer the question, which was

« Given a specific priority, what additional benefit do you get from having an additional severity label on the bug? »

not

« Given a specific severity, what additional benefit do you get from having an additional priority label on the bug? »


The severity is a technical assessment of the issue that the PM, who doesn't necessarily have a technical background, weighs against business needs to establish the priority. Once there is a priority, the severity has already served its purpose.


Why would you weigh the technical "severity" ("what happened?") against the actual severity ("business needs")? The severity, by hypothesis, has no influence on whether you're going to work on the bug or not. You want to weigh the business needs against how easy the bug is to fix, not against what happened as a result of the bug. "What happened as a result of the bug" is purely within the domain of business needs.


I'm really confused by all this arguing against simply having more information.

Is the information about the criticality of the bug (whether or not the app crashes) relevant to the business decision?

Yes, of course: "When I press the 'more' button in the developer's bio the app crashes" is a more important bug than "When I press the 'more' button in the developer's bio I don't see more text."

Might the criticality of the bug not be relevant to the business decision? Again, yes: "When I press the 'more' button in the developer's bio -- ok, stop, I don't care about this bug."

In some cases the criticality is relevant to the business decisions, in others it may not be. It probably usually will be. The app shouldn't crash, because even if it only happens to 1% of people, they're the ones writing reviews on the app store.

So the criticality is a big red "Hey! This should probably be high-priority!"-flag, but it's not the only piece of information the PM is going to use.


> I'm really confused by all this arguing against simply having more information.

As with all information gathering, you need to weigh the cost of gathering and storing vs the gains of having the information.

People are saying there are no gains from having this information, which, if true, means that if there's any cost, the cost is too high.

People are saying it is hard to determine, which means the cost to gather it is high; it may include consensus building.

In my personal professional experience, there hasn't been a whole lot of retrospective analysis on the bugtracker data. So information gathering that doesn't immediately help with fixing the bugs, or communicating which bugs need to be fixed sooner is time wasted. Use of non-critical fields in the tracker is spotty and inconsistent, so analysis would require an expensive data cleaning phase.

Sure, you can say I didn't work in healthy organizations if they didn't do this sort of analysis. But that's the root cause of most of the bugs --- there were never any formal specifications and so many things didn't meet the specs; plus occasionally we wrote crashy code.


> I'm really confused by all this arguing against simply having more information.

What people are pointing out is that you're not arguing for having more information. The "severity" adds marginal-to-zero information on top of the priority. There is no context where you would want to use "severity" instead of priority.

So...

GIVEN that priority trumps "severity" 100% of the time,

AND GIVEN that priority is already being recorded,

THEREFORE "severity" has no use, and shouldn't be recorded.

What's happening in the post is that someone wondered why there were two fields, and made up a justification for the second field instead of realizing it was a mistake to have two.

You can see the mistake happening whenever people choose a word that ordinarily refers to the concept of importance as their label for "severity", while simultaneously acknowledging that "severity" doesn't affect the importance of the issue.


They're saying that severity informs the decision of how to set priority. I'm not sure why that's necessary since severity is implicit in the description of the bug.


How is severity supposed to inform the decision of how to set priority? That decision is determined by business needs, not "severity".


As I said above

> In some cases the criticality is relevant to the business decisions, in others it may not be. It probably usually will be. The app shouldn't crash, because even if it only happens to 1% of people, they're the ones writing reviews on the app store.

> So the criticality is a big red "Hey! This should probably be high-priority!"-flag, but it's not the only piece of information the PM is going to use.

The point is that PM may be inundated with lots of bugs and new features and other stories that all need prioritization before the next release. If they had a perfect understanding of all of them, they wouldn't need any metadata at all (not even "bug" vs "feature" -- just read the long-form description!), but metadata helps people make decisions.

High-severity bugs should generally be prioritized higher, except in cases where PMs have gained a deep understanding of the situation and realize it's not business-critical, and override that.

At the point that they set the priority, the criticality is no longer important, just as the "bug" vs "feature" tag is no longer that important once it's been prioritized and assigned.


> I'm really confused by all this arguing against simply having more information.

I think the argument is that it's not actually more information; it's just smearing out the same information in a way that makes communication and decision making harder. (Note I'm not completely convinced by either side here, just trying to explain.)


> Yes, of course: "When I press the 'more' button in the developer's bio the app crashes" is a more important bug than "When I press the 'more' button in the developer's bio I don't see more text."

Of course? Really?

I mean, the upshot for the user is the same - they wanted to see more info about the developer, they clicked the button, they didn't see more info.

An app crash on most devices isn't actually a particularly dangerous problem. Start the app up again.

Arbitrarily deciding that a particular bug is worse because it causes more harm to the software rather than because it causes more harm to the user is precisely the problem. The importance of fixing those two bugs is probably exactly the same (unless the app crashing causes data loss or corruption or is exploitable or... you get it. Of course, the same could be true of the 'no more info appears' bug - behind the scenes it might be starting an infinitely retrying string of HTTP requests to download the developer bio that gradually leak memory and drain the user's battery. Maybe that's really the more serious bug?)

There isn't some separate measure of 'severity' that independently makes the crash bug magically worse than the functional failure. There's only the actual consequences of the bug. For the user, their data, their time their resources, and your business goals.


So isn't it useless to make it a single cardinal value? You are basically throwing away information from a field intended to inform a decision.


> Your job is not to build the perfect/best product, it's to build the one your client wants and is willing to pay money for.

If you tell the client that they're wrong and that you should fix the critical bug first, how they react is a really good test of whether or not they're going to be a good client. If they insist on fixing the lower-priority bug first, then you should try to stop working with them, because at some point in the future they're going to demand more unreasonable and stupid things that blow up in their face, and you'll lose that client anyway. Save yourself the stress of working with bad clients and get rid of them at the earliest possible opportunity.

If they accept that you know what you're talking about and give you agency to fix things in the order that you think they should be fixed, do your absolute best to keep that client happy because they'll be a joy to collaborate with for years.


> If they insist on fixing the lower-priority bug first, then you should try to stop working with them

I'm all for firing bad clients. Fire the crazy ones. Fire the ones that don't pay. Fire the ones who make ridiculous demands at the last moment.

But if the client wants you to deploy a fix to the logo ASAP, you do it. That's the client's priority. As long as they accept the disruption to the rest of the work, it's no problem.


That only works with one client.

When your product is used by many clients, you can’t rely on what one client says.





