
The sad graph of software death - sandal
http://tinyletter.com/programming-beyond-practices/letters/the-sad-graph-of-software-death
======
zwischenzug
I was once put in exactly this position. Our IT Manager was fired and I was
put in charge of about a dozen sysadmin and helpdesk people. They had over 130
open tickets and claimed to be too busy to do anything. Bear in mind I knew
little about infrastructure (I was an applications dev and tech lead who had
become a 3rd line application support leader).

I spent a month doing triage, and sorting out their process (they were doing
triage, but gave it to a relatively weak member of staff who couldn't spot
patterns or kill off tickets effectively). The number came down to 30, and the
team was far more focussed. I also got to see who was helping and who wasn't,
and sacked another member of staff who bullied less knowledgeable members of
the team (while doing literally nothing but spreading FUD).

The point (obvious to anyone who's read The Goal, which I had at that point) is
that the graph doesn't tell you jack about what's going on. You have to go and
look. The incoming tickets pipe was the first place to go because it's the
simplest firehose to plug. That gave people space to get off the ticket
treadmill and work on longer term tech debt.

~~~
Spooky23
Support orgs and networking groups are really vulnerable to this kind of
bullshit when the line management is too comfy or someone clueless or
inexperienced is put in charge without adult supervision.

At one place, a support manager told me that he couldn't keep his incident queue
under control without 10-15 more people. The tickets weren't being assigned
and the staff was doing nothing. (The manager dude was trying to get people
promoted and gumming up the works for ticket assignment)

In another case, a network org had a 20-business-day turnaround time for trivial
firewall changes. The average request took 40 calendar days, and a small
sampling we did showed that many had to be reopened several times -- in one
case a DBA's request took ~200 days from open to close.

------
csytan
I see this pattern all the time on GitHub. A promising project starts to catch
a lot of stars. Issues start piling up, and the developer no longer looks
forward to working on the project because they have a boatload of issues to
deal with first.

An issue is much like an email. Closing an issue requires communicating with
(sometimes unreasonable) people. Furthermore, when issues start piling up,
critical issues are hidden alongside less important ones making it even more
difficult to prioritize.

One way of dealing with things is to maintain inbox zero. Take action right
away - delegate, archive or save for a later date. This won't work forever
though as your project's userbase grows.

Two high quality libraries with benevolent dictators handle issues very
differently:

Peewee (coleifer):
[https://github.com/coleifer/peewee/issues](https://github.com/coleifer/peewee/issues)

Tornado (bdarnell):
[https://github.com/tornadoweb/tornado/issues](https://github.com/tornadoweb/tornado/issues)

------
win_ini
Please just declare "Backlog Bankruptcy".

Start a new project in JIRA or whatever. Get rid of the encumbrances of the
history of all the accumulated bugs (including some that list mundane
"misspellings" of "neighbour").

You mentioned triage. Triage takes place at different levels. Clear the table
and focus on the items that will really move your business and the project.
The issues that were previously removed - will re-surface if they are relevant
to your current customers.

Before you do that, ask - "what IS most important for our business in the next
3/6 months that we can impact with our limited resources"

However, I suspect that if your chart looks that nasty, you'd have a hard time
convincing people to do that. In the end, though, parts of your team can be put
back to productive work, since they're no longer just managing stories in JIRA
that will never be completed.

Can't wait for the followup post.

~~~
shalmanese
Every piece of software with a stochastic inbox/outbox pattern (todo lists,
email, issue trackers, RSS readers) follows the same trajectory, with only two
stable outcomes: Inbox Zero or Bankrupt. Because the incoming stream is random
and uncontrolled by you, unless the time you devote to clearing your inbox
vastly exceeds the incoming stream, all queues inevitably converge on
bankruptcy.
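As a toy illustration of that convergence claim (the numbers are invented, not from any real tracker), here's a minimal queue simulation sketch:

```python
import random

def backlog_after(days, arrivals_per_day, cleared_per_day, seed=0):
    """Toy queue: random daily arrivals vs a fixed clearing capacity.

    When mean arrivals exceed capacity, the backlog drifts upward without
    bound ("bankrupt"); when capacity comfortably exceeds arrivals, the
    backlog keeps getting pinned back to zero ("inbox zero").
    """
    rng = random.Random(seed)
    backlog = 0
    for _ in range(days):
        # randint(0, 2n) has mean n, so arrivals_per_day is the average rate
        backlog += rng.randint(0, 2 * arrivals_per_day)
        backlog = max(0, backlog - cleared_per_day)
    return backlog

# Same arrival rate, two different capacities, one simulated year each.
overloaded = backlog_after(365, arrivals_per_day=10, cleared_per_day=8)
comfortable = backlog_after(365, arrivals_per_day=10, cleared_per_day=14)
```

With capacity below the mean arrival rate, the backlog after a year is in the hundreds; with capacity modestly above it, the backlog hovers near zero the whole time.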

Given this, it's worthwhile thinking about bankrupt only software. Twitter is
a great example of bankrupt only thinking. One of the crucial things Twitter
did that differentiated itself from RSS is it never put an unread counter
anywhere. You expect to go into Twitter, reading only some small percentage of
your stream. You assume that if a link is important enough, enough people will
post it that you'll eventually see it. But also, if you don't, it's no big
deal.

It's interesting thinking about what a bankrupt only issue tracker would look
like. Perhaps a version could be that when an engineer logs in, he only sees
one issue at a time and has the choice of "I want to work on this issue", "I
know someone who should work on this issue" and "I don't want to work on this
issue". You can click around a couple of times until you find an issue to work
on and then get straight to work. Crucially, there's no way to see the global
list of issues and duplicates are encouraged, not discouraged. If an issue is
duplicated many times, it means more of a chance that someone will hit upon it
stochastically.

I have no idea if this will work, probably not. But I think people need to
start thinking outside of the todo list paradigm for issue tracking to make
any meaningful progress forward.

~~~
bojo
How would duplicates be tracked if you can't see the global list to link them
together? Sounds like you'd end up with a lot of potential wasted effort on
"interesting" issues being worked on simultaneously.

~~~
shalmanese
I don't know. I haven't thought closely enough about the issue. But what I do
know is that often, the difficulty of moving towards a new paradigm is that
constraints which are considered essential under an old paradigm turn out to
be not big deals under the new one. My term "bankrupt only software" takes
inspiration from "crash only software" which also faced this same shift in
mindset.

Maybe there are additional constraints to be added to make dupes not a big
deal (each issue can only be a day's worth of work, max). Maybe it will turn
out that dupes aren't a big deal anyway and that the increase in productivity
offsets the occasional dupe.

The thought experiment was basically RSS Reader:Twitter::Issue Tracker:???

------
vessenes
You can't tell too much from this graph, like you say in your essay, sandal --
this could be the sign of a vibrant bit of software that's got exponential
growth in users and subexponential growth in issues (nice!).

An issue tracker will never replace a great product manager; the most
important thing is to make sure the team is working on the most important
things.

If urgent (actually urgent, e.g. critical fixes) things are taking all the
time from the important things, then you have more work to do; maybe in
prioritization, maybe in staffing, maybe in architecture.

Anyway, if you have this graph, and as you say, everything is on fire, then I
would, if I were put in charge of this project:

1) Spend at least a half day skimming/reading all open tickets, and seeing
what tickets each developer is closing

2) Meet with the team to see how they're prioritizing work

3) If the team has good intuitions about choosing their work, I would decide
on the most important task, make sure it's on post-it notes around the office
and on every monitor, and tell the team: choose your issues, but make sure they
are always the most effective and efficient way to push forward what's on the
post-it note. Once that item was done, we'd pick a new one.

If the team has bad intuitions on choosing work, I'd probably take over task
assignment until I trained / found at least one senior dev who could do some
of that work.

If all the bugs are critical errors (i.e., not feature requests, but problems
with the code quality), well, you have another project in front of you, and one
that most likely involves some staffing changes.

I'm looking forward to your review of what you did, and how it went!

------
williamstein
This is silly. Why is having an ever-increasing number of open items bad? As
long as you have a way to prioritize them, it is only beneficial to have a
large list of issues. Those issues _exist_ in some abstract sense whether or
not you acknowledge them in an issue tracker.

~~~
erikpukinskis
The basic principle is minimizing "work in progress", which is an idea that
comes from the Kanban production model.

It's certainly worth questioning whether it applies to software... I suspect
the idea behind your thought is that there is no inventory cost to issues
since they are just a few bits on a hard drive somewhere.

But I'm not so sure that's true. Every minute that passes after a bug report
has costs: the reporter's memory slowly degrades making it harder to
reconstruct the triggering scenario. The software itself slowly drifts away
from the state the bug was filed in, adding questions about whether it's still
applicable. And recent changes to the codebase that might have triggered the
bug also slowly fade from developers' memory. Lastly the larger the bug
database is, the harder it is to search and the slower de-duping becomes.

In a real sense the value of the report decreases AND the cost of addressing
it increases over time.

~~~
sandal
Spot on. These were huge issues in the organization I'm describing in this
essay.

------
sandal
I wrote this essay.

I linked the Reddit thread because there are some good thoughts there, but I'd
also love to hear what HN has to say!

[https://www.reddit.com/r/programming/comments/3z1pfp/the_sad...](https://www.reddit.com/r/programming/comments/3z1pfp/the_sad_graph_of_software_death/)

~~~
pajtai
Looking forward to the suggested answer.

Guess I would delete all issues except for the most critical bugs and any
remaining critical MVP features, and then I would say that from that point on
you can't add any feature request without completing something - that is, you
can only add a feature ticket after completing a bug or another feature
ticket. For sprints, I'd say a 3-to-1 bug-to-feature ratio, and no more than 1
feature at a time in progress / in sprint at any time.

Then if normalcy returns I'd ease up. I'd also make sure there's one lead dev
in charge long term on the project instead of just having whoever has "some
time to do it."

Not sure if this is ideal, which is why I'm looking forward to your answer.

~~~
mikekchar
I will suggest that bugs and features are not as different as most people
think. In both cases you are adding functionality that is not present in the
current implementation. The difference is that in a feature, nobody expects
the functionality to exist. In a bug, people expected the functionality to
exist (either because it existed in a previous version, or because it was
thought to have been implemented and it turns out that it wasn't).

People react badly to bugs because they feel a bug is the result of a mistake --
developers broke something they shouldn't have, or they didn't do a good job in
a previous implementation. The feeling is that a bug has the highest priority,
but in reality, there is no such connection from "previous mistake" to
"important now".

If the only difference between a bug and a feature is that a feature is not
expected to already be present, as soon as you discover a bug it becomes a
feature request -- we now know that functionality is not present. It should be
prioritised in exactly the same way.

The key to solving the problem posed by the article is fairly clear. Get a
list of all the things that _you are not going to implement_. Remove them from
the list. Done. Whether these are bugs or features is irrelevant.

~~~
joepvd
I agree with you. One notable difference between a bug and a feature is who
will pay for development/testing/support time.

~~~
zer00eyz
This is huge. A capital expense can be amortized (feature dev falls firmly in
this category), whereas a defect (typically) ends up in operating expenses.

Many devs are blissfully unaware of the influence accounting (the other nerds)
has on an organization.

Lots of PM groups are driven almost exclusively by "new features" and poor
estimates of their value. I have been hard pressed to find a PM who will do a
one-week project that adds 100k to the bottom line over a 10-week one that
adds 500k, even though the former has a better ROI.

------
codingdave
The missing piece of information here is that while the quantity of items may
be increasing, it is likely that the severity of those items is decreasing.
The truly urgent major issues get identified and fixed early in an app's life.

But to answer the question: once you get to the point where you have a large
backlog of minor items, you stop treating them all as equal, and instead start
looking for categories of items that can lead you to architectural and
functional updates -- updates that both resolve a chunk of the backlog and
carry your product forward in larger leaps.

~~~
sandal
> The missing piece of information here is that while the quantity of items
> may be increasing, it is likely that the severity of those items is
> decreasing. The truly urgent major issues get identified and fixed early in
> an app's life.

You can't assume this unless you freeze development. I wrote the essay... the
opposite was true: rushed work was creating a runaway defect density while the
product was experiencing external growth, which in turn brought old tech debt
to light. For example... things with a few percent defect rate that were easy
to deal with manually before suddenly saw a 10x-100x increase in frequency.

So... if you allow for a cooling phase, yes... you end up with a "more issues,
but less severe" pattern. If you're in a high-growth situation with a
development team that's at near 100% capacity utilization... you get infinite
death graph doom. :-/

------
quizotic
I would give the same kind of advice that financial advisors give for getting
out of credit card debt: "find the card with the highest interest rate, make
minimum payments on all other cards, and pay as much as possible on the
highest interest rate debt until it is paid off... then recurse"

To make the analogy work the backlog needs to get rated on how much each open
issue costs the company. Most organizations use some stratified measure of
"severity". Few actually try to put a price on a problem, but I think it's
worth the effort to try.
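The debt-avalanche analogy reduces to a sorting rule once each issue carries a rough weekly cost estimate (the issues and figures below are invented for illustration):

```python
def avalanche_order(backlog):
    """Order issues by estimated weekly carrying cost, highest first.

    backlog: list of (issue, weekly_cost_to_company) pairs. Like paying
    off the highest-interest card first, work the costliest issue to
    completion before touching the rest, then recurse on what's left.
    """
    return sorted(backlog, key=lambda pair: pair[1], reverse=True)

# Hypothetical priced backlog: the estimates are rough guesses on purpose --
# even back-of-the-napkin prices beat unpriced "severity" labels for ordering.
backlog = [
    ("flaky checkout retry", 4000),  # lost sales per week
    ("slow admin report", 300),      # wasted staff hours per week
    ("typo in footer", 5),
]
plan = avalanche_order(backlog)      # costliest issue comes out first
```

The hard part, as the comment says, is producing the cost numbers at all; the ordering itself is trivial once they exist.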

~~~
sandal
> I would give the same kind of advice that financial advisors give for
> getting out of credit card debt: "find the card with the highest interest
> rate, make minimum payments on all other cards, and pay as much as possible
> on the highest interest rate debt until it is paid off... then recurse"

This is great advice and is ultimately what we focused on once I got past this
initial triage/prioritization problem in the org I was helping.

> To make the analogy work the backlog needs to get rated on how much each
> open issue costs the company. Most organizations use some stratified measure
> of "severity". Few actually try to put a price on a problem, but I think
> it's worth the effort to try.

This is hard to do when the backlog is _increasing_ by hundreds of open issues
every couple months.

But the trick is ultimately to find a way to very quickly trim the backlog and
see what crops back up, and then put each resurfacing issue through an
economic decision making framework (even if it's a back-of-the-napkin
calculation), as you suggest.

------
shalmanese
The core problem is that "number of issues" is not a meaningful metric; "rate
at which business value is created" is.

At its core, I believe an issue tracker is fundamentally the wrong pattern
for developing software, and the same pathologies crop up over and over again.

Issue trackers seek to accomplish multiple goals: they are a communication
mechanism between departments, they're a way of tracking the state of
progress, they're a means of prioritization, and they're a way of evaluating
progress. Because these goals lie fundamentally in tension with each other,
the same problems predictably occur with long-term use of an issue tracker.

Issue trackers are like shopping lists without price tags. If you imagine an
Amazon Wishlist with all the things you want to buy in life, it's not hard to
imagine the same graph appearing. Somewhere mixed into all those items is a
Ferrari, and also the washer you need to stop your tap from leaking. And, of
course, the list is going to get exponentially longer as time goes on, even as
you check items off. But the reason we don't get stressed about our ever
expanding wants list is that price serves as an intuitive gut check on
priority, and as long as we see our personal living standards improve, we
don't care that we can never fulfil all our wants, and we don't bother applying
"won't buy" or "item does not exist in real life" tags to our list. (This is
an analogy, OK? Please don't try to fight the hypothetical on this; I'm aware
our economic wants are significantly more complicated than that.)

One of the ideas I've been tinkering with is to treat issue tracking less like
a todo list and more like a storefront. A todo list with ever increasing
entries is a stressful experience; a storefront with expanding inventory is
not. Items would have a base cost associated with them and a "shipping cost",
so that 2-day shipping costs more than USPS ground (aka: whenever we can get
to it). External clients get to create items on the storefront, setting the
price and the shipping costs using some kind of artificial currency, and
engineering gets to figure out which orders to fulfil, and in what order, to
maximize revenue.
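One possible toy model of this storefront idea, assuming a greedy revenue-per-effort rule on the engineering side (every field, name, and figure here is hypothetical, not a real tracker's schema):

```python
from dataclasses import dataclass

@dataclass
class Order:
    title: str
    price: int      # artificial currency the client attached to the item
    shipping: int   # premium paid for fast turnaround ("2-day" vs "ground")
    effort: int     # engineering days to fulfil

    @property
    def revenue(self):
        return self.price + self.shipping

def fulfil(orders, capacity_days):
    """Greedy sketch: take the best revenue-per-day orders that fit."""
    chosen, spent = [], 0
    for o in sorted(orders, key=lambda o: o.revenue / o.effort, reverse=True):
        if spent + o.effort <= capacity_days:
            chosen.append(o)
            spent += o.effort
    return chosen

orders = [
    Order("export to CSV", price=50, shipping=0, effort=2),
    Order("fix tap-leak washer bug", price=10, shipping=40, effort=1),
    Order("ferrari-sized redesign", price=500, shipping=0, effort=60),
]
picked = fulfil(orders, capacity_days=10)  # the redesign doesn't fit this cycle
```

The interesting property is that nobody argues about "priority" labels: clients signal urgency by paying more shipping, and the huge item simply waits until someone funds enough capacity for it.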

Of course, to paraphrase jwz, "You have a problem and you think "Oh, I know,
I'll solve it with an artificial currency". Now you have two problems".

Still, I think it's worthwhile getting back to the foundations and think of
what an issue tracker does in the context of software development and whether
there's a better thing that could serve the same purpose.

~~~
LoSboccacc
I went in the total opposite direction. My team is small and focused on a
single product, so YMMV: real-life bugs and 'papercuts' -- high-volume,
short-duration tasks that directly influence our software's adoption -- go in
the tracker to avoid losing them along the way.

New tasks, requirements, features, and nice-to-haves go, on a macro level, into
a wiki page, where all the political, unproductive infighting takes place. I
asked the stakeholders to keep an ordered list, since giving them priorities is
a sure way to have everything escalated to critical. A couple of weeks before
every iteration, three or four of the top macro features go to UX design, and
whatever emerges is sent to be built.

So far it's working well; it isolates the team nicely from the madness above,
while also providing a single place where a roadmap exists, even if in a
fluid, ever-changing state.

Macro tasks don't get into the tracker because they're hardly ever finished and
just pollute it; we do as much of feature x in an iteration as needed to be
useful, then throw the rest back on top of the queue as "x part 2" or "x
enhancement".

~~~
shalmanese
Issue trackers work perfectly well in the small, and there are plenty of
historical examples of successful issue tracker implementations among small,
tightly scoped products with good communication.

The problem is, as it scales, you have to start making tradeoffs between the
different desires everyone has over what an issue tracker should be.

Does every issue have to go into the issue tracker? If I spot a typo on the
about page, do I have to file an issue or can I just IM Lisa and she can apply
the fix?

Does the issue tracker have to reflect the ground truth at all times? If a
defect was assigned to Bob and Bob and Lisa have lunch in the cafeteria and
Lisa figures out she can fix the issue better, does Bob then have to go back
to his desk and reassign the issue to Lisa?

Is the issue tracker for employees only, or for management as well? For product
companies, tracking time is generally unimportant, but consulting companies
generally require each hour to be billed to a particular client. Tying
together the issue tracker with the time tracker seems like an initially
appealing solution to this problem, but it introduces a whole new world of
pain.

Is the issue tracker used for prioritization or is a separate system used?
Both have pros and cons.

Generally, when you hear people's gripes about issue trackers, they're not
about the particular software but about the policies surrounding it. But the
reason you hear so many complaints about the policies is that the intrinsic
structure of how issue trackers are built actively encourages bad policy,
because of how many conflicting goals they purport to solve.

------
e28eta
On this topic, I like Joel Spolsky's essay about Software Inventory:
[http://www.joelonsoftware.com/items/2012/07/09.html](http://www.joelonsoftware.com/items/2012/07/09.html)

He provides some suggestions on improvements.

------
henrik_w
About what to work on (said about no estimates, but works equally in the face
of a mounting backlog) from Kent Beck on Twitter:

"Alternative to estimates: do the most important thing until either it ships
or it is no longer the most important thing"

------
elwell
> Or in the "packrat project manager" scenario, there may be tons of low
> priority nice-to-have tickets that have been sitting around for months or
> years, even though everyone knows they'll never be worked on any time soon.
> If this is the case, the graph is still "sad", but it's not a death
> spiral... it's just a sign of a wasteful and broken issue tracking process.

What's wrong with keeping a few hundred "low priority" issues in your tracker
as long as they are labeled properly?

~~~
makmanalp
I guess what matters is: do they get tended to, cleaned up regularly, and do
people get to them eventually? In my experience, the answer is usually no.

~~~
ryandrake
If a bug has been in the software for years, and tons of people aren't
complaining about it, does it really need to be gotten to eventually? Those
should be resolved as "won't fix".

~~~
simoncion
I can't agree. When I find myself in the mood, and with a bit of time on my
hands, investigating -and sometimes fixing- these low-priority bugs is often a
fun and useful way to spend an otherwise wasted part of a workday.

------
such_a_casual
My reflex: give management and teams a way to put a deadline on important code
issues as soon as they are known. Management would need to support and enforce
these deadlines in order for them to mean anything.

If there is significant mental baggage with the old system for dealing with
code issues, throw it out and replace it from scratch (not literally, but
aesthetically in the minds of everyone from managers to interns so that they
may escape the frustration and lack of faith they had with the previous
system).

------
highCs
To my own surprise, I now often blame the developers (the team and me) and not
the business in such a situation. There are two reasons:

1\. You can develop 10x faster than you think. Learn functional
programming and, more importantly, learn to code bottom-up. Even if you write
object-oriented code, that will do.

2\. As a developer, you must take the initiative and drive the projects
_aggressively_. They must listen to you, not you to them. Literally put your
job at stake, often. If you do so and you get results (see point 1), you're
going to gain respect very quickly; then take the initiative and become
unstoppable. But before thinking about that, you need point 1.

You are the code pro; if things go wrong with the code, you are the one to
blame. The spiral is you thinking you are the victim. You must understand that
software must serve the business, not the opposite. Finally, nobody said
software development was easy.

EDIT: I should say something here: I'm talking from experience. This is what I
do, and I can testify that it works. Also, it's not supposed to solve the
problem; it's supposed to prevent the problem from happening.

~~~
lubonay
It's nice to believe in superpowers, but this is a matter of scaling rather
than one of a high constant value - even if you are 10x 'faster' (which does
not mean better in many cases), when the issues piling up are scaling
superlinearly compared to your organization's development capacity, you will
still have a problem.

~~~
highCs
Sure, my point is that you get there because the developers didn't do 1 and 2.
Once there, I agree with you, the solutions are different.

------
shadeless
That graph perfectly describes my personal TODO lists: they grow and grow,
until I switch to another app/website/notebook, then the cycle repeats. This
way only the most urgent and some of the important tasks get done.

The solution for me seems to be to say No to more things.

~~~
z3ugma
This was what came to mind when I read the call-to-action at the bottom asking
for what I'd do first.

I'm a software project manager. I'd push for focus from the team, and tough
decisions from management. If there are 10 "areas" of the app, each with 10
feature requests, I'd pick 2 or 3 of those areas and do an amazing job
developing those 20-30 requested features, and close the other 70 issues as
"not fixing".

We'd maybe lose some customers over it, but make the remaining customers all
the more loyal. If the issues were truly problems, they'd come back up again
in issue tickets over time. I bet most were nice-to-haves, and not having them
is the price we pay for good software.

------
nickzoic
What I'm a bit puzzled by here is that if I'm reading the X-axis right the rot
sets in at 3 weeks ... I mean, that's not very far in. It seems odd that the
graphs are both otherwise so linear.

I just EOLed a project which started 8 years ago ... at least one bug existed
for 7.5 years of that. It was a minor UI bug and just bumped along at Priority
Low with no-one really minding it until the company got acquired and the
project got merged into another one. There were others as well.

My point is: without splitting the "backlog" by priority it is hard to see if
this is really "software death" or just "bug fossilization" ...

Maybe I should draw my own graph.

~~~
sandal
This is an accumulation over the 4 month period, not a total issue count on
the tracker. So... there were already many issues in the backlog before the
measurement window started, and 500 new issues were opened during the period.
:-/

------
sandal
Here's the followup essay, for those interested:
[http://tinyletter.com/programming-beyond-practices/letters/b...](http://tinyletter.com/programming-beyond-practices/letters/beginning-to-climb-out-of-the-software-death-spiral)

------
marshray
The graph's axes appear to start at 0 issues (y) and extend for 3.5 months (x).

The whole graph is so consistently linear, and so contrary to my experience,
that I feel the data is suspect.

I don't know how to interpret this, other than that the team lost half its
developers in early April, or the same team became very consistently half as
productive at that time.

~~~
sandal
When I created this graph I was looking at historical data that existed before
I got involved with the company, so I don't know the _complete_ story on all
of its details...

But it's important to note that this is an accumulation graph... so it starts
at zero but that doesn't mean a backlog of zero (my guess is it was massive,
because it was massive at the time I arrived)... this instead counts issues
opened and closed during the time period across the entire system. Some issues
closed during this period would be ones that were opened long before the start
of the period.

This means that the graph will _always_ increase, and that the space between
the two lines represents the growing net increase of unresolved issues in the
tracker.
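The accumulation view described here is easy to reproduce from a stream of tracker events; a minimal sketch (the event format is hypothetical, just opened/closed markers in chronological order):

```python
def accumulation_curves(events):
    """Build the two cumulative lines from a stream of tracker events.

    events: chronological list of "opened" / "closed" markers within the
    measurement window. Closes of issues opened before the window still
    count, which is why both lines only ever go up, and why the gap
    between them is the net growth of the unresolved backlog -- not the
    backlog's absolute size.
    """
    opened_line, closed_line = [], []
    opened = closed = 0
    for kind in events:
        if kind == "opened":
            opened += 1
        else:
            closed += 1
        opened_line.append(opened)
        closed_line.append(closed)
    return opened_line, closed_line

events = ["opened", "opened", "closed", "opened", "closed", "opened"]
opened_line, closed_line = accumulation_curves(events)
net_growth = opened_line[-1] - closed_line[-1]  # the gap between the lines
```

Starting both counters at zero at the window's left edge is exactly why the graph "starts at zero" even when the real backlog was already massive.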

I wish I still had the raw data on this, because it's a little hard to tell
from the scale that the differences from week to week can be pretty big (like
10 issues opened one week, 50+ another).

When I arrived, there were a few specific issues in play, along with all the
usual code quality / project management issues that plague any troubled
project:

(1) A high number of support requests that required manual setup work from
developers, which increased greatly due to overall growth in the customer
base.

(2) Some high defect density areas in the codebase that generated emergencies,
with fixes that would end up breaking other things in the process of dealing
with the acute issue, along with infrastructure/architectural scaling problems.

(3) As you guessed, reduction and restructuring of team capacity, without a
corresponding change in the workload.

So these things... they happen more often than we wish in the software
industry, and they produce graphs like this.

(that said, keep in mind this was pretty much a napkin sketch. any issue
tracker is going to have a TON of noise, unless it's very well pruned -- the
sole purpose was to point out the wide gulf and relate it to the problems
already obviously observable onsite, and then use that to motivate real work
to change things.)

------
PostThisTooFast
The graph's axes aren't labeled. Therefore it isn't informative.

~~~
sandal
Well, sure... IF you don't read the essay title, the graph title, the
surrounding context, the paragraph directly after the graph, etc.

I plan to fix this when I use this graph elsewhere, but I really don't
understand this comment. Is it an automatic knee-jerk reaction because you
looked at the graph axes and didn't read the article?

Or did you read it, and have a genuinely hard time making sense of what was
going on?

If the latter... sorry about that. However, I'm surprised at just how many
people seem to have understood this idea, even without the labels, if it is
such a severe problem.

~~~
DrScump
On both this thread and the corresponding Reddit thread, people commented on
the lack of clarity of the graph elements, in the spirit of helping you clarify
and expand on it... and you insult them.

There is a _lot_ of meaning that cannot be determined from the graph. Are any
of these tickets redundant/duplicate? Do they all reflect original features or
bugs in new features? (Remember, there is no zero-time point on the graph). Do
newer defects represent newly discovered bugs that were always there? Do any
represent _regressions_ that messed up your installed base?

I can't see any accounting for _severity_ of defect by any metric, e.g.

\- difficulty to fix

\- span/frequency of customers affected

\- severity level, e.g. effects/misbehaviors that produce just visual
artifacts, vs. those that crash, vs. those that display wrong results, vs.
those that corrupt data

I've worked in (and managed) support organizations in a number of realms of
IT, and a sheer "number of defect reports" metric is almost meaningless by
itself in terms of determining how to marshal resources to significantly
improve the product.

~~~
sandal
The point of the essay is simply this:

If you're seeing a massive amount of problems in your organization AND you
have what appears to be a badly broken prioritization/triage/issue tracking
process, you need to fix your triage process before you'll be able to solve
the real underlying problems.

There's no situation in which opening 500 issues in four months and only
closing a tiny fraction of that amount is healthy, regardless of severity or
whether they're feature requests or bug reports, or whatever.

The graph makes that point, and I'll hold up the fact that this is on the front
page of HN, /r/programming, and lobste.rs as evidence that the point is
sufficiently informative and well understood by the vast majority of readers.

I would have updated the image on the first report of the issue if I was able
to edit the post, but this is an archive from an email newsletter entry. It is
worth fixing, and I will fix it in time for the followup essay.

I am just generally bothered by how incredibly, unbelievably pedantic it is to
fixate on this one point and act as if the whole essay isn't valuable because
it took an extra few seconds to read the graph.

But hey, this is the internet. We can ignore the larger points and focus on
fine-grained details, and that's normal, right?

