
Maintainers Don't Scale - diegocg
http://blog.ffwll.ch/2017/01/maintainers-dont-scale.html
======
zaptheimpaler
Here's an idea, somewhat tangential - modernize the freaking dev tools. It
frustrates me that these projects still use mailing lists and have no sane
UI to track issues or submit PRs like GitHub, nor even a Gitter channel for
ad-hoc chat/questions.

It would bring a lot more attention to what's going on and let the community
start policing bad behavior by maintainers.

He even mentions how contributors will often not modify their PRs according
to maintainer feedback, and instead view code review as a rubber stamp. I
rarely see this happening on GitHub - because PRs are a product feature.
GitHub has already spent time designing the feature just right to convey
what the contract of a PR is, so the Linux kernel maintainers don't have to.
The README would also have a giant "Contributing" heading outlining the
guidelines where everyone can see them.

This is really a UI problem, but projects like Linux and Apache consistently
refuse to acknowledge its importance.

~~~
chriswarbo
> It frustrates me that these projects still use mailing lists and have no
> sane UI to track issues or submit PRs like GitHub, nor even a Gitter
> channel for ad-hoc chat/questions.

I'm not a kernel dev, but I have to ask: what's wrong with mailing lists?

Unlike your examples of GitHub and Gitter, they're unencumbered FOSS; this
is _very_ important for the Linux kernel, since Linus et al. invented git
_specifically_ to remove the sword of Damocles that BitKeeper's proprietary
license held over the kernel. GitHub is basically BitKeeper 2.0.

UI-wise, each dev is free to access their mail however they like. It also has
advantages of being federated, accessible offline, easy to write bots for,
etc.

Presumably some devs have nice UI and automation scripts for their particular
workflows; it would certainly be nice if links to such things were collected
somewhere, but I don't see any dichotomy between using email and having a nice
UI. I certainly find my email UI (Emacs + Mu4e) far nicer than any Web site
I've ever seen.

As for GitHub pull requests, I've never actually seen the appeal. Once I've
cloned a repo and made a change, why do I then have to "fork" the repo, add
a new remote to my clone, push the changes, and then open a pull request,
when I could instead run `git send-email`?
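
Concretely, here's the comparison I have in mind (the repo and list
addresses are made up):

    # GitHub flow: fork on the website, wire up a remote, push, open a PR
    git remote add myfork git@github.com:me/project.git
    git push myfork fix-widget
    # ...then go and click "New pull request" in the browser

    # Mailing-list flow, assuming send-email is configured in ~/.gitconfig
    git send-email --to=project-devel@example.org -1  # mails the last commit

One command versus a round trip through a web site.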

I agree that bug tracking seems to be a bit lacking. I've dabbled with things
like bugseverywhere, but so far nothing's managed to stick :(

~~~
Ajedi32
> I'm not a kernel dev, but I have to ask: what's wrong with mailing lists?

No way to vote on comments, no formatting or embedded images that work
reliably across all clients (including the web UI), no way to edit comments
after you've posted them, no easy way to subscribe/unsubscribe to just one
individual thread... need I go on?

> Unlike your examples of GitHub and Gitter, they're unencumbered FOSS

So use GitLab and RocketChat then.

> It also has advantages of being federated, accessible offline, easy to write
> bots for, etc.

Fair point. I think RocketChat is working on federation, and GitLab and
RocketChat both have well-documented APIs, but for the most part you're
correct that email is superior on these points.

> I don't see any dichotomy between using email and having a nice UI. I
> certainly find my email UI (Emacs + Mu4e) far nicer than any Web site I've
> ever seen.

That may be true for you, but not every new developer is going to come into
your project with that kind of setup. They'll be reading your plaintext, hard-
wrapped emails on Outlook or Gmail while struggling to set up filters so they
only have to read emails from the mailing list about threads or topics they're
actually interested in. A web UI ensures everyone has a great user experience
right out of the gate.

> As for GitHub pull requests, I've never actually seen the appeal.

All the advantages mentioned in my response to "what's wrong with mailing
lists" above, plus inline code review, immediate feedback from CI services,
linters, and code coverage tools, and integration with the issue tracking
system.

> Once I've cloned a repo and made a change, why do I then have to "fork"
> the repo, add a new remote to my clone, push the changes, and then open a
> pull request, when I could instead run `git send-email`?

Personally I've never found the process of creating a fork to be any sort of
inconvenience. Maybe you'd be interested in the `hub` CLI though:
[https://hub.github.com/](https://hub.github.com/)
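
With `hub`, the whole fork-and-PR dance collapses into something like this
(the repo and branch names are made up):

    git clone https://github.com/example/project && cd project
    git checkout -b fix-widget    # make and commit your change here
    hub fork                      # forks on GitHub, adds a remote for you
    git push your-username fix-widget
    hub pull-request -m "Fix the widget"  # opens the PR from the terminal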

~~~
gumby
> A web UI ensures everyone has a great user experience right out of the gate.

That may be true for you, but it's in no way as universal as you seem to
think.

I have a productive workflow (that gives me enough free time to comment on HN
:-) that is effective for the several things I work on during the day (a
couple of development projects, some management, etc). Hopping over to some
web silo doesn't speed things up for me.

The real reason people stick to the current model is because it _works_. A
common baseline lets people use the tools that work for them rather than
trying to cram everyone into the same Procrustean tool space.

~~~
Ajedi32
Yeah, I realize a lot of devs already have their own setups that work
extremely well for them. Unfortunately, anytime you transition to a new tool
you're obviously going to break _someone's_ workflow.

That said, switching to something like GitHub doesn't necessarily mean
developers all _have_ to use exactly the same workflow. For example, if you're
more comfortable in the command line, there are tools like [hub][1] and [git-
gitlab][2] available which let you perform common tasks in GitHub/GitLab using
the CLI.

I just think that, long-term, it's better to have a good baseline user
experience that devs can then extend with their own personal tools if they
choose, than to have a not-so-great one that _requires_ you to extend it
with your own tools before it's anywhere near as good as what a
well-designed Web UI provides out of the box - even if, in the short term,
that means breaking some devs' workflows.

Obviously that doesn't mean you shouldn't try to maintain compatibility with
developers' existing workflows wherever possible, though. For example,
GitLab and GitHub both let you reply to issues by email, and RocketChat has
an IRC plugin that lets you connect RocketChat rooms to an IRC channel.

[1]: [https://github.com/github/hub](https://github.com/github/hub)

[2]: [https://github.com/numa08/git-gitlab](https://github.com/numa08/git-gitlab)

~~~
gumby
I feel like you're addressing a non-issue. If you want to start your own
project, you can set the standards for development process, coding
standards, language, etc. If I don't like those choices it's no big deal and
no slur on you: I'll just work on what I want, and folks who agree with your
choices will be delighted to work with you.

But your note feels like you're saying "why don't the kernel devs switch to
Ruby?" Of course when I say it that way it sounds absurd, but really it's an
equivalent suggestion.

~~~
Ajedi32
By that reasoning though, isn't almost every problem in open source projects a
"non-issue", because you can always just fork and do things your way?

The point is that reducing the friction required to contribute to an open
source project is a great way to attract new contributors. Using more
user-friendly dev tools is one way to do that, and is therefore something
existing maintainers would be wise to consider.

~~~
throwawayish
> By that reasoning though, isn't almost every problem in open source projects
> a "non-issue", because you can always just fork and do things your way?

Yes. Because that's true. Either you have a good point _and_ manage to
convince people in the project, or you have to fork.

Writing irrelevant comments on some news site is not going to convince
anyone to spend a combined hundreds to thousands of man-hours of work (at
the very least) to move kernel development to GitHub or whatever platform
you like.

Also notice how you are ranting about tooling in a project you're not
contributing to, while the submission here is about something entirely
different -- and written by a contributor.

~~~
Ajedi32
> Writing irrelevant comments on some news site is not going to convince
> anyone to spend a combined hundreds to thousands of man-hours of work (at
> the very least) to move kernel development to GitHub or whatever platform
> you like.

So you're saying we shouldn't discuss this at all? That I should just shut
up and stop trying to convince anyone to adopt a viewpoint I believe would
be beneficial to open source projects in the future?

IMO open discussions like this are a good and healthy thing for any community
of developers (such as the one on Hacker News) to have. It's a great way to
share ideas with each other and collectively come up with ways to improve the
open source community. It's not something that should be discouraged.

------
woodruffw
The title is slightly misleading - the author mostly critiques the maintainer
model used by _Linux_ , not the broader model of "maintainer" used in OSS.

Speaking from personal experience as a somewhat new maintainer for a large
project (we were #3 in PRs for all of GitHub last year[1]), modern tooling and
aggressive automation decrease the amount of busywork substantially. I also
have a relatively large amount of independence in my decisions as a
maintainer, but this hasn't translated into an absence of oversight or
fractured quality standards.

[1]: [https://blog.jessfraz.com/post/analyzing-github-pull-request-data-with-big-query/](https://blog.jessfraz.com/post/analyzing-github-pull-request-data-with-big-query/)

~~~
throwawayish
Isn't the time needed for triage and review - since most PRs will be package
updates - much lower than in most other projects?

------
StillBored
First, it should be understood that even in the LK there are different
maintainer models, depending on the subsystem. Many of the driver
maintainers don't even have public git trees, much less public postings
reviewing people's code.

So, as with any large project, many of the subsystems are really
dysfunctional. Overworked maintainers are just a symptom of someone not
being able to delegate (what needs delegating might even be the
maintainership itself, if they want to code more than review). Frequently,
though, it all seems really territorial, with the maintainers themselves
bikeshedding over function naming, hunk placement in a patch series, comment
wording - the list goes on. No wonder these "maintainers" are overworked:
instead of acting as architectural reviewers, or even bug reviewers, they
force themselves and the people "just trying to scratch an itch" to jump
through hoops for patch revision after patch revision. Others are more
passive and simply don't look at patches they disagree with, even when a
patch adds a significant piece of functionality. So there is a wasteland of
patches that never make it, only to be rewritten a couple of years later by
the maintainer or a frequent contributor.

Frequently a lot of it boils down to what are effectively territorial
responses: how dare someone come in and move the tree I've been pissing on
for the last two years.

Frankly, I wish that more of the maintainers actually acted like Linus, who
is pretty clear about whether he is going to take a patch set, and who, back
when he did more than take pull requests at face value, would himself fix up
any minor issues he saw in the patches as he committed them rather than
waiting for the submitter to figure out what was wrong and repost a whole
series with some minor tweak.

~~~
ZenoArrow
>"Frankly, I wish that more of the maintainers actually acted like Linus"

Linus has a number of maintainers he trusts, and has indicated before that
he doesn't need to question code that has been approved by these
maintainers. So for most of Linux development he's delegated work to others.
This can be a good approach, but it's a luxury to have other people who can
do most of the code quality work for you; it's not an approach you can rely
on for all projects.

~~~
StillBored
I'm not sure what your point is. He built that trust by accepting patches
from people without too much hassle. That was the difference between Linux
and the BSDs, and one of the reasons why we are not running
[386|net|free|open]bsd these days.

This means people got experience by writing code and having it merged, bugs
and all, until they gained enough experience to be "trusted". Today, there
is a mindset common among maintainers that new developers should have to
jump through lots of hoops rather than being aided by the maintainers. Linus
is/was famous for bitching about something and taking it anyway. Today, that
is incredibly rare. Most of the core maintainers had it easy: they didn't
have to set up complex SCMs, figure out how to split their patches into
bisectable chunks, read a whole bunch of howtos, or guess at the coding
variations accepted by the maintainer (no, checkpatch is frequently not
sufficient), and on and on. They simply had to run diff, pipe it to a file,
and get it on the list somehow. Frequently they would get bitched out, but
it was rare to have to submit a patch more than twice. These days you can
find bikeshedders on many of the mailing lists complaining about function
naming in a patch that has a double-digit version number.
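
To make the contrast concrete, the two eras look roughly like this (the
file and list names are made up):

    # Then: run diff, pipe it to a file, get it on the list somehow
    diff -u fs/foo.c.orig fs/foo.c > foo-fix.patch
    mail -s '[PATCH] fs: fix foo' list@example.org < foo-fix.patch

    # Now: a bisectable, checkpatch-clean series on its twelfth repost
    git format-patch -3 --cover-letter -v12 -o outgoing/
    ./scripts/checkpatch.pl outgoing/*.patch
    git send-email --to=list@example.org outgoing/*.patch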

Consider what happened to my first kernel patch. I submitted it, and Linus
pointed out what he didn't like while simultaneously correcting it,
verifying with me that he hadn't broken anything, and then merging it. What
happens today is a nightmare by comparison - and yes, that includes me,
because I'm infrequent enough, and my patches rarely land in the same
subsystem, that no one really recognizes or "trusts" me.

There is another whole discussion about whether "trust" even belongs in the
lexicon of an engineer. The old saying is "trust but verify", where the verify
part is the most important.

------
aksmith
This is a pretty one-sided view of the problem. I don't see many arguments
apart from a) "our model avoids burnout" and b) "it is easier to apply
wholesale patches".

Regarding a), the opposite can be the case (for the maintainer!). Regarding
b), the question is (as always) whether the patches are actually necessary
or just activity for its own sake.

The problem is far more complex than this blog post acknowledges.

------
digi_owl
Well that explains the crapification of the Linux graphics stack...

------
howfun
Can anyone read this? It is white on white.

~~~
mikekchar
The foreground is #666666; the background is #FDFDFD. Gpick tells me the
contrast is 56.1%. That should be quite legible in most circumstances,
though personally I would try to shoot for 70+ -- especially when you have
absolutely no constraints.
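
If you'd rather have the WCAG number, here's a quick back-of-the-envelope
check (plain awk; the formula is WCAG's relative-luminance one):

    # WCAG contrast ratio for #666666 text on a #FDFDFD background.
    # Greys have R=G=B, so luminance reduces to one linearized channel.
    awk 'function lin(c) { c /= 255
           return (c <= 0.03928) ? c / 12.92 : ((c + 0.055) / 1.055) ^ 2.4 }
         BEGIN { fg = lin(102); bg = lin(253)  # 0x66 and 0xFD
                 printf "%.2f:1\n", (bg + 0.05) / (fg + 0.05) }'  # ~5.64:1

So it's around 5.6:1 - short of AAA's 7:1 for body text, but past the 4.5:1
AA level.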

If it's not visible for you, then it is likely because there is something
wrong with your setup. Could be insufficient backlight. Could be improper
font rendering. Could be your browser screwing up the CSS. Could be lots of
things.

I always get in trouble with this kind of discussion because the usual
response is "But this is a _stock_ system. I shouldn't have to adjust it to
make up for crappy web designers." And I sympathise with this sentiment (and
especially in this case where there is no particular reason for going with a
low contrast presentation), but... I'd really rather people complain that
their devices are broken/misconfigured-by-default. There is no reason this
website should appear illegible, even if I don't completely agree with their
colour choices.

~~~
Gracana
> Gpick tells me the contrast is 56.1%.

Which fails WCAG AAA.

It's body text. There's no reason for this.

> I'd really rather people complain that their devices are
> broken/misconfigured-by-default.

Eyes are expensive to fix. Someday I'll get the surgery. In the meantime I'll
bitch about people's shitty design choices.

~~~
mikekchar
As I said, I knew I would get in trouble ;-) FWIW, I have a vision problem and
have difficulty seeing things that have low contrast. In fact, I use a 24
point font and obsess with colours because if I don't, I get ocular migraines
(things that are difficult to read literally make me go blind). So, I'm not
insensitive to your argument.

Saying that 56.1% fails WCAG AAA is not a terribly convincing argument,
because that is the highest level of accessibility. If you want to have a
convincing argument, then why not point out that it also fails to meet the 3:1
contrast ratio of the _lowest_ recommended contrast level _for people with
healthy vision_?

Like I said, I'm not against helping web designers make accessible web pages,
but if you literally can't see something with a 56% contrast ratio, then it's
because your device is set up improperly (or you have vision problems that you
already know about). The frustrating thing for me is that people _tolerate_
these _completely broken by default_ systems and complain to web developers
that they don't have a 7:1 contrast ratio in their web pages.

What that means is that people like me have to spend our lives configuring
our blasted machines for situations where colour contrast is genuinely low
due to design constraints (rather than trendy, bone-headed decisions). I
want a machine I can use every day, and I specifically don't care if some
random blog writer decides to make their content unavailable to me.

So I will repeat: if you cannot see that text, fix your computer and
complain to whoever set it up. If you also want to help the web designer
make accessible choices, then at least tell them what they should be aiming
for rather than complaining that the text is "white on white", which is
completely untrue.

Sorry for the rant, but it's a bit of a big deal for me.

