The struggles of an open source maintainer (antirez.com)
719 points by ngaut on May 17, 2019 | 216 comments



He missed a big one: you have no way to stop Linux distributions from hacking up your software, and you'll suffer the consequences of whatever they do.

I hit this with procps (a package with ps, top, vmstat, free, kill...). It was horrifically demotivating, and it helped end my involvement as the maintainer, which ran from roughly 1997 to 2007. (The other big issue was real life intruding, with me joining a start-up and having 5 kids.)

I had plans for command line option letters, carefully paying attention to compatibility with other UNIX-like systems, and then Red Hat would come along and patch their package to block my plans. They were changing the interface without even asking for a letter allocation. I was then sort of stuck. I could ignore them, but then my users would have all sorts of compatibility problems and Red Hat would likely keep patching in their own allocation. I could accept the allocation, letting Red Hat have control over my interface.

Red Hat sometimes added major bugs. I'd get lots of complaints in my email. These would be a mystery until I figured out that the user had a buggy Red Hat change.

Patches would often be hoarded by Linux distributions. I used to regularly download packages and open them up to look for new crazy patches. Sometimes I took the patches, sometimes I ignored them, and sometimes I wrote my own versions. What I could never do was reject patches. The upstream software maintainer has no ability to do that.

The backlog of unresolved troubles of this sort kept growing, making me really miserable. Eventually I just gave up on trying to put out a new release. That was painful, since I'd written ps itself and being the maintainer had become part of my identity. Letting go was not easy.

Maybe it had to happen at some point, since I now have more than twice as many kids, but I will be forever bitter about how Red Hat didn't give a damn about the maintainer relationship.


[Disclosure: I'm a long-term Red Hatter.]

Hmm. That's painful indeed. Sorry that you had such a dispiriting experience with the 'procps' package.

For what it's worth, allow me to share my experience (I joined after 2008; so I can only speak about that period onwards) of being at Red Hat. One of the reasons that keep me at Red Hat is the iron-clad (with some sensible exceptions, e.g. security embargoes) principle of Upstream First.

I see every day (and the community can verify -- the source out it there) maintainers upholding that value. And I've seen several times over the years maintainers, including yours truly, vigorously reject requests of 'downstream-only' patches or other deviations from upstream. When there are occasional exceptions, they need extraordinary justifications; either that, or those downstream patches are irrelevant in context of upstream.

I've learnt enormously from observing inspiring maintainers at Red Hat (many are also upstream maintainers) on how to do the delicate tango of balancing the upstream and downstream hats.

So if it's any consolation, please know that for every aberration, there are thousands of other packages that cooperate peacefully with relevant upstreams.


A bit more detail on some Red Hat experiences:

I had reserved -M for security labels, to be compatible with Trusted IRIX, and I think this was clear in the source code at the time. I intended to avoid any additional library dependency if possible, because ps is a critical tool that must not break. If I couldn't manage without a weird SELinux library, then I'd dlopen() it only if needed.
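A minimal sketch of that dlopen() approach: the library and symbol names (libselinux.so.1, getcon) are the real SELinux ones, but the wrapper and stub are illustrative names I've made up, not actual procps code.

```c
/* Sketch: load the SELinux library only on demand, so ps itself has
 * no hard dependency on it. If the library is absent, a stub is used
 * and ps keeps working. load_getcon() and stub_getcon() are
 * illustrative, not procps code. */
#include <dlfcn.h>
#include <stddef.h>

typedef int (*getcon_fn)(char **context);

static int stub_getcon(char **context)
{
    *context = NULL;    /* no SELinux available: report no label */
    return -1;
}

static getcon_fn load_getcon(void)
{
    void *h = dlopen("libselinux.so.1", RTLD_LAZY);
    getcon_fn fn = h ? (getcon_fn)dlsym(h, "getcon") : NULL;
    return fn ? fn : stub_getcon;   /* always return something callable */
}
```

Either way the caller gets a usable function pointer, so ps never crashes merely because SELinux isn't installed.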

Red Hat swiped both -Z and Z, giving them the same meaning. For at least one of those, probably -Z but this was a long time ago, my plans were to use it for being compatible with a different feature of another OS. There are only 52 possible command option letters, not counting weirdness like non-ASCII and punctuation, and most are already taken. Now 3 of them, almost 6% of the possible space, are redundantly dedicated to an obscure feature. An added annoyance was that -Z got wrongly thrown into ps's list of POSIX-standard options, which can affect parsing in subtle ways.

One day I discovered this as it was being shipped in RHEL.

A more recent and amusing issue is the recent storm of security bugs that hit procps. They actually predate my involvement with procps, likely going all the way back to the early 1990s. I eventually got notice of them. I responded on the Bugzilla, correcting some misunderstandings and pointing out better ways to fix the problems. I even do software security work professionally these days, so I would be the ultimate expert on security bug fixes for procps. My helpfulness got me blocked from looking at the Bugzilla, and then Red Hat proceeded to ship slightly bone-headed patches for the security problems. BTW, last I checked there were still DoS vulnerabilities because Red Hat ignored my advice. Turning the 32-bit value into a 64-bit value may prevent an integer wrap-around exploit, but that just means the system will swap until the OOM killer strikes. The value should have stayed 32-bit, with protection added to avoid even approaching such a large value. You probably don't even need more than 17 bits. The escape-expansion fix is also bad, in a different way: instead of papering over the problem, the math should have been corrected.
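A sketch of the fix argued for here: keep the count 32-bit and reject anything near the wrap point, rather than widening to 64 bits and letting a giant allocation swap the machine to death. MAX_SANE and clamp_count() are illustrative names, and the 17-bit bound is taken from the comment, not from procps.

```c
/* Sketch: reject suspicious sizes up front instead of widening the
 * type. A rejected value means no allocation, no wrap-around, and
 * no OOM-killer death spiral. Illustrative, not procps code. */
#include <stdint.h>

#define MAX_SANE (UINT32_C(1) << 17)   /* "you probably don't even need more than 17 bits" */

/* Returns 0 and stores the value if sane, -1 to reject it. */
static int clamp_count(uint32_t requested, uint32_t *out)
{
    if (requested > MAX_SANE)
        return -1;      /* refuse: no allocation, no wrap, no swap storm */
    *out = requested;
    return 0;
}
```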


Some of Red Hat's dual upstream/downstream maintainers possessively try to take over the upstream projects while not contributing very important changes.

Others are great human beings.

But on the whole, Red Hat skepticism is healthy (this also applies to the NIH software that they push onto most Linux distributions).


> NIH software that they push onto most Linux distributions

This is likely off topic, but can you expand on that if you have a moment? Thanks.


Before systemd, there was pulseaudio.


Systemd comes to mind.


> the source out it there

[Typo: "out it there" --> "is out there"]


That sounds really awful, and it's really disheartening to see the (ex)maintainer of such a core piece of Linux distributions being treated that way.

I work for a Linux distribution (SUSE) and I assure you that we don't all act this way. I've had to carry my fair share of downstream patches in openSUSE/SUSE packages, but I always make sure to submit the patches upstream in parallel. Quite a few people I know from Red Hat (and other distributions like Debian, Ubuntu, etc) do the same. It's possible that times have changed since then, or that it depends which package maintainers you are dealing with, but I hope it hasn't soured your opinion of all Linux distribution maintainers.

One thing that is a constant problem is that users of distributions keep submitting bugs to upstream bug-trackers. If there was one thing I wish I could change about user behaviour, it would be this. Aside from the fact that the bug might be in a downstream patch (resulting in needless spam of upstream), package maintainers are also usually better bug reporters than users because they are more familiar with the project and probably are already upstream contributors.


> If there was one thing I wish I could change about user behaviour, it would be this.

Speaking from a user perspective, I don't think that would be wise. If distributions just stopped feature patching, things would be a lot simpler.

I mean, if a feature is good, everyone should have it, not just the users of one distribution. So please submit the patch upstream, discuss it and wait for the next release. That way, there is no real problem when users submit to upstream bug-trackers. The only exception is (time-critical) security patching. But those patches should be removed again as soon as upstream solves the issue.

The scenario you are wishing for looks like an obvious solution to the issues you are having but will just make everything worse in the long run (slow and fragmented).


I am an upstream maintainer of quite a few projects as well as a downstream maintainer of quite a few packages, so I've seen this from both sides of the exchange.

Sometimes the user-facing issue isn't an actual patch, it's the default configuration (which might be distribution-specific) or any other host of non-code changes (default security policy and so on). These things should be reported to the distribution, but often get reported upstream which acts as spam.

And note that most downstream patches are backported bugfixes, not feature patches -- in my experience those are the exception.

> So please submit the patch upstream, discuss it and wait for the next release.

What if you're being paid for support and a customer needs a fix which you need to backport (it's not a CVE but it's a serious issue)? This is the problem we find ourselves in very often, and is why we push patches upstream but also patch our downstream packages so that the issue is resolved for our users. Some projects have release life-cycles that are ~6 months apart -- how do you explain to users that they need to wait 6 months in order for a fix which is already written and merged to be shipped to them? If we only ship it to those customers then it's not fair to the rest of our users nor the rest of the community (this is why SUSE has a Factory-first policy where all SLE changes must land in openSUSE first).

And finally (in rare cases) the upstream might not accept patches that are required in order for some distribution features to work because they fundamentally disagree with the feature. What are we supposed to do in those cases? Either way, someone will complain because the best solution (merge it upstream) is closed off.

While it would be ideal if downstream patching was unnecessary, that's not the world we live in. Again, I do submit patches upstream religiously -- but it's not as cut-and-dry as you might think.


How about using some of that revenue to pay the project maintainer to incorporate the patch upstream and make a release?

I've seen this very often in my personal experience: a company says exactly what you're saying now, "We have paying customers who demand certain patches, but the upstream project may be unable or unwilling to patch and release...so we downstream only." Alternatively, some downstream consumers simply "throw code over the wall" in the form of an upstream patch. However, those patches are sometimes duct-tape solutions which may not fit into the overall architecture/vision of the project maintainer(s). It's not fair to say, "accept this or else..." where the 'or else' is a downstream deviation (which in turn sometimes forces the upstream's hand).

The ethical way to do this is to work with upstream, whether through direct compensation or more back-and-forth rather than code-over-the-wall.


I think it's incredibly unfair to assume that I'm not acting in good faith when I send patches upstream. I was contributing to the free software community long before I was getting paid to do so. I spend almost all of my working hours doing upstream maintainership work or writing patches. I have PRs that are several years old, and I am still working on getting them merged upstream. I'm also just a developer; I'm not in a position to start spending money on projects on behalf of my employer. I do happen to donate to plenty of upstream projects and foundations, but that's money out of my own pocket.

The issue is that there are some (rare) cases where upstream is completely unwilling to merge a patch for philosophical reasons.

If you want me to be blunt -- the example I'm thinking of is Docker (which doesn't have a cash-flow problem), and has refused outright on many occasions to merge patches that allow for build-time secrets. On SLE we need this because in order to use a SLE image on a SLE machine you need to "register it" and the only way to sanely register it automatically is to bind-mount some files into all containers on-build as well as on-run (which you cannot do with upstream Docker today). Red Hat spent a long time trying to get a patch like this upstream, as did we.


To be clear, I'm not saying you personally. I don't know you, or anything about your work, so I'm speaking purely in a general sense about a position that you were representing.

I don't think all upstream contributions are in bad faith. I'm just saying there tends to be competing priorities which leads to some instances of hostility.

As for the Docker example, I don't know what the right answer is without digging deeper. My naive thought is to write a SLE/RHEL shim/plugin style component to allow functionality that's missing. This allows keeping the upstream vanilla, while not having to fork into something without the brand recognition. If that doesn't work, forking as `rhel-docker` or `sle-docker` doesn't seem that bad to me. Ubuntu does this with all the bcc-tools.

This is of course predicated on having tried all the previous solutions first (paying an upstream developer, working with upstream in a good back-and-forth to incorporate a patch, etc). In the end, if the project decides something is against their philosophical viewpoint, they're perfectly entitled to not accept patches. At that point, I don't think the best solution is to fork, downstream patch, and release as if it's the vanilla upstream.


All these problems are very easy to solve. Suse and Red Hat could stop trying to have it both ways: either you get the flexibility of a fork, or you get the brand recognition of the upstream project. But you want both - the upstream brand to get users, and the downstream flexibility so that you can “differentiate” and sell support.

This problem is entirely created by distributions and their “packagers know best” philosophy.

You say downstream patching is necessary. Go ahead and strip the upstream trademark from every package which you patch downstream, replace it with new Suse-specific trademarks, and let’s see what users prefer!


This isn't a SUSE or Red Hat caused issue, it's a user thing. Users want it both ways. Users want the flexibility to assume a project is the same as the upstream (but maybe feature-frozen at a specific version), while also wanting all the security and bugs to not exist. Users are paying to have it both ways, and whether or not it's feasible, the market for it exists, and SUSE/Red Hat are just trying to fill the gap.

If it was entirely the vendors, then we would see significantly more adoption in fast moving OSes, but that's still not a very popular model for production servers, even in the age of the cloud, containers, etc.


Fork the project, prefix the command with your company name and tell them to use that command with the patch applied if they need it today. This solves all your problems.


I think you're over-estimating how many users are willing to change their workflow or scripts to accommodate things like this. They will switch away from your distribution to one which patches the actual package.

I also don't agree this would result in fewer upstream bug reports -- "suse-foobar" will still result in bug reports for "foobar" (I've seen some cases of this happening). You'd need to rename the project entirely so that users don't know what the upstream GitHub repo is, and that's even more anti-community than any other suggestion.


> and that's even more anti-community than any other suggestion.

Shipping a fork that is different from the version that is created by the original author is also very anti-community. At least make it clear that you ship something different from the program that the author maintains.


The key phrase was "even more".

I disagree that all patches are somehow ethically wrong (bugfixes and security patches are obvious counterexamples). Not to mention that if the author felt otherwise, they wouldn't have released the code under a license that allowed you to modify and redistribute your modifications.

But making massive changes to a project that are incompatible with upstream is definitely not a good thing to do without reason.


> I disagree that all patches are somehow ethically wrong (bugfixes and security patches are obvious counterexamples). Not to mention that if the author felt otherwise, they wouldn't have released the code under a license that allowed you to modify and redistribute your modifications.

The fact that the author does not forbid it does not mean that he/she wants changes or even encourages them. It just means that the author believes that the downsides of completely disallowing changes are even worse.


There are free software licenses that require you to rename the project if you modify it.

But I digress. This whole discussion is about trade-offs -- if you cannot get the patch upstream but you need to ship it what is the next best thing. I would contend that patching it is better than patching and renaming because renaming doesn't help solve the problem (unless you are very radical and rename the project entirely) and makes things less convenient for users.

And note that distribution users are part of the community of people using the software.


The solution is to either 1) patch it and rename your fork completely, or 2) ship upstream unmodified and use the same contribution process as everyone else to get patches in.

That is how everyone else does it. Only distributions are somehow exempt from fork etiquette. Hold yourself to the same standards as everyone else, and the problem goes away.


I disagree that everyone else does this except distributions.

Companies apply their own patches to projects all the time (as an upstream maintainer I've been asked several times to help debug a patch that some company has used internally). Almost every company using Linux has patches on top of it that are for their specific project (all versions of Android have a forked Linux kernel). GitHub uses patched versions of Git (though one of their engineers is also incredibly prolific upstream). And so on.

The reason why people think distributions are the only ones doing it is because we maintain all of the software that is available for a full Linux system. So instead of only having patches for just one or two projects, we have patches for (probably) ~50% of packages in our distribution (most are bugfixes but there are plenty of not-just-bugfix examples). I think some folks just like to bash distributions because no matter what decisions we make we're going to piss someone off.

But again, we don't apply downstream patches because we like it. In fact downstream patches are an outright headache because we have to rebase them on version upgrades and so on.


Of course companies patch open-source software for their own use. But they either don’t distribute it, or they do so under a different trademark.

Just to focus on your own examples:

- Github patches Linux for their own private use. They do not distribute any Linux derivatives, and they don’t profit from the Linux trademark.

- Android does distribute a Linux derivative, and it is heavily patched, but it is distributed and marketed under the trademark “Android”. Google does not profit from the Linux trademark.

And that’s the difference. People don’t buy Android phones because they’re running Linux. But they buy Suse and Red Hat distributions specifically to get Linux.

So Suse and Red Hat are the only businesses which I know of, that are allowed to fork upstream software, modify it aggressively, and still profit from the upstream trademark.


The vast majority of free software projects do not have a registered trademark. In cases where a free software project does have a trademark, distributions usually will rename the package (distributions do have lawyers and they will usually kick up a fuss in cases like this).

The case of the Linux mark is really weird, because basically all distributions are given license to use it but almost everyone still specifies that the trademark is owned by Linus.

(Also my example for GitHub was their fork of git, not Linux.)


Didn’t you mention elsewhere in this thread that Suse and Red Hat patched Docker, not just to backport fixes but to add features which upstream explicitly didn’t want merged? Surely Docker has a registered trademark. So following your reasoning, Suse and Red Hat should have stopped using the Docker trademark. Yet they didn’t. That example seems to contradict your argument that distributions are very careful to respect registered trademarks, while considering unregistered trademarks to be basically a free-for-all.


This is meant to be temporary. If it is truly crucial they will update happily. If it's not, they can wait. Submit the patch upstream; if it is accepted you can then deprecate your code and create an alias, and if it's not, congratulations, you get to maintain a fork.

This changes the culture from you clobbering the original package maintainer, to one where you can adapt when necessary but are still a good community member. The "foobar" people can point people to use "suse-foobar" as a solution until everything has been resolved.

It's not the number of bug reports that matter, it's that you can easily and quickly come to a speedy resolution as an end user.


This is also how non-standard and standards-track CSS features work. Vendors promote prefixed properties like -webkit-marquee-rainbow-animation-spectrum while waiting for all the browser vendors to agree on the final spec for marquee-rainbow-animation-spectrum.

For Unix tools, downstream vendors could choose to rename the binary (redhat-procps) or rename the flags (procps -- rh-c) if it wasn't incredibly urgent to choose a single-letter flag name.

https://developer.mozilla.org/en-US/docs/Glossary/Vendor_Pre...


How happy would you be to use a system where 50-75% of commands start with "debian-" or "ubuntu-" or "redhat-" or "suse-" (assuming that all patches have to obey this renaming rule)? As a user, I personally wouldn't want to use a system like that and I would switch distributions to one where I don't have to repetitively type out the name of my distribution into a shell.


I think you've completely missed the point. It should be very rare to need to ship a fork of a project to end users. Patches should first be targeted at the upstream project, but in the rare case where it needs to be used today you should follow this path instead of messing up the API. By doing this you can explicitly set expectations that the changes you are making are temporary and will be deprecated immediately once you get back in sync with master.

As an end user this is ideal, I can get a fix shipped today with the tradeoff that I will have to do a bit of maintenance in the future. Any other way leads to long waits, or chaos.


If you would personally not use a system like this and would prefer one that didn't rename things, even if they were missing the patches, I think that goes to show that the patches are not that valuable.

I'm not an open source developer, but it seems like a good solution is for the original publisher of the package to maintain their vision, take whatever feedback they deem useful, and ignore what they don't feel is useful. If Red Hat or the other distributions want to keep maintaining their patches, let them; that's what they're being paid for. If it ends up fragmenting the Linux ecosystem, which IMO it does, then the distributions should do more introspection and cooperate more to reduce fragmentation.

While distributions give variety and diversity - sometimes a good thing - I would love it if Linus would get all distributions in a room and force them to agree on a whole set of issues to eliminate silly differences between distributions. And if they don't/won't agree, they don't get to use the Linux trademark.

It's human nature to think your way of doing things is best. But I'm not sure the multitude of idiosyncratic differences between distributions is really advantageous to users. It does lock users into a distribution, because as you said, who wants to go through all their scripts and rename every instance of ps to rh-ps, then go back and rename everything again when the patch is accepted.

I do think the idea of paying the original maintainers (from the company, not from you personally) has a lot of merit. After all, that's where the stuff was born; it's Red Hat's "raw materials" supply.


Are there any distros with a strict upstream-only policy?


Fedora comes very close. But I'm biased, as a user of (and contributor to) it for about ten years.

Don't take my word for it, read the related documentation[1]:

"The Fedora Project focuses, as much as possible, on not deviating from upstream in the software it includes in the repository. The following guidelines are a general set of best practices, and provide reasons why this is a good idea, tips for sending your patches upstream, and potential exceptions Fedora might make. The primary goal is to share the benefits of a common codebase for end users and developers while simultaneously reducing unnecessary maintenance efforts."

The linked[1] documentation also answers, with specific examples, the question of: "What are deviations from upstream?"

[1] https://fedoraproject.org/wiki/Staying_close_to_upstream_pro...


I doubt it; it's unreasonable. "Upstream first, patch it if that fails" is good enough. There are widely used packages that have not had a release in 5 years, and distros have to carry around patches just to make them compile with OpenSSL 1.1, or fix known CVEs, or whatever.


This is the correct answer.

Distributions don't apply patches just for the sake of it or because we enjoy it -- it makes packaging more annoying for us when we have to rebase patches each version bump. But sometimes it's necessary and it would be a silly limitation to not allow yourself to patch software which is under a license that explicitly gives you permission to modify it.

I would argue any rolling-release or bleeding-edge distribution is pretty much the closest you'll get to "pure upstream". Stable distributions have more patches by necessity, and enterprise distributions even more so.


https://www.archlinux.org/about/

> Arch [Linux] strives to keep its packages as close to the original upstream software as possible. Patches are applied only when necessary to ensure an application compiles and runs correctly with the other packages installed on an up-to-date Arch system.


Slackware prefers not to make changes to the upstream version.


Specifically, Slackware patches to get software to build, because often software will work for the original developer, but not with a different set of dependencies. Otherwise everything else stays default. I think this is the closest anyone can reasonably get to strict-upstream and still have stable releases.


As the parent of a startup and one singular demanding child, I have to ask... 10?! Wow! Never mind why, I want to know how you get anything done. Do you have staff? Do you have personal time to yourself, or with your wife? How many soccer games do you attend in a given week? How many bedrooms does your house have? How many of your kids are turning out to be programmers?

Forget maintaining software, I want to know how you maintain your existence. I don't think I could survive.


I only have four small kids, but you're approaching this from the single kid direction.

Once you have two, you stop caring as much about the first precious one - because you have two precious ones. So sometimes one has to wait.

Repeat that a few times, and you'll arrive at your answer.

With more kids, it becomes clear that there are things you simply cannot do, e.g. own an 11-bedroom house or drive every single kid to soccer. So the kids will have to do something else. Like playing with each other.

That's just the way it is.

There's virtually no free time, though. :)


Ironically, we get a whole lot of free time when we invite the kids' friends over. They disappear somewhere and play for hours and hours.


After they learn to walk and talk, you don't have less time with two kids, especially if the age difference is small enough for them to play with each other. A single kid has only his parents to interact with.

At ten kids I guess you have a - somewhat misbehaving - staff. Half of the kids are capable of helping out with the other half or with other chores around the house. All they need is a manager :)

I cannot say much about personal time, though - I'm still in the 'learn to walk and talk' phase, and with 3 toddlers in a small house there is virtually none. However, I read on reddit that after 20 years every kid will be at some kind of university and I will finally have free time for playing games and talking to my wife. That sounds nice :)


I also want to know how he manages to do any work with 5 or even 10 kids.

I realised I no longer had any spare time once I had my first daughter. Then we had another and I realised I must have had so much spare time with just the one kid...

How lots of my family and friends manage with 3 or more kids, I don't know, as you are then outnumbered.

5 or more seems impossible. You must either be a super-efficient drill sergeant, or the opposite: super relaxed, letting the kids sort themselves out. Or rich, delegating it to nannies/au pairs etc.

I do appreciate now, having two kids, that they mostly spend their time at home playing with each other, as opposed to when we had just the one and always had to play with them. I guess that scales well with more kids.

But I still do feel guilty that I am not always joining in. And have to be more selective of which school performance etc either one of us can attend. I am not sure my conscience could handle missing out on lots of these with more kids by allocating much less of my full time to each kid.


From zero to one child the change in regular life is astounding. You become a parent, and in the process you lose a lot of your free time and all spontaneity from your life. From one to two children, you will notice that you had some free time left with one, but now you really _really_ don't have any more. No more evening movies or hour-long coffees with your wife. You wonder what childless people even do with the ultra metric shitton of time they have, as you cannot imagine it anymore. You simply don't have enough time for two kids, and you have to learn to manage and optimize what you have to do the best you can.

From two to three children... nothing really changes anymore! You won't get more responsibilities, as you already have them, and no less time, as you already don't have any :)


I got a cinematic education from my dad, one movie a week on Saturday night. Children can be involved in adult activities.


> I do appreciate now, having two kids, that they mostly spend their time at home playing with each other, as opposed to when we had just the one and always had to play with them. I guess that scales well with more kids.

The traditional way is to delegate age-appropriate amount of caring for the younger children to the older children.


With 5 kids myself, this is indeed true. Delegation is important and also a "zone defense" became really important. "man to man" was out of the question.

But if you are already making 2 school lunches, how much longer does making 5 really take? You are already making breakfast, lunch, and dinner each day; how hard is it really to add more food? There is cost involved, but making the meal isn't that much harder, at least for us.


It's 10 to 13 kids, depending on how you count. (adult dependent, unborn, miscarriage)

It's just my wife and I, her staying home, without staff or personal time. We homeschool them until they can get a 3 or better for AP Chemistry or AP Biology. My kids get funny looks going in for those tests at age 10 to 12. After that, as early as 6th grade, they can start dual-enrollment at the local college. It's free until high school graduation, which is sometimes enough time to get an AA degree. We don't do organized sports, but they all have unicycles. There are some organized activities to attend: scouting (BSA and AHG), a homeschooling group that does history/gym/art/writing together, a free band day camp in summer, and a big road trip every few years.

I only have 4 bedrooms. I ended up putting the kids all in a much bigger room, leaving the bedrooms for other purposes. More interesting is the "car", with 5 rows of seats. It is 3 tons empty, 5 tons full. We go through 2 gallons of milk per day. I can spend $1000 on 3 or 4 carts of groceries. We can finish a pair of chickens or a mid-size turkey in one sitting.

I haven't had much luck turning kids into programmers. At one point I got several to enjoy Scratch, but then I found that the computers were severely abused to waste time on junk like the "Tanki Online" video game and the "Annoying Orange" videos. I had to take away the computers. Recently kid #2, age 17, decided on a career. I've mostly taught him C now. He does that on Ubuntu and for his TI-84 Plus CE. I think this has created a programming aversion in some other kids, because they saw how excited I got to finally be teaching C and didn't want to spend all their time with me. For the oldest 5, the career choices are unknown (hurry up...), programmer, lawyer (physics undergrad), unknown, and midwife. BTW, programming the TI-84 Plus CE in C is pretty wild. You get a 24-bit int.

It helps to be close to work. I'm less than a mile away, 3 minutes by car or maybe 16 if walking. It helps to work only 40 hours per week, with an extremely flexible schedule.

Right now the main source of stress is kids wandering away from homework and chores. I don't want to have to stand over them in one room, waiting and watching as they work. I want to go do other things.


> I have to ask... 10?!

> How many soccer games do you attend in a given week?

One: souprock's kids + one friend vs. some other team.


I know a lot of people who come from huge families, and from what their parents tell me, when you have more kids they are a bit easier to handle because they sort of take care of themselves. And of course you don't drive them to 10 football games every weekend, because you're not made of time :)


It's true. It's not possible to completely structure the time of so many kids, to helicopter over them all, to lawnmower for everyone.

They just have to grow up the time-tested traditional way through play, discovery, and experience. Shame really, brings a tiny tear to my eye.


He/she posted to another thread yesterday that they had 12.


He/she posted that they had "a dozen kids", could just be exaggerating a bit.


I don't know how to count. You try:

10 dependents who are legal minors (includes a set of twins)

1 adult dependent who ought to move out soon

1 miscarriage

1 unborn

There are eight possible ways to count that up. The resulting sum is 10, 11, 12, or 13.


That's 11. I don't think the miscarriage counts, and the unborn won't be a kid until it's born.


Busted


One of the things I really like about Arch Linux is their policy of not making unrequired changes to upstream software. The only changes usually made are those necessary to keep the software compatible with the other software on a rolling release system, and these changes are usually temporary.


I love this about Arch. It gives everything a very vanilla, organic feel and you never get caught off guard by some distro specific change.


Ditto. Arch has the right attitude. I still have a bitter taste from a 0-day hack of one of my Debian servers, due to what I believed was a Debian-only patch. I wish more distros followed the Arch approach. Red Hat is particularly bad, I think - I have had problems with the CentOS frankenkernel that I could never reproduce on other systems.


So, maybe this is not quite the same thing, but wasn't it Arch (and only Arch?) that made python point to python3? And in doing so went against what the upstream Python project explicitly specified to do? I remember dealing with a few users who ran into this.


I believe they made this change before the official advice was contrary to it.

Also, if you download and compile Python from python.org, isn't it also just 'python' that you get as your executable? If so, Arch is just following upstream, and the Python devs should follow their own advice.

Arch doesn't seem to be patching this, though you can see in the PKGBUILD that Arch explicitly creates 'python3' as a link to 'python' (with a comment lamenting that this isn't done by default):

https://git.archlinux.org/svntogit/packages.git/tree/trunk?h...


> I believe they made this change before the official advice was contrary to it.

Arch: ~2010. PEP: ~2011-2012.

https://bugs.archlinux.org/task/28639


Off topic but I have to say as an occasional dabbler this is something that drives me crazy whenever I take a look. So much talk of Python 2 being obsolete and irrelevant to new devs. Yet the command 'python' points to Python 2. Seriously?


From PEP 394:

"in preparation for an eventual change in the default version of Python, Python 2 only scripts should either be updated to be source compatible with Python 3 or else to use python2 in the shebang line."

So it will change eventually. But why rush to break things?


Similar story here. And it's not just distributions. There are web dev shops out there that edit OSS to add some functionality without even bothering to change the version or the user-facing information. It makes for plenty of head-scratchers when their customers report a bug in a piece of software you wrote.


I'm very happy with downstream patches. They come 90% from Red Hat, rarely from Debian or SUSE. The Red Hat patches are usually 100x better than other contributors' patches. Debian is usually the worst.

When I maintained ~250 cygwin packages I acted as a downstream too, and had to deal with upstream maintainers. Usually a horror, maybe because I came from cygwin, which was a crazy hack. Only postgresql said "such a nice port". For perl, gcc, python,... you were just a crazy one to be ignored.


> Only postgresql said "such a nice port".

It is a crazy port ;). But the native windows port is much crazier :)

/me wishes for a fountain of time to change postgres' threading model.


> Debian is usually the worst.

Can you elaborate?


Haha! What are the odds? Maintainer of ps spawns too many subprocesses, runs up against parallelism... I think it's the Unix story in a nutshell.


I've been packaging for Debian for a decade, and I've run into a lot of unfriendly upstreams that would refuse to make any change to allow packaging, but I've never seen other Debian Developers treating upstream developers poorly.


> He missed a big one: you have no way to stop Linux distributions from hacking up your software, and you'll suffer the consequences of whatever they do.

I remember the frustration when I found that NixOS maintainers downright crippled certain build systems to force people to use Nix...


Who's forcing you to use Nix?

Their goal was probably to get the thing building within the constraints of their own time, and the constraints of the Nix ecosystem.

You can choose whether you want to invest the time to fix those build scripts and re-enable the other build systems, or not -- totally up to you. But it's uncharitable to suggest that the Nix maintainers have some kind of secret agenda.


Yes, they do have an agenda, because how else do you explain patches like this:

https://github.com/NixOS/nixpkgs/blob/1137200d6b7fcf8fc401b5...

Note that they didn't fork the project under some different name, they shipped it like this under the original name.


Of course they have an agenda. Nix is one of the most opinionated Linux distros in existence. They have an absolute insistence on reproducible builds.

What I said was that they didn't have a secret agenda. They are completely up front about reproducibility. Code that violates this principle needs to be fixed (or disabled, as in this Rebar example).

If that doesn't meet your needs, then just move on. Nix isn't a distro that will please everyone.


rebar3 itself supports reproducible builds via lock files. Applying patches like this instead of embracing the right way that the upstream designed for this exact purpose is just ignorance.

I would be totally cool with any incompatible patches that a distro makes for its needs, as long as the binary is renamed. But they shipped a broken binary and called it rebar3. Not "rebar3-nix", but "rebar3". People try using this binary for development and it doesn't work; they go to the upstream, and the upstream spends time investigating someone else's hacks.


That's a fair point.


Dude, Debian does exactly the same thing to get reproducible builds. They generally submit things upstream; I'm not aware whether Nix attempts the same, but I can also imagine their patches for reproducible builds might not make sense outside of Nix. This particular patch is done differently than how Debian attempts it (they try to make no changes to the code if possible), but this patch doesn't seem like a bad approach either (it makes things obvious, at least).


rebar3 itself supports reproducible builds, so applying patches like that and breaking it for anyone who uses it for local development is a problem.


Wow. Are you the Albert mentioned in the man page? Seems like some unneeded snark after your entry. I think I would have left too.

"Albert rewrote ps for full Unix98 and BSD support, along with some ugly hacks for obsolete and foreign syntax."


I am, but I wrote that.

Something like "ps -axu" will fail the UNIX/POSIX/SysV parsing, then restart in a pure BSD mode with the "-" ignored. Something like "ps -aux" will too, unless there is a user named "x". There is also a PS_PERSONALITY environment variable to force the parser to act in a particular way.

It was needed to transition people over to a standards-compliant syntax. I couldn't support all the old syntax. Prior to 1997, something like "ps -ef" would parse as "ps ef" does today, which is not standards-compliant. People were unwilling to just switch over without the compatibility hacks. There were instances of things like "ps -aux" all over the place, including in people's private shell scripts. People couldn't resist typing it.

I got pushback just for including the warning on stderr when somebody caused the parser to kick into compatibility mode.


More of a general question: is there any license or addendum clause which could have remedied this? Linux distribution creators love adhering to specific license requirements. Would a choice between distributing vanilla software vs not distributing have been more appealing?


That would make the program not free/open source (e.g., CC-BY-ND is not considered free), so free-only distros such as Fedora would simply not have considered it.


Trademarks can control this while remaining open source. If you ship an official package it's Firefox™ but if you ship a hacked-up version it's Iceweasel. Of course everyone will tell you you're evil if you try to do this.


Without trying to minimize this experience, I can't help but wonder if newer tools and techniques have mitigated some of this problem. For example GitHub pull requests make it much easier to collaborate on one version rather than an upstream, downstream relationship.


Crazy how social relationships leak back into the open source concept.

Sorry you had to endure all this... not fun.


People seem to have a serious lack of understanding when making requests of others.

- When you send an E-mail, at least imagine that the recipient may have literally hundreds of other messages to dredge through and the time required to respond (and detail of response) may reflect that. Not personal, don’t get mad at them.

- When you send an “instant message”, it may be instant for you but for all you know the recipient is deep in the middle of something and won’t respond for awhile. Not personal, don’t get mad at them.

- The recipient may be on the other side of the planet. Err on the side of extra information so you don’t wait days for a response that just has to ask you for more.

- When you file a bug, “thank you but do some homework”. A person dealing with 1000 other things will not have time to hand-hold you through all the things you’re not telling them yet. Be precise and complete. Be reasonable about when/if to expect a fix.

- And for that matter, in retail, or traffic, or 100 other things in life, you don’t know as much as you think you do about what other people are dealing with. Stop for a second. Imagine their situation. That person not instantly serving you and only you has a dozen other things going on.


> - When you send an “instant message”, it may be instant for you but for all you know the recipient is deep in the middle of something and won’t respond for awhile. Not personal, don’t get mad at them.

People don't get it. If they send a DM / IM or use any program which is considered a "chat" app, they expect instant replies.

I run a single-person software company. I used to allow people to contact me for support via any and all chat programs. So for paying customers you can magnify that "expect an instant reply" factor. People would get quite annoyed, regardless of whether it was Sunday, or 3am for me.

For that reason, only email and forum support is now offered. Never had a complaint since about wait times. People expect to wait after sending an email. They expect to wait for a forum post. They do not expect to wait for a chat / IM / DM response.


>When you send an E-mail, at least imagine that the recipient may have literally hundreds of other messages to dredge through and the time required to respond (and detail of response) may reflect that. Not personal, don’t get mad at them.

It's refreshing to see someone on HN say this. If you look at my comment history, you'll find a number of people replying to say it is incredibly rude not to answer someone's email promptly. My stance over the last few years has become: if anyone (including automated services) can put an email in my inbox, I am not obligated to read it. Until I can get a reliable heuristic that distinguishes well-thought-out emails from the crud, I don't feel obligated to spend my limited time tending to my inbox. The only realistic solution is to have sending emails incur a cost, in order to curb the quantity of emails. Your sending me an email doesn't obligate me to read or respond to it.

I'm not an open source maintainer who gets a ton of support emails. Yet if I feel my stance is necessary for my sanity, imagine how much worse it is for those folks.

For IM's, at work, my status says something like: "If you're physically in the building, come talk to me in person if urgent or send me an email if not. If you are remote, email me if not urgent, or let's set up a voice chat if urgent."

The only useful thing about IM at the work place is in things like active debugging, or a coworker in an adjacent cubicle needing to send me a link related to a live conversation we're having. Other than those, it becomes a conversation that drags out and never ends. They'll often wait multiple minutes before responding to me (they are the ones who initiated the IM). Having IM windows open and randomly flashing takes up my mental space and distracts.

(Note: We do not use Slack).

(Note 2: My saying "send email" contradicts with my first paragraph, which I wrote with my personal email in mind. The SNR is much higher for my work email).


I agree with a lot of these.

One thing I've noticed in life is that we use asynchronous communication methods in much more of a "respond now" fashion.

Also, as a contributor to Stack Overflow, your point about asking questions with the right amount of information is really spot on.


I don't maintain anything as big as Redis, but I've faced many similar problems all the same and I think I have an approach which makes it palatable. I wrote about my approach at length here:

https://drewdevault.com/2018/06/01/How-I-maintain-FOSS-proje...

But the main thing is that almost all bug reports, feature requests, and so on, get sent to /dev/null. Users who care about a problem are expected to work on that problem themselves. In the case of software like Redis, pretty much everyone reporting a bug is also qualified to fix that bug, so it works particularly well.

Then I focus only on helping new contributors get their bearings and making regular contributors happy and comfortable with their work on the project. So far this approach has been very successful for me - I don't get burned out, and neither do my contributors, and we have happy, healthy communities where people work at a pace which suits them best and aren't stressed or overwhelmed.

Sure, lots of feature requests and bug reports get neglected, but I think the net result is still a very positive impact on the project. The occasional drive-by bug submitter provides far less value to the project than someone who writes even one patch. Focusing on keeping the people who create the most value happy makes for a more productive project and a better end result. Some people are put out by the fact that their bug report, feature request, etc. goes unanswered, but I can quickly put the guilt out of my mind by reminding myself that ignoring them benefits the project. And in practice, I generally have time to give people some words of encouragement and a nudge in the right direction towards writing a patch without burning myself out.


>In the case of software like Redis, pretty much everyone reporting a bug is also qualified to fix that bug, so it works particularly well.

There are a good bunch of people who have never written any C, and even more who have no idea about the internals of Redis, yet they are Redis users and they do find bugs. I wouldn't expect those users to write a patch for every bug they find, to be honest.


I think the idea is that, given enough users, the bugs that will be encountered by non-fixers will also be encountered by fixers, so they'll get fixed; otherwise they are so rare as to not be worth the effort of wading through useless non-fixer reports to find the useful ones.

I'm not sure how I feel about that, TBH. Personally I'd like to know about bugs even if they do not come with patches, but at the same time I understand not wanting to be bothered by wading through useless reports.

Perhaps there should be end-user-oriented bug trackers that look more like a hybrid between a forum and a reddit/HN post with voting (no downvotes though, kinda like GOG's wishlists), so that developers have a rough idea where to focus next, and "locked" (but perhaps visible, at least for FOSS projects) traditional bug trackers where end-user entries migrate (and are linked to) once developers decide something will be worked on.

It won't solve all issues, especially with people potentially going "why aren't you fixing that bug-but-really-a-feature-that-requires-rearchitecting-the-entire-thing that has 897482959204 votes and instead working on whatever-else that Nobody Asked For(tm)?", but I guess you can't fix entitlement with technical means.


All properly filed bug reports are useful.

This mentality that only writing code matters needs to stop. Open source projects (or any projects, for that matter) are way more than just writing code, and the people doing the "other" work need to be recognized and encouraged to continue it.

The lack of such people is exactly the major cause of burnout from maintainers who just want to code.

Not replying to all your points directly, just making a general statement about this whole thread.


If you aren't replying to my points why are you making a reply to my comment?

Yes, all properly filed bug reports are useful. The important part there is the word "properly", though, and on a popular project they can be hard to find. What is worse, this incentivizes people to file the same or similar bugs twice (or more) since they can't find the duplicate (in which case, properly or not doesn't matter; you have already demonstrated your belief that your time is more important than that of whoever will have to wade through the bugs to find your duplicate).

Also while code isn't the only thing that matters, it is the thing that matters the most - the software doesn't exist without code, but it can exist without everything else - and the programmers who write the code are those who make the final decision on what ends up in the software (assuming FOSS made by volunteers here, of course, not FOSS or closed source software made by employees - although even in that case, often the developer who writes the code is the one making the decision anyway).


It's always an option to go the proprietary route and release your code as shareware. It worked for a long time. Why bother to release the source if it's such a hassle?


Sadly it worked until users were taught to expect everything for free (see how every single time someone posts something here or on Reddit that costs money, there is at least one post - often more - with free "alternatives" upvoted near the top) and until other developers started spreading FUD against you if you do not release the code - even if you give your software away for free.

Of course if you are going to receive harassment either way, might as well get paid for it.


Anyone who can write code can learn C, and be proficient enough to fix their bug in a few hours - or at least proficient enough to ask the right questions. The biggest barrier is unwillingness to try.

I've seen this played out a dozen times. My projects are often many people's first exposure to C, ever.


I'm interested in your approach here, because I generally disagree with ignoring bug submissions from "drive-bys".

As a developer, I might have the time to investigate and submit a bug for your code, but at the end of the day, it's probably just one more bug among the 100 other pieces of code preventing me from writing the code I need to.

Getting familiar with the code of a project you're using, to the extent that you can submit a reasonable PR, is a not-insignificant time commitment.

I've found a variety of attitudes from the upstreams I've reported to, ranging from "you're wrong" to the bug being fixed and pushed to master within an hour.

I've formed an opinion, amongst the projects I work with, about whom I should give more of my (professional) time to, that usually begins with how receptive they are to a perfectly good bug report which I don't have the (professional) time to fix.

The ones that have proven "pleasant" to work with have made it much easier for me to convince my bosses that their money is well spent giving back.


So that is why so many OSS projects are full of bugs and horrible usability. In the past, I have tried to help with clear, reproducible bug reports. Quite often I spent an hour tracking down a bug, writing up why it matters, and then got ignored or dismissed.

Those OSS projects that deliver a quality product, also took my bug reports seriously (and sometimes disagreed).


> So that is why so many OSS projects are full of bugs and horrible usability.

Possibly, but not necessarily. What is certain is that no developer who gives you free stuff has any obligation to fix bugs nor to submit themselves to any low-key bullying from users of said free stuff about how to spend their time and develop the software they decide to share with others.


I agree! But let’s not discount the value provided by bug reporters who give a superb description and excellent reproducibility. They just wrote a codeable test case, which is valuable free stuff, i.e. like for like.


TL;DR:

FOSS zealots: everyone should use FOSS! Micro$oft is evil!

Also FOSS zealots: Oh, you have a bug? Screw you. You're getting it for free. Stop complaining. I'm busy making a new icon.

OSS is great. But until this attitude is killed, it will continue to hover just above "who gives a shit" for anything other than developers.


Your claim is common, but wrong. Approximately none of the major technology companies do more than rudimentary call-center service for bug reports from consumers.

If anything, the entitlement you express is counterproductive, because sometimes people like you convince open-source developers that their time is better spent catering to your sense of entitlement instead of their software.


My claim is common because it's the truth.

Source: I am a maintainer of several projects. Some that are used extensively. And I take bug reports seriously, even if I can't always get to them right away.

"it's free, go away" is a real problem. Denying it won't make anything any better or help FOSS get adopted for anything other than servers.

I have experienced it many times before, and almost every time I've gone back to proprietary software and breathed a sigh of relief.


Your source is inapplicable. You are not a statistical universe.

Also, "it's free, go away" is a straw man, because the "go away" part is also nearly universal within proprietary software. Try opening a ticket regarding a bug in Word, or AutoCAD, or so on.

The real difference? Those closed products also have closed bug trackers, so you don't get to see all the times users were ignored or told to pound sand.

This "real problem" exists entirely in your head.


> "it's free, go away" is a real problem.

Characterizing this as a problem is like accepting free candies from someone and then complaining that you were not given more candies.


No, it's like telling the candy maker that their candy made them sick or otherwise left something to be desired. If the candy maker cares about making good candy, that can be useful information, even if they may not have time to do something about it. If the candy maker cares about making mediocre-to-bad candy freely available, then they will say "it's free, go away."


No, it is exactly as I wrote above; the idea isn't to complain back, it is that you get shit for free and have no standing to demand anything more than what you got. If free candy makes you sick, maybe next time do not accept free candy from strangers, hm?

(Of course I knew someone would try to reply with a "No, it is like <insert post ignoring the point here>" but decided to go with it anyway.)


Where did 'StaticRedux say anything about demanding anything?

It's perfectly possible to report a bug without demanding that it be fixed.


FWIW I am not a FOSS zealot. If anything, some of the software I find enjoyable to use is not FOSS (though sadly a lot of it is older software, largely because of user-hostile modern trends - see Electron, phone-home DRM, adware, etc. - but also because newer versions or alternatives simply have worse UX and/or are bloated almost to unusability), and really my comments come more from a "you are entitled to what you pay for" stance.


I certainly won't disagree with hostile modern software trends. I just think if FOSS maintainers are accepting the responsibility of maintaining the project, they should accept the responsibility of taking bug reports seriously. Everyone knows they are no fun. And everyone knows they aren't flashy. And the maintainers are more than welcome to ignore them. But in doing so, they can't complain about people not accepting consumer facing FOSS. It's a package deal.


People are not a hive mind, some developers may ignore bugs, others may complain about end user oriented FOSS acceptance and perhaps some do both, but chances are these two groups are separate (and in larger projects you may even have people under more than one of these groups).

Also unless someone has explicitly expressed they are taking such a responsibility you describe, such a responsibility only exists in some people's minds. A lot of programmers want to share free stuff they made and stop there.


If you do write a clear bug report, with a stack trace (with symbols) and reproduction steps and such, then a developer is much more likely to give your bug the time of day. In the end, though, a patch is worth a thousand tickets.


I found the writings of Peter Hintjens (creator of ZeroMQ) very inspiring, where he describes the merits of Optimistic merging (OM). Quoting at length from http://hintjens.com/blog:106:

Standard practice (Pessimistic Merging, or PM) is to wait until CI is done, then do a code review, then test the patch on a branch, and then provide feedback to the author. The author can then fix the patch and the test/review cycle starts again. At this stage the maintainer can (and often does) make value judgments such as "I don't like how you do this" or "this doesn't fit with our project vision."

In the worst case, patches can wait for weeks, or months, to be accepted. Or they are never accepted. Or, they are rejected with various excuses and argumentation.

PM is how most projects work, and I believe most projects get it wrong. Let me start by listing the problems PM creates:

    It tells new contributors, "guilty until proven innocent," which is a negative message that creates negative emotions. Contributors who feel unwelcome will always look for alternatives. Driving away contributors is bad. Making slow, quiet enemies is worse.

    It gives maintainers power over new contributors, which many maintainers abuse. This abuse can be subconscious. Yet it is widespread. Maintainers inherently strive to remain important in their project. If they can keep out potential competitors by delaying and blocking their patches, they will.

    It opens the door to discrimination. One can argue, a project belongs to its maintainers, so they can choose who they want to work with. My response is: projects that are not aggressively inclusive will die, and deserve to die.

    It slows down the learning cycle. Innovation demands rapid experiment-failure-success cycles. Someone identifies a problem or inefficiency in a product. Someone proposes a fix. The fix is tested and works or fails. We have learned something new. The faster this cycle happens, the faster and more accurately the project can move.

    It gives outsiders the chance to troll the project. It is as simple as raising an objection to a new patch. "I don't like this code." Discussions over details can use up much more effort than writing code. It is far cheaper to attack a patch than to make one. These economics favor the trolls and punish the honest contributors.

    It puts the burden of work on individual contributors, which is ironic and sad for open source. We want to work together yet we're told to fix our work alone.
Now let's see how this works when we use Optimistic Merge. To start with, understand that not all patches nor all contributors are the same. We see at least four main cases in our open source projects:

    Good contributors who know the rules and write excellent, perfect patches.
    Good contributors who make mistakes, and who write useful yet broken patches.
    Mediocre contributors who make patches that no-one notices or cares about.
    Trollish contributors who ignore the rules, and who write toxic patches.
PM assumes all patches are toxic until proven good (4). Whereas in reality most patches tend to be useful, and worth improving (2).

Let's see how each scenario works, with PM and OM:

    PM: depending on unspecified, arbitrary criteria, patch may be merged rapidly or slowly. At least sometimes, a good contributor will be left with bad feelings. OM: good contributors feel happy and appreciated, and continue to provide excellent patches until they are done using the project.
    PM: contributor retreats, fixes patch, comes back somewhat humiliated. OM: second contributor joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.
    PM: we get a flamewar and everyone wonders why the community is so hostile. OM: the mediocre contributor is largely ignored. If patch needs fixing, it'll happen rapidly. Contributor loses interest and eventually the patch is reverted.
    PM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through. OM: existing contributor immediately reverts the patch. There is no discussion. Troll may try again, and eventually may be banned. Toxic patches remain in git history forever.
In each case, OM has a better outcome than PM.

In the majority case (patches that need further work), Optimistic Merge creates the conditions for mentoring and coaching. And indeed this is what we see in ZeroMQ projects, and is one of the reasons they are such fun to work on.


Broken quotes reformatted for readability:

----

> It tells new contributors, "guilty until proven innocent," which is a negative message that creates negative emotions. Contributors who feel unwelcome will always look for alternatives. Driving away contributors is bad. Making slow, quiet enemies is worse.

> It gives maintainers power over new contributors, which many maintainers abuse. This abuse can be subconscious. Yet it is widespread. Maintainers inherently strive to remain important in their project. If they can keep out potential competitors by delaying and blocking their patches, they will.

> It opens the door to discrimination. One can argue, a project belongs to its maintainers, so they can choose who they want to work with. My response is: projects that are not aggressively inclusive will die, and deserve to die.

> It slows down the learning cycle. Innovation demands rapid experiment-failure-success cycles. Someone identifies a problem or inefficiency in a product. Someone proposes a fix. The fix is tested and works or fails. We have learned something new. The faster this cycle happens, the faster and more accurately the project can move.

> It gives outsiders the chance to troll the project. It is as simple as raising an objection to a new patch. "I don't like this code." Discussions over details can use up much more effort than writing code. It is far cheaper to attack a patch than to make one. These economics favor the trolls and punish the honest contributors.

> It puts the burden of work on individual contributors, which is ironic and sad for open source. We want to work together yet we're told to fix our work alone.

----

> Good contributors who know the rules and write excellent, perfect patches.

> Good contributors who make mistakes, and who write useful yet broken patches.

> Mediocre contributors who make patches that no-one notices or cares about.

> Trollish contributors who ignore the rules, and who write toxic patches.

----

> PM: depending on unspecified, arbitrary criteria, patch may be merged rapidly or slowly. At least sometimes, a good contributor will be left with bad feelings. OM: good contributors feel happy and appreciated, and continue to provide excellent patches until they are done using the project.

> PM: contributor retreats, fixes patch, comes back somewhat humiliated. OM: second contributor joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.

> PM: we get a flamewar and everyone wonders why the community is so hostile. OM: the mediocre contributor is largely ignored. If patch needs fixing, it'll happen rapidly. Contributor loses interest and eventually the patch is reverted.

> PM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through. OM: existing contributor immediately reverts the patch. There is no discussion. Troll may try again, and eventually may be banned. Toxic patches remain in git history forever.


The purported benefits of Optimistic Merge appear entirely unrelated to the merging strategy used. The scenarios could easily unfold completely differently:

> PM: depending on unspecified, arbitrary criteria, patch may be merged rapidly or slowly. At least sometimes, a good contributor will be left with bad feelings.

> OM: good contributors feel happy and appreciated, and continue to provide excellent patches until they are done using the project.

alternatively:

PM: The patch is merged as soon as CI passes, unless code review uncovers any serious issues. Good contributors feel happy and appreciated.

OM: depending on unspecified, arbitrary criteria, patch may be reverted or modified to suit the tastes of the project maintainer. At least sometimes, a good contributor will be left with bad feelings.

> PM: contributor retreats, fixes patch, comes back somewhat humiliated.

> OM: second contributor joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.

alternatively:

PM: second contributor notices the CI failure, joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.

OM: the patch breaks master, someone posts an angry rant, contributor retreats, fixes patch, comes back somewhat humiliated.

> PM: we get a flamewar and everyone wonders why the community is so hostile.

> OM: the mediocre contributor is largely ignored. If patch needs fixing, it'll happen rapidly. Contributor loses interest and eventually the patch is reverted.

alternatively:

PM: the mediocre contributor is largely ignored. Contributor loses interest and eventually the PR is closed.

OM: the patch breaks master, someone posts an angry rant, we get a flamewar and everyone wonders why the community is so hostile.

> PM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through.

> OM: existing contributor immediately reverts the patch. There is no discussion. Troll may try again, and eventually may be banned. Toxic patches remain in git history forever.

alternatively:

PM: existing contributor immediately closes the PR. There is no discussion, because comments are disabled. Troll may try again, and eventually may be banned. Toxic patches don't ever make it into git.

OM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through and remain in git history forever.

My takeaway is that you need to have a great community and everything will work out in the end.


Man. I can relate. Open source projects are roller coasters. It's great when loads of people start using it... and terrible at the same time. It's validating to see people using your work, but sometimes you just want to submit your paper to NIPS rather than deal with what you think in the moment (and, frankly, in hindsight too) is a dumb issue: https://github.com/mholt/caddy/issues/1680#issuecomment-3027...

I wrote up my experience from one wave of negativity a couple years ago: https://caddy.community/t/the-realities-of-being-a-foss-main...

It still haunts me to this day, but my attitudes are finally trending more positive about the whole thing.


One of the major benefits of Open Source is that the user doesn't need to get a hold of the developer. It's always possible (or should be, if they're leveraging open source well) to patch a problem locally and build a fixed version themselves. If they don't have the ability to do so, they really should at least have someone they can contact and pay (probably an exorbitant amount if it's an emergency) to do that for them. It's the open source equivalent of an application support contract or warranty.


How would you go about finding someone to contract and pay for say, Firefox?


If you've got a business that requires Firefox as some component (in a way that isn't easily swapped out for some other browser), you really should have someone on staff or available that knows how to build and change stuff in it. That said, it's odd for Firefox to be used this way, to my knowledge, which is nice because it's on the short list of open source projects that are so large and complex that depending on the problem your only recourse is likely outsourcing help. In other words, if Firefox is integral to your business, befriend and get some consultancy retainers from some Firefox devs if you don't have one on staff. Not doing so is probably irresponsible from a business perspective.


> If you've got a business that requires Firefox as some component (in a way that isn't easily swapped out for some other browser), you really should have someone on staff or available that knows how to build and change stuff in it.

That's incredibly excessive. If a website design agency runs Wordpress on Linux servers, that certainly doesn't justify having a kernel developer on staff.


Linux wouldn't be integral to their business. They could also easily run WordPress on BSD, or a Solaris derivative (Illumos), or Windows. Either Apache or Nginx specifically also wouldn't be integral, as they could use one or the other, or some other web server that works with PHP. PHP is integral to their business, but they're not using any special feature of it that others aren't, most likely (and it's not a small community, so there are plenty of people that would be affected by any one problem).

Wordpress is integral to their business. If they run into a WordPress bug, it would be useful for them to have access to someone that could track it down and diagnose it.

Any heavily used WordPress plugins, or custom WordPress plugins might be integral to their business. They maybe should have someone that can look into problems with those.

If a million other people are using the same potentially problematic feature as you, you're probably safe letting the community provide a fix. If 1000 other people are using the feature, who knows how long that fix will take in coming? If the developer is busy, it could be days.

It's a function of how important it is to a business and how rare/replaceable it is.


The premise, as he said before, is a non-swappable integration of a highly complicated web browser. Firefox is a mix of Rust, C++, and C that Mozilla Corp. develops with over a hundred million dollars a year worth of developers. The standards it implements are numerous, with many complex interactions, optimizations, security features, extensions, and so on. And you are saying it's excessive to pay a Firefox developer to make the change or consult on how it works? That you should instead use an internal developer with no clue about it?

A Firefox consultant in that scenario sounds more like a necessity than an excess to me. If anything, one expert might not be enough, given they're probably specialized in different parts of Firefox. Also, this is a reason to never want a non-swappable dependency as big and complicated as Firefox. Then you might become one of those companies that get locked in, like customers of IBM, Microsoft, and Oracle.


That's taking it a bit far, no? A more charitable allegory would be a website design agency that relies on Wordpress for websites, so they have Wordpress developers on staff (or on retainer). That doesn't seem unreasonable.


I worked with people who were patching a firefox version for an embedded device. Put out an ad... these people exist.


I'm putting together people for a project and don't have the advantage of a strong network for the technologies I'm interested in.

Could you suggest where one might go about putting this ad? Facebook? Freelancer sites?


Depends on the quality / budget. Go cheap: put out a project on upwork. Go big: put an ad on weworkremotely.

Post an Ask HN question with contact details and people will connect.


Some folks I know are building polyglot.network as a FOSS-only software development agency. Early stages but you might wanna check them out.


There are a few such agencies out there: RedHat, Igalia, Credativ come to mind. There are some more listed on employers and freelancing sections of the FOSSjobs wiki:

https://github.com/fossjobs/fossjobs/wiki/resources#employme... https://github.com/fossjobs/fossjobs/wiki/resources#freelanc...


They seem to be insanely cheap at $1999 for 160 hours of dev work.

How do they make money?


They only pick up FOSS tasks for which they can find multiple customers.


Site's down / unresolvable. Sounds good - I'd work for them.


I feel for you guys. People love to get upset when open source projects try to make money.


That is because users are increasingly getting used to not paying for the work of others, be it because of open source software (do not be deluded, for the vast vast majority of users free software=free shit), because of freemium software, because of adware or because of we-mine-you-to-death-and-you-like-it software.


People and companies are ready and willing to pay. The resistance comes from open source projects doing unscrupulous things like changing the license terms of the original project. As a contributor to some projects which eventually removed the open source option and went full proprietary, it's hard not to feel slighted. I didn't sign a CLA and wasn't consulted in the unilateral decision. The damages aren't large enough to justify taking legal action, but it hurts to watch your contributions essentially stolen from you.


We (https://sheetjs.com) maintain some reasonably popular projects (our most popular, https://github.com/SheetJS/js-xlsx/, has over 15K stars and sees millions of downloads per month).

It's important to remember why you are involved in open source. If those circumstances change, you should ask why you continue to remain involved. As soon as you lack a satisfactory answer, it's your cue to stop.

Many large open source projects start out as a passion project or a solution to a specific problem that the original developers faced. Over time, other people face similar issues and rally around your solution and it is really easy to fall into the trap of bearing their burdens. This is a bad response. Other people using your open source offerings do not create any sort of obligation on your part to care or respond to their concerns.

If someone really cares enough, they will incentivize your continued effort (with money or other considerations). It is unfortunately a cultural taboo to ask some of the more vocal critics to pay, but setting up that dialogue at least shuts down most of the comments.

IMHO the origin of most of these issues comes from the very thing that drives many people to open source in the first place: personal branding. If you remove your personal identity from the equation, it's a lot easier to "turn off notifications". Rants and criticisms are directed at this mystery character, not you personally. Since you are not personally tied to the project, working on open source feels like a distinct activity and is judged on its merits. You don't feel the same sense of obligations since you don't personally feel like you are disappointing the user base.


The sunk cost (maybe the wrong term) is really tough. It is hard to walk away because you have put so much effort in. And if you don't have a company behind it, a project will probably falter.

I really enjoy Open Source 99.99% of the time. But that 0.01% I just remember how much work I put in.


> trap of bearing their burdens

yep, don't bear other people's burdens without asking for money, or contributions (of labour - such as code)


Off-topic: I've used js-xlsx before, and didn't have any problems with it. Thank you for your efforts.


Same! Js-xlsx is lovely, thanks for your efforts.


It sounds like this is similar to the harassment that he received over redis using the terms "master"/"slave". (Even DHH jumped in and used his platform to harass)

http://antirez.com/news/122

For someone who is giving parts of their life to help others, the amount of entitlement that people display is unbelievable.


I think people are getting way too upset over GitHub features like issues & PRs, and the desire to please others. If you work on an open source project and you're not getting paid for it, you really need to divorce yourself from the demands of users. You have to assume that literally no one will use the software and just make it for yourself. If it becomes popular and you get feedback and contributions, great! Hopefully you can develop relationships that will lead to co-maintainers and such. But if it starts to feel like a struggle, just re-focus on what you want to get done, on your timeline.


Too many people aren't aware that you can say no, and not do something if you don't want to or if it is a bad idea. Especially if nobody is paying you.

Even for people that are paying you for a product, sometimes you really need to say no, that is how it works, and no, we are not going to customize it for your particular workflow.


Just saying "no" can already cost too much effort, because it involves making a decision that you'll want to justify. It takes effort to report an issue and even more to make a pull request, so you don't want to reject them without a good reason, without really understanding them. Except for some obviously sloppy reports, where that would just be rude.

The easier option is to simply not respond. Even this leaves some level of stress, because we've all had our own bug report or pull request ignored. We don't want to be this uncaring person on the other side. But sadly, ignoring other people's requests is the best option for involuntary maintainers to protect themselves.

The final option is to just hand out the keys to whoever is the most active contributor, and stop trying to control the direction of the project. This will get the stress-level down. You may not like this option, but it's much better than burning out.


1. I've never upped my game enough to contribute to a project, much less put something on github. So, to the vast swath of people who are cooler than I: thank you.

2. TFA really gets at the classic issue of working vs. managing. Technical people tend toward being better at the former. Context switching between doing the work and managing the work is hard. Punting the management and diving into the work almost seems an escape.

3. The code itself makes this point to us. We can define an integer, and there it is. But when we need a list of integers? Look at how the management required just exploded.

4. Management sucks, but a management vacuum is a void, indeed. Let us be thankful for managers who don't suck.


(I've been in the situation he describes, unfortunately)

I agree a lot of that article feels like two things (but I'm not going to remove all nuance - there are clearly other issues):

1. The issue of being forced to be a manager when you want to be an individual contributor, and even worse, feels like being forced to be a TLM when you want to be an individual contributor.

2. Even if you take away the aspects of #1, it feels like a lack of notion that individuals don't build software past a certain point. I mean that in the sense that, IMHO, past a certain point, all software becomes team built (or fails), whether you are the manager or not. High level software engineering is a team sport, not an individual one. That is true even outside of management - it's also about technically mentoring and growing the team that builds your software so you can rely on them instead of you (which is not just a manager task).

Every person you mentor into someone capable of doing the kind of work you want done is going to increase productivity on the software a lot more than you trying to do it yourself. Over time, they will also be able to build your team.

Over time and scale, if you don't have a team to rely on, and don't actually start relying on them, you will just feel crushed. Utterly utterly crushed.

There is no path out of it that involves getting better yourself. You simply cannot scale past a certain point and there is no way out of it.


> High level software engineering is a team sport, not an individual one.

#Truth

Floating out there in the cloud, you have to be a DBA, a network engineer, a browser jockey, a middleware dev, a sysadmin, a security specialist, an architect. . .

. . .and that's without being handed someone else's effort that is probably not in the desirable state for reasons you'll never know.

It's blatantly obvious to even the most casual observer that anybody claiming to know "all that stuff" is having you on.

That the internet works AT ALL is somewhat amazing when you ponder it.


It's not for everyone, but there's a way out: accept existential "futility", as Antirez puts it.

Let your project be less than it could be.


In my experience, this doesn't actually work. You basically have to close up shop.

Otherwise, the negative vitriol/etc of people who won't accept that is amazingly overwhelming.

Put another way, imagine, in a housing thread on HN, any time someone says "yeah, but i'm just happy to have my town stay the way it is. We don't want growth, we are happy, and we should be allowed to have that choice", what the response is.

While different situations for sure, it's the same type of response you get when you say "yeah, i'm happy with my open source project the way it is, sorry".


> You basically have to close up shop.

If you can't come to peace with that possibility, this approach isn't for you. Having to close up isn't an inevitability, but grappling with that prospect is an essential exercise.

It's truly not easy for ordinary humans to withstand the vitriol of a furious userbase. So, remove yourself from the channels where that vitriol gushes and roils.

The key is to avoid making promises you can't keep -- either explicitly or implicitly. Putting a project on Github, home of "social coding", makes it hard to tune out the din. So... maybe don't keep your project on Github.


"If you can't come to peace with that possibility, this approach isn't for you."

Sure, but i mean in the sense that what you suggest is not a viable method for a lot of projects, unless, as you say, they also disconnect completely from the world.

That is not the same as existential futility - it is going off grid.


I vehemently disagree with this false dichotomy that either a project must strive endlessly or exile itself. Gradations are possible, and the way you achieve them is by strategically choosing the terms of engagement.

Embracing existential futility puts you in a state of mind to accept the consequences of subsequent concrete actions which limit future prospects of glory.


You can vehemently disagree.

" Gradations are possible, and the way you achieve them is by strategically choosing the terms of engagement."

In my experience, you are simply wrong. You don't get to actually make this happen. You may want it to happen, and i agree it would be very nice. But it is also unrealistic, in the same way lots of polarizing things are but shouldn't be. So you may not like that dichotomy, but in my experience, it is the practical effect.

I'd love to see your examples of projects that are both fairly successful and choose somewhere along the gradient you have in mind.


There are projects out there on GitHub that have issues disabled entirely. It's fairly simple to not accept user feedback, and still distribute source code and binaries without support. It's more difficult with larger projects that require bug reports, but it's certainly possible to not give out direct contact information (use forums or a built-in bug reporter, etc.)


> I've never upped my game enough to contribute to a project, much less put something on github.

Well, if you want to, just find a project you like and search their bug tracker. Usually there's a lot of shit that's trivial which no one could find time for.

You know, the stuff that people will be like "OpenOffice's About Page hasn't been disableable for over fifteen years. This bug report is untouched!" Or whatever. You can fix it in five and be happy.


'Just' read the source of something you like:

https://blog.codinghorror.com/learn-to-read-the-source-luke/

Or not – you're not obligated to contribute!


I'm hoping that finishing the excellent https://shop.jcoglan.com/building-git/ will give me enough git-fu to start making some modest contributions.


I have only been doing Pion WebRTC(https://github.com/pion/webrtc) for a year, but the hardest ones for me have been

* Relationships tied to the project

It sort of burned me out when the first contributors moved on. I have always been on the other side though, interests change or you change jobs.

* Every user issue is urgent.

It is hard figuring out what the most important thing to work on is. If a user takes the time to file an issue, it is causing them real problems. I have just been on the other side, and it doesn't feel great to be ignored.

* Community members with different communication styles

People aren't so much assholes, but just communicate differently. It is really hard to mediate, no one is at fault but I just hate seeing people leave/get burned out from it.


As to it being hard to figure out what the most important thing to work on is:

I have some code on Github and got a change request. I just replied that I can do it for my current hourly rate. That makes priorities clear.


I don't want to bring money into it though. It personally makes me happy that is free of any financial pursuit.

It is nice to suspend disbelief and just all work together on making things that make people happy :)


Yeah, sometimes money takes away the joy :)


This is really nice. I wrote my own (probably bad) webrtc implementation for a personal project using gopherjs months ago. If I ever switch that project to webassembly, I will take a look at pion. Thanks for your efforts!


So very true!

I also maintain 10,000+ GitHub starred OSS database (https://github.com/amark/gun), and can relate to a lot of these points.

Worse than the jerks, though, are the bike-shedders. It is emotionally much easier to confront jerks, even though it is hurtful.

But the people who ask "Why isn't this written in X language?" or send "Here's a PR to turn tabs into spaces" are honestly well-intended but naive and short-sighted.

Explaining to those people why you have to reject their PR because it provides no additional value and is purely an arbitrary distraction takes quite a bit of time trying to be delicate, and is nearly as much of a waste of time.


I own a 2.3K star Github repository (https://github.com/sergiotapia/magnetissimo) that, while trivial in its implementation, is pretty useful to a ton of people.

There are a few core contributors to the project now and a Discord server as well and everything in this article rings true to me.

It becomes an obligation of sorts and you definitely feel like you're letting people down, especially when it comes to people who have taken the time to contribute and share feedback, let alone people who actually open PRs.

What I think can help is having more core contributors with write permissions, to share the load. Or be up front in your readme and say, "I only check this once a quarter".


FYI, it seems that your https://sergio.dev/ is down.


Thanks - it's not set up yet, I'm trying to figure out a free blogging solution that requires zero code (like Hugo and such), more like hosted Wordpress for free.


> Sometimes I believe that software, while great, will never be huge like writing a book that will survive for centuries. Not because it is not as great per se, but because as a side effect it is also useful… and will be replaced when something more useful is around.

That's an interesting point and thought experiment: what piece of software currently exists that will still be widely used in a century? And what's the oldest software that's still being used currently?


> And what's the oldest software that's still being used currently?

I am very interested in this question, but much of software is proprietary and you don't have much visibility into their history.

For software in the public, trivial lower bound is GCC 1.0 in 1987. Another case I know is Community Climate System Model, in continued development since 1983.


The oldest software still used is operated by banks, government agencies, schools and businesses. A large number of them are black boxes, and very expensive, so nobody will replace them. Their hardware just gets maintained ad infinitum, and the software doesn't care how old it is, so it just keeps running.

So basically, the age of software is limited by its hardware. (It's similar with animals: if nobody bumps them off and the hardware keeps ticking, the software probably will too... fish and mammals can live for hundreds of years)



This is way more than a thought experiment. Stable, mature software runs the world.

E.g. the CIP project aims to maintain released kernels for 25 years. https://www.cip-project.org/


Unix. Email. TCP/IP. Just taking an initial stab.


All of those will be around not because they are good, but because so many other things are built against their interfaces.


Ultimately, what else is 'good' other than being the foundation of further far-flung flourishing?


Those are standards and protocols, not software.


Spider Solitaire


I would place money on people still using Vim and Emacs in a hundred years.


`ed`, of course


Git.


I have a small (2k+ stars) niche game-related project. The hardest part for me is saying "no". If someone proposes a change that I think is out of scope, or is not something I want to maintain (maybe the introduced complexity is greater than the benefit of the feature) I find it difficult to explain that in a way people understand. Often times people get offended by it and then become angry at me, and over time it becomes very draining.


Salvatore's feeling that he should be thinking about big ideas makes me think of Fabrice Bellard. He drops amazing things, lets the community manage them, and concentrates on starting the next amazing thing. Maybe that's something good for Salvatore too.


It's ironic that Git was intentionally designed to decentralise the process of software development, so that there was no single "blessed" repository, but instead a network of peers, each of which could evolve at a different rate and in a different way, with the community deciding which would "win" in terms of popularity.

And yet, via Github, the community has reinvented the centralised model of software development. For any project, there is a single instance that is the "blessed" version, by virtue of having the most stars or collaborators, or owning the associated package name. 99% of forks are nothing more than a place where you create a feature branch that you intend to PR to the blessed project.

I hope that, one day, somebody invents a git-based project ecosystem that solves the problem of decentralised project management. I don't know exactly what this would look like, but I think it would need to embed the idea of community and contributor consensus and democracy at a fundamental level, separating the concept of the project and project ownership from any individual fork.


Git was designed to decentralise work, not power - they’re very different.

Think about the author of Git himself - Linus. He didn’t write Git because he decided one day that he was tired of being the final decision maker for the Linux kernel, and that he now wanted a thousand decentralized kernels to bloom, each with their own management and maintainers and trust levels.

It was designed to give everyone a way to work with full featured version control on their own machines, and set up adhoc networks of version control between any consenting parties. But that did not imply that the power structures that make up an organization would automatically be demolished.

Github is the extension of Git as it was designed. Maintainers on Github follow similar organizational policies as the Linux kernel, and some are even codified into Github itself. But none of this is counter to the purpose of Git or even its ethos.


I think if your idea about decentralizing can accurately be described as fragmenting, then it's probably not a good thing. I don't know anybody who wants to have multiple separately maintained versions of the same project. When I use a piece of software, I want to have a good idea about how it will behave. I want to pick a project that I can be somewhat assured is fit for purpose, and that will provide stability. That's why people invest in maintaining and using the most widely used and adequately maintained projects. If I don't like the direction maintainers are taking a project in, then I can fork it. If enough people feel the same way as I do, then that fork might also become widely used and adequately maintained. There are plenty of examples of exactly that happening, and I can't see anything that's undemocratic about it.

Regarding having a centralised service like GitHub, I can’t see how that makes anything less democratic at all. I can pull from it, I can push to it, I can fork on it. The centralisation doesn’t seem to have any impact on participation to me. It just seems to be a matter of convenience: the fewer places I have to look for the thing I want, the better my experience is.


>Git was intentionally designed to decentralise the process of software development, so that there was no single "blessed" repository

I don't think this is true. It was developed by Torvalds to maintain the Linux kernel, which has always had a "blessed" repository at kernel.org.


Yes and no. The main idea of a dvcs is that you could send patches and pull requests to other people's forks. So in principle there is technically no blessed repository, but that doesn't mean there isn't a blessed repository from a social perspective.

(Also, there is more than one "blessed" Linux repo that a lot of people use. There's linux-rt, linux-stable, etc.)


But it does help with decentralized development, doesn't it? And you still have a single instance of authority in almost all cases; Linux itself has one, for example, and Git was developed for Linux.

I think the issue is more about how to structure pull requests, issues, etc. in a better way (maybe hierarchically), so that the structure scales with the exponentially growing amount of communication. GitHub does not help much with this, but it also does not really fight against it. This is basically a problem every big project has to manage in some way. For comparison, I would look at other big projects like TensorFlow, Firefox, Chromium, Linux, CPython, etc.


I am wondering about this, probably because of two factors:

1. Accountability: where do I go looking if something is broken?

2. Discoverability: where do I search, and whom do I believe to be the correct owner of the copy of the codebase?

This is similar to blockchain currency (based on my limited understanding; correct me if I am wrong): while it is meant to be decentralised, entities like Coinbase effectively create a central entry/exit point somewhere in the ecosystem.


I immediately thought of Bitcoin when reading the OP. It can be, and has been, "hard" forked via miner consensus. That is, of course, one level removed from git, but I guess git"hub" could be more like gittorrent or something, where instead of hosting a repo on one central server, the repo is hosted by a swarm and the version with the most seeds is the winner.
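A toy sketch of that "most seeds wins" rule, assuming nothing more than a mapping from fork names to seed counts (the fork names, tie-breaking, and the whole model are made up for illustration, not a real protocol):

```python
# Hypothetical model of a swarm-hosted repo: each competing fork is
# announced by some number of peers ("seeds"), and the canonical
# version is simply the one with the most seeds.

def canonical_fork(seed_counts):
    """Pick the fork with the most seeds.

    Ties are broken deterministically by sorting fork names first,
    so every peer computing this agrees on the winner.
    """
    return max(sorted(seed_counts), key=lambda fork: seed_counts[fork])

swarm = {"alice/redis": 1200, "bob/redis-ng": 300, "carol/redis": 1200}
print(canonical_fork(swarm))  # prints "alice/redis" (tie broken alphabetically)
```

Of course, the hard part such a scheme glosses over is Sybil resistance: seed counts are trivially inflatable without something like Bitcoin's proof-of-work behind them.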


I don’t know why so many open source users feel so entitled when they have done nothing for the project. I believe that anybody can help a project through code, documentation, financial support, graphic design, etc.


Most projects don't actually want help. Go and try it: report a few bugs or point out usability issues. See what happens.


Most of my bug reports have been very well accepted. The worst response I can remember is just not responding for like 3 months.

And on the other side as a maintainer, any bug report that demonstrates a clear issue (crash, hang, etc) with a reproducible setup is appreciated.

What can be really frustrating are bug reports that aren't actually bugs, where a better understanding of the software would have allowed the reporter to solve the problem themselves. For example, there was one issue on Slic3r asking for PDF output of config files. There is really no reason to do that.


That's not really help that contributes to the project, that's communication or (often in the case of usability issues) opinion that might be useful, and even if it is, it just piles on more work on the project instead of helping with that work.

Every project has accepted everything whenever I've actually tried to help with the actual work they're doing, i.e. solving a bug or submitting a pull request fixing some documentation, something that reduces their todo-list instead of adding to it.


The IRS could help out open source projects by making it much easier for them to apply for grant money, but it makes it very tough to get 501(c)(3) status.

I’m for quite strict tax enforcement but that should be changed.

Also, of course, private companies need to start stepping up and providing the often small sums of money needed to fix these projects, via bug bounties and fees paid to the maintainers.


> Sometimes I believe that software, while great, will never be huge like writing a book that will survive for centuries. Note because it is not as great per-se, but because as a side effect it is also useful… and will be replaced when something more useful is around.

I once saw a sig somewhere that was something along the lines of the author's goal being to write some piece of software that would outlive them. That seemed like a neat but incredibly ambitious goal; very, very few pieces of software will meet it.

That said, Redis is probably one of them. It isn't too hard to imagine that there will be Redis instances running somewhere in 40 years, and though I wish antirez a long life, I think it is likely there will still be Redis instances chugging along after he passes.

So be proud antirez! You've most certainly made a dent in the universe.


Data migration is difficult. I'm sure there are a lot of very old databases still chugging along in some dusty office corner because no one wants to touch them since they've worked adequately for the last 30 years.


This is a vicious cycle of a mentoring problem.

If the maintainer never learned and practiced "role shifting" as a part of the developer process, they are going to have a bad time if the project gains any traction whatsoever.

Worse, there's no easy way for a mentor to swoop in and teach them basic role shifting tactics. Because such an attempt would likely be interpreted as just a user giving yet another uninformed opinion. Edit: And such a maintainer is unlikely to be able to delegate to newcomers since mentoring isn't part of their skillset. (Thus the vicious cycle.)


The best thing creators can do, in my eyes, and I always do it myself, is to try to make yourself jobless. Solve the problem, find people who are good at maintaining stuff, move on to the next problem. You don't need to sit there looking through bugs 12 hours a day. In fact there are people who love this continuous struggle more than you, and who therefore have also developed skills at handling these situations without having sleepless nights.

Find these people once your project reaches a stable point from a creative POV. Then help these people get started, not the users. Then you will see that you need to do less and less. And when you are at about 30% load start using your time to find the next problem that you can solve.

No reason to be ashamed of either. There are loads of people who really love this continuous lifestyle, and there are few people who actually love finding a new solution to a problem that nobody can solve. Everybody will be happier in the end, even the users.

PS: You can also see it in a different way. There's not just Jenkins and GitHub and AWS to set up as a system to get your project running. People are also part of the system: people who move the issues along these pipelines. There is no system if people aren't part of it. And there are not just devs and users; there are also maintainers, communicators, etc.


I've been maintaining my OSS project for almost 6 years. It's well used in the industry but nowhere near as popular as Redis. But I can relate to pretty much everything in the article even the thinking approach. It makes me think that it takes a certain personality to be an OSS developer.


> Moreover I’m sure I’m doing less than I could because of that, but this is how things work.

It doesn't have to be. What if all the people who "pay" for Redis had their money go into a collection bucket, and nobody got the money until they earned it? Then you work on your own schedule, "earn" that money at a fixed dollar rate per hour, and claim it at the end. Other trusted contributors can join this effort and claim money after working the hours. This way, hours are up to the contributors, and the money is more like a "pledge" on Kickstarter, or a check that's not yet cashed.
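A minimal sketch of that pledge-bucket scheme, purely to make the mechanics concrete (the class name, rate, and numbers are all illustrative assumptions, not an existing system):

```python
# Toy model of the scheme described above: users pledge money into a
# shared bucket, contributors log hours at a fixed rate, and a claim
# pays out earnings but never more than has actually been pledged.

class PledgeBucket:
    def __init__(self, hourly_rate):
        self.hourly_rate = hourly_rate
        self.pledged = 0.0   # money promised but not yet claimed
        self.earned = {}     # contributor -> unclaimed earnings

    def pledge(self, amount):
        self.pledged += amount

    def log_hours(self, contributor, hours):
        # Contributors earn at the fixed rate, on their own schedule.
        self.earned[contributor] = (
            self.earned.get(contributor, 0.0) + hours * self.hourly_rate
        )

    def claim(self, contributor):
        # Payout is capped by the bucket, like a check not yet cashed.
        payout = min(self.earned.get(contributor, 0.0), self.pledged)
        self.pledged -= payout
        self.earned[contributor] = self.earned.get(contributor, 0.0) - payout
        return payout

bucket = PledgeBucket(hourly_rate=50.0)
bucket.pledge(1000.0)
bucket.log_hours("alice", 12)
print(bucket.claim("alice"))  # prints 600.0; 400.0 stays in the bucket
```

The interesting property is that unclaimed earnings can exceed the bucket, which keeps the incentive to keep pledging without ever overdrawing.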


I guess I will be the one to complain about something that makes me look bad. Since no one else complains about this, it makes me think I might be the only one. However I doubt that.

The problem I have had with my open source projects so far is that most of them seem to not be very popular and they receive very little if any feedback at all.

It's gotten to the point where I feel like popularity and merit are not even directly related. It seems like maybe part of it is tied to some kind of social networking popularity contest. Certainly I have to think that way otherwise I would never try to contribute anything again.


We're working to address some of the issues raised in this piece, by creating a new way of working on Open Source.

Companies post their issues on our platform and contributors are incentivised by getting to know a company, a monetary reward and the potential of raising their profile with the company to get hired.

We're looking for feedback so if anyone has any thoughts - https://works-hub.com/issues


Others have tried what you seem to be trying and it didn't work. How are you going to convince people, who chose the tech because it is free, to pay for it, and to pay more than insultingly low scraps? And once someone does accept and the work is done, what happens if they are not the main developer of the software that was worked on and the developers decide they do not want to accept the work? Where does that leave your customers, who most likely (especially if their main motivation was free stuff) won't want to pay someone to fix/update the patches every time a new version is released, nor be willing to maintain their own fork?

I might be missing something, of course, since your site doesn't work. Every time I tried to click on a link, nothing happened. Firefox's dev console had this warning:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://api.segment.io/v1/p.

Looks like your tracking is broken.

BTW, I'm kinda curious: how come you have "25,000+ software engineers on WorksHub" but only 8 issues, all of them from your own company?


For those struggling to access the site: https://archive.fo/10IpE


I don't remember if I got the idea from somewhere else, but:

"Worse than a failed project/business is a successful one."

I'm part of some "small scale" projects, and one thing that some fear, very much, is that the project gets traction. This has killed some businesses before.


It seems like there's an opportunity for a tool to help solve this (i.e. something other than GitHub issues).


I'm sure different tools could help somewhat but ultimately anything replacing GitHub issues would still involve some kind of social, i.e. communication, structure and that's just a hard thing to build and maintain.


Really great article *bows down*


[flagged]


Please don't accost someone personally like this on HN.

We detached this subthread from https://news.ycombinator.com/item?id=19936320 and marked it off-topic.


I find that when people preface a statement with a negation, it has the opposite effect and draws focus to that aspect. "With all due respect," or "no offense but," or "not to be rude but..."


That may be true, but I think the poster didn't frame their actual question charitably enough, causing the phatic expression prefacing the question to be rendered completely useless.

IMHO, it was quite confrontational if you mentally leave it out: "... under what moral framework do you find it reasonable to bring more than 10 children into this world?"


I encountered most of the issues described so I set out to solve this problem. I did the horrible bad thing: I rewrote my projects from scratch. It took me about a year, pretty much all of 2018. I could afford to do this because I was separated from both work and family by military education and a military deployment immediately following.

Now I have software that is stupid fast to extend and maintain. My software now executes faster than comparable applications coming out of Facebook, with more options and at a very tiny size after accounting for dependencies. Now I finally have time to watch all the shows I have missed and play games, since my software has fewer defects compared to similar applications and is faster to patch.

I did get tired of the asshole factor though and even deleted my Reddit account as a result. Now it’s harder to promote my software online but deleting my reddit account again opened up more personal time.


I'm threading the needle in a project I've been working on which is both Open Source and has premium features around cloud sync:

https://getpolarized.io/

... it's been interesting. I'm trying to give everyone the best of both worlds but since I'm catering to a large user base I have to make sure to keep everyone happy.

Some people DO NOT want to do anything cloud. Some want their own cloud. Others want cloud and they want it easy.

I think part of what I'm learning is that in order to get the trust from users I'm going to either have to be around for a long time or get the thumbs up from other organizations that can vouch for my positive intentions.

For example, I want to write up a document about our commitment to Open Source but of course that takes work and I don't have a ton of time.


There was a pop-up that asked for money. The language of the bar at the top gives the impression the program will go away soon unless money comes in, and the fact that someone was willing to inject pop-ups made it very difficult to trust: what if it goes commercial and someone is willing to inject ads or malware?


Hey nice! I was not expecting to click that and get a working page without JS!



