I hit this with procps (the package with ps, top, vmstat, free, kill, ...). It was horrifically demotivating, and it helped end my involvement as the maintainer, which ran from roughly 1997 to 2007. (The other big issue was real life intruding: I joined a start-up and had 5 kids.)
I had plans for command line option letters, carefully paying attention to compatibility with other UNIX-like systems, and then Red Hat would come along and patch their package to block my plans. They were changing the interface without even asking for a letter allocation. I was then sort of stuck. I could ignore them, but then my users would have all sorts of compatibility problems and Red Hat would likely keep patching in their own allocation. I could accept the allocation, letting Red Hat have control over my interface.
Red Hat sometimes added major bugs. I'd get lots of complaints in my email. These would be a mystery until I figured out that the user had a buggy Red Hat change.
Patches would often be hoarded by Linux distributions. I used to regularly download packages and open them up to look for new crazy patches. Sometimes I took the patches, sometimes I ignored the patches, and sometimes I wrote my own versions. What I could never do was reject patches. The upstream software maintainer has no ability to do that.
The backlog of unresolved troubles of this sort kept growing, making me really miserable. Eventually I just gave up on trying to put out a new release. That was painful, since I'd written ps itself and being the maintainer had become part of my identity. Letting go was not easy.
Maybe it had to happen at some point, since I now have more than twice as many kids, but I will be forever bitter about how Red Hat didn't give a damn about the maintainer relationship.
Hmm. That's painful indeed.
Sorry that you had such a dispiriting experience with the 'procps' package.
For what it's worth, allow me to share my experience of being at Red Hat (I joined after 2008, so I can only speak about the period from then onwards). One of the reasons that keeps me at Red Hat is the iron-clad (with some sensible exceptions, e.g. security embargoes) principle of Upstream First.
I see maintainers upholding that value every day (and the community can verify -- the source is out there). And several times over the years I've seen maintainers, including yours truly, vigorously reject requests for 'downstream-only' patches or other deviations from upstream. When there are occasional exceptions, they need extraordinary justifications; either that, or those downstream patches are irrelevant in the context of upstream.
I've learnt enormously from observing inspiring maintainers at Red Hat (many are also upstream maintainers) on how to do the delicate tango of balancing the upstream and downstream hats.
So if it's any consolation, please know that for every aberration, there are thousands of other packages that cooperate peacefully with relevant upstreams.
I had reserved -M for security labels, to be compatible with Trusted IRIX, and I think this was clear in the source code at the time. I intended to avoid any additional library dependency if possible, because ps is a critical tool that must not break. If I couldn't manage without a weird SELinux library, then I'd dlopen() it only if needed.
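The dlopen()-only-if-needed idea can be sketched roughly like this. This is only an illustration, not the actual procps code: the library name `libsecmagic.so` and the symbol `sec_get_label` are invented, and a real implementation would cache the handle instead of reopening it per call.

```c
/* Illustrative sketch: lazily load an optional security-label library
 * the first time the user asks for the feature, so ps itself never
 * links against it.  Library name and symbol are made up. */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*getlabel_fn)(int pid, char *buf, int len);

static int print_security_label(int pid)
{
    void *lib = dlopen("libsecmagic.so", RTLD_LAZY);
    if (!lib)
        return -1;  /* library absent: feature quietly unavailable */

    getlabel_fn getlabel = (getlabel_fn)dlsym(lib, "sec_get_label");
    int rc = -1;
    char buf[64];
    if (getlabel && getlabel(pid, buf, sizeof buf) == 0) {
        printf("%s\n", buf);
        rc = 0;
    }
    dlclose(lib);
    return rc;
}
```

On a system without the library, dlopen() simply fails and the tool degrades gracefully instead of refusing to start.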
Red Hat swiped both -Z and Z, giving them the same meaning. For at least one of those (probably -Z, but this was a long time ago) my plan was to use it for compatibility with a different feature of another OS. There are only 52 possible command option letters, not counting weirdness like non-ASCII and punctuation, and most are already taken. Now 3 of them, almost 6% of the possible space, are redundantly dedicated to an obscure feature. An added annoyance was that -Z got wrongly thrown into ps's list of POSIX-standard options, which can affect parsing in subtle ways.
One day I discovered this as it was being shipped in RHEL.
A more recent and amusing issue is the storm of security bugs that hit procps. They actually predate my involvement with procps, likely going all the way back to the early 1990s. I eventually got notice. I responded on the Bugzilla, correcting some misunderstandings and pointing out better ways to fix the problems. I even do software security work professionally these days, so I would be the ultimate expert on security bug fixes for procps. My helpfulness got me blocked from looking at the Bugzilla, and then Red Hat proceeded to ship slightly bone-headed patches for the security problems. BTW, last I checked there were still DoS vulnerabilities because Red Hat ignored my advice. Turning the 32-bit value into a 64-bit value may prevent an integer wrap-around exploit, but that just means the system will swap until the OOM killer strikes. The value should have stayed 32-bit, with protection added to avoid even approaching such a large value; you probably don't even need more than 17 bits. The fix for the escape expansion is also bad, in a different way. Instead of papering over the problem, the math should have been corrected.
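The distinction being drawn here can be sketched in a few lines. This is not the actual procps code; the names and the limit are illustrative. Widening a count to 64 bits stops the multiplication from wrapping, but a hostile count can still drive a multi-gigabyte allocation; clamping the value rejects the attack outright:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative cap: far beyond anything a legitimate /proc field
 * would ever report, per the "17 bits is plenty" argument above. */
#define SANE_LIMIT ((uint32_t)1 << 17)

static void *alloc_entries(uint32_t count, size_t entry_size)
{
    if (count == 0 || count > SANE_LIMIT)
        return NULL;  /* reject rather than swap until the OOM killer fires */
    return calloc(count, entry_size);  /* calloc also checks the multiply */
}
```

The check turns "attacker-controlled huge value" into a clean error instead of a denial of service, regardless of the integer width.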
Others are great human beings.
But on the whole, RedHat skepticism is healthy (this also applies to the NIH software that they push onto most Linux distributions).
This is likely off topic, but can you expand on that if you have a moment? Thanks.
I work for a Linux distribution (SUSE) and I assure you that we don't all act this way. I've had to carry my fair share of downstream patches in openSUSE/SUSE packages, but I always make sure to submit the patches upstream in parallel. Quite a few people I know from Red Hat (and other distributions like Debian, Ubuntu, etc) do the same. It's possible that times have changed since then, or that it depends which package maintainers you are dealing with, but I hope it hasn't soured your opinion of all Linux distribution maintainers.
One thing that is a constant problem is that users of distributions keep submitting bugs to upstream bug-trackers. If there was one thing I wish I could change about user behaviour, it would be this. Aside from the fact that the bug might be in a downstream patch (resulting in needless spam of upstream), package maintainers are also usually better bug reporters than users because they are more familiar with the project and probably are already upstream contributors.
Speaking from a user perspective, I don't think that would be wise. If distributions would just stop feature patching, things would be a lot simpler.
I mean, if a feature is good, everyone should have it, not just the users of one distribution. So please submit the patch upstream, discuss it and wait for the next release. That way, there is no real problem when users submit to upstream bug-trackers. The only exception is (time-critical) security patching. But those patches should be removed again as soon as upstream solves the issue.
The scenario you are wishing for looks like an obvious solution to the issues you are having but will just make everything worse in the long run (slow and fragmented).
Sometimes the user-facing issue isn't an actual patch; it's the default configuration (which might be distribution-specific) or a host of other non-code changes (default security policy and so on). These things should be reported to the distribution, but often get reported upstream, which acts as spam.
And note that most downstream patches are backported bugfixes, not feature patches -- in my experience those are the exception.
> So please submit the patch upstream, discuss it and wait for the next release.
What if you're being paid for support and a customer needs a fix which you need to backport (it's not a CVE but it's a serious issue)? This is the problem we find ourselves in very often, and is why we push patches upstream but also patch our downstream packages so that the issue is resolved for our users. Some projects have release life-cycles that are ~6 months apart -- how do you explain to users that they need to wait 6 months in order for a fix which is already written and merged to be shipped to them? If we only ship it to those customers then it's not fair to the rest of our users nor the rest of the community (this is why SUSE has a Factory-first policy where all SLE changes must land in openSUSE first).
And finally (in rare cases) the upstream might not accept patches that are required in order for some distribution features to work because they fundamentally disagree with the feature. What are we supposed to do in those cases? Either way, someone will complain because the best solution (merge it upstream) is closed off.
While it would be ideal if downstream patching was unnecessary, that's not the world we live in. Again, I do submit patches upstream religiously -- but it's not as cut-and-dry as you might think.
I've seen this very often in my personal experience: a company says exactly what you're saying now, "We have paying customers who demand certain patches, but the upstream project may be unable or unwilling to patch and release... so we downstream only." Alternatively, some downstream consumers simply "throw code over the wall" in the form of an upstream patch. However, those patches are sometimes duct-tape solutions which may not fit into the overall architecture/vision of the project maintainer(s). It's not fair to say, "accept this or else..." where the 'or else' is a downstream deviation (which in turn sometimes forces the upstream's hand).
The ethical way to do this is to work with upstream, whether through direct compensation or more back-and-forth rather than code-over-the-wall.
The issue is that there are some (rare) cases where upstream is completely unwilling to merge a patch for philosophical reasons.
If you want me to be blunt -- the example I'm thinking of is Docker (which doesn't have a cash-flow problem), and has refused outright on many occasions to merge patches that allow for build-time secrets. On SLE we need this because in order to use a SLE image on a SLE machine you need to "register it" and the only way to sanely register it automatically is to bind-mount some files into all containers on-build as well as on-run (which you cannot do with upstream Docker today). Red Hat spent a long time trying to get a patch like this upstream, as did we.
I don't think all upstream contributions are in bad faith. I'm just saying there tends to be competing priorities which leads to some instances of hostility.
As for the Docker example, I don't know what the right answer is without digging deeper. My naive thought is to write a SLE/RHEL shim/plugin style component to allow functionality that's missing. This allows keeping the upstream vanilla, while not having to fork into something without the brand recognition. If that doesn't work, forking as `rhel-docker` or `sle-docker` doesn't seem that bad to me. Ubuntu does this with all the bcc-tools.
This is of course predicated on having tried all the previous solutions first (paying an upstream developer, working with upstream with a good back-and-forth to incorporate a patch, etc.). In the end, if the project decides something is against their philosophical viewpoint, they're perfectly entitled not to accept patches. At that point, I don't think the best solution is to fork, downstream patch, and release as if it's the vanilla upstream.
This problem is entirely created by distributions and their “packagers know best” philosophy.
You say downstream patching is necessary. Go ahead and strip the upstream trademark from every package which you patch downstream, replace it with new Suse-specific trademarks, and let’s see what users prefer!
If it was entirely the vendors, then we would see significantly more adoption in fast moving OSes, but that's still not a very popular model for production servers, even in the age of the cloud, containers, etc.
I also don't agree this would result in fewer upstream bug reports -- "suse-foobar" will still result in bug reports for "foobar" (I've seen some cases of this happening). You'd need to rename the project entirely so that users don't know what the upstream GitHub repo is, and that's even more anti-community than any other suggestion.
Shipping a fork that is different from the version that is created by the original author is also very anti-community. At least make it clear that you ship something different from the program that the author maintains.
I disagree that all patches are somehow ethically wrong (bugfixes and security patches are obvious counterexamples). Not to mention that if the author felt otherwise, they wouldn't have released the code under a license that allowed you to modify and redistribute your modifications.
But making massive changes to a project that are incompatible with upstream is definitely not a good thing to do without reason.
The fact that the author does not forbid it does not mean that he/she wants changes or even encourages them. It just means that the author believes that the downsides of completely disallowing changes are even worse.
But I digress. This whole discussion is about trade-offs: if you cannot get the patch upstream but need to ship it, what is the next best thing? I would contend that patching is better than patching and renaming, because renaming doesn't help solve the problem (unless you are very radical and rename the project entirely) and makes things less convenient for users.
And note that distribution users are part of the community of people using the software.
That is how everyone else does it. Only distributions are somehow exempt from fork etiquette. Hold yourself to the same standards as everyone else, and the problem goes away.
Companies apply their own patches to projects all the time (as an upstream maintainer I've been asked several times to help debug a patch that some company has used internally). Almost every company using Linux has patches on top of it that are for their specific project (all versions of Android have a forked Linux kernel). GitHub uses patched versions of Git (though one of their engineers is also incredibly prolific upstream). And so on.
The reason why people think distributions are the only ones doing it is because we maintain all of the software that is available for a full Linux system. So instead of only having patches for just one or two projects, we have patches for (probably) ~50% of packages in our distribution (most are bugfixes but there are plenty of not-just-bugfix examples). I think some folks just like to bash distributions because no matter what decisions we make we're going to piss someone off.
But again, we don't apply downstream patches because we like it. In fact downstream patches are an outright headache because we have to rebase them on version upgrades and so on.
Just to focus on your own examples:
- GitHub patches Linux for their own private use. They do not distribute any Linux derivatives, and they don't profit from the Linux trademark.
- Android does distribute a Linux derivative, and it is heavily patched, but it is distributed and marketed under the trademark “Android”. Google does not profit from the Linux trademark.
And that’s the difference. People don’t buy Android phones because they’re running Linux. But they buy Suse and Red Hat distributions specifically to get Linux.
So Suse and Red Hat are the only businesses which I know of, that are allowed to fork upstream software, modify it aggressively, and still profit from the upstream trademark.
The case of the Linux mark is really weird, because basically all distributions are given license to use it but almost everyone still specifies that the trademark is owned by Linus.
(Also my example for GitHub was their fork of git, not Linux.)
This changes the culture from one where you clobber the original package maintainer to one where you can adapt when necessary but still be a good community member. The "foobar" people can point people to "suse-foobar" as a solution until everything has been resolved.
It's not the number of bug reports that matter, it's that you can easily and quickly come to a speedy resolution as an end user.
For Unix tools, downstream vendors could choose to rename the binary (redhat-procps) or rename the flags (procps --rh-c) if it wasn't incredibly urgent to choose a single-letter flag name.
As an end user this is ideal, I can get a fix shipped today with the tradeoff that I will have to do a bit of maintenance in the future. Any other way leads to long waits, or chaos.
I'm not an open source developer, but it seems like a good solution is for the original publisher of the package to maintain their vision, take whatever feedback they deem useful, and ignore what they don't feel is useful. If RedHat or the other distributions want to keep maintaining their patches, let them; that's what they're being paid for. If it ends up fragmenting the Linux ecosystem, which IMO it does, then the distributions should do more introspection and cooperate more to reduce fragmentation.
While distributions give variety and diversity - sometimes a good thing - I would love it if Linus would get all distributions in a room and force them to agree on a whole set of issues to eliminate silly differences between distributions. And if they don't/won't agree, they don't get to use the Linux trademark.
It's human nature to think your way of doing things is best. But I'm not sure the multitude of idiosyncratic differences between distributions is really advantageous to users. It does lock users into a distribution, because as you said, who wants to go through all their scripts and rename every instance of ps to rh-ps, then go back and rename everything again when the patch is accepted.
I do think the idea of paying the original maintainers (from the company, not from you personally), has a lot of merit. After all, that's where the stuff was born; it's RedHat's "raw materials" supply.
Don't take my word for it, read the related documentation:
"The Fedora Project focuses, as much as possible, on not deviating from upstream in the software it includes in the repository. The following guidelines are a general set of best practices, and provide reasons why this is a good idea, tips for sending your patches upstream, and potential exceptions Fedora might make. The primary goal is to share the benefits of a common codebase for end users and developers while simultaneously reducing unnecessary maintenance efforts."
The linked documentation also answers, with specific examples, the question of: "What are deviations from upstream?"
Distributions don't apply patches just for the sake of it or because we enjoy it -- it makes packaging more annoying for us when we have to rebase patches each version bump. But sometimes it's necessary and it would be a silly limitation to not allow yourself to patch software which is under a license that explicitly gives you permission to modify it.
I would argue any rolling-release or bleeding-edge distribution is pretty much the closest you'll get to "pure upstream". Stable distributions have more patches by necessity, and enterprise distributions even more so.
> Arch [Linux] strives to keep its packages as close to the original upstream software as possible. Patches are applied only when necessary to ensure an application compiles and runs correctly with the other packages installed on an up-to-date Arch system.
Forget maintaining software, I want to know how you maintain your existence. I don't think I could survive.
Once you have two, you stop caring as much about the first precious one - because you have two precious ones. So sometimes one has to wait.
Repeat that a few times, and you'll arrive at your answer.
With more kids, it becomes clear that there are things you simply cannot do, e.g. own an 11-bedroom house or drive each kid individually to soccer. So the kids will have to do something else. Like playing with each other.
That's just the way it is.
There's virtually no free time, though. :)
At ten kids I guess you have a - somewhat misbehaving - staff. Half of the kids are capable of helping out with the other half or with other chores around the house. All they need is a manager :)
I cannot say much about the personal time though - I'm still in the 'learn to walk and talk' phase, and with 3 toddlers in a small house there is virtually none. However, I read on reddit that after 20 years every kid will be at some kind of university and I will finally have free time for playing games and talking to my wife. That sounds nice :)
I realised I no longer had any spare time once I had my first daughter. Then we had another, and I realised I must have had so much spare time with just the one kid...
How lots of my family and friends manage time with 3 or more kids I don't know as you are then outnumbered.
5 or more seems impossible. You must either be super efficient like a drill sergeant, or the opposite: super relaxed, letting the kids sort themselves out. Or rich, delegating it to nannies/au pairs etc.
I do appreciate now, having two kids, that they mostly spend their time at home playing with each other, as opposed to when we had just the one and always had to play with them. I guess that scales well with more kids.
But I still do feel guilty that I am not always joining in. And we have to be more selective about which school performances etc. either one of us can attend. I am not sure my conscience could handle missing out on lots of these, as more kids would mean allocating much less of my time to each kid.
From two to three children... nothing really changes anymore! You won't get more responsibilities, as you already have them, and no less time, as you already don't have any :)
The traditional way is to delegate age-appropriate amount of caring for the younger children to the older children.
But if you are already making 2 school lunches, how much longer does making 5 really take? You are already making breakfast, lunch, and dinner each day; how hard is it to add more food, really? There is cost involved, but making the meal isn't that much harder, at least for us.
It's just my wife and I, her staying home, without staff or personal time. We homeschool them until they can get a 3 or better on AP Chemistry or AP Biology. My kids get funny looks going in for those tests at age 10 to 12. After that, as early as 6th grade, they can start dual enrollment at the local college. It's free until high school graduation, which is sometimes enough time to get an AA degree. We don't do organized sports, but they all have unicycles. There are some organized activities to attend: scouting (BSA and AHG), a homeschooling group that does history/gym/art/writing together, a free band day camp in summer, and a big road trip every few years.
I only have 4 bedrooms. I ended up putting the kids all in a much bigger room, leaving the bedrooms for other purposes. More interesting is the "car", with 5 rows of seats. It is 3 tons empty, 5 tons full. We go through 2 gallons of milk per day. I can spend $1000 on 3 or 4 carts of groceries. We can finish a pair of chickens or a mid-size turkey in one sitting.
I haven't had much luck turning kids into programmers. At one point I got several to enjoy Scratch, but then I found that the computers were being severely abused to waste time on junk like the "Tanki Online" video game and the "Annoying Orange" videos. I had to take the computers away. Recently kid #2, age 17, decided to choose a career. I've mostly taught him C now. He does that on Ubuntu and for his TI-84 Plus CE. I think this has created a programming aversion in some of the other kids, because they saw how excited I got to finally be teaching C and didn't want to spend all their time with me. For the oldest 5, the career choices are unknown (hurry up...), programmer, lawyer (physics undergrad), unknown, and midwife. BTW, programming the TI-84 Plus CE in C is pretty wild. You get a 24-bit int.
It helps to be close to work. I'm less than a mile away, 3 minutes by car or maybe 16 if walking. It helps to work only 40 hours per week, with an extremely flexible schedule.
Right now the main source of stress is kids wandering away from homework and chores. I don't want to have to stand over them in one room, waiting and watching as they work. I want to go do other things.
> How many soccer games do you attend in a given week?
One: souprock's kids + one friend vs. some other team.
They just have to grow up the time-tested traditional way through play, discovery, and experience. Shame really, brings a tiny tear to my eye.
10 dependents who are legal minors (includes a set of twins)
1 adult dependent who ought to move out soon
There are eight possible ways to count that up. The resulting sum is 10, 11, 12, or 13.
Also, if you download and compile Python from python.org, isn't it also just 'python' that you get as your executable? If so, Arch is just following upstream, and the Python devs should follow their own advice.
Arch doesn't seem to be patching this, though you can see in the PKGBUILD that Arch explicitly creates 'python3' as a link to 'python' (with a comment lamenting that this isn't done by default):
"in preparation for an eventual change in the default version of Python, Python 2 only scripts should either be updated to be source compatible with Python 3 or else to use python2 in the shebang line."
So it will change eventually. But why rush to break things?
When I maintained ~250 cygwin packages I acted as a downstream too, and had to deal with upstream maintainers. Usually a horror, maybe because I came from cygwin, which was a crazy hack. Only postgresql said "such a nice port". For perl, gcc, python, ... you were just a crazy one to be ignored.
It is a crazy port ;). But the native windows port is much crazier :)
/me wishes for a fountain of time to change postgres' threading model.
Can you elaborate?
I remember the frustration when I found that NixOS maintainers downright crippled certain build systems to force people to use Nix...
Their goal was probably to get the thing building within the constraints of their own time, and the constraints of the Nix ecosystem.
You can choose whether you want to invest the time to fix those build scripts and re-enable the other build systems, or not -- totally up to you. But it's uncharitable to suggest that the Nix maintainers have some kind of secret agenda.
Note that they didn't fork the project under some different name, they shipped it like this under the original name.
What I said was that they didn't have a secret agenda. They are completely up front about reproducibility. Code that violates this principle needs to be fixed (or disabled, as in this Rebar example).
If that doesn't meet your needs, then just move on. Nix isn't a distro that will please everyone.
I would be totally cool with any incompatible patches that distro makes for its needs, as long as the binary is renamed. But they shipped a broken binary and called it rebar3. Not "rebar3-nix", but "rebar3". People try using this binary for development and it doesn't work, they go to the upstream and the upstream spends time investigating someone else's hacks.
"Albert rewrote ps for full Unix98 and BSD support, along with some ugly hacks for obsolete and foreign syntax."
Something like "ps -axu" will fail the UNIX/POSIX/SysV parsing, then restart in a pure BSD mode with the "-" ignored. Something like "ps -aux" will too, unless there is a user named "x". There is also a PS_PERSONALITY environment variable to force the parser to act in a particular way.
It was needed to transition people over to a standards-compliant syntax. I couldn't support all the old syntax. Prior to 1997, something like "ps -ef" would parse as "ps ef" does today, which is not standards-compliant. People were unwilling to just switch over without the compatibility hacks. There were instances of things like "ps -aux" all over the place, including in people's private shell scripts. People couldn't resist typing it.
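As a toy model of that fallback (not the real parser -- the option sets are abbreviated and the handling of options that take arguments, like -u, is omitted), the logic looks roughly like:

```c
#include <string.h>

enum mode { MODE_POSIX, MODE_BSD };

/* Tiny illustrative option set; the real ps knows many more letters
 * and handles option arguments, which this toy does not. */
static const char posix_opts[] = "adef";

static enum mode classify(const char *arg)
{
    if (arg[0] == '-') {
        for (const char *p = arg + 1; *p; p++)
            if (!strchr(posix_opts, *p))
                return MODE_BSD;  /* unknown letter: retry as BSD, "-" ignored */
        return MODE_POSIX;
    }
    return MODE_BSD;  /* no dash at all: pure BSD syntax */
}
```

So "-ef" parses as standard options, while "-axu" trips over the unknown 'x' and gets retried as BSD "axu".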
I got pushback just for including the warning on stderr when somebody caused the parser to kick into compatibility mode.
Sorry you had to endure all this.. not fun.
- When you send an E-mail, at least imagine that the recipient may have literally hundreds of other messages to dredge through and the time required to respond (and detail of response) may reflect that. Not personal, don’t get mad at them.
- When you send an “instant message”, it may be instant for you but for all you know the recipient is deep in the middle of something and won’t respond for awhile. Not personal, don’t get mad at them.
- The recipient may be on the other side of the planet. Err on the side of extra information so you don’t wait days for a response that just has to ask you for more.
- When you file a bug, “thank you but do some homework”. A person dealing with 1000 other things will not have time to hand-hold you through all the things you’re not telling them yet. Be precise and complete. Be reasonable about when/if to expect a fix.
- And for that matter, in retail, or traffic, or 100 other things in life, you don’t know as much as you think you do about what other people are dealing with. Stop for a second. Imagine their situation. That person not instantly serving you and only you has a dozen other things going on.
People don't get it. If they send a DM / IM or use any program that is considered a "chat" app, they expect instant replies.
I run a single-person software company. I used to allow people to contact me for support via any and all chat programs. So for paying customers you can magnify that "expect an instant reply" factor. People would get quite annoyed, regardless of whether it was Sunday or 3am for me.
For that reason, only email and forum support is now offered. I've never had a complaint about wait times since. People expect to wait after sending an email. They expect to wait for a forum post. They do not expect to wait for a chat / IM / DM response.
It's refreshing to see someone on HN say this. If you look at my comment history, you'll find a number of people replying to my comments saying it is incredibly rude not to answer someone's email promptly. My stance over the last few years has become: if anyone (including automated services) can put an email in my inbox, I am not obligated to read it. Until I can get a reliable heuristic that distinguishes well-thought-out emails from the crud, I don't feel obligated to spend my limited time tending to my inbox. The only realistic solution is to have sending emails incur a cost, in order to curb the quantity of emails. Your sending me an email doesn't obligate me to read or respond to it.
I'm not an open source maintainer who gets a ton of support emails. Yet if I feel my stance is necessary for my sanity, imagine how much worse it is for those folks.
For IM's, at work, my status says something like: "If you're physically in the building, come talk to me in person if urgent or send me an email if not. If you are remote, email me if not urgent, or let's set up a voice chat if urgent."
The only useful thing about IM in the workplace is in things like active debugging, or a coworker in an adjacent cubicle needing to send me a link related to a live conversation we're having. Other than those, it becomes a conversation that drags out and never ends. They'll often wait multiple minutes before responding to me (and they are the ones who initiated the IM). Having IM windows open and randomly flashing takes up my mental space and distracts.
(Note: We do not use Slack).
(Note 2: My saying "send email" contradicts my first paragraph, which I wrote with my personal email in mind. The SNR is much higher for my work email.)
One thing I've noticed in life is that we use asynchronous communication methods in much more of a "respond now" fashion.
Also, as a contributor to StackOverflow, your point about asking questions with the right amount of information is really spot on.
But the main thing is that almost all bug reports, feature requests, and so on, get sent to /dev/null. Users who care about a problem are expected to work on that problem themselves. In the case of software like Redis, pretty much everyone reporting a bug is also qualified to fix that bug, so it works particularly well.
Then I focus only on helping new contributors get their bearings and making regular contributors happy and comfortable with their work on the project. So far this approach has been very successful for me - I don't get burned out, and neither do my contributors, and we have happy, healthy communities where people work at a pace which suits them best and aren't stressed or overwhelmed.
Sure, lots of feature requests and bug reports get neglected, but I think the net result is still a very positive impact on the project. The occasional drive-by bug submitter provides far less value to the project than someone who writes even one patch. Focusing on keeping the people who create the most value happy makes for a more productive project and a better end result. Some people are put out when their bug report or feature request goes unanswered, but I can quickly put the guilt out of my mind by reminding myself that ignoring them benefits the project. And in practice, I generally have time to give people some words of encouragement and a nudge in the right direction towards writing a patch without burning myself out.
There are a good bunch of people who have never written any C and even more people who have no idea about the internals of Redis, yet they are Redis users and they do find bugs. I wouldn't expect those users to write a patch for every bug they find to be honest.
I'm not sure how I feel about that, TBH. Personally I'd like to know about bugs even if they don't come with patches, but at the same time I understand not wanting to be bothered by wading through useless reports.
Perhaps there should be end-user-oriented bug trackers that look more like a hybrid between a forum and a reddit/HN post with voting (no downvotes though, kind of like GOG's wishlists), so that developers have a rough idea where to focus next, plus "locked" (but perhaps visible, at least for FOSS projects) traditional bug trackers where end-user entries migrate (and are linked) once developers decide something will be worked on.
It won't solve all issues, especially with people potentially going "why aren't you fixing that bug-but-really-a-feature-that-requires-rearchitecting-the-entire-thing that has 897482959204 votes, and instead working on whatever-else that Nobody Asked For(tm)?", but I guess you can't fix entitlement with technical means.
This mentality that only writing code matters needs to stop. Open source projects (or any projects, for that matter) are way more than just writing code, and the people doing the "other" work need to be recognized and encouraged to keep doing it.
The lack of such people is exactly the major cause of burnout among maintainers who just want to code.
Not replying to all your points directly, just making a general statement about this whole thread.
Yes, all properly filed bug reports are useful. The important word here is "properly", though, and on a popular project they can be hard to find. Worse, this incentivizes people to file the same or similar bug twice (or more), since they can't find the duplicate (in which case, properly filed or not, you've already demonstrated your belief that your time is more important than that of whoever has to wade through the tracker to find your duplicate).
Also, while code isn't the only thing that matters, it is the thing that matters most: the software doesn't exist without code, but it can exist without everything else. And the programmers who write the code are the ones who make the final decision on what ends up in the software (assuming FOSS made by volunteers here, of course, not FOSS or closed-source software made by employees, although even then the developer who writes the code is often the one making the decision anyway).
Of course if you are going to receive harassment either way, might as well get paid for it.
I've seen this played out a dozen times. My projects are often many people's first exposure to C, ever.
As a developer, I might have the time to investigate and submit a bug for your code, but at the end of the day, it's probably just one more bug among 100 other pieces of code preventing me from writing the code I need to.
Getting familiar with the code of a project you're using, to the extent that you can submit a reasonable PR, is a not-insignificant time commitment.
I've found a variety of attitudes from the upstream projects I've reported bugs to, ranging from "you're wrong" to the bug being fixed and pushed to master within an hour.
Among the projects I work with, I've formed an opinion about whom I should give more of my (professional) time to, and it usually begins with how receptive they are to a perfectly good bug report that I don't have the (professional) time to fix myself.
The ones that have proven "pleasant" to work with have made it much easier for me to convince my bosses that their money is well spent giving back.
The OSS projects that deliver a quality product also took my bug reports seriously (and sometimes disagreed).
Possibly, but not necessarily. What is certain is that no developer who gives you free stuff has any obligation to fix bugs, or to submit to any low-key bullying from users of said free stuff about how to spend their time and develop the software they decide to share with others.
FOSS zealots: everyone should use FOSS! Micro$oft is evil!
Also FOSS zealots: Oh, you have a bug? Screw you. You're getting it for free. Stop complaining. I'm busy making a new icon.
OSS is great. But until this attitude is killed off, its reputation will continue to hover just above "who gives a shit" for anyone other than developers.
If anything, the entitlement you express is counterproductive, because sometimes people like you convince open-source developers that their time is better spent catering to your sense of entitlement instead of their software.
Source: I am a maintainer of several projects. Some that are used extensively. And I take bug reports seriously, even if I can't always get to them right away.
"it's free, go away" is a real problem. Denying it won't make anything any better or help FOSS get adopted for anything other than servers.
I have experienced it many times before, and almost every time I've gone back to proprietary software and breathed a sigh of relief.
Also, "it's free, go away" is a straw man, because the "go away" part is also nearly universal within proprietary software. Try opening a ticket regarding a bug in Word, or AutoCAD, or so on.
The real difference? Those closed products also have closed bug trackers, so you don't get to see all the times users were ignored or told to pound sand.
This "real problem" exists entirely in your head.
Characterizing this as a problem is like accepting free candies from someone and then complaining that you were not given more candies.
(Of course I knew someone would reply with a "No, it is like <insert post ignoring the point here>", but decided to go with it anyway.)
It's perfectly possible to report a bug without demanding that it be fixed.
Also unless someone has explicitly expressed they are taking such a responsibility you describe, such a responsibility only exists in some people's minds. A lot of programmers want to share free stuff they made and stop there.
Standard practice (Pessimistic Merging, or PM) is to wait until CI is done, then do a code review, then test the patch on a branch, and then provide feedback to the author. The author can then fix the patch and the test/review cycle starts again. At this stage the maintainer can (and often does) make value judgments such as "I don't like how you do this" or "this doesn't fit with our project vision."
In the worst case, patches can wait for weeks, or months, to be accepted. Or they are never accepted. Or, they are rejected with various excuses and argumentation.
PM is how most projects work, and I believe most projects get it wrong. Let me start by listing the problems PM creates:
It tells new contributors, "guilty until proven innocent," which is a negative message that creates negative emotions. Contributors who feel unwelcome will always look for alternatives. Driving away contributors is bad. Making slow, quiet enemies is worse.
It gives maintainers power over new contributors, which many maintainers abuse. This abuse can be subconscious. Yet it is widespread. Maintainers inherently strive to remain important in their project. If they can keep out potential competitors by delaying and blocking their patches, they will.
It opens the door to discrimination. One can argue, a project belongs to its maintainers, so they can choose who they want to work with. My response is: projects that are not aggressively inclusive will die, and deserve to die.
It slows down the learning cycle. Innovation demands rapid experiment-failure-success cycles. Someone identifies a problem or inefficiency in a product. Someone proposes a fix. The fix is tested and works or fails. We have learned something new. The faster this cycle happens, the faster and more accurately the project can move.
It gives outsiders the chance to troll the project. It is as simple as raising an objection to a new patch: "I don't like this code." Discussions over details can use up much more effort than writing code. It is far cheaper to attack a patch than to make one. These economics favor the trolls and punish the honest contributors.
It puts the burden of work on individual contributors, which is ironic and sad for open source. We want to work together yet we're told to fix our work alone.
Now consider the kinds of contributors a project sees:
Good contributors who know the rules and write excellent, perfect patches.
Good contributors who make mistakes, and who write useful yet broken patches.
Mediocre contributors who make patches that no-one notices or cares about.
Trollish contributors who ignore the rules, and who write toxic patches.
Let's see how each scenario works under PM and OM (Optimistic Merging: merge the patch immediately, then review, fix, or revert it on master afterwards):
PM: depending on unspecified, arbitrary criteria, patch may be merged rapidly or slowly. At least sometimes, a good contributor will be left with bad feelings. OM: good contributors feel happy and appreciated, and continue to provide excellent patches until they are done using the project.
PM: contributor retreats, fixes patch, comes back somewhat humiliated. OM: second contributor joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.
PM: we get a flamewar and everyone wonders why the community is so hostile. OM: the mediocre contributor is largely ignored. If patch needs fixing, it'll happen rapidly. Contributor loses interest and eventually the patch is reverted.
PM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through. OM: existing contributor immediately reverts the patch. There is no discussion. Troll may try again, and eventually may be banned. Toxic patches remain in git history forever.
In the majority case (patches that need further work), Optimistic Merge creates the conditions for mentoring and coaching. And indeed this is what we see in ZeroMQ projects, and is one of the reasons they are such fun to work on.
> PM: depending on unspecified, arbitrary criteria, patch may be merged rapidly or slowly. At least sometimes, a good contributor will be left with bad feelings.
> OM: good contributors feel happy and appreciated, and continue to provide excellent patches until they are done using the project.
PM: The patch is merged as soon as CI passes, unless code review uncovers any serious issues. Good contributors feel happy and appreciated.
OM: depending on unspecified, arbitrary criteria, patch may be reverted or modified to suit the tastes of the project maintainer. At least sometimes, a good contributor will be left with bad feelings.
> PM: contributor retreats, fixes patch, comes back somewhat humiliated.
> OM: second contributor joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.
PM: second contributor notices the CI failure, joins in to help first fix their patch. We get a short, happy patch party. New contributor now has a coach and friend in the project.
OM: the patch breaks master, someone posts an angry rant, contributor retreats, fixes patch, comes back somewhat humiliated.
> PM: we get a flamewar and everyone wonders why the community is so hostile.
> OM: the mediocre contributor is largely ignored. If patch needs fixing, it'll happen rapidly. Contributor loses interest and eventually the patch is reverted.
PM: the mediocre contributor is largely ignored. Contributor loses interest and eventually the PR is closed.
OM: the patch breaks master, someone posts an angry rant, we get a flamewar and everyone wonders why the community is so hostile.
> PM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through.
> OM: existing contributor immediately reverts the patch. There is no discussion. Troll may try again, and eventually may be banned. Toxic patches remain in git history forever.
PM: existing contributor immediately closes the PR. There is no discussion, because comments are disabled. Troll may try again, and eventually may be banned. Toxic patches don't ever make it into git.
OM: we get a flamewar which troll wins by sheer force of argument. Community explodes in fight-or-flee emotions. Bad patches get pushed through and remain in git history forever.
My takeaway is that you need to have a great community and everything will work out in the end.
I wrote up my experience from one wave of negativity a couple years ago: https://caddy.community/t/the-realities-of-being-a-foss-main...
It still haunts me to this day, but my attitudes are finally trending more positive about the whole thing.
That's incredibly excessive. If a website design agency runs Wordpress on Linux servers, that certainly doesn't justify having a kernel developer on staff.
Wordpress is integral to their business. If they run into a WordPress bug, it would be useful for them to have access to someone that could track it down and diagnose it.
Any heavily used WordPress plugins, or custom WordPress plugins might be integral to their business. They maybe should have someone that can look into problems with those.
If a million other people are using the same feature as you that might cause problems, you're probably safe letting the community provide a fix. If 1000 other people are using the feature, who knows how long that fix will take in coming? If the developer is busy, it could be days.
It's a function of how important it is to a business and how rare/replaceable it is.
A Firefox consultant in that scenario sounds more like a necessity than an excess to me. If anything, one expert might not be enough given they're probably specialized to different parts of Firefox. Also, a reason to never want a non-swappable dependency as big and complicated as Firefox. Then, you might become one of those companies that get locked-in like customers of IBM, Microsoft, and Oracle.
Could you suggest where one might go about putting this ad? Facebook? Freelancer sites?
Put an ask hn question with contact details and people will connect.
How do they make money?
It's important to remember why you are involved in open source. If those circumstances change, you should ask why you continue to remain involved. As soon as you lack a satisfactory answer, it's your cue to stop.
Many large open source projects start out as a passion project or a solution to a specific problem that the original developers faced. Over time, other people face similar issues and rally around your solution and it is really easy to fall into the trap of bearing their burdens. This is a bad response. Other people using your open source offerings do not create any sort of obligation on your part to care or respond to their concerns.
If someone really cares enough, they will incentivize your continued effort (with money or other consideration). It is unfortunately a cultural taboo to ask some of the more vocal critics to pay, but opening that dialogue at least shuts down most of the comments.
IMHO the origin of most of these issues comes from the very thing that drives many people to open source in the first place: personal branding. If you remove your personal identity from the equation, it's a lot easier to "turn off notifications". Rants and criticisms are directed at this mystery character, not you personally. Since you are not personally tied to the project, working on open source feels like a distinct activity and is judged on its merits. You don't feel the same sense of obligations since you don't personally feel like you are disappointing the user base.
I really enjoy open source 99.99% of the time. But in that other 0.01%, I just remember how much work I put in.
Yep: don't bear other people's burdens without asking for money or contributions (of labour, such as code).
For someone who is giving parts of their life to help others, the amount of entitlement people display is unbelievable.
Even for people that are paying you for a product, sometimes you really need to say no, that is how it works, and no, we are not going to customize it for your particular workflow.
The easier option is to simply not respond. Even this leaves some level of stress, because we've all had our own bug report or pull request ignored. We don't want to be this uncaring person on the other side. But sadly, ignoring other people's requests is the best option involuntary maintainers have to protect themselves.
The final option is to just hand out the keys to whoever is the most active contributor, and stop trying to control the direction of the project. This will get the stress-level down. You may not like this option, but it's much better than burning out.
2. TFA really gets at the classic issue of working vs. managing. Technical people tend toward being better at the former. Context switching between doing the work and managing the work is hard. Punting the management and diving into the work almost seems an escape.
3. The code itself makes this point to us. We can define an integer, and there it is. But when we need a list of integers? Look at how the management required just exploded.
4. Management sucks, but a management vacuum is a void, indeed. Let us be thankful for managers who don't suck.
I agree that a lot of that article feels like two things (but I'm not going to remove all nuance; there are clearly other issues):
1. The issue of being forced to be a manager when you want to be an individual contributor, and even worse, feels like being forced to be a TLM when you want to be an individual contributor.
2. Even if you take away the aspects of #1, it feels like a lack of notion that individuals don't build software past a certain point.
I mean that in the sense that, IMHO, past a certain point, all software becomes team built (or fails), whether you are the manager or not. High level software engineering is a team sport, not an individual one. That is true even outside of management - it's also about technically mentoring and growing the team that builds your software so you can rely on them instead of you (which is not just a manager task).
Every person you mentor into someone capable of doing the kind of work you want done is going to increase productivity on the software a lot more than you trying to do it yourself. Over time, they will also be able to build your team.
Over time and scale, if you don't have a team to rely on, and actually start relying on them, you will just feel crushed. Utterly utterly crushed.
There is no path out of it that involves only getting better yourself. You simply cannot scale past a certain point.
Floating out there in the cloud, you have to be a DBA, a network engineer, a browser jockey, a middleware dev, a sysadmin, a security specialist, an architect. . .
. . .and that's without being handed someone else's effort that is probably not in the desirable state for reasons you'll never know.
It's blatantly obvious to even the most casual observer that anybody claiming to know "all that stuff" is having you on.
That the internet works AT ALL is somewhat amazing when you ponder it.
Let your project be less than it could be.
Otherwise, the negative vitriol/etc of people who won't accept that is amazingly overwhelming.
Put another way: imagine, in a housing thread on HN, what the response is any time someone says "yeah, but I'm just happy to have my town stay the way it is. We don't want growth, we are happy, and we should be allowed to have that choice."
While the situations are different, for sure, it's the same type of response you get when you say "yeah, I'm happy with my open source project the way it is, sorry."
If you can't come to peace with that possibility, this approach isn't for you. Having to close up isn't an inevitability, but grappling with that prospect is an essential exercise.
It's truly not easy for ordinary humans to withstand the vitriol of a furious userbase. So, remove yourself from the channels where that vitriol gushes and roils.
The key is avoid making promises you can't keep -- either explicitly or implicitly. Putting a project on Github, home of "social coding", makes it hard to tune out the din. So... maybe don't keep your project on Github.
Sure, but I mean in the sense that what you suggest is not a viable method for a lot of projects unless, as you say, they also disconnect completely from the world.
That is not the same as existential futility - it is going off grid.
Embracing existential futility puts you in a state of mind to accept the consequences of subsequent concrete actions which limit future prospects of glory.
" Gradations are possible, and the way you achieve them is by strategically choosing the terms of engagement."
In my experience, you are simply wrong.
You don't get to actually make this happen. You may want it to happen, and i agree it would be very nice. But it is also unrealistic, in the same way lots of polarizing things are but shouldn't be.
So you may not like that dichotomy, but in my experience, it is the practical effect.
I'd love to see your examples of projects that are both fairly successful and choose somewhere along the gradient you have in mind.
Well, if you want to, just find a project you like and search their bug tracker. Usually there's a lot of shit that's trivial which no one could find time for.
You know, the stuff that people will be like "OpenOffice's About Page hasn't been disableable for over fifteen years. This bug report is untouched!" Or whatever. You can fix it in five and be happy.
Or not – you're not obligated to contribute!
* Relationships tied to the project
It sort of burned me out when the first contributors moved on. I have been on the other side too, though: interests change, or you change jobs.
* Every user issue is urgent.
It is hard to figure out what the most important thing to work on is. If a user takes the time to file an issue, it's because something is causing them real problems. I've been on the other side, and it doesn't feel great to be ignored.
* Community members with different communication styles
People aren't so much assholes as they just communicate differently. It is really hard to mediate; no one is at fault, but I hate seeing people leave or get burned out over it.
I have some code on Github and got a change request. I just replied that I can do it for my currently hourly rate. That makes priorities clear.
It is nice to suspend disbelief and just all work together on making things that make people happy :)
I also maintain a 10,000+ star OSS database on GitHub (https://github.com/amark/gun), and can relate to a lot of these points.
Worse than the jerks, though, are the bike-shedders. It is emotionally much easier to confront jerks, even though it is hurtful.
But the people who ask "Why isn't this written in X language?" or send "Here's a PR to turn tabs into spaces" are honestly well-intended, but naive and short-sighted.
Explaining to those people why you have to reject their PR, because it provides no additional value and is purely an arbitrary distraction, takes quite a bit of delicate wording, and is nearly as much of a waste of time as the PR itself.
There are a few core contributors to the project now and a Discord server as well and everything in this article rings true to me.
It becomes an obligation of sorts and you definitely feel like you're letting people down, especially when it comes to people who have taken the time to contribute and share feedback, let alone people who actually open PRs.
What I think can help is having more core contributors with write permissions, share the load. Or be up front in your readme and say, "I only check this once a quarter".
That's an interesting point and thought experiment: what piece of software currently exists that will still be widely used in a century? And what's the oldest software that's still being used currently?
I am very interested in this question, but much of software is proprietary and you don't have much visibility into their history.
For software in the public, a trivial lower bound is GCC 1.0 in 1987. Another case I know of is the Community Climate System Model, in continued development since 1983.
So basically, the age of software is limited by its hardware. (It's similar with animals: if nobody bumps them off and the hardware keeps ticking, the software probably will too... fish and mammals can live for hundreds of years)
E.g. the CIP project aims to maintain released kernels for 25 years. https://www.cip-project.org/
And yet, via Github, the community has reinvented the centralised model of software development. For any project, there is a single instance that is the "blessed" version, by virtue of having the most stars or collaborators, or owning the associated package name. 99% of forks are nothing more than a place where you create a feature branch that you intend to PR to the blessed project.
I hope that, one day, somebody invents a git-based project ecosystem that solves the problem of decentralised project management. I don't know exactly what this would look like, but I think it would need to embed the idea of community and contributor consensus and democracy at a fundamental level, separating the concept of the project and project ownership from any individual fork.
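The "fork as a staging area for one feature branch" pattern described above can be sketched with plain git, simulated entirely locally; all paths and commit messages here are hypothetical:

```shell
# One "blessed" bare repository, plus a "fork" whose only purpose is to
# host a feature branch headed back upstream.
set -e
tmp=$(mktemp -d)

git init -q --bare "$tmp/blessed.git"                    # the blessed repo
git clone -q "$tmp/blessed.git" "$tmp/maintainer" 2>/dev/null
cd "$tmp/maintainer"
git -c user.name=m -c user.email=m@x commit -q --allow-empty -m "initial"
git push -q origin HEAD

git clone -q "$tmp/blessed.git" "$tmp/fork"              # contributor's "fork"
cd "$tmp/fork"
git checkout -q -b feature                               # branch destined for a PR
git -c user.name=c -c user.email=c@x commit -q --allow-empty -m "my patch"
git push -q origin feature                               # i.e. "open a pull request"

git -C "$tmp/blessed.git" branch                         # both branches now upstream
```

Nothing in git itself forces this shape; the contributor's clone is a full peer, yet socially it exists only to feed branches back to the single blessed copy.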
Think about the author of Git himself - Linus. He didn’t write Git because he decided one day that he was tired of being the final decision maker for the Linux kernel, and that he now wanted a thousand decentralized kernels to bloom, each with their own management and maintainers and trust levels.
It was designed to give everyone a way to work with full featured version control on their own machines, and set up adhoc networks of version control between any consenting parties. But that did not imply that the power structures that make up an organization would automatically be demolished.
Github is the extension of Git as it was designed. Maintainers on Github follow similar organizational policies as the Linux kernel, and some are even codified into Github itself. But none of this is counter to the purpose of Git, or even its ethos.
Regarding having a centralised service like Github: I can't see how that impacts how democratic anything is at all. I can pull from it, I can push to it, I can fork on it. Its centralized nature doesn't seem to have any impact on participation to me. It just seems to be a matter of convenience: the fewer places I have to look for the thing I want, the better my experience is.
I don't think this is true. It was developed by Torvalds to maintain the Linux kernel, which has always had a "blessed" repository at kernel.org.
(Also, there is more than one "blessed" Linux repo that a lot of people use. There's linux-rt, linux-stable, etc.)
I think the issue is more about how to structure pull requests, issues, etc. in a better way (maybe hierarchically), so that the process scales with the exponentially growing volume of communication. GitHub does not help much with this, but it doesn't really fight against it either. This is basically a problem every big project has to manage in some way. For comparison, I would look at other big projects like TensorFlow, Firefox, Chromium, Linux, CPython, etc.
1. accountability: where do I go looking if something is broken?
2. and discoverability: where to search, and whom to believe as being the correct owner of the copy of the codebase.
This is similar to blockchain currency (based on my limited understanding; correct me if I am wrong): while it is meant to be decentralised, entities like Coinbase effectively create a central entry/exit point somewhere in the ecosystem.
And on the other side as a maintainer, any bug report that demonstrates a clear issue (crash, hang, etc) with a reproducible setup is appreciated.
What can be really frustrating are bug reports that aren't actually bugs, where a better understanding of the software would have let the reporter solve the problem themselves. For example, there was one issue on Slic3r asking for PDF output of config files. There is really no reason to do that.
Every project has accepted everything whenever I've actually tried to help with the actual work they're doing, i.e. solving a bug or submitting a pull request fixing some documentation, something that reduces their todo-list instead of adding to it.
I’m for quite strict tax enforcement but that should be changed.
Also of course private companies need to start stepping up and providing the often only small sums of money needed to fix these projects via bug bounties and fees to the maintainers.
I once saw a sig somewhere along the lines of the author's goal being to write some piece of software that would outlive them. That seemed like a neat but incredibly ambitious goal; very, very few pieces of software will meet it.
That said, Redis is probably one of them. It isn't too hard to imagine that there will be Redis instances running somewhere in 40 years, and though I wish antirez a long life, I think it is likely there will still be Redis instances chugging along after he passes.
So be proud antirez! You've most certainly made a dent in the universe.
If the maintainer never learned and practiced "role shifting" as a part of the developer process, they are going to have a bad time if the project gains any traction whatsoever.
Worse, there's no easy way for a mentor to swoop in and teach them basic role shifting tactics. Because such an attempt would likely be interpreted as just a user giving yet another uninformed opinion. Edit: And such a maintainer is unlikely to be able to delegate to newcomers since mentoring isn't part of their skillset. (Thus the vicious cycle.)
Find these people once your project reaches a stable point from a creative POV. Then help these people get started, not the users. Then you will see that you need to do less and less. And when you are at about 30% load start using your time to find the next problem that you can solve.
No reason to be ashamed of either. There are loads of people who really love this continuous lifestyle, and there are few people who actually love finding a new solution to a problem that nobody can solve. Everybody will be happier in the end, even the users.
PS: You can also see it in a different way. There's not just jenkins and github and AWS to setup as a system to get your project running. People are also part of the system. People who move the issues along these pipelines. There is no system if people aren't part of it. And there are not just devs and users, there are also maintainers, communicators etc.
It doesn't have to be. What if all the money from the people who "pay" for Redis went into a collection bucket, and nobody got the money until they earned it? Then you work on your own schedule, "earn" that money at a fixed dollar rate per hour, and claim it at the end. Other trusted contributors can join this effort and claim the money after working the hours. This way, hours are up to the contributors, and the money is more like a "pledge" on Kickstarter, for example, or a check that's not yet cashed.
The problem I have had with my open source projects so far is that most of them seem to not be very popular and they receive very little if any feedback at all.
It's gotten to the point where I feel like popularity and merit are not even directly related. It seems like maybe part of it is tied to some kind of social networking popularity contest. Certainly I have to think that way otherwise I would never try to contribute anything again.
Companies post their issues on our platform and contributors are incentivised by getting to know a company, a monetary reward and the potential of raising their profile with the company to get hired.
We're looking for feedback so if anyone has any thoughts -
I might be missing something, of course, since your site doesn't work. Every time I tried to click on a link, nothing happened. Firefox's dev console had this warning:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://api.segment.io/v1/p.
Looks like your tracking is broken.
BTW, I'm kinda curious: how come you have "25,000+ software engineers on WorksHub" but only 8 issues, all of them from your own company?
"Worse than a failed project/business is a successful one"
I am part of some "small scale" projects, and one thing that some people fear very much is that the project GETS traction. This has killed some businesses before.
We detached this subthread from https://news.ycombinator.com/item?id=19936320 and marked it off-topic.
IMHO, it was quite confrontational even if you mentally leave it out: "... under what moral framework do you find it reasonable to bring more than 10 children into this world?"
Now I have software that is stupid fast to extend and maintain. My software executes faster than comparable applications coming out of Facebook, with more options, and at a very tiny size after accounting for dependencies. Now I finally have time to watch all the shows I have missed and play games, since my software has fewer defects compared to similar applications and is faster to patch.
I did get tired of the asshole factor, though, and even deleted my Reddit account as a result. Now it's harder to promote my software online, but deleting my Reddit account also opened up more personal time.
... it's been interesting. I'm trying to give everyone the best of both worlds but since I'm catering to a large user base I have to make sure to keep everyone happy.
Some people DO NOT want to do anything cloud. Some want their own cloud. Others want cloud and they want it easy.
I think part of what I'm learning is that in order to get the trust from users I'm going to either have to be around for a long time or get the thumbs up from other organizations that can vouch for my positive intentions.
For example, I want to write up a document about our commitment to Open Source but of course that takes work and I don't have a ton of time.