For example, what is the connection between "Beautiful code gets rewritten" and "Refuse to work on systems that profit from digital addictions"?
It comes across as though the author is just vaguely angry at the loose concept of software.
It's true that the software ecosystem is incredibly disorganized and inefficient (hundreds of similar distros, dozens of similar package managers, thousands of redundant tools). But I don't think one should get bent out of shape about it. If you think about it objectively, if it were efficient then most of us likely wouldn't have jobs.
I see why you feel that way. The connections may be more emotional than logical, but I still find the topics thought-provoking.
> if it were efficient then most of us likely wouldn't have jobs.
This is exactly what I was thinking. I wonder to what extent industry complexity is embraced as a job creator.
I often feel something is fundamentally wrong in how society views the job "market".
I said it before, I believe technology is meant to free us. To make jobs unnecessary. It may sound weird but I always wish to find one day my job is automated or becomes unnecessary.
I think true engineers keep things ridiculously simple for everyone. For the team and for the customers.
I'm apparently still needed. :|
I got so mad when I heard he was being paid to do that. I'm not mad that he was being paid, but that his time was being wasted.
Give me a week, a pile of old junk, and a micro-controller and I could have a solution bashed together. I'm not saying it would be good, but at least it would be functional.
Defense spending is the only politically acceptable "make work" in the USA.
Oh dear. The future looks very dim indeed, as the species that complicates things will earn money and reproduce. Maybe this has been going on for a long time already.
If we continue to do things in the way they've been done, the total work produced by human effort will grow linearly with the total amount of human effort used.
If we invest human effort in improving the way things are done, then not only is that effort available for improving other things, all effort that would have been spent on the improved task can be put to other work. This allows the total work produced by human effort to grow exponentially with the total amount of human effort used.
Makes sense to me, though I personally am a bit bearish on whether humans are fit to be governors of anything, self- or other.
So programmers should write bad software to keep their jobs? And civil engineers should build bad bridges to build them again a bit later? And doctors?
Because digital bits are easily preserved, it was a tempting fantasy that we could build perfect software cathedrals immune to the ravages of time. Of course what we found is that in the physical world, matter decays, but the laws of physics remain constant. In the digital world, the matter stays the same, but the environment decays.
It's different because civil engineers and doctors will get sued if they mess up.
Software engineers are generally not held responsible for their own work.
Even big software consulting firms like IBM have had their share of multimillion-dollar failed projects, and the repercussions for them have been minimal: they still keep getting big contracts from governments around the world.
They say that no one ever got fired for choosing IBM... Maybe they should be!
There is a tendency to put all the responsibility on project managers who don't understand the code.
Success in the tech industry is so centralized (winner-takes-all) that the efficiency of programming practices doesn't matter at all.
Most top engineers at top companies are good at coming up with complex solutions, which give them more leverage over their employers.
When conducting job interviews, companies don't differentiate between engineers who love coding and those who love money. That's a big mistake.
At least on a subconscious level, the engineer who prefers money is mentally hardwired to increase complexity while the other engineer is hardwired to reduce complexity.
The entropy of the creation and evolution of software gets tangled. We can do our utmost best, but unless we spend an equal (or sometimes greater) amount of time untangling (refactoring), and unless customers' needs suddenly become simple and static and uniform so we don't need to be adding features, we'll still have jobs.
Here is a quote from that article:
"The costs to develop a new antibiotic drug are no less expensive compared to development of drugs for other therapeutic areas, yet the commercial potential and return on investment for companies developing new antibiotics are significantly lower than drugs to treat chronic conditions such as diabetes or heart disease," said Gary Disbrow, deputy director of the Biomedical Advanced Research and Development Authority, which sits within the U.S. Department of Health and Human Services.
> what is the connection between "Beautiful code gets rewritten" and "Refuse to work on systems that profit from digital addictions." ?
Both result from systematic perverse incentives caused by local rather than global optimisation processes. That's part of what I was trying to get at in part II. We really aren't very good at making collective decisions yet; neither narrow ones about how to build software, nor broader ones about what software to build. I don't have a solution, but in the meantime I'd like to try not to make things worse.
It looks like the author completed a BS and went straight into a PhD that he's still pursuing, with summer internships at Google, RethinkDB, and the Recurse Center. I wonder if the author's perspective would change if he worked in industry?
Not trying to ad hominem argue his points away, but I agree that one shouldn't be angry about the loose concept.
It's because I work in the industry that I find a lot of what he says resonating with me.
This one is very concise and provocative.
"It comes across as though the author is just vaguely angry at the loose concept of software."
Not really, the title was clearly deliberate hyperbole. He just wants us to all stop and think about what we're doing as developers.
Obviously, because of freedom of speech.
But why supposedly intelligent people take this sort of stuff seriously is beyond me...
To be fair, neither is anything as generalized as this article. I'd say it's more useful as a lens through which to consider software than a fact you could prove or disprove.
If you don't feel it's plausible at all that software is trending towards overcomplicated ugly, then yea you won't find much of interest here. If you do, you can start to draw parallels, look for counterexamples, consider courses of action, etc.
You may have started with a nice abstraction for one case, but you don't have time to reabstract every step of the way.
Mediocre/poor developers just think about what needs to get done. They will copy code that seems to do something similar and hack on it until it works.
I saw this a lot with asp pages back in the day. An entire application, that actually worked and was well liked by its users, was implemented as hundreds of totally stand-alone .asp files. When a new page was needed, the developer copied an old one and modified it. That made it pretty easy to add new features without breaking old ones, but made it very difficult to make cross-cutting changes such as changing the name or type of a field on multiple pages, changing page headers, changing database connections (yes, they were also duplicated in every file) etc.
This is also how you end up with a "20k line VB.Net disaster." They are created by developers who know just enough to make something work, but don't know about abstraction and modularization or aren't smart enough or experienced enough to see the abstractions and to keep track of what's going on when the code is split out over a dozen files or modules. Or possibly, just don't care.
When you're eventually wrong, someone must pay the incredibly expensive price of de-abstraction. Which can become so untenable that it makes more sense to escape-hatch out of it for a new business requirement. If you disagree, then shrink the deadline until you do.
What happens with technical debt is that every new or changed feature incurs disproportionate costs. And all costs, at the end of the day, boil down to time. Even the best developer has the same finite resource of time as everyone else, and they are stuck choosing the best of suboptimal solutions once bounded by deadlines.
This is why, when encountering a mess of a codebase, it's naive to conclude "wow, what a bunch of amateurs." And that's exactly what I thought at my first job out of university. Eventually I realized that software is just hard and there is never enough time. The more experienced you get, the better you are at writing code that can be changed or thrown away. But you're still only minimizing the bad, not eliminating the bad, so on a long enough time scale with enough monkey wrenches of time constraints and requirement churn, technical debt is inevitable.
Experience is obviously good but people tend to get inflexible and dogmatic, too.
I totally get the low job security, the low pay, and the "I don't care" parts, and have written bad code because of all of those. Still, if you put in even minimal effort, it pays off.
Having something like that is like a pressure point for business risk. A small nudge here, and the whole shaky house of cards violently implodes ejecting all kinds of badness, monetary and otherwise, over those around it.
And it is not just the bad component, it is the overall system design which permits this and does not support change within the system ("modifiability", "extensibility", ...).
Point: it is not as easy as saying "oh, this is because of that incompetent asshat". The incompetent asshattery is a systemic phenomenon with many actors. Death by a thousand small bad choices; the 20 LOC of untouchable code is just a manifestation of the bigger problems.
But rather than spend any more time talking about bad code, I'm going to go now and try and write some good code.
I don't think he's advocating for ugly code.
One can look at it as a sign of a rather efficient ecosystem.
It is so easy to make and distribute certain things that we now have a bunch of duplicates that differ only in small details.
I'd dare to call it pretentious: "your code is bad and it's your fault the world is a mess but I, the genius not at fault for and detached from any of this, know how to fix all of it".
Overall it seems quite reasonable, except maybe for the tone, which was apparently off-putting to many.
I think a lot of code is bad, and the world is a mess. But I have no idea about your code, and I'm not sure who is at fault, or even if that's a useful question to ask. I'm certainly no genius, and I definitely don't know how to fix all of it.
I read it as: "We can do better but the current way of doing things looks broken to me. We should start by replacing it with something VERY different."
That's not how any kind of writing works. You aren't allowed to go, "This is what I wrote, but the way this poster wrote it is what I actually mean." The author will always carry some of the responsibility for interpretation via the words and grammar they use. Reader interpretation is not a way to handwave criticism of the writer's tone or the content they share.
Basically, how you choose to read it is part of the experience, like reading literature.
Hence I believe that people are projecting while reading it. I know I did.
Basically, software hovers along the fuzzy line between "barely works" and "doesn't work". When we start a project, it doesn't work. We add code until it barely works. Then we break it, so it doesn't work. So we add enough scaffold that it barely works and can support the new features, until we break it again, and the cycle begins anew. This is actually embedded as principle in TSTTCPW (the simplest thing that could possibly work), but attempts to waterfall our way out by careful planning are generally doomed, due to unintended consequences and such.
Eventually, the software becomes broken and no longer worth fixing (and thus gets abandoned), or it hits feature complete while (barely) working. And what happens when it's "complete"? Do we say "Thank goodness, finally I can go fix all those bugs and design flaws!"? No, never.
Instead, we drop that blecherous losing piece of crap as fast as we can and go start on the shiny new project that's been cooking in our minds, the one we couldn't work on because we were hacking our way through the wretched ball of mud that is our old code, wasting time in frustration and shame, just to get it off our plate. So we start working on the shiny new thing. And this time, the code will be good. This time, it won't suck.
Some software exists outside of this environment, usually because it is beholden to an impeccable level of quality (defined as never being caught in a non-deterministic state, being resilient, etc.). This can be:
— software used by millions of people, all poking at obscure edge cases;
— software used by lots of other software, for the same reasons;
— software commissioned by high stakes businesses (space, life-support systems...), where the amount of life or money lost, should the software fail, is simply unacceptable.
This in turn guides what I decide to work on. If your employer/client views your software purely as a cost center, you can guarantee the quality you are allowed to deliver will always remain mediocre. Good enough to support the business but no more. I learned to stay away from those industries.
For purposes of its extreme performance and availability requirements, it barely worked. That doesn't mean it was bad code! It was great code. But it could have been better. It could have been much better. And it wasn't, because once it met those requirements, work got applied to expanding its feature set instead. Which was, in a way, also a requirement.
Pick any "impeccable level of quality" system, and you'll find the same thing. Heart monitors. Mars launches. Whatever. The software barely works, given the difficulty of the requirements. I'd argue that it's actually irresponsible for businesses to try to do any better than that! The cost of bouncing the rubble with code quality is a resource that could be much better spent on other things.
Think of it in terms of the 80/20 rule. Dig into that last 20%, and you'll spend 80% of your effort on it. It's not a good tradeoff. The Pareto Principle isn't a measure of how bad we are, it's a measure of how good we are. When we attend to its wisdom, we get the most work done.
edit: Scope is not the only requirement. Schedule is also a requirement! Pareto Principle again! Taking four times as long to get 20% better code is failure. We don't just have to create features. We create features with the resources we have, the technology we have, the people we have, within the schedule we have. A blown schedule is a failed requirement. Software not delivered on time is, in fact, broken.
Which doesn't mean that we have no reusable or generalizable quality metrics - we do. But they tend to express things that correlate to an abstract of quality, and not a failure threshold.
The most honest systems in software are the ones that do not scale, because they have built in their hard failure point - and from that, tolerance metrics similar to bridges can be predicted and planned for. It's software that has to do everything in a multitude of configurations at great speed that runs into deep architectural issues, because it hypothesizes a bridge that will someday tolerate an infinite load.
For example, take a deployment configuration where the target environments are hardcoded, with duplicate-minus-variation configuration files. Works fine, for the existing environments. But it can't handle a new one, not without duplicating again. Correcting this means going to some sort of a template configuration with parameters in a data store.
Bit rot is part of this. Works fine in Windows 10, but it'll fail come Windows 11, etc.
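The template-configuration fix described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual deployment tooling: `render` and the `{host}`/`{port}` placeholder syntax are invented for the example.

```rust
use std::collections::HashMap;

// One template plus a per-environment parameter set, instead of
// one near-duplicate config file per environment.
fn render(template: &str, params: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in params {
        // Substitute each "{key}" placeholder with its value.
        out = out.replace(&format!("{{{}}}", key), value);
    }
    out
}

fn main() {
    let template = "host={host} port={port}";

    let mut staging = HashMap::new();
    staging.insert("host", "staging.example.com");
    staging.insert("port", "8080");

    // Adding a new environment means adding a parameter set,
    // not copying and hand-editing a whole config file.
    assert_eq!(
        render(template, &staging),
        "host=staging.example.com port=8080"
    );
}
```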
And yes, part of what I'm arguing is that it doesn't even make sense for software to be better than barely working. I've hit a point where I think code that works too well is a project management smell.
> "Architecture is a hypothesis about the future that holds that subsequent change will be confined to that part of the design space encompassed by that architecture."
Generally speaking, programming is easier when you have clear feedback. Things either work or they don't, and you don't have to do a lot of expensive testing to gather statistics to show there's a bug.
The alternative is smaller code, less tightly coupled, that uses fewer libraries and more beginner-level language constructs.
There's no technical impediment to that. It's actually less work technically. It's just hard to design that way, because you have to actually understand your domain much more deeply.
That's where stability comes from, not redundancy but the opposite of that. Fewer parts, better understood.
And yet, brittleness is supposed to be bad. How is it different?
Brittle means "not enough flexibility, so it breaks in some contexts".
Removing a component is not a stated use. If a component can be removed during the stated use then that's not minimal, it needs more thought or parts.
Software replaces people with machines and increases capital's share of income. I have heard technology is the biggest driver of growing inequality, and at this point in history technological change is driven by software.
I think the old nostrums about creative destruction and labor saving technologies freeing up labor to pursue higher roles are true in the aggregate, but the aggregate obscures a lot of human wreckage left behind by people who were laid off and never rehired at a comparable level.
I consider this process inevitable which is why I am trying to get ahead of it. But that's a fundamentally selfish motivation no matter how much you gussy it up. Being honest about this is one thing that keeps my aspirations modest - I simply want to carve out a place for myself where I feel comfortable. I think buying into the creative destruction rhetoric makes it easier to harden your heart.
I've seen new languages and new development methodologies introduced to try to combat code complexity, and the only thing I have seen so far that prevents it is people who take a step back and think about what they're creating before implementing it. These people are rare and go unrewarded for their efforts.
Nobody seems to be able to think even two steps ahead; many fail to think even one step ahead sufficiently to answer "why" questions about their current task.
Just so, extensible code gets extended and shimmed and customized until under its own sheer weight it collapses, then replaced by a monolith that Just Works.
aesthetics alone keep us from the hell
Beware of comparing "beautiful" code to subsequent code that deals with 10x the complexity in requirements. Either it's beautiful because it's perfectly suited to the scale and complexity it was written for, in which case it's impossible to preserve that beauty in the later, more complex code, or it's "beautiful" because it has a bunch of premature abstractions built in that somebody fantasized would handle all future complexity, in which case the "beauty" was useless work that later programmers had to undo. When the supposed genericity and extensibility of a system doesn't survive the evolution of the system to meet subsequent requirements, my first suspicion is premature abstraction, not insufficiently tasteful programmers.
Even worse, the effort to dismantle this "beauty" is often considerable. Architecture, defined as the set of decisions that are costly to change, needs to anticipate the drastically different needs of the future. You rarely find architecture within the code of a single system early in its development, because code at that stage should be easy to rewrite. Code seen as "beautiful" by its author often violates this rule, creating architecture where architecture shouldn't exist.
This point is well taken, and I think that many organizations would benefit from having a CGR role ("chief grim reaper"), whose job would be not to enhance the codebase or its functionality, but to do the opposite: to kill/simplify code and prevent the codebase from exceeding a certain net complexity over time. Simplicity itself needs to be treated as a core feature, and the success of the CGR needs to be defined by whether that core feature is maintained.
Ever stop to think that maybe this is a good thing?
That code doesn't exist to be beautiful -- code exists to get the job done. Beautiful doesn't mean it's not also useless or arbitrary or over-architected, while ugly can be a healthy balance of competing priorities -- maybe not clean, but a good series of necessary real-world compromises.
This isn't to defend all (or even most) ugly code as good... but that equating beauty with good comes across as terribly naive.
I've spent so many hours obscuring the true purpose of chunks of code just so they can tick some zealous reviewer's checklist of patterns they like...
I think a sense of beauty can let us know that something is amiss with the bigger picture. It can provoke us into stepping back and asking: what is actually necessary? Which jobs really need doing, and at what cost?
Certainly, short-termism is not the only failure mode when writing software. Abstraction for abstraction's sake is another.
I've noticed lately that people seem to assume that if you say something negative, you're trying to make a big existential claim about its place in the universe.
Like when people criticize masculinity and then we are all aghast as if they said masculinity needs to be systematically extracted from all aspects of culture.
I'm not sure which side of the equation needs to be more careful though. Maybe, in the internet age with the globe as your audience, if people are criticizing things they need to add a bunch of caveats to make it clear what they're not saying.
> Easy-to-change code tends to be changed. Hard-to-change code tends to endure. Thus, a codebase tends to become harder to change over time.
I can't recall exactly where I saw that version, but I like it better. It's more clear what it means and why it's true.
Not really. If I wanted to convince skeptics, I would have to make it much longer and more precise, and then it would be a very different kind of thing. Maybe I'll do that some other day.
The connections between the sections are fairly obscure, I admit. But I think many "universal truths of software development" are instances of more general problems to do with collective decision-making and local optimisation processes, which also produce the problems the last section mentions. I don't have a solution, but in the meantime I'd like to avoid making things worse.
If I think this doesn't apply to me because I don't have a choice, I'm disempowering myself. My choices may be hard to make, but abducted women still try to flee. We know because of the ones who escape.
If I think others don't have a choice, I'm disempowering them. I choose to believe everyone can learn to become more empowered and stand up to those they've been giving power to.
That being said, many devs CAN make the choice, but I don't think it's fair to blame those who can't. Not everywhere is booming to the same degree, and not everyone's life experience allows them the freedom to move to an area that is.
Blame is nonsense black-and-white thinking. There is shared responsibility in everything. I share in the responsibility of all things that occur after I take my next breath.
Abandon blame and it's possible to read what I wrote without finding anything wrong with it.
1) There is a possibility that the empowered person did not make the decision(s) that would have resulted in success. Maybe there was no decision chain leading to success. Good luck deciding.
2) The environment that caused failure could be poorly optimized to minimize failure over a set of empowered persons. Maybe it is optimized and the individuals' paths to success couldn't be improved. Good luck determining this; optimization of complex functions is not easy.
The classes are not mutually exclusive. Ignoring the individual's actions (the first class) seems like a poor approach to reducing net failures.
These lines of reasoning always remind me of a quote (from an admittedly cheesy source) that I think holds a useful sentiment.
"There's always someone who'll try to convince you that they know the answer no matter the question. Be wary of those who believe in a neat little world because it's just fucking crazy, you know that it is." --DMB
(I am not accusing the parent of being a person of which to be wary, their comments seem very reasonable).
* To invest with power, especially legal power or official authority.
* To equip or supply with an ability.
Give to individuals:
Legal power over their persons (liberty), as much as is possible without infringing on the legal power of other persons.
The ability and knowledge to wield this power to maximize their individual outcomes.
These concepts are well defined by the founders of the USA in a really beautiful and rigorous fashion. Read them thoroughly and you will learn more than I can convey.
I'm not sure why you would trust me, a stranger on the internet to define these concepts to you. Would you write off the foundational ideas of liberty if I were not able to convey these concepts with the fluency they deserve?
I'm picking up a really ineffective and/or deceptive argument strategy from you. The guidelines of this site state:
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
If your plan is to ask deconstructive questions until I slip up, know that you are abusing the Socratic method. It is a cooperative argument exercise used to promote critical thinking. I did not consent to, nor do I have the time for, a critical-thinking exercise. Refuting a sub-hypothesis says little about the school of thought that may have spawned it.
I am only writing this reply because I have seen similar threads spread throughout internet boards since I started browsing. The strategy (not you) is an evil cancer that degrades effective discussions, browbeats posters into submission, and kills communities. If I were to host a discussion platform, I would do my best to cull this practice (without consent). I believe that is the spirit of that point in Paul's guidelines. Please do better.
"You can't always get what you want. But if you try some time, you just might find you get what you need."
But again, not always.
Very true. Recognizing one's own power and stepping into it is an extreme privilege. We don't live in a society where this is often role modeled.
I was thinking even in the case of familial obligations. It's not a lack of power that's keeping you taking care of your schizophrenic mother, it's compassion.
This is a creative process that's part engineering, part art: some people choose to take proven components and techniques and apply them in predictable ways, while others choose to experiment with new techniques and come up with novel solutions. But any tradeoff has a cost, and future maintainers may not appreciate a solution whose meaty details aren't obvious.
While the creation of software may be a part-creative process, its maintenance is purely pragmatic: fixes to unforeseen problems need to be delivered quickly above all else, and enhancements along all extension points need to be possible, not just along axes the original authors intended. It's not hard to see that elegant code might be under immense pressure from more pragmatic modifications, any one of which can endanger its status in the eye of a beholder. This is likely why this article's author sees 'ugly' code survive: presumably, small changes are sufficient to make beautiful code ugly, while already-ugly code faces no such aesthetic pressure.
I can't stand the unnecessary complexity, fragmentation and duplication in software. The entire industry is a mess. Very few people seem to care.
If you're using a shitty compiler, yes.
Who knows what the next frontier will be?
> You might be wondering whether there is a runtime cost when you’re using generic type parameters. The good news is that Rust implements generics in such a way that your code doesn’t run any slower using generic types than it would with concrete types.
> Rust accomplishes this by performing monomorphization of the code that is using generics at compile time. Monomorphization is the process of turning generic code into specific code by filling in the concrete types that are used when compiled.
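A minimal sketch of what the quoted passage describes. `largest` is an invented example function, not from the book; the point is that a single generic definition is compiled into a separate concrete copy per type used, so the calls dispatch statically.

```rust
// One generic definition in the source code...
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &item in &items[1..] {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    // ...from which the compiler emits one specialized copy per
    // concrete type it is instantiated with, with no runtime
    // dispatch or boxing involved.
    assert_eq!(largest(&[3, 7, 2]), 7);     // monomorphized for i32
    assert_eq!(largest(&[1.5, 0.5]), 1.5);  // monomorphized for f64
}
```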
Let's assume for example you have a generic hash map where you just plug the type in, which must be hashable. Now if you plug in an integer type, monomorphization or whatever fancy compilation techniques will make that faster. But the best way to map an integer range is still a plain array.
To put it in simple terms, compilers can make shit run faster, but they can't make it not shit.
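The array-vs-hash-map point above can be sketched in Rust. This assumes the keys are known to be the dense range 0..N; `square_table` is a made-up example payload.

```rust
use std::collections::HashMap;

// Assumed dense integer keys 0..n: the "map" is just a Vec
// indexed by the key itself.
fn square_table(n: usize) -> Vec<u64> {
    (0..n).map(|k| (k * k) as u64).collect()
}

fn main() {
    const N: usize = 1_000;

    // Plain array map: a lookup is a single bounds-checked index.
    let array_map = square_table(N);

    // Generic hash map: every lookup hashes the key and probes
    // buckets, however well the generic code was monomorphized.
    let hash_map: HashMap<usize, u64> =
        (0..N).map(|k| (k, (k * k) as u64)).collect();

    // Both agree on the answer; they differ in the work per lookup.
    assert_eq!(array_map[42], 1764);
    assert_eq!(hash_map[&42], 1764);
}
```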
Eh, only if you know in advance that:
1. Your integer keys can only ever come from a contiguous range of values
2. Once populated, your map will have values for at least 25-50% of the integers in that range.
If either of these assumptions are false, using an array will force you to allocate way more memory than you need to hold the elements in your map, and the resulting map will be sparse, and therefore cache-unfriendly.
Arrays are not better than hash maps that use integer keys in the general case.
[update: I realized that I mistyped my conclusion sentence, and accidentally wrote the opposite of what I meant. Now updated]
And that is my point. I'm saying Sufficiently Smart Compilers do not exist.
Could you show a real world and readable example of this, using a language and compiler that actually exist?
Also, is it really better specifying all the information to possibly help the compiler come to the conclusion that it should choose an array, than simply typing "Array"?
> Also, is it really better specifying all the information to possibly help the compiler come to the conclusion that it should choose an array, than simply typing "Array"?
Obviously, right? The end result (target code) is the same, but now you have two advantages: (1) your compiler will check whether your assumptions are right, i.e. that your keys are dense where you think they're dense, and if not you'll get a type error at compile time, so you're not wasting space at runtime as you would if you were using Array<int>; (2) your code no longer has specialized data structures (hash map vs array for the same logic); it's generic, so it's more readable.
That's just another way of saying that my rebuttal was spot-on. I don't even need to ask if you have any code to show, because I know you haven't. Seriously consider http://wiki.c2.com/?SufficientlySmartCompiler
We could agree that the latter can be implemented relatively efficiently (given a few concrete examples to clarify what that means). But that does not mean that their use, or more generally, the use of generic code, is always the best and most efficient to do. It is not, by far.
Furthermore template specialization is usually a bad idea. Think how we just LOVE std::vector<bool> (/s). Also std::unordered_map<int> is not specialized to an array implementation. Because you can't tell from the type whether it will map a contiguous range.
I've done my homework. Your turn.
But maybe you're right and I should have ignored it...
A hashmap is not the generic version of an array.
Provided a hash function for the key type exists, the hash map is one valid implementation for a generic map from keys to values. An array is a more efficient map from keys to values if the key type is an integral type and the actual keys at runtime are contiguous. It's also efficient if the keys are only nearly contiguous, and we have a "N/A" sentinel in the value type.
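The nearly-contiguous-keys-with-sentinel idea above reads naturally in Rust as a `Vec<Option<V>>`, with `None` playing the "N/A" role. `lookup` is an invented helper for illustration.

```rust
// Nearly-contiguous integer keys mapped via a plain Vec,
// with None as the sentinel for unassigned keys.
fn lookup<'a>(names: &[Option<&'a str>], id: usize) -> Option<&'a str> {
    // Out-of-range ids and in-range gaps both come back as None.
    names.get(id).copied().flatten()
}

fn main() {
    let names: Vec<Option<&str>> = vec![
        Some("alice"), None, Some("bob"), None, Some("carol"),
    ];
    assert_eq!(lookup(&names, 2), Some("bob"));
    assert_eq!(lookup(&names, 1), None);  // key in range, no value
    assert_eq!(lookup(&names, 99), None); // key out of range entirely
}
```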
Plus, I'm not sure what optimizing =<< and concatMap has to do with optimizing generic vs. concrete behavior.
Sometimes the good jobs are where you aren't. Sometimes the good jobs won't return your calls. Sometimes you have to choose between your morals and your wife, deportation, debt, or all together.
To give a concrete example: let's say I have to choose between unemployment and working in a casino (assuming I believe casinos to be evil). I can work there and do nothing. I can work there and donate an X amount of my salary to addiction-recovery NGOs. I can refuse to work there and compromise my (hypothetical) children's well-being.
Which one of these is the "moral" choice? I can't say. And I'm willing to bet that you can make a moral argument for each one of them.
I'm with you in that sticking to your morals often involves sacrifice. But I wouldn't assume that the weights I give each aspect of my morals are the same as everybody else's.
If you are asking that question without the parent as context, I would say that it seems like the only distinction between morals and preferences is one of definition.
Also: GPL code gets rewritten, BSD/MIT code survives.
Given the choice between two otherwise similar frameworks/libraries, I would expect most to gravitate towards the non-GPL'd one since there is less friction. i.e. you don't even have to consider things like the GPL linking rules.
[a bit oversimplified] You want to use this GPL library? Great! Just release your own software as GPL!
It's not complicated, per se; it's just that some people/businesses are not OK with that condition (which is their prerogative).
The point is that if people/businesses are not okay with using it, then that hurts the survival of that thing.
There was a wide-sweeping statement about a set that I believe to be generally true for one subset*, and it was refuted with cherry-picked examples from a disjoint subset.
* Of a significant size.
(I have the opposite experience: in the beginning I just used the GPL because it seemed fine, but thanks to some real-life experience, I came to realize how my own political views are embodied in it and how wrong I'd feel producing code that isn't GPL.) (I'm not saying the GPL is perfect, but it's way better than the MIT license for me.)
> Refuse to work on systems that centralize control of media.
> Refuse to work on systems that prop up an unjust status quo.
> Refuse to work on systems that require unsustainable tradeoffs.
> Refuse to work on systems that weaponize the fabric of society.
Rich people can afford to turn their nose up at any kind of work apparently. Also noteworthy that the internet was invented as a result of "weaponizing"...so put that phone down.
Oddly enough, all of this also invalidates working on most open source code, since the OSI says a license should not discriminate against intended use.
Dirt poor people have made much harder choices all over the world. And they had families and everything. It's called dignity, and it's not a luxury reserved for the rich.
>Also noteworthy that the internet was invented as a result of "weaponizing"...so put that phone down.
Which is neither here nor there. The internet was not some special invention that only the military could make. It was a design created to solve certain constraints. If the military hadn't worked on it (e.g. because programmers refused to do it), it would have been developed by some company or university.
Besides, it took 3+ decades and lots of work after its invention until it got into the hands of the people, thanks to publicly funded telephone network infrastructure, academic work (from MIT to CERN), and private companies (CompuServe, AOL and co). Left to the military it would be an insignificant niche network.
That's fair, but there's an entire, centuries-old legal apparatus designed to protect the rights of that kind of developer: they should engage a lawyer, craft a license with the balance they seek, and pursue the concrete or abstract rewards of their work.
But instead, it's en vogue to complain about open source licenses and wonder why the cake can't also be eaten, because developers have a marketshare-scaling problem: they want their software distributed to a wide audience so that it gains usage and mindshare, but not so wide that AWS is selling their work as a managed service at prices they couldn't match themselves. Rightsholders of software that suffered this fate usually try to split the offering into a libre core and a proprietary set of enhancements, with various levels of success, while some go down the nonsensical and legally dubious path of trying to force libre licensing onto proprietary companion software within a larger offering. All of this demonstrates that giving away libre software may not be a sound business model on its own.
Unfortunately for this article's author, the practices he wants other developers to refuse are quite often good business models. That's to the detriment of all of us. Some people will take a stand, but others won't, and they'll reap the rewards, and we'll suffer the consequences of their work nonetheless.
Something that can make a task difficult or unpleasant is that task being unethical. Take for example an investment scheme that bilks the elderly out of their retirement savings. Technically, it's not difficult to do. Ethically, it's awful. So it's profitable because it's unethical. If robbing the elderly was ethical, a lot more people would do it, and there'd be a lot less money in it.
This is why the OSI guidelines are correct to stay out of the tarpit of value judgements.
Edit: BTW This is another reason OSI does not discriminate against intended use...to protect otherwise rational people from behaviors like you see on HN...blind downvoting and no responses
HN is becoming incredibly anti-intellectual, bordering on reddit
How about acting in a way so as to not cause others to suffer or impede their search for happiness? The Dalai Lama makes a case for this as a universal ethic in his Ethics for a New Millennium.
Calling making any kind of value judgement a "tarpit" is itself a value judgement in favor of the status quo and ceding any of your own culpability to others.
It's not their "goals", it's their behaviors. Tons of businesses have behavior that violates that Charter.
This is why you're getting downvoted, not anti-intellectualism.
That being said, you should go for a walk or something. You seem to be getting unnecessarily worked up over the fact that there are people on HN that have different opinions on this subject.
I am not rich.
And yes, phones and social media networks have been weaponized against our attention and wellbeing. That's why people at the head of companies producing these systems keep them out of the hands of their children.
Edit: accidentally left out "their" in the last sentence... Definitely changes the meaning a bit :D
Does Apple stop my kids from placing orders? No. Does Youtube stop my kids from watching videos? No. In fact, they make a special service specifically targeting kids.
- technologies like the internet, phones, GPS... are a result of military investment. They are no longer majorly funded from military resources. The technology is here, and nothing would change if we stopped using it - actually, a lot of human suffering (including material suffering - tax money) would be rendered pointless if we did.
- employers like Facebook or Google do a lot of data analysis that might be considered wrong by many privacy oriented people.
- licenses are open, but that does not mean that a person should be expected to work on software they deem unethical. There is no connection between a license and subjective personal ethics.
- required functionality might be oriented towards unethical ends, or neutral, or ethical ones. A developer is free to choose whether or not they want to contribute, and I expect them to make a subjective evaluation of their feelings.
- we have freedom of speech, developers are free to persuade each other
- taking things to the extreme is never productive
For those curious about how military funding influenced the invention of ARPANET, the Internet Society has a brief article on the history. Ironically, the site's responsive behavior is not designed very well for small viewports; for example, it is difficult to read on a phone.
I'm surprised I can't find more people willing or able to do this.
No, those who have the opportunity to work on stuff that supports some values should do it. You introduce the notion of money here, but it's not written.
>> Oddly enough all of this also invalidates working on most open source code
Those who work on nuclear fission don't all work on nuclear bombs (fortunately).
There are plenty of people who would disagree.
And why would an opinion of some organization funded and supported by unethical corporations even matter here? Of course they want you to make software they can use. You shouldn't though.
It's very reasonable to discriminate at least against unethical corporations. You probably don't want your software to be used in organizations that kill people or that sell censorship solutions.