Against software development (rntz.net)
243 points by octosphere 5 days ago | 174 comments

There's something mildly "ranty" about this, to its own detriment. I think the problem is that it tries to connect ideas that have no logical connection, only an emotional association.

For example, what is the connection between "Beautiful code gets rewritten" and "Refuse to work on systems that profit from digital addictions"?

It comes across as though the author is just vaguely angry at the loose concept of software.

It's true that the software ecosystem is incredibly disorganized and inefficient (hundreds of similar distros, dozens of similar package managers, thousands of redundant tools). But I don't think one should get bent out of shape about it. If you think about it objectively, if it were efficient then most of us likely wouldn't have jobs.


> I think the problem is that it tries to connect ideas that have no logical connection, only an emotional association.

I see why you feel that way. While it may read as emotional association, I still find the topics thought-provoking.

For example:

> if it were efficient then most of us likely wouldn't have jobs.

This is exactly what I was thinking. I wonder to what extent industry complexity is embraced as a job creator. I often feel something is fundamentally wrong with how society views the job "market". I've said it before: I believe technology is meant to free us, to make jobs unnecessary. It may sound weird, but I always hope to find one day that my job has been automated or has become unnecessary.

I think true engineers keep things ridiculously simple for everyone. For the team and for the customers.


My credo is to do my job so well that my company could fire me.

I'm apparently still needed. :|


Mine is that I build robots to do my job for me. I'm only interested in the parts of my job that can't be done by a robot.

I have a family member who worked in a factory that built electronics backplanes. His entire job, for eight hours a day, was to pick up a backplane from a chute, flip it over, and place it on a conveyor.

I got so mad when I heard he was being paid to do that. I'm not mad that he was being paid, but that his time was being wasted.

Give me a week, a pile of old junk, and a micro-controller and I could have a solution bashed together. I'm not saying it would be good, but at least it would be functional.


I had a friend in high school with a similar job. His entire job consisted of getting a box of electronic components, plugging them into a test machine, and pressing a button. If a light on the test machine turned green, he put the part in one box; if it turned red, he put it in another box. The sad part is that it actually paid really well, because working in the facility required a security clearance.

It's by design -- somebody spent a fortune on R&D making an idiot-proof machine that still requires an idiot in the loop.

Defense spending is the only politically acceptable "make work" in the USA.


That and cost-plus contracts. They don't exactly motivate companies toward efficiency.

> I think true engineers keep things ridiculously simple for everyone. For the team and for the customers.

Oh dear. The future looks very dim indeed, as the species that complicates things will earn money and reproduce. Maybe this has been going on for a long time already.


A contrasting opinion - human effort is finite, but increasing.

If we continue to do things in the way they've been done, the total work produced by human effort will grow linearly with the total amount of human effort used.

If we invest human effort in improving the way things are done, then not only will that effort be available for improving other things, all the effort that would have been spent on the task that was improved may be put to other work. This allows the total work produced by human effort to grow exponentially with the total amount of human effort used.


And theoretically, when you reach a level where you have very few tasks left and a very large available workforce, you can close the remaining "holes" with volunteer effort alone.

I think the two parts are bridged by the line, "Perhaps we should expect true advances in software 'engineering' only when we learn how better to govern ourselves."

Makes sense to me, though I personally am a bit bearish on whether humans are fit to be governors of anything, self- or other.


"If you think about it objectively, if it were efficient then most of us likely wouldn't have jobs."

So programmers should write bad software to keep their jobs? And civil engineers should build bad bridges to build them again a bit later? And doctors?


Well, planned obsolescence is definitely incentivized by the current economy, but in the grand scheme of things there's plenty of entropy to go around.

Because digital bits are easily preserved, it was a tempting fantasy that we could build perfect software cathedrals immune to the ravages of time. Of course what we found is that in the physical world, matter decays, but the laws of physics remain constant. In the digital world, the matter stays the same, but the environment decays.


>> And civil engineers should build bad bridges to build them again a bit later? And doctors?

It's different because civil engineers and doctors will get sued if they mess up. Software engineers are generally not held responsible for their own work. Even big software consulting firms like IBM have had their share of multimillion-dollar failed projects, and the repercussions for them have been minimal; they still keep getting big contracts from governments around the world. They say that no one ever got fired for choosing IBM... Maybe someone should be!

There is a tendency to put all the responsibility on project managers who don't understand the code. Companies are terrified of giving their engineers any leverage, so they will do everything in their power to avoid it; that means hiring more project managers, implementing stricter project management practices (e.g. rigidly adhering to all Agile/Scrum practices), using more advanced project management tools (e.g. Jira instead of Trello), using more rigid frameworks and platforms (e.g. TypeScript instead of JavaScript), more thorough testing (100% unit and integration test coverage), etc.

Success in the tech industry is so centralized (winner-takes-all) that the efficiency of programming practices doesn't matter at all. Most top engineers at top companies are good at coming up with complex solutions which give them more leverage over their employers.

When conducting job interviews, companies don't differentiate between engineers who love coding and those who love money. That's a big mistake. At least on a subconscious level, the engineer who prefers money is mentally hardwired to increase complexity, while the other engineer is hardwired to reduce complexity.


I think it's more like the fact that my joystick cables always get tangled, no matter what I do.

The entropy of the creation and evolution of software gets tangled. We can do our utmost best, but unless we spend an equal (or sometimes greater) amount of time untangling (refactoring), and unless customers' needs suddenly become simple and static and uniform so we don't need to be adding features, we'll still have jobs.


Drug companies look to discover or invent drugs that can treat a symptom forever, rather than curing people. That's probably the main reason people hate drug companies. That, and charging $100k for a treatment when they do find a cure (hep C).

Can you cite actual instances of this happening?

It is what is not happening that is the problem. That's a bit harder to prove, but not researching antibiotics is one case that is widely discussed. A Google search on "why aren't drug companies researching antibiotic drugs" brings up lots of articles spanning decades. This Business Insider article [1] is quite recent.

Here is a quote from that article:

"The costs to develop a new antibiotic drug are no less expensive compared to development of drugs for other therapeutic areas, yet the commercial potential and return on investment for companies developing new antibiotics are significantly lower than drugs to treat chronic conditions such as diabetes or heart disease," said Gary Disbrow, deputy director of the Biomedical Advanced Research and Development Authority, which sits within the U.S. Department of Health and Human Services.

[1] https://www.businessinsider.com/major-pharmaceutical-compani...


It is a bit of a rant, yes; I did write it to evoke emotion; and it is less clear than it could be.

> what is the connection between "Beautiful code gets rewritten" and "Refuse to work on systems that profit from digital addictions"?

Both result from systematic perverse incentives caused by local rather than global optimisation processes. That's part of what I was trying to get at in part II. We really aren't very good at making collective decisions yet; neither narrow ones about how to build software, nor broader ones about what software to build. I don't have a solution, but in the meantime I'd like to try not to make things worse.


I got the ranty vibe as well.

It looks like the author completed a BS and went straight into a PhD that he's still pursuing, with summer internships at Google, RethinkDB, and the Recurse Center. I wonder if the author's perspective would change if he worked in industry?

Not trying to argue his points away ad hominem, but I agree that one shouldn't be angry about the loose concept.


"I wonder if the author's perspective would change if he worked in industry?"

It's because I work in the industry that a lot of what he says resonates with me.


I like reading rants. Maybe because I'm prone to them myself.

This one is very concise and provocative.

"It comes across as though the author is just vaguely angry at the loose concept of software."

Not really; the title was clearly deliberate hyperbole. He just wants us all to stop and think about what we're doing as developers.


I read it more like a poem than a manifesto. There are elements of both, of course.

I wonder why a person whose experience as a software engineer is limited to a couple of internships is trying to come up with god-like generalisations, reaching as far out as politics.

Obviously, because of freedom of speech.

But why supposedly intelligent people take this sort of stuff seriously is beyond me...


Ad hominem is not an argument.

To be fair, neither is anything as generalized as this article. I'd say it's more useful as a lens through which to consider software than a fact you could prove or disprove.

If you don't feel it's plausible at all that software is trending towards overcomplicated ugliness, then yeah, you won't find much of interest here. If you do, you can start to draw parallels, look for counterexamples, consider courses of action, etc.


It does say "Take with a grain of salt." at the top of the page...

Yeah, I wonder how many "just works" codebases he/she has had to spar with. It usually ends in tears. I have a 20k-line VB.Net disaster that can't be changed without side effects because of its use of global variables, basically needing a whole rethink/rewrite. At least with some beautiful code the author might have put some thought into it beyond what is directly in front of them.

To me, that's what the author is saying with "ugly code survives." You have an ugly 20k-line codebase that survives because you can't touch it without breaking something.

This idea is very well explored in the classic article, "Big Ball of Mud" by Foote and Yoder:

http://laputan.org/mud/


And you have it because the elegant, abstracted version exceeded the developer's ability to comprehend.

In my experience, ugly code is more a product of churn (like moving target business requirements) than incompetence.

You may have started with a nice abstraction for one case, but you don't have time to reabstract every step of the way.


Yes, it can be, but that has not been the only reason in my experience. Smart developers can see the abstractions and organize their code accordingly.

Mediocre/poor developers just think about what needs to get done. They will copy code that seems to do something similar and hack on it until it works.

I saw this a lot with ASP pages back in the day. An entire application, one that actually worked and was well liked by its users, was implemented as hundreds of totally stand-alone .asp files. When a new page was needed, the developer copied an old one and modified it. That made it pretty easy to add new features without breaking old ones, but made it very difficult to make cross-cutting changes such as changing the name or type of a field on multiple pages, changing page headers, changing database connections (yes, they were also duplicated in every file), etc. A sketch of that last problem follows.
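To make the cross-cutting problem concrete, here is a minimal sketch of the same anti-pattern and its fix (in Rust purely for illustration; the original was classic ASP, and all names here are hypothetical):

    // Anti-pattern: every "page" carries its own copy of the connection
    // string, so changing the database means editing every handler.
    mod orders_page {
        pub const DB: &str = "server=db01;user=app;db=sales";
    }
    mod customers_page {
        pub const DB: &str = "server=db01;user=app;db=sales"; // duplicated
    }

    // Factored version: one definition, one place to change.
    mod db {
        pub const CONNECTION: &str = "server=db01;user=app;db=sales";
    }

    fn main() {
        // Before: each page reads its own copy.
        println!("orders connects with {}", orders_page::DB);
        println!("customers connects with {}", customers_page::DB);
        // After: both handlers share the single definition.
        println!("orders connects with {}", db::CONNECTION);
    }

The copy-paste version isn't wrong on day one; it only becomes a problem when a cross-cutting change arrives.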

This is also how you end up with a "20k-line VB.Net disaster." They are created by developers who know just enough to make something work, but who don't know about abstraction and modularization, or who aren't smart enough or experienced enough to see the abstractions and to keep track of what's going on when the code is split out over a dozen files or modules. Or who possibly just don't care.


Abstraction isn't free, either. The more you build, the more you're gambling that it abstracts over all future requirements.

When you're eventually wrong, someone must pay the incredibly expensive price of deabstraction. Which can become so untenable that it makes more sense to escape-latch out of it for a new business requirement. If you disagree, then shrink the deadline until you do.

What happens with technical debt is that every new or changed feature incurs disproportionate costs. And all costs, at the end of the day, boil down to time. Even the best developer has the same finite resource of time as everyone else, and they are stuck choosing among the best of suboptimal solutions once bounded by deadlines.

This is why, when encountering a mess of a codebase, it's naive to conclude "wow, what a bunch of amateurs." And that's exactly what I thought at my first job out of university. Eventually I realized that software is just hard and there is never enough time. The more experienced you get, the better you are at writing code that can be changed or thrown away. But you're still only minimizing the bad, not eliminating the bad, so on a long enough time scale with enough monkey wrenches of time constraints and requirement churn, technical debt is inevitable.


Absolutely, you can go too far; sometimes the "naive" approach is the best.

While there are lots of examples out in the wild of people making a mess of things due to lack of experience, hastiness, and not thinking long-term, it is a two-way street. Experienced developers sometimes apply too many rules, too rigidly, to deliver elegant, well-performing, timely solutions. I see people abstracting and decoupling things, applying highly generalized pessimistic patterns, bickering about how to name things that probably shouldn't even exist, let alone have explicit names.

Experience is obviously good but people tend to get inflexible and dogmatic, too.


Perhaps a lot of bad code comes from strict time requirements, loose specs, and employee churn. Your customers and your manager/supervisor couldn’t care less about how neat your code is, so I understand the logic to push out something that works ASAP. I personally try my best to write clean code, but I don’t call other developers ‘stupid’ given the reality of ridiculous timelines and low job security.

I agree with you but would just add one more nuance: there's a critical minimum of "so-so, okay-ish code" that we should never go below. The problem with untouchable huge turd piles is borne out of going below that bar.

I totally get the low job security and low pay and the "I don't care" parts, and I have written bad code because of all of those. Still, if you put in even a minimal effort, it pays off.


Any nontrivial component that does its job today, but is very brittle, resistant to change, and centrally located in the overall system architecture, can certainly become tomorrow's untouchable 20k-LOC VB.Net script.

Having something like that is a pressure point for business risk. A small nudge here, and the whole shaky house of cards violently implodes, ejecting all kinds of badness, monetary and otherwise, over those around it.

And it is not just the bad component, it is the overall system design which permits this and does not support change within the system ("modifiability", "extensibility", ...).

Point: it is not as easy as saying "oh, this is because of that incompetent asshat". The incompetent asshattery is a systemic phenomenon with many actors. Death by a thousand small bad choices; the 20k LOC of untouchable code is just a manifestation of the bigger problems.


It seemed to me that the author was advocating for ugly code. If that wasn't his intention, then he didn't do a very good job of expressing it - the title of the article is 'against software development'.

But rather than spend any more time talking about bad code, I'm going to go now and try and write some good code.


I think he's advocating (with a "grain of salt") for _minimizing_ software development, because it's so hard to keep it from turning terrible. If we could spend more time on less of it, maybe we could keep it better.

I don't think he's advocating for ugly code.


That makes more sense, thanks. RIP my reading comprehension.

That was the point of the reference to the "Programmer Archaeologist":

http://lambda-the-ultimate.org/node/4424


Well, Part 3 just goes off the rails completely. I don't know what centralizing control of media has to do with poorly written software.

The problem with your analysis is that the OP should be read like a poem, not an essay. That's my take, at least.

Inefficiency and duplication.

One can look at it as a sign of a rather efficient ecosystem.

It is so easy to make and distribute certain things that we now have bunches of duplicates which differ only in some small details.


> It comes across as though the author is just vaguely angry at the loose concept of software.

I'd dare to call it pretentious: "your code is bad and it's your fault the world is a mess but I, the genius not at fault for and detached from any of this, know how to fix all of it".


I read it more as a two part rant, first "all code is bad and what is good will become bad" with a shade of resignation then "the source of badness is in how we set our own goals as an industry and we should strive to change them".

Overall it seems quite reasonable, except maybe for the tone, which was apparently off-putting to many.


I'm sorry it comes across that way. I certainly could have been more precise, but I wrote this to express a certain feeling, and felt that I couldn't improve it without losing something essential. Maybe someday I'll write something more analytic and less emotional.

I think a lot of code is bad, and the world is a mess. But I have no idea about your code, and I'm not sure who is at fault, or even if that's a useful question to ask. I'm certainly no genius, and I definitely don't know how to fix all of it.


It's only your choice to read it that way.

I read it as: "We can do better but the current way of doing things looks broken to me. We should start by replacing it with something VERY different."


> It's only your choice to read it that way.

That's not how any kind of writing works. You aren't allowed to say, "This is what I wrote, but the way this other reader took it is what I actually meant." The author will always carry some of the responsibility for interpretation via the words and grammar they use. Reader interpretation is not a way to handwave criticism of the writer's tone or the content they share.


IMO that's exactly how most writing works -- especially non-scientific writing, as this post is.

Basically, how you choose to read it is part of the experience, like reading literature.

Hence I believe that people are projecting while reading it. I know I did.


I've been meaning to give a lightning talk about a related subject someday, called "Why Software Sucks".

Basically, software hovers along the fuzzy line between "barely works" and "doesn't work". When we start a project, it doesn't work. We add code until it barely works. Then we break it, so it doesn't work. So we add enough scaffolding that it barely works and can support the new features, until we break it again, and the cycle begins anew. This is actually embedded as a principle in TSTTCPW (the simplest thing that could possibly work), but attempts to waterfall our way out by careful planning are generally doomed, due to unintended consequences and such.

Eventually, the software becomes broken and no longer worth fixing (and thus gets abandoned), or it hits feature complete while (barely) working. And what happens when it's "complete"? Do we say "Thank goodness, finally I can go fix all those bugs and design flaws!"? No, never.

Instead, we drop that blecherous losing piece of crap as fast as we can and go start on the shiny new project that's been cooking in our minds, the one we couldn't work on because we were hacking our way through the wretched ball of mud that is our old code, wasting time in frustration and shame, just to get it off our plate. So we start working on the shiny new thing. And this time, the code will be good. This time, it won't suck.

Yeah, right.


This really depends on the kind of industry. If we follow the money trail, software exists to achieve a business goal, either directly or indirectly. As with most things in business, what matters is "good enough".

Some software exists outside of this environment, usually because it is beholden to an impeccable level of quality (defined as never being caught in a non-deterministic state, resilient, etc.). This can be:

— software used by millions of people, all poking at obscure edge cases;

— software used by lots of other software, for the same reasons;

— software commissioned by high stakes businesses (space, life-support systems...), where the amount of life or money lost, should the software fail, is simply unacceptable.

This in turn guides what I decide to work on. If your employer/client views your software purely as a cost center, you can guarantee the quality you are allowed to deliver will always remain mediocre. Good enough to support the business but no more. I learned to stay away from those industries.


I used to work on a system that moved a hundred billion dollars a day. Yes, billion. Downtime cost thousands of dollars a minute simply in floating interest from money not moving, and a serious outage could damage the entire economy. Those are some serious high-availability requirements.

For purposes of its extreme performance and availability requirements, it barely worked. That doesn't mean it was bad code! It was great code. But it could have been better. It could have been much better. And it wasn't, because once it met those requirements, work got applied to expanding its feature set instead. Which was, in a way, also a requirement.

Pick any "impeccable level of quality" system, and you'll find the same thing. Heart monitors. Mars launches. Whatever. The software barely works, given the difficulty of the requirements. I'd argue that it's actually irresponsible for businesses to try to do any better than that! The cost of bouncing the rubble with code quality is a resource that could be much better spent on other things.

Think of it in terms of the 80/20 rule. Dig into that last 20%, and you'll spend 80% of your effort on it. It's not a good tradeoff. The Pareto Principle isn't a measure of how bad we are, it's a measure of how good we are. When we attend to its wisdom, we get the most work done.

edit: Scope is not the only requirement. Schedule is also a requirement! Pareto Principle again! Taking four times as long to get 20% better code is failure. We don't just have to create features. We create features with the resources we have, the technology we have, the people we have, within the schedule we have. A blown schedule is a failed requirement. Software not delivered on time is, in fact, broken.


By definition, wouldn't all bridges "barely work"? Yes, a bridge is spec'd to handle load X, and putting X+1 on it might cause some damage, because it was never tested or designed for X+1.

All bridges follow similar designs, even if their implementations are original. Digital systems tend to fail in original ways all the time, because the design is something intentionally new. If it weren't, you'd just copy the existing software.

Which doesn't mean that we have no reusable or generalizable quality metrics - we do. But they tend to express things that correlate to an abstract of quality, and not a failure threshold.

The most honest systems in software are the ones that do not scale, because they have built in their hard failure point - and from that, tolerance metrics similar to bridges can be predicted and planned for. It's software that has to do everything in a multitude of configurations at great speed that runs into deep architectural issues, because it hypothesizes a bridge that will someday tolerate an infinite load.


What do you mean by barely working (and as a corollary, does it even make sense for it to do more than that)? The spectrum of correctness seems quite limited; you said it yourself, software starts out not working, and then when it's correct, it works. It seems pretty binary to me. Do you mean that the software is brittle, doesn't handle edge cases, has bugs, etc? It seems to me that once software is correct, there's not much more to do in terms of working.

Yes, I mean brittleness, inability to handle edge cases, etc. Oftentimes, this means that it's boxed-in and can't grow in certain necessary directions without major rework.

For example, take a deployment configuration where the target environments are hardcoded, with duplicate-minus-variation configuration files. It works fine for the existing environments. But it can't handle a new one, not without duplicating again. Correcting this means moving to some sort of template configuration with parameters in a data store, as in the sketch below.
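A minimal sketch of that refactor, with hypothetical environment names (Rust, purely for illustration): instead of one hardcoded file per environment, a single template takes the varying parameters.

    use std::collections::HashMap;

    // One parameterized template replaces N duplicate-minus-variation files.
    struct EnvParams {
        host: String,
        replicas: u32,
    }

    fn render_config(env: &str, p: &EnvParams) -> String {
        format!("# {env}\nhost = {}\nreplicas = {}\n", p.host, p.replicas)
    }

    fn main() {
        // Adding a new environment is now a data change, not a copy-paste.
        let mut envs = HashMap::new();
        envs.insert("staging", EnvParams { host: "stg.example.com".into(), replicas: 2 });
        envs.insert("prod", EnvParams { host: "prod.example.com".into(), replicas: 8 });
        for (name, p) in &envs {
            println!("{}", render_config(name, p));
        }
    }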

Bit rot is part of this. Works fine in Windows 10, but it'll fail come Windows 11, etc.

And yes, part of what I'm arguing is that it doesn't even make sense for software to be better than barely working. I've hit a point where I think code that works too well is a project management smell.


As I was reading this post I was about to reply with something similar, but you nailed it. The vast majority of software written isn't there just to be pretty; it serves a business goal, and as long as that goal is met (on time!) and the cost of supporting it is within budget, why fix something that isn't broken?

For a (not lightning) talk on the subject, I recommend "The Mess We're In" by Joe Armstrong: https://www.youtube.com/watch?v=lKXe3HUG2l4

The whole world hovers between barely works and doesn't work. That's the theory of evolution that explains the history of life.

Nice observation. I would enjoy a Tao of System Design that coyly smiles and says, The architecture that can be defined is not the true architecture.

It's not nearly as pithy, but I think that this line from "Big Ball of Mud" captured something essential:

> "Architecture is a hypothesis about the future that holds that subsequent change will be confined to that part of the design space encompassed by that architecture."

http://laputan.org/mud/mud.html#Forces


EXACTLY.

On the other hand, you have redundant systems where everything seems fine even if some parts are not working. If you don't have good monitoring, you won't notice when you break something until other things break as well. Or maybe it doesn't stop working and you just have mysterious brownouts or irreproducible bugs?

Generally speaking, programming is easier when you have clear feedback. Things either work or they don't, and you don't have to do a lot of expensive testing to gather statistics to show there's a bug.


I wouldn't say that redundancy is the alternative to brittleness.

The alternative is smaller code, less tightly coupled, that uses fewer libraries and more beginner-level language constructs.

There's no technical impediment to that. It's actually less work technically. It's just hard to design that way, because you have to actually understand your domain much more deeply.

That's where stability comes from, not redundancy but the opposite of that. Fewer parts, better understood.


In a minimalist design, everything has a purpose, so if you remove it, it breaks. That seems good.

And yet, brittleness is supposed to be bad. How is it different?


I would say minimal means something like "just enough flexibility to support one stated use in all predictable contexts".

Brittle means "not enough flexibility, so it breaks in some contexts".

Removing a component is not a stated use. If a component can be removed during the stated use then that's not minimal, it needs more thought or parts.


I don't disagree with this, and I'm sure I'd promote the talk, but I'd love to see counterexamples. SQLite?

A lightning talk I would love to attend.

The author makes some good points about the ethical dubiousness of a lot of software jobs. But I think the author misses the biggest point.

Software replaces people with machines and increases capital's share of income. I have heard technology is the biggest driver of growing inequality, and at this point in history technological change is driven by software.

I think the old nostrums about creative destruction and labor saving technologies freeing up labor to pursue higher roles are true in the aggregate, but the aggregate obscures a lot of human wreckage left behind by people who were laid off and never rehired at a comparable level.

I consider this process inevitable which is why I am trying to get ahead of it. But that's a fundamentally selfish motivation no matter how much you gussy it up. Being honest about this is one thing that keeps my aspirations modest - I simply want to carve out a place for myself where I feel comfortable. I think buying into the creative destruction rhetoric makes it easier to harden your heart.


The only time I haven't seen shabby code with caked on layers of features has been when I had the fortune of working on a development team that was trained to write software.

I've seen new languages and new development methodologies introduced to try to combat code complexity, and the only thing I have seen so far that prevents it is people who take a step back and think about what they're creating before implementing it. These people are rare and go unrewarded for their efforts.


This is so true. There is a lost art of software architecture and design.

Shit software architecture/design is nothing new. There always was shit code, and always will be. The sky is blue.

I've been wandering from job to job looking for these folks to learn from and have failed so far.

Nobody seems to be able to think even two steps ahead, often fail to think one step ahead sufficiently to answer "why" questions about their current task.


Just so, generic code is replaced by its concrete instances, which are faster and (at first) easier to comprehend.

Just so, extensible code gets extended and shimmed and customized until under its own sheer weight it collapses, then replaced by a monolith that Just Works.

aesthetics alone keep us from the hell

Beware of comparing "beautiful" code to subsequent code that deals with 10x the complexity in requirements. Either it's beautiful because it's perfectly suited to the scale and complexity it was written for, in which case it's impossible to preserve that beauty in the later, more complex code, or it's "beautiful" because it has a bunch of premature abstractions built in that somebody fantasized would handle all future complexity, in which case the "beauty" was useless work that later programmers had to undo. When the supposed genericity and extensibility of a system doesn't survive the evolution of the system to meet subsequent requirements, my first suspicion is premature abstraction, not insufficiently tasteful programmers.

Even worse, the effort to dismantle this "beauty" is often considerable. Architecture, defined as the set of decisions that are costly to change, needs to anticipate the drastically different needs of the future. You rarely find architecture within the code of a single system early in its development, because code at that stage should be easy to rewrite. Code seen as "beautiful" by its author often violates this rule, creating architecture where architecture shouldn't exist.
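As a hedged illustration of that tradeoff (a hypothetical Rust example, not taken from the article): the "beautiful" version builds in genericity nobody asked for, while the concrete version does the one existing job and stays cheap to rewrite.

    // Premature abstraction: a generic "exporter" hierarchy, designed
    // for future formats that may never materialize.
    trait Exporter {
        fn export(&self, rows: &[(String, f64)]) -> String;
    }

    struct CsvExporter;
    impl Exporter for CsvExporter {
        fn export(&self, rows: &[(String, f64)]) -> String {
            rows.iter()
                .map(|(k, v)| format!("{k},{v}"))
                .collect::<Vec<_>>()
                .join("\n")
        }
    }

    // Concrete version: does the one job that actually exists today,
    // and is trivial to rewrite when requirements really change.
    fn to_csv(rows: &[(String, f64)]) -> String {
        rows.iter()
            .map(|(k, v)| format!("{k},{v}"))
            .collect::<Vec<_>>()
            .join("\n")
    }

    fn main() {
        let rows = vec![("widgets".to_string(), 3.5)];
        assert_eq!(CsvExporter.export(&rows), to_csv(&rows));
        println!("{}", to_csv(&rows));
    }

Until a second format actually ships, the trait is architecture where architecture shouldn't exist.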


> Software grows until it exceeds our capacity to understand it.

This point is well taken, and I think that many organizations would benefit from having a CGR role ("chief grim reaper"), whose job would be not to enhance the codebase or its functionality, but to do the opposite: to kill/simplify code and prevent the codebase from exceeding a certain net complexity over time. Simplicity itself needs to be treated as a core feature, and the success of the CGR needs to be defined by whether that core feature is maintained.


Wow that is a really good idea. Unless a specific individual is in charge of something it will never get done. I am going to propose this to my team.

> Beautiful code gets rewritten; ugly code survives.

Ever stop to think that maybe this is a good thing?

That code doesn't exist to be beautiful -- code exists to get the job done. Beautiful doesn't mean it's not also useless or arbitrary or over-architected, while ugly can be a healthy balance of competing priorities -- maybe not clean, but a good series of necessary real-world compromises.

This isn't to defend all (or even most) ugly code as good... but equating beauty with good comes across as terribly naive.


Not to mention, oftentimes the business requirements themselves are ugly. Ugly but clear is the shortest possible distance between code and purpose.

I've spent so many hours obscuring the true purpose of chunks of code just so it can tick some zealous reviewer's checklist of patterns they like.


"Getting the job done" is a short-term goal that often comes at the expense of long-term well-being. For example, the endless parade of backward compatibility layers on top of backward compatibility layers. Each got the job done at the time. And each makes it just a little bit harder for future programmers to get their jobs done.

I think a sense of beauty can let us know that something is amiss with the bigger picture. It can provoke us into stepping back and ask: what is actually necessary? Which jobs really need doing, and at what cost?

Certainly, short-termism is not the only failure mode when writing software. Abstraction for abstraction's sake is another.


They didn't say it was a bad thing. Just that it's what happens.

I've noticed lately that people seem to assume that if you say something negative, you're trying to make a big existential claim about its place in the universe.

Like when people criticize masculinity and then we are all aghast as if they said masculinity needs to be systematically extracted from all aspects of culture.

I'm not sure which side of the equation needs to be more careful though. Maybe, in the internet age with the globe as your audience, if people are criticizing things they need to add a bunch of caveats to make it clear what they're not saying.


The first sentence literally calls it the "tragedy of software engineering". I don't think I'm making an unwarranted assumption that the author means it's a bad thing.

I dislike "beautiful" as a description because it means different things to different people. I've seen a similar, narrower claim before:

> Easy-to-change code tends to be changed. Hard-to-change code tends to endure. Thus, a codebase tends to become harder to change over time.

I can't recall exactly where I saw that version, but I like it better. It's more clear what it means and why it's true.


Whiplash. The first two sections are about generic software development truisms; from there, for the last section, the author pivots to arguing for his specific political positions, as if universal truths of software development justified these ideas. Does the author expect to convince anyone who doesn't already agree with him?

> Does the author expect to convince anyone who doesn't already agree with him?

Not really. If I wanted to convince skeptics, I would have to make it much longer and more precise, and then it would be a very different kind of thing. Maybe I'll do that some other day.

The connections between the sections are fairly obscure, I admit. But I think many "universal truths of software development" are instances of more general problems to do with collective decision-making and local optimisation processes, which also produce the problems the last section mentions. I don't have a solution, but in the meantime I'd like to avoid making things worse.


> To those who have a choice:

If I think this doesn't apply to me because I don't have a choice, I'm disempowering myself. My choices may be hard to make, but abducted women still try to flee. We know because of the ones who escape.

If I think others don't have a choice, I'm disempowering them. I choose to believe everyone can learn to become more empowered and stand up to those they've been giving power to.


That line of thinking can end in victim blaming - the abducted woman stayed abducted because she didn't try hard enough to run. That's putting the blame on the wrong person.

That being said, many devs CAN make the choice, but I don't think it's fair to blame those who can't. Not everywhere is booming to the same degree, and not everyone's life experience allows them the freedom to move to an area that is.


I'm not assigning blame. I'm choosing who I want as a role model.

Blame is nonsense black-and-white thinking. There is shared responsibility in everything. I share in the responsibility of all things that occur after I take my next breath.

Abandon blame and it's possible to read what I wrote without finding anything wrong with it.


Without framing it in blame, then: if one fails even though they are empowered, what are the implications?

There are two classes of implications that come to me easily.

1) There is a possibility that the empowered person did not make the decision(s) that would have resulted in success. Maybe there was no decision chain leading to success. Good luck deciding.

2) The environment that caused the failure could be poorly optimized to minimize failure over a set of empowered persons. Maybe it is optimized and the individuals' paths to success couldn't be improved. Good luck determining this; optimization of complex functions is not easy.

The classes are not mutually exclusive. Ignoring the individual's actions (the first class) seems like a poor approach to reducing net failures.

These lines of reasoning always remind me of a quote (from an admittedly cheesy source) that I think holds a useful sentiment.

"There's always someone who'll try to convince you that they know the answer no matter the question. Be wary of those who believe in a neat little world because it's just fucking crazy, you know that it is." --DMB

(I am not accusing the parent of being a person of which to be wary, their comments seem very reasonable).


So empowerment doesn't mean they have the ability to change anything, it's just a state of being where they are under the belief they can possibly change something? What exactly is empowerment?

Predetermination is not a theological rabbit hole that it is necessary to go down for this conversation. Reducing things to the point where the original ideas have little tangible meaning is an ineffective way to discuss ideas. See how mathematicians reduce complex ideas into a concise syntax.

---

Empower:

* To invest with power, especially legal power or official authority.

* To equip or supply with an ability.

Give to individuals:

Legal power over their persons (liberty), as much as is possible without infringing on the legal power of other persons.

The ability and knowledge to wield this power to maximize their individual outcomes.

---

These concepts are well defined by the founders of the USA in a really beautiful and rigorous fashion. Read them thoroughly and you will learn more than I can convey.

I'm not sure why you would trust me, a stranger on the internet to define these concepts to you. Would you write off the foundational ideas of liberty if I were not able to convey these concepts with the fluency they deserve?

I'm picking up a really ineffective and/or deceptive argument strategy from you. The guidelines of this site state:

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

If your plan is to ask de-constructive questions until I slip up, know that you are abusing the Socratic method. [1] It is a cooperative argument exercise used to promote critical thinking. I did not consent to, nor do I have the time for, a critical thinking exercise. Refuting a sub-hypothesis says little about the school of thought that may have spawned it.

I am only writing this reply because I have seen similar threads spread throughout internet boards since I started browsing. The strategy (not you) is an evil cancer that degrades effective discussions, brow-beats posters into submission, and kills communities. If I were to host a discussion platform I would do my best to cull this practice (without consent). I believe that is the spirit of that point in Paul's guidelines. Please do better.

[1]: https://en.wikipedia.org/wiki/Socratic_method


I had another quote lined up.

"You can't always get what you want. But if you try some time, you just might find you get what you need."

But again, not always.


> not everyone's life experience allows them the freedom to move to an area that is

Very true. Recognizing one's own power and stepping into it is an extreme privilege. We don't live in a society where this is often role modeled.


> Recognizing one's own power and stepping into it is an extreme privilege.

I was thinking even of the case of familial obligations. It's not a lack of power that keeps you taking care of your schizophrenic mother; it's compassion.


Choosing to take care of her out of compassion and not out of obligation is what I would consider the empowered position there. If I'm telling myself I have to take care of her, I'm disempowering myself. If I'm choosing to take care of her and it's out of compassion, not out of a negative emotion like guilt/shame or to avoid said emotions, it's an active empowered choice made out of love. Or so I think.

But now they've lost the agency to make the better career choice.

That's just FOMO talking. Sacrifice is giving up what you want for what you need. I'd say careers are strategies for meeting needs while contributing to the life of another is a need. As long as all other needs are being met, I don't see the problem.

The purpose of software is to teach a machine to perform tasks, even when many of those tasks are to show an interface so that a human can perform a task. Design is nontrivial, but computers are powerful, so we can layer on abstractions to ease the understandability and modularity of the code.

This is a creative process that's part engineering, part art: some people choose to take proven components and techniques and apply them in predictable ways, while others choose to experiment with new techniques and come up with novel solutions. But any tradeoff has a cost, and future maintainers may not appreciate a solution whose meaty details aren't obvious.

While the creation of software may be a part-creative process, its maintenance is purely pragmatic: fixes to unforeseen problems need to be delivered quickly above all else, and enhancements along all extension points need to be possible, not just along axes the original authors intended. It's not hard to see that elegant code might be under immense pressure from more pragmatic modifications, any one of which can endanger its status in the eye of a beholder. This is likely why this article's author sees 'ugly' code survive: presumably, small changes are sufficient to make beautiful code ugly, while already-ugly code faces no such aesthetic pressure.


Can someone point to projects or initiatives that are attempting to solve the problems mentioned by the writer of this article?

I can't stand the unnecessary complexity, fragmentation and duplication in software. The entire industry is a mess. Very few people seem to care.


> generic code is replaced by its concrete instances, which are faster

If you're using a shitty compiler, yes.


Yes, compilers have gotten much better at optimising some forms of generic code over the past few decades. This is great! But it's not like the war is won. Staged, reactive, and incremental programming, for example, are still areas of very active research. Good implementations of these let us write code at a higher-level and yet avoid the interpretative overhead of evaluating them naively. Without such implementations people just perform the optimisations by hand (although they usually don't think of it this way) - a lot of imperative UI code is like this, for example.

Who knows what the next frontier will be?



They're referring to monomorphization. Rust, for example:

> You might be wondering whether there is a runtime cost when you’re using generic type parameters. The good news is that Rust implements generics in such a way that your code doesn’t run any slower using generic types than it would with concrete types.

> Rust accomplishes this by performing monomorphization of the code that is using generics at compile time. Monomorphization is the process of turning generic code into specific code by filling in the concrete types that are used when compiled.
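A minimal Rust sketch of what the quoted passage describes; the generic function below is compiled into a separate concrete copy for each type it is used with, so the generic call costs the same as a hand-written concrete one:

    // Generic function: one definition in the source...
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &x in &items[1..] {
            if x > max {
                max = x;
            }
        }
        max
    }

    fn main() {
        // ...but the compiler emits one specialized copy per concrete
        // type, with no runtime dispatch: effectively a largest_i32
        // and a largest_f64.
        println!("{}", largest(&[3, 7, 2]));       // monomorphized for i32
        println!("{}", largest(&[2.5, 9.1, 4.0])); // monomorphized for f64
    }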


Where do you get that monomorphization reference from?

Let's assume, for example, that you have a generic hash map where you just plug in the type, which must be hashable. Now if you plug in an integer type, monomorphization or whatever fancy compilation techniques will make that faster. But the best way to map an integer range is still a plain array.

To put it in simple terms, compilers can make shit run faster, but they can't make it not shit.
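A small sketch of the parent's point, assuming the keys really are a dense range 0..N (if they were sparse, the array would waste memory, as a comment below notes):

    use std::collections::HashMap;

    fn main() {
        const N: usize = 1000;

        // Generic structure: works for any hashable key type.
        let mut map: HashMap<usize, u64> = HashMap::new();
        for i in 0..N {
            map.insert(i, (i * i) as u64);
        }

        // The better structure for this specific shape of key: a plain
        // array indexed directly, with no hashing at all.
        let mut arr = vec![0u64; N];
        for i in 0..N {
            arr[i] = (i * i) as u64;
        }

        assert_eq!(map[&42], arr[42]);
        // A lookup in `arr` is a bounds check and an add. No compiler
        // turns the HashMap into this, because the density of the keys
        // is a fact about the data, not about the types.
    }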


> But the best way to map an integer range is still a plain array.

Eh, only if you know in advance that:

1. Your integer keys can only ever come from a contiguous range of values

2. Once populated, your map will have values for at least 25-50% of the integers in that range.

If either of these assumptions is false, using an array will force you to allocate far more memory than you need to hold the elements in your map, and the resulting map will be sparse, and therefore cache-unfriendly.

Arrays are not better than hash maps that use integer keys in the general case.


Which is my point. Compilers can't help here.

I don’t understand. In every language I’m familiar with, the hash map uses a hash function that’s appropriate for integers by default for integer keys. What else do you need the compiler to do for you?

[update: I realized that I mistyped my conclusion sentence, and accidentally wrote the opposite of what I meant. Now updated]


An Array<T> is so much faster than a HashMap<int, T> that it's not even a competition. For contiguous integer key sequences, that is, and especially if you are iterating in order. The compiler does not know that you will expect a contiguous key sequence, so it cannot do the work of specializing from a generic container to an Array<T> for you.

And that is my point. I'm saying Sufficiently Smart Compilers do not exist.


You can solve this problem using dependent types, encoding that information in your types (the keys of the hashmap will be "dense") so that a "Sufficiently Smart Compiler" has enough information to optimize.

Now, who does that (and what languages support that anyway) and still gets the project finished?

Could you show a real world and readable example of this, using a language and compiler that actually exist?

Also, is it really better to specify all the information needed to help the compiler conclude that it should choose an array, rather than simply typing "Array"?


Haskell, C++ and Rust support a tiny amount of dependent type theory, enough to encode this information. I don't think any implementation would optimize given that, though. But this is OK, since you can implement the optimization yourself with no runtime cost and no change to the code using the data structure. As far as full-fledged dependent types go, we have Agda if you're into the Haskell ecosystem. I think your question is a bit badly formulated, since dependent types are a new and shiny thing, so you won't find any industry-standard implementation of them. I was trying to express that this problem is solvable.

> Also, is it really better to specify all the information needed to help the compiler conclude that it should choose an array, rather than simply typing "Array"?

Obviously, right? The end result (target code) is the same, but now you have two advantages: (1) the compiler will check whether your assumptions are right, i.e. that your keys are dense where you think they're dense, and if not you'll get a type error at compile time, instead of wasting space at runtime as you would with a plain Array<int>; (2) your code no longer has specialized data structures (hashmap vs. array for the same logic); it's generic, so it's more readable. A rough sketch of the idea follows.
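Rust doesn't have full dependent types, so this is only a gesture at the idea: a wrapper whose constructor checks the "dense keys" assumption once (at runtime, where a dependently typed language could verify it statically), after which the array-backed representation is safe to use. All names here are hypothetical:

    /// A map whose contract is "keys are exactly 0..len". The density
    /// assumption is checked once, at construction, instead of being an
    /// unstated invariant scattered through the code.
    struct DenseMap<T> {
        values: Vec<T>,
    }

    impl<T> DenseMap<T> {
        /// Fails if the keys are not exactly the range 0..pairs.len().
        fn new(mut pairs: Vec<(usize, T)>) -> Result<Self, String> {
            pairs.sort_by_key(|(k, _)| *k);
            for (expected, (k, _)) in pairs.iter().enumerate() {
                if *k != expected {
                    return Err(format!("keys not dense: missing {expected}"));
                }
            }
            Ok(DenseMap { values: pairs.into_iter().map(|(_, v)| v).collect() })
        }

        fn get(&self, key: usize) -> Option<&T> {
            self.values.get(key) // plain array indexing underneath
        }
    }

    fn main() {
        let m = DenseMap::new(vec![(1, "b"), (0, "a"), (2, "c")]).unwrap();
        assert_eq!(m.get(1), Some(&"b"));
        assert!(DenseMap::new(vec![(0, "a"), (5, "f")]).is_err());
    }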


> I think your question is a bit badly formulated since dependent types are a new, and shiny thing so you won't find any industry-standard implementation of it.

That's just another way of saying that my rebuttal was spot-on. I don't even need to ask if you have any code to show, because I know you haven't. Seriously consider http://wiki.c2.com/?SufficientlySmartCompiler


I don't think we're disagreeing in particular; you're just being weirdly defensive (in this whole thread). The part we agree on: there is no such "sufficiently optimizing compiler" yet, so a high-level language won't be as fast as a low-level one by default (this is very easy to check; C always wins in compiler benchmarks). What I'm trying to express is: (1) this is a trade-off, and by taking that slight performance increase you're making the code uglier and less readable; (2) with current technology you'll get acceptable performance even if you use generics; (3) theoretically there is no reason why such a sufficiently smart compiler cannot exist; we know enough to express a sufficient amount of information in our programs.

Yes, that is correct, you missed the point. You don't understand what generics are.

I think you are confusing "generic code" (which is a pretty generic term) with "Generics", which has a somewhat more defined meaning in programming languages.

We could agree that the latter can be implemented relatively efficiently (given a few concrete examples to clarify what that means). But that does not mean that its use, or more generally the use of generic code, is always the best and most efficient thing to do. It is not, by far.


Hm... this seems like an incredibly poorly thought out example on your part. We've had C++ template specialization for decades. It is trivial to implement a map that uses hashes for non-integer types and an array for integer ones. This is 2018 after all.

Specialized code is... not generic code, right?

Furthermore, template specialization is usually a bad idea. Think of how we all just LOVE std::vector<bool> (/s). Also, std::unordered_map with an int key is not specialized to an array implementation, because you can't tell from the type whether it will map a contiguous range.

I've done my homework. Your turn.


It's absolutely generic. The code that uses the map would be indifferent to the type of key. The code that implements it is obviously different, because they are two separate implementations. It seems unlikely that you could have automatic specialized implementations without substantial improvements in AI.

Yikes, I have no dog in this fight, but the level of sarcasm and meanness in your post ("Your turn.") just seems over the top for this argument. You'd be more convincing if you were more level, at least for me.

> incredibly poorly thought out example on your part

But maybe you're right and I should have ignored it...


That's a terrible example. You're using two different data structures.

A hashmap is not the generic version of an array.


Can you clarify what's "terrible" about it?

Provided a hash function for the key type exists, the hash map is one valid implementation for a generic map from keys to values. An array is a more efficient map from keys to values if the key type is an integral type and the actual keys at runtime are contiguous. It's also efficient if the keys are only nearly contiguous, and we have a "N/A" sentinel in the value type.


No, this is a completely standard compiler optimization. GHC already has it. Using (=<<) is not slower than concatMap on code you write today.

GHC is a compiler heavily used for PL research, dedicated to a language used for PL research. That's neither here nor there.

Plus, I'm not sure what optimizing =<< and concatMap has to do with optimizing generic vs. concrete behavior.


I only wish it was longer.

Given how hot the market is for software developers right now, there's really no excuse to be working on ethically questionable products.

I would be cautious about generalisations like that.

Sometimes the good jobs are where you aren't. Sometimes the good jobs won't return your calls. Sometimes you have to choose between your morals and your wife, deportation, debt, or all of them together.


Can you really say you have morals if you sacrifice nothing for them? Are they not merely preferences at that point?

It's a tricky question. I agree that morals often involve making tough choices. At the same time, those choices have to balance the magnitude of the evil, the consequences of sticking to your morals, and the consequences of just going along with it.

To give a concrete example: let's say I have to choose between unemployment and working in a casino (assuming I believe casinos to be evil). I can work there and do nothing. I can work there and donate an X amount of my salary to addiction-recovery NGOs. I can refuse to work there and compromise my (hypothetical) children's well-being.

Which one of these is the "moral" choice? I can't say. And I'm willing to bet that you can make a moral argument for each one of them.

I'm with you in that sticking to your morals often involves sacrifice. But I wouldn't assume that the weights I give to each aspect of my morals are the same as everybody else's.


Participation in community at some level is necessary, and, depending on your morals, may be a moral imperative even when your morals disagree with the products of the community with which you interact. Is it easier to promote change in a community from within or from the outside?

It would be easy to read your question as "wife, deportation, debt, or all together" == nothing, and in doing so dismiss it. It is easy to see it as "you aren't even willing to sacrifice [all or most of these things] for this?" I would imagine that for most people, their loved ones trump their morals by a wide margin.

If you are asking that question without the parent as context, I would say that it seems like the only distinction between morals and preferences is one of definition.


Even not-great software developers who live in poor areas need to eat.

Ethically questionable is a very low bar. Virtually all economic and technological progress made since the industrial revolution has been "ethically questionable" in some way or another. In the end, almost everyone agrees that we've moved in the right direction. It's up to each individual to decide where they draw the line of what is or isn't OK. I don't think most people cross that line because, like you said, it isn't too difficult to find a job that you're OK with. Who are you to decide for other people whether obesity outweighs the enjoyment people get from soda, for example?

The only reason the market is hot is because of those projects driving up labor demand and spending capacity. It's all blood money.

True. The market would look different if all the people working at Google, Facebook or any other company you view as ethically challenged would quit.

> Beautiful code gets rewritten; ugly code survives.

Also: GPL code gets rewritten, BSD/MIT code survives.


Huh? Where's the BSD rewrite of Emacs? Is Linux not surviving? Are Hg and Git? GPL programs are category-killers.

It only matters if you're redistributing it. So: libraries, not tools.

Given the choice between two otherwise similar frameworks/libraries, I would expect most to gravitate towards the non-GPL'd one since there is less friction. i.e. you don't even have to consider things like the GPL linking rules.


But that's exactly the point, isn't it?

[a bit oversimplified] You want to use this GPL library? Great! Just release your own software as GPL!

It's not complicated, per se, it's just that some people/business are not ok with that condition (which is their prerogative).


> it's just that some people/business are not ok with that condition

The point is that if people/businesses are not okay with using it, then that hurts the survival of that thing.


Are people not redistributing Linux? This seems like an awfully thin hair to split.

I'm not trying to say everything exactly falls into one category or the other.

There was a wide-sweeping statement about a set that I believe to be generally true for one subset [0], and it was refuted with cherry-picked examples from a disjoint subset.

[0]: Of a significant size.


I wish we could have buried that old discussion in the 90s. It's just holding us back from tackling today's challenges. Like: do you want to contribute to the ongoing centralization of information? Because if you're publishing your blog on Facebook (or Medium, for all I know), or releasing your software on GitHub, then you're excluding indie search crawlers from access. Maybe you shouldn't link at all to content silos that block third-party crawlers (except GoogleBot and Bing); why would you contribute to non-reciprocal sites that are ruining the 'net?

Statistics?

I don't have any, but I've rewritten my fair share of GPL code so I could use it both at work and in personal projects. I've started using the MPL for my projects recently because the (L)GPL is just too restrictive.

The GPL has some deeply political implications, and it doesn't fit everyone's views; that's why some people don't like it. But as far as its goals are concerned, I think the GPL achieves them very well.

(I have the opposite experience: in the beginning I just used the GPL because it seemed fine, but thanks to some real-life experience, I came to realize how my own political views are embodied in it and how wrong I'd feel producing code that's not GPL.) (I don't say the GPL is perfect here, but it's way better than the MIT license for me.)


Too restrictive for whom? As a potential user of your software, can I have a guarantee that if I buy a device with your software on it, I can replace it / fix it?

You are not asking all the questions though. What about buying a device that connects to a remote service running that software underneath? GPL won't protect you from such anti-user behavior. It's perfectly fine with depriving users of their freedoms this way. So (A)GPL is not restrictive enough either.

> Refuse to work on systems that profit from digital addictions.

> Refuse to work on systems that centralize control of media.

> Refuse to work on systems that prop up an unjust status quo.

> Refuse to work on systems that require unsustainable tradeoffs.

> Refuse to work on systems that weaponize the fabric of society.

Rich people can afford to turn their nose up at any kind of work apparently. Also noteworthy that the internet was invented as a result of "weaponizing"...so put that phone down.

Oddly enough, all of this also invalidates working on most open source code, since the OSI says a license should not discriminate against intended use.


>Rich people can afford to turn their nose up at any kind of work apparently.

Dirt poor people have made much harder choices all over the world. And they had families and everything. It's called dignity, and it's not a luxury reserved for the rich.

>Also noteworthy that the internet was invented as a result of "weaponizing"...so put that phone down.

Which is neither here nor there. The internet was not some special invention that only the military could make. It was a design created to solve certain constraints. If the military hadn't worked on it (e.g. because programmers refused to do it), it would have been developed by some company or university.

Besides, it took 3+ decades and lots of work after its invention until it got into the hands of the people, thanks to publicly funded telephone network infrastructure, academic work (from MIT to CERN), and private companies (CompuServe, AOL and co). Left to the military, it would be an insignificant niche network.


The OSI model of licensing has its philosophical roots in a very classical notion of universal rights and freedoms. As with all absolutist points of view, this frustrates some pragmatic people, and/or irks those developers who want to benefit from the publication of their work without large corporations then profiting monetarily from the same work.

That's fair, but there's an entire, centuries-old legal apparatus designed to protect the rights of that kind of developer: they should engage a lawyer, craft a license with the balance they seek, and pursue the concrete or abstract rewards of their work.

But instead, it's en vogue to complain about open source licenses and wonder why the cake can't also be eaten, because developers have a marketshare-scaling problem: they want their software distributed widely enough to gain usage and mindshare, but not so widely that AWS sells their work as a managed service at prices they couldn't match themselves. Rightsholders of software that suffered this fate invariably try to split the offering into a libre core and a proprietary set of enhancements, with varying degrees of success, while some go down the nonsensical and legally dubious path of trying to force libre licensing onto proprietary companion software within a larger offering. These cases demonstrate that giving away libre software may not be a sound business model on its own.

Unfortunately for this article's author, the systems he wants other developers to refuse to work on are quite often good business models. That's to the detriment of all of us. Some people will make a stand, but others won't; they'll reap the rewards, and we'll suffer the consequences of their work nonetheless.


It's very easy to derive a good business model from bad ethics. All business models derive from friction, right? We make money doing things for others that they find difficult or unpleasant to do for themselves. The more difficult/unpleasant it is, the greater the potential value.

Something that can make a task difficult or unpleasant is that task being unethical. Take for example an investment scheme that bilks the elderly out of their retirement savings. Technically, it's not difficult to do. Ethically, it's awful. So it's profitable because it's unethical. If robbing the elderly was ethical, a lot more people would do it, and there'd be a lot less money in it.


I think one can simply refuse to work on code that adapts the open source code towards nefarious ends. A screwdriver can build a pipebomb, but surely the person using the screwdriver is more culpable than the person who built the screwdriver.

Define "nefarious end".

This is why the OSI guidelines are correct to stay out of the tarpit of value judgements.

Edit: BTW, this is another reason the OSI does not discriminate against intended use: to protect otherwise rational people from behaviors like you see on HN... blind downvoting and no responses.

HN is becoming incredibly anti-intellectual, bordering on reddit


A good start is anything that works contrary to the Universal Declaration of Human Rights: http://www.un.org/en/universal-declaration-human-rights/

How about acting in a way so as to not cause others to suffer or impede their search for happiness? The Dalai Lama makes a case for this as a universal ethic in his Ethics for a New Millennium.

Calling making any kind of value judgement a "tarpit" is itself a value judgement in favor of the status quo and ceding any of your own culpability to others.


[flagged]


>What businesses have goals that violate that Charter? You might actually want to read it...

It's not their "goals", it's their behaviors. Tons of businesses have behavior that violates that Charter.

https://en.wikipedia.org/wiki/Sweatshop

https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal

https://en.wikipedia.org/wiki/Banana_republic

https://www.independent.co.uk/life-style/gadgets-and-tech/ne...

https://www.alternet.org/story/146579/coca_cola%27s_role_in_...


> I feel like I am debating a bunch of fifteen year olds

This is why you're getting downvoted, not anti-intellectualism.


Article 12. No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

That being said, you should go for a walk or something. You seem to be getting unnecessarily worked up over the fact that there are people on HN that have different opinions on this subject.


It's up to you, isn't it? If you're okay with writing software that actively harms people, then you're the one that needs to figure out a way to sleep at night.

The OSI guidelines are themselves a codification of values; all systems are. Don't fool yourself into believing that there's any way to function in the world without exercising discriminating judgement. Your responsibility is to make sure you're exercising productive judgement, not avoiding judgement altogether.

I get to choose to make my money programming or doing something else.

I am not rich.

And yes, phones and social media networks have been weaponized against our attention and wellbeing. That's why people at the head of companies producing these systems keep them out of the hands of their children.

Edit: accidentally left out "their" in the last sentence... Definitely changes the meaning a bit :D


> That's why people at the head of companies producing these systems keep them out of the hands of children.

?

Does Apple stop my kids from placing orders? No. Does Youtube stop my kids from watching videos? No. In fact, they make a special service specifically targeting kids.


Parent means that the execs don't let their own kids use the toxic products they make.

Let's see what a rational stance would look like.

- technologies like the Internet, phones, and GPS are a result of military investment. They are no longer primarily funded from military resources. The technology is here, and nothing would change if we stopped using it; actually, a lot of human suffering (including the material kind, i.e. tax money) would be rendered pointless if we did.

- employers like Facebook or Google do a lot of data analysis that might be considered wrong by many privacy-oriented people.

- licenses are open, but that does not mean that a person should be expected to work on software they deem unethical. There is no connection between a license and subjective personal ethics.

- required functionality might be oriented towards unethical ends, or be neutral, or ethical. A developer is free to choose whether or not they want to contribute, and I expect them to make a subjective evaluation of their feelings.

- we have freedom of speech; developers are free to persuade each other

- taking things to the extreme is never productive


> Also noteworthy that the internet was invented as a result of "weaponizing"

For those curious about how military funding influenced the invention of ARPANET, the Internet Society has a brief article on the history [1]. Ironically, the site's responsive behavior is not designed very well for small viewports; for example, it is difficult to read on a phone.

[1] https://www.internetsociety.org/internet/history-internet/br...


I'm not rich, yet I quit my job to work on the problems the writer mentioned in this article.

I'm surprised I can't find more people willing or able to do this.


>> Rich people can afford to turn their nose up at any kind of work apparently.

No, those who have the opportunity to work on stuff that supports some values should do it. You introduce the notion of money here, but it's nowhere in the original text.

>> Oddly enough all of this also invalidates working on most open source code

Those who work on nuclear fission don't all work on nuclear bombs (fortunately).


So you are asserting that nuclear fission is benign and working on it is unassailable?

There are plenty of people who would disagree.


Which is neither here nor there. In the end, if you agree to work only on benign things, you decide on your own assessment of what is benign and what is not, not on whether some people disagree.

> since the OSI says a license should not discriminate against intended use

And why would an opinion of some organization funded and supported by unethical corporations even matter here? Of course they want you to make software they can use. You shouldn't though.

It's very reasonable to discriminate at least against unethical corporations. You probably don't want your software to be used in organizations that kill people or that sell censorship solutions.


How do you know what I want? You probably don't want your software to be used in organizations that perform abortions or promote religious beliefs.


