De-facto closed source: the case for understandable software (13brane.net)
198 points by zdw 11 days ago | 168 comments





The more I read about the js ecosystem, the more it seems the inmates are running the asylum

JS has needed a good stdlib for a long time.

Everybody was using this package but apparently no one but an adversarial player stepped up to actually maintain it.

(And don't get me started on "let's always get the latest version of the package")
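For context, npm's defaults encourage exactly that: `npm install --save` writes caret ranges, so a dependency like `"^1.3.0"` silently picks up any future 1.x release on a fresh install, while an exact pin does not. A sketch of the two styles in package.json (the package names are just illustrative):

    {
      "dependencies": {
        "left-pad": "^1.3.0",
        "express": "4.16.4"
      }
    }

Lockfiles mitigate this for applications, but a library's loose ranges still decide what its downstream users pull in.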

> You want to download thousands of lines of useful, but random, code from the internet, for free, run it in a production web server, or worse, your user’s machine, trust it with your paying users’ data and reap that sweet dough. We all do. But then you can’t be bothered to check the license, understand the software you are running and still want to blame the people who make your business a possibility when mistakes happen, while giving them nothing for it? This is both incompetence and entitlement.

Not surprising. Not surprising in the least. "Oh wow somebody 0wned the package I needed". Maybe because js projects have an order of magnitude more dependencies than a Python/Java/Go, etc. project. Maybe because, in the extreme opposite of NIH, people feel the need to import a module for every small thing they want to do? "Stack overflow programming", "how do I add 2 numbers using React, is there a module for that!?!?!"


The cynic in me is saying that JS is the new PHP. Ironically, both are very capable and suitable languages, but the sheer freedom they offer attracts crowds of wannabe developers who think they are gods because they know how to shave off 10% of execution time in some small part of their overengineered and unmaintainable mess. And give talks about it, write blogs and post clever tweets.

However, developing robust software is possible in JS, the same way it was possible in PHP. You just need to take some time to vet dependencies a bit better and not adopt every fad the moment it surfaces. Common sense and experience help a lot. But it's incredible what can be built with modern toolchains and how maintainable it can be. Don't let the anti-JS feeling scare you away from trying React / Vue /..., just don't forget that state management, separation of concerns and similar concepts are still valid.


> the sheer freedom they offer attracts crowds of wannabe developers who think they are gods because they know how to shave off 10% of execution time in some small part of their overengineered and unmaintainable mess. And give talks about it, write blogs and post clever tweets

Honestly, I think JavaScript just has a lot of new developers who have never been exposed to concepts of software development such as dependency management, cultivating trust, and how (free and) open source works. I think the kind of developer you're describing is actually a minority.


This is my thought as well. "I have creative experience and/or student loans but I don't know what to do for work so I'll take a Javascript course and become a web dev."

It's wonderful that programming is becoming a kind of "equalizer" on the job market. But without supervision from software engineers (i.e. people with the experience and aptitude to design software properly) you end up with a mess.


The myriad of small snippets of JS encapsulated as npm modules is best compared to the alternative of copy-pasting solutions from websites like stack overflow. Instead of lots of copies of solutions to common problems, now everyone shares the same implementation. This obviously has the centralization weaknesses but also has the centralization strengths.

The fact that there's no panacea is a fact of life. There's a tradeoff, but it is not reflective of the people in the JS community.


In the Java world those snippets tend to combine into bigger libraries. Apache Commons and Google Guava are the best-known ones, and their reputation is established. I don't understand why the JS world does not do something similar: some essential-utils from Airbnb or whatever the JavaScript analog of Google is. And if there's still a need for some tiny function, just implement it and move on! I can understand that JavaScript favors small download size, but modern tools should be able to remove unused code, so that shouldn't be an issue either.

I'm not talking specifically about having dependencies instead of copying and pasting utility functions.

In both cases you are expected to at least sort of understand the code you are copying/importing.


That's exactly what happened to PHP, with the difference that PHP core developers were also not exposed to good community and development practices before starting their project, while there exist JS core projects in good shape.

There exist PHP projects in good shape as well... and remember JS existed before Node and NPM, and a lot of javascript is still written entirely outside of that ecosystem.

If anything, taking for granted that the JS ecosystem has a higher caliber of developers in it than PHP only serves to make the JS ecosystem look worse in contrast.


There exist PHP projects in good shape now. Even the core developers learned their lessons and are seeking a good language now. PHP is maturing, and may become a good language at some point... if the world hasn't moved on by that point, leaving it an old, brittle, powerless one before it has a chance to put its flexibility to good use.

Javascript started from a much better position. You simply couldn't choose a sane toolset for working in PHP in the beginning, but while there is no great choice for JS, there exist some that don't suck.


Javascript started in the same position as PHP - a wild west community of amateurs writing their own code by hand... then jQuery and other frameworks came along, and more amateurs were writing their own code mostly by hand around those frameworks. Neither language nor community started off on a good footing.

PHP has a lot of built-in functionality and modules that are packaged in Linux distros; in comparison, JS is very bare-bones, and the JS ecosystem encourages the reuse of 5-line helper functions from npm.

Edit: an example of the culture difference:

A dev wanted to get the video length of an mp4, so he installed an npm package, but it did not work. The issue was that ffmpeg was not set up in the PATH, so he asked me to solve it. I suggested using the absolute path of the ffmpeg binary, but that was not an option in the package.

In the end I checked the package he had found and used, and I showed that it just called ffmpeg from the command line; if we just do that ourselves, we can pass the exact options we want and not have to install a third-party package.
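Shelling out directly is only a few lines. A minimal sketch using Node's built-in child_process (the ffprobe flags shown are the standard way to print only the duration; the filename is illustrative):

    // get a video's duration by calling ffprobe directly, no wrapper
    // package needed; pass an absolute path instead of 'ffprobe' if
    // the binary is not on the PATH
    const { execFile } = require('child_process');

    execFile('ffprobe', [
      '-v', 'error',
      '-show_entries', 'format=duration',
      '-of', 'default=noprint_wrappers=1:nokey=1',
      'video.mp4',
    ], (err, stdout) => {
      if (err) throw err;
      console.log(`duration: ${parseFloat(stdout)} seconds`);
    });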

As a PHP dev, when I have to do something, my first step is to find out if there is a Linux CLI app that does it, install the app on the server, and call it (like ffmpeg, wkhtmltopdf, an epub-to-mobi app, etc.). I can trust a CLI app that is packaged in Linux more than a random package.

PHP has Composer, and we use that for third-party integrations; Dropbox, Facebook, and Amazon have official packages that we can grab and use.


>As a PHP dev, when I have to do something, my first step is to find out if there is a Linux CLI app that does it, install the app on the server, and call it (like ffmpeg, wkhtmltopdf, an epub-to-mobi app, etc.). I can trust a CLI app that is packaged in Linux more than a random package.

This is only better because your distribution has a curated set of packages with trusted maintainers that (in theory) read through the source code to make sure it's not doing anything malicious.


Not only that, but I use the CLI app directly rather than using a package that wraps it and hides what is happening from you. Plus you will not be surprised when an update (on an LTS server distro) removes your dependency, replaces it with malware, or drops a feature.

I noticed that my colleagues who never used Linux are not familiar with the CLI tools and the power these tools have, so they are not trained to think "hey, we can use this CLI tool and solve our problem"; instead they think "let's search SO, GitHub, or npm for a solution on how to do X with node/PHP" etc.


Which is, as you say, better.

Also because most of those libraries are just thin wrappers around other C libraries, which have already been vetted and most of the time are already part of the OS. Javascript reinvents everything.

It's not JS itself, but the npm culture, where instead of writing a few lines of code you include someone else's package.

So in my example the job was to get the video length. The dev tried to use https://github.com/eugeneware/ffprobe; if it had all worked it would have been faster, but at the cost of not knowing what is happening under the hood and of depending on this package, which also seems to depend on 3 others (not sure if those depend on others in turn).


I believe the culture started from people who didn't trust themselves to write good code. Also, the lack of preexisting libraries compatible with the async model led to de novo implementations of many things that are covered by wrappers in other languages.

I agree, but I think you can call a CLI app using Node without needing at least 4 extra packages.

IMO browser vendors should check some statistics, see which helper libraries are most popular, and put those functions in the browser; similarly for Node, see what is popular and ship that. Also, the culture needs to change: when someone wants to explain how to create a thumbnail using ImageMagick, or some other task that is just a wrapper, they should show the code for how to do it and avoid creating a new npm package.

I assume, though I could be wrong, that some developers think that having published npm packages, GitHub repos, and blog posts is helpful in getting better jobs, so a culture of "CV programming" appeared where a lot of packages, side projects, and blogs are created but the quality suffers.


I was hoping Gist would become this, a place for sharing patterns and implementations so that devs would learn what they're doing rather than just blindly playing npm lego. But it doesn't seem to have happened.

I think the biggest problem with the JS community is that the mindset is always "implement first, understand second", rather than the other way around. It's ok to use libraries but you should at least have a think about how you would implement it yourself first, and understand at a basic level what's happening under the hood.

Whenever I've brought up this idea the pushback in the real world is absolutely insane. Here on HN people are more receptive to the idea, but in most companies the mindset is just "shortest path to implementation at all costs".

The ironic thing is that bigger companies, rather than giving their devs the time to learn, will force them to spend 80% of their effort on processes and unit testing to try and account for the shit code being produced in the other 20% of their time.


> both are very capable and suitable languages

No, both are terrible, garbage languages, languages that have been whipped into reasonable shape over a decade or two by good programmers forced to use them due to awful monocultures that arose through successions of largely arbitrary events. This reasonable shape means that the most recent versions can be used with relative pleasure if you ignore half of their syntax, clamp massive libraries to them to replace the other half, and assume that your end product will be as fragile as glass.

edit: sorry about the trolly comment (which I make in response to a good comment), but if PHP and javascript aren't bad languages, I wouldn't know how to make a case that any language is a bad language.


> I wouldn't know how to make a case that any language is a bad language

I have a checklist I've been building which is purely based on my experience and is subjective. Appearing on the checklist makes teaching someone else more difficult and looks bad for the language in general.

I believe the ecosystem is part of the language. You can't do much without running into npm if you use node, but you can avoid it if you just use JS - are they separate? I treat them so. If you aren't a general purpose language being used as a general purpose language, that's partially the language's (including ecosystem's) fault, e.g. don't make a UI with erlang.

* Language has a toxic ecosystem - node

* Language makes it easy to do the wrong thing - node, js, PHP, lua, Perl (notice no Python)

* Language makes it hard to do the right thing - node, js, PHP, Perl, erlang, Python, Haskell, Java

* Language is based on an esoteric design principle - erlang, Haskell, lua (meta-things)

* Language which is internally inconsistent - node, js (floats, time, etc), PHP (bifs), Java (type system)

etc. I think there's plenty of languages which have problems and few seem to be shoring them up because we still don't have a consensus on how dynamic typing should be implemented, so we build upon the sand of flawed languages and argue about triviality.


No programming language design survives first contact with the programmers. This also holds at the project level: if you have 100 people working on the same project, it's very easy to mess it up.

I've seen lots of bad python code, I don't think it's hard to do the wrong thing.


javascript isn't even usable out-of-the-box without a few (hundred) libraries to make it comfortable to use. I disagree, PHP is nothing like it; its standard modules library is chock-full of stuff that is fast, useful, and based on existing C/C++ libraries or standards.

but sure, if one wants to classify incompetent developers in the PHP bucket, then everything is "the new PHP"


> javascript isn't even usable out-of-the-box without a few (hundred) libraries to make it comfortable to use

What? Start small and vanilla:

   <script type="text/javascript">...
   <?php ...
You always have a chance of drowning in complexity with whatever you use: Webpack, Compose, Laravel, Make, JSX, SSR, apt-get...

This "small and vanilla" lacks the thousands of helpful functions PHP has built into its standard library; that's what the parent is saying.

Right, but PHP alone is also not "usable out-of-the-box"; it needs hundreds of third-party dependencies, whether as a statically compiled bundle or linked from the distro: libc, libmysql, libgd, and so on... Comparing a bare language with a language bundle is unfair - that is my point.

Fast and useful stuff like mysql_escape_string?

That's been deprecated for years, removed from current PHP and was just a wrapper around an existing C library to begin with. If you want to blame the current language for its former vulnerabilities, fine, but comparing apples to apples would mean looking at all of the terrible ways in which javascript has been integrated into the browser and OS over the years, particularly with Internet Explorer.

And I can guarantee you, from personal experience, that PHP is not the only language in which string concatenation with variables is the most common means of writing SQL queries.
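Parameterized queries sidestep manual escaping entirely; the driver binds the value for you. A minimal sketch in Node, assuming the widely used mysql package from npm (the connection details and table are illustrative):

    // a parameterized query instead of string concatenation; the
    // driver escapes the bound value, so hostile input stays inert
    const mysql = require('mysql');
    const conn = mysql.createConnection({ host: 'localhost', user: 'app', database: 'shop' });

    const email = "alice'; DROP TABLE users; --"; // hostile input, safely bound
    conn.query('SELECT id FROM users WHERE email = ?', [email], (err, rows) => {
      if (err) throw err;
      console.log(rows);
      conn.end();
    });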


I’m not defending js. On the contrary, the two languages are similar in that early on a lot of people had a negative first impression because of some mind-bogglingly stupid decisions made by the initial implementers. This led to feedback loops where those who were repulsed by those negative impressions moved on to other languages while those who didn’t care stuck around. In both cases that led to cowboy cultures developing.

It is true that in both cases, eventually top notch engineers got involved in the projects’ core and cleaned things up to the extent possible. But it’s a heck of a lot easier to improve a spec than a culture.


Yet, this ancient library did not have serious issues. It was not deprecated because someone hacked it or because it was even bad, but because it had not been maintained for many years. The reason was probably that it did not need maintenance; it just worked, but programmers often used it in the wrong way. In fact, the removal from PHP has caused some very small and old projects to break, and IMHO PHP should not have removed it, for the sake of compatibility.

Also, it is much faster than the suggested alternative (PDO).


It did have serious security issues. That’s why not long after there was another function added named mysql_real_escape_string.

No, that's not true. That function name was copied directly from the MySQL C API. It was not added by PHP.

https://dev.mysql.com/doc/refman/5.5/en/mysql-real-escape-st...


“You can use this function safely with your MySQL database queries if and only if you are sure that your database connection is using ASCII, UTF-8, or ISO-8859-* and that the backslash is your database's escape character. If you're not sure, then use mysqli_real_escape_string instead. This function is not safe to use on databases with multi-byte character sets.”

you're focusing on a function with a known limitation, which was superseded 15 years ago; practically everyone has used mysql_real since then. That's not the reason the module was removed, though. The reason is that the mysql_* functions are "too low level for noobs".

https://stackoverflow.com/questions/16859477/why-are-phps-my...


That is a function made by MySQL developers [1]. Why don't you blame Sun or Oracle or a Swedish company whose name I don't remember for this?

[1] https://dev.mysql.com/doc/refman/8.0/en/mysql-escape-string....


You did remember it: "MySQL". It was a joint-stock company, so it is actually MySQL AB, but that suffix is quite similar to LLC, S.A. or A/S. AB literally translates to "stock company"; I think the English term is joint-stock company.

JS is haunted by its past; today we are almost back where we were in 2011. On the one hand it has this perfect learning curve, being accessible to everybody who has mastered basic computer usage while still offering a lot of advanced but optional features. At the same time it has this bad reputation of being unreliable and having bad documentation/code.

I think around 2012 frameworks got popular that simplified the task of building JS apps that are as complex as apps in C++ or Java. Actually using these frameworks wasn't trivial when they came out. Now that's pretty easy, but the deps are crazy. I wonder what happens next; maybe some super advanced package manager will come...


> I wonder what happens next, maybe some super advanced package manager will come...

This is what needs to happen. Javascript isn't a proprietary language, yet the entire ecosystem is currently inextricably linked to the single point of failure in a single company's proprietary database, language extension (which Node.js is) and package manager.

I would hold out hope for Webassembly but I'm certain it's just going to be swallowed up by the Node monster along with everything else.


I think JS has a better standing than PHP, because it was standardised and got world class devs early on.

But in all other aspects I think JS is the new PHP.


Javascript is inherently broken and unfixable. With Python, if I find a bug or something that I think should be in the standard library then I can just post on the mailing list and make my case. The core devs might not accept my proposed change, but at least I'm guaranteed my day in court. And if you get ruled against then you can go back and gather more evidence and appeal at least once and probably even twice before you get told to fuck off.

Whereas with Javascript if you find a bug or something you want added to the core library then you need to pay millions of dollars to join ECMA. How can you find competent developers to work in an ecosystem with that kind of governance? You can't, which is why we are where we are now.

Even with respect to the implementations, if you file a bug with any of the major browser vendors, you'll probably die of old age before the ticket is even triaged, let alone fixed.


I haven't done it, but if you want to propose a change to JavaScript, it looks like there is a process to follow, and it doesn't necessarily involve spending money:

https://github.com/tc39/ecma262/blob/master/CONTRIBUTING.md


> JS has needed a good stdlib for a long time.

To me this illustrates a fundamental problem that JS has to deal with that is virtually unique in the programming language space: multiple browsers across multiple browser versions. It is incredibly difficult to get all browsers to adopt a "standard library", and even then it takes years for all users to adopt the browsers that support it. On top of that, not all browsers implement the standard library properly. It really is a nightmare, grown out of competing browsers with users that do not update them often enough.

> Maybe because js projects have an order of magnitude more dependencies than a Python/Java/Go, etc. project.

This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.

Let's flip to Python. Let's say magically you can run python inside a browser starting tomorrow. The second you bring in a library like `numpy` you're looking at a bundle size of 40 MB, and that's just one dependency. In the JS world that is utterly unacceptable. All the languages you mentioned take advantage of the fact that they can download those libraries to the filesystem and forget about it. JS has to download libraries over the wire, it's a completely different game.

What I'm trying to say is that the JS ecosystem didn't invent a bunch of problems to solve or that the people running the ecosystem are script kiddies. There are very unique problems that need to be solved in this ecosystem that make it different, especially when referencing the three languages you mentioned in your post.

With all of these "JS sucks" arguments I see a severe lack of empathy, or even of remotely trying to understand why JS has the problems it does.


>To me this illustrates a fundamental problem that JS has to deal with that is virtually unique in the programming language space: multiple browsers across multiple browser versions. It is incredibly difficult to get all browsers to adopt a "standard library"

You don't need to. The standard library could just be a community curated project (with the help of major browser vendors) that ships as extra code. It could even be on npm.

If the browser has it included, even better, if not, it's referenced the usual way third party dependencies are.

The problem is having a package.json like:

  random lib 1
  ...
  random lib 500
Not having a package.json like:

  function 1 of well curated lib X
  function 2 of well curated lib X
  function 22 of well curated lib Y
  ...
  function 35 of well curated lib Y
where e.g. Y is lodash, or react, and such.

>This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.

That doesn't seem to be the case either. In major web pages, even from big companies, there are multiple versions of dependencies, even full deps like lodash and co. And people use all kinds of gigantic (web-wise) frameworks and third-party libs like moment.js with wild abandon.

Besides, even if that were the problem, there's nothing stopping you from having a modular set of libraries (like lodash), where you can cherry-pick the functionality you need and only load that, as in the sketch below.
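lodash is a concrete example: it ships per-method entry points, so you can pull in two functions without bundling the whole library. A minimal sketch (the handlers are just illustrative):

    // import only the functions you need; the bundler then includes
    // just those modules instead of all of lodash
    import debounce from 'lodash/debounce';
    import groupBy from 'lodash/groupBy';

    const onResize = debounce(() => console.log('resized'), 200);
    window.addEventListener('resize', onResize);

    console.log(groupBy([1.2, 1.9, 2.3], Math.floor)); // { '1': [1.2, 1.9], '2': [2.3] }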

The problem the parent mentions is not "JS needs to include big libraries and stop using small dependencies" but JS needs to stop using random small dependencies from here and there.

Using 100 dependencies from all kinds of crappy upstream places (e.g. some crappy leftpad implementation) is different from having a curated set of libraries and loading 100 small dependencies from that.


I had the same idea a few years ago. Basically, we need a new URL scheme for this, which would be able to uniquely address files from well-known sets and be backward compatible with old browsers.

In a supporting browser, a resource would be downloaded once and kept in cache for a long period, maybe forever.

For example, `https://well-known.js/common-1.1/lib-1.2.js`. The browser can download just `lib-1.2.32.js` (redirected from lib-1.2.js) every time, or it can download the whole `common-1.1` bundle once and keep it in a separate long-term cache until it is deprecated. And when it is deprecated, the browser will just revert to the old behavior.
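The closest thing that exists today is probably a shared CDN plus subresource integrity, which at least pins the exact bytes so a cached copy can be trusted. A sketch (the URL and digest are placeholders):

    <!-- the integrity attribute makes the browser verify the file's
         hash before executing it, whoever happens to serve it -->
    <script src="https://cdn.example.com/lodash-4.17.11.min.js"
            integrity="sha384-REPLACE_WITH_REAL_DIGEST"
            crossorigin="anonymous"></script>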


That's a good implementation idea!

I agree that download size is a big deal in JavaScript, but it still seems rather unfortunate to use download size as a reason for ignoring the good work of well-established teams with good reputations and relying on random individuals instead.

If tree-shaking isn't good enough, it needs to get better. Instead of having lots of individuals creating tiny packages with one function in them, there need to be fewer, broader libraries that are closely watched by larger teams.


> The more I read about the js ecosystem, the more it seems the inmates are running the asylum

the issue is NPM. NPM is a bad package manager. A package manager that allows duplicate versions of the same dependency is broken to begin with. People complain that fetching react-native results in hundreds of packages installed on their computers. How many freaking dupes? Of course nobody is going to audit all that crap. Conflicts should be resolved upstream, which would lead to more stable packages to begin with.

NPM was developed by people who were clueless about package management and now profit directly from that shit-show, and even Node.js's creator went on record saying tying it to NPM was a mistake.


Nah. A package manager that does not allow multiple versions of the same library is the bad one.

Npm allowing it is a good thing about npm.


This thread and the complaints about Node.js's broken ecosystem prove you wrong.

It's not the dependencies that cause me concern. It's the dependencies of the dependencies.

This. The direct dependencies are usually quite manageable. As a project developer, I know what they do, why they were chosen, and roughly how they work, and I have decided that the risk is worth it. This is not the case for sub, subsub... dependencies, and there are so many. If the tree were flatter, due diligence would be much easier.

Library authors: please be extremely conservative about when to pull in dependencies. I will prioritize this higher for projects that I maintain.
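A quick way to see the scale of the problem is to count everything the lockfile actually resolves. A rough sketch, run next to a package-lock.json (it handles both the old nested "dependencies" format and the newer flat "packages" map):

    // count direct + transitive packages resolved by the lockfile
    const fs = require('fs');
    const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));

    function countNested(deps) {
      return Object.values(deps || {})
        .reduce((n, d) => n + 1 + countNested(d.dependencies), 0);
    }

    const count = lock.packages
      ? Object.keys(lock.packages).filter((p) => p.startsWith('node_modules/')).length
      : countNested(lock.dependencies);

    console.log(`${count} packages resolved (direct + transitive)`);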


This reasoning has a problem. We should always be concerned about dependencies and always keep them to the minimum we can correctly get by with.

The problem with thinking "these X packages are OK for this library" is how we got where we are. Yes, we could just avoid using trivial libraries, but even for the other ones, the decision to add one to your lib has to be made in the context of the app, or lib, or lib's lib, using your library. If you only imagine your lib being directly used, your scales and judgements could be way off the actual cost/benefit.


As someone else alluded to, I think the problem stems more from the fact that major official libraries are glued together using a lot of dependencies that aren’t necessarily trustworthy. This doesn’t happen nearly as much in the other ecosystems (at least none that I have seen). In Android dev, for instance, you often will see the official Google support binaries as dependencies but nothing else. In .NET you’ll often see Newtonsoft’s JSON serializer but very few non-Microsoft third party dependencies.

It does to a lesser, but still worrisome, extent in Java.

I think it's precisely because it lacks a good committed (commercial) owner like Android and .NET have. I could see it going from bad to worse in the future, as Oracle continues to jettison responsibility for bits of the ecosystem.


I think it’s healthy to have an ecosystem of large, well vetted, third party libraries maintained by long-standing organizations like Google (guava), Apache (commons, hadoop), or Pivotal/Dell (Spring).

The early decade of Java open source was mostly driven by the Apache Foundation

Anecdotal, but I find the Node std lib + the language primitives to be good enough for me. I never find myself reaching for external dependencies outside of React or an equivalent on the front end, a testing framework, etc. maybe 5 dependencies total for a moderately complex app. I’m honestly not sure why the JS ecosystem is so dependency happy. It’s not necessary, and the downsides are massive and obvious.

>JS has needed a good stdlib for a long time.

Yes. Huge package ecosystems are hurtful. For the basics, languages should come with a nice, batteries-included standard library.

For the rest, people should only trust community projects with big following, processes, etc (like Apache stuff, Django, moment.js, postgres, etc), or open source projects supported by companies (e.g. nginx, mongo, React, etc).

The rest, sorry, but you got to write them yourself.

Downloading and using 1-person libs for trivial stuff like leftpad and such is madness.


> The more I read about the js ecosystem, the more it seems the inmates are running the asylum

Let's safely call it immature. The principal objectives here:

    * obtain employment immediately
    * prevent unemployment forever
Part of the problem is that people are taught one thing in school, expected to immediately find work, and then perform something entirely different at work. That, or they are simply Java developers (or whatever, fill in the blank) and get retasked to write JavaScript.

What ends up happening is that people want to do their previous job in the context of their current job, such as write something that looks like Java in JavaScript. When that becomes the failure you would expect they blame the technology for not being their previous thing.

Compounding that training failure is that JavaScript has a low barrier to entry. Some of the people writing in this language are less than brilliant. The end result is a cluster fuck, as evidenced by frequently used terms like imposter syndrome, snowflake, Dunning-Kruger, and so forth. The insecurity, fear, and delusion are clearly evident to me as a corporate JavaScript developer.

How do you solve for the obvious emotional failure that comes from the described lack of preparation? You hide under layers and layers of abstractions. You make life easier, to the point of hoping code exists that does your job for you. Unfortunately that easiness is the opposite of slim or simple. It's great, until it isn't, if you aren't in a hurry.


Npm should delete the top useless libraries and force package maintainers to rewrite their code to be less dependent on them.

Or they should do what they appear to be incapable of which is to provide certification and verification of certain packages. If people really need a lib for inarray then npm should maintain and certify it.

It should not be difficult for npm to raise the funds needed to do that.


"FOSS was never about trust in software owners."

Actually, it is. That's why foundations such as Apache, Mozilla, Khronos, etc exist. Transfers of ownership, abandonment, and bad faith are not new. We need to trust not only in the software for today, but for tomorrow as well. Foundations step in because they're able to harness the financial clout to attract maintainers.

"We must make software simpler."

We must make software INTERFACES simpler. And that means opinionated solutions, preferably with a HEAVILY opinionated top layer API for the 80% and a less opinionated lower layer API for the 20%.

This is where "wide open" development systems such as Javascript and Lisp fail: They offer too much freedom. You need a standard, opinionated library that provides most of what people need, just for interoperability sake. You need standard ways of doing the common tasks so that people can harness those patterns into reusable modules that stand a greater chance of working well with others.


Yeah for all the good points in the article the author flubs these two major takeaways. Trust is required for all human interactions, not just software. But at scale you have to build the tools to enable trust into the system. And JavaScript/NPM has a lot more and bigger built-in systemic risks, including a much greater level of trust required and a lot fewer tools to enable it.

As someone with the slightest consciousness for security, this always terrifies me.

The Gradle version in the Ubuntu repos is too old? Just add a PPA from some random guy on the internet. The Gradle wrapper also exists, but you'd need to type "./gradlew" instead, which is apparently enough reason to use the PPA instead.

Need a Gradle plugin? Oh, yeah, just add Maven Central and JCenter as repos. No idea who (/if anyone) audits those, but we'll just trust them anyways.

Need a Docker image? Just go to Docker Hub or Quay and download one that looks good.

Don't quite like Eclipse and the company won't pay for the IntelliJ Ultimate Edition? Just install the Community Edition, with no idea if that Privacy Statement actually means they could phish your SSH keys, phone number etc.

Need a way to transfer files? Google Drive is a great way to do that.

Need an operating system? We have either Windows 10 or Windows 10 for you, which has been shown to transfer encrypted, undisclosed packages to Redmond, even in the Enterprise Edition.

Office suite? MS Office, for which the same is true.

Need to look up some specific issue with security critical software you use? Just type it into Google.

I even once saw someone who typed a password into the Chrome URL bar to show the guy next to them how it's spelled.

With the sheer disregard I frequently encounter for any sensible behaviour, I've really stopped wondering how hacking, industry espionage etc. work. I'm already quite content if it's not just all out in the open.


For any electronic device of your choice, search for its manual and look for this notice:

"This device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation."

Everything is backdoored, absolutely everything: your hardware, your firmware, the compiler used to build your software, your software, libraries used by your software, your network infrastructure, your crypto, your sources of entropy, the machines you communicate with, everything.


It is worrying every time I install a gnome extension.

Understandability is definitely a nice thing to have, though I believe it's ultimately limited by essential complexity of the problem domain. Also, it's not a solution here, because a malicious actor can make the code very easy to understand in a way in which you gloss over it and not notice subtle bits here and there that add up to an attack.

Still, this is not the main problem highlighted by this (yet another) NPM fiasco. I believe there are two and only two core problems that caused this issue, and that will enable future incidents like this:

Problem 1: some scoundrel violated the commons. The commons have no effective means of tracking them down and punishing them, so that a) they'll deeply regret what they've done, and b) other scoundrels will be deterred from trying. Lack of means of effective policing means various open source communities will keep having such problems.

Problem 2: people don't check their dependencies. Yes, I can already hear all startups screaming, "we can't afford it". Well, sucks to be you, but you'd better hustle and find a way. The licenses of almost all the software you use disclaim any responsibility for anything whatsoever, so if you expose users to that software and that software harms the users, it's your fault. You mishandled them. So find a way to vet your software, or buy some insurance against yet another NPM compromise. Or don't, and accept you're taking a risk.

To be clear, I'm not advocating a general "caveat emptor" attitude to software. We've built a civilization in part on systems and regulations that allow us to not vet everything we interact with, and yet be quite confident in our safety. But FOSS is not there yet (Problem 1). It's built on trust, but most communities have little means of protecting that trust. As for companies, I have little sympathy (Problem 2), just as I wouldn't have much for any other company in any other industry that said they can't afford to do their basic job right.


I don't know about the feasibility of understanding all that's going on on your computer with the ongoing stacking of layers upon layers (kernel, userspace, "container", language VM such as node/v8 or the JVM, framework, etc.). If anything, I'd expect younger devs to understand less of it all, since they don't learn from the ground up (e.g. from hardware). But there has always been the option to use interchangeable standard components based on POSIX, SQL, etc. as a remedy against becoming critically dependent and vulnerable. In fact, node.js started as just one of many implementations of the CommonJS module loading mechanism and base API, in addition to being based on one of the most portable languages of all time.

But I guess no amount of education will make the kind of developers go away who think 400+ package dependencies for a trivial app is a good thing, or that you absolutely have to write a proprietary not-quite-JavaScript language compiled to JavaScript via babel, or that node.js in itself is a goal, rather than merely an implementation.


Not knowing much about node.js, wouldn't the most logical step now be to make a standard node.js library that implements the majority of the functions of those ~400 packages? Vetting one package is easier than vetting 400. I guess that's what Rails did in the very beginning; I keep on discovering in-built magic.

Node does have a standard library. It's not massive, but there's a lot more available out of the box than is available in a browser.

The tricky part is that lots of npm packages end up bundled into an app that runs in the browser. It's just a guess, but I'd guess that significantly more npm imports end up running in the browser than in Node. So even giving Node a standard library as extensive as Ruby's wouldn't help as much as one would hope.
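For a sense of what Node's standard library already covers on the server side, here is a dependency-free HTTP server; a minimal sketch, with nothing from npm involved:

    // zero npm dependencies; the http module ships with Node itself
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('hello from the standard library\n');
    }).listen(3000, () => console.log('listening on http://localhost:3000'));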


I think a bigger issue is that many people don't even write JavaScript, but another language that transpiles to it.

New to web dev. New to node. Looks like a complete mess to me, deserves criticism, as does much of the web dev ecosystem.

The package maintainer should indeed have found someone to pass it on to (see The Cathedral & the Bazaar). And that doesn't mean handing it to the first person he's never heard of who steps up.

BUT this applies to all package managers, maintainers, and OSS at some level.

The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that. The idea that you “don’t have to trust” the authors is untrue, in the current workflow. FOSS relies entirely on trust.

I’m not convinced FOSS is even a good idea at this point, but with the advent of widespread cyberwarfare we need to either introduce a sophisticated accompanying trust model, or exclude FOSS when working commercially.

This is a business opportunity. Audit FOSS and sell your audit guarantees in a contract. Offer services to audit more recent versions on the proviso that you can sell that audit elsewhere.

This will have the incidental benefit of encouraging clean software to be written in languages that minimise audit costs, as those projects will get used more.

Some commercial arbitration of FOSS now looks inevitable.


> The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that.

This may sound strange, but once tech companies actually had to either write or buy all of their software, and if they didn't have a contract in place that made someone else responsible for its quality, they were. So, basically, the world was absurd.

The way FOSS works is if users agree that a common good is important enough to invest in, and then they all benefit more than they would if they invested alone; it's anti-competitive. Free (and Open Source) software should be thought of as "free as in beer" i.e., the next round is on you. If you can't audit every line, audit some of them, or pay someone else to do it. Coordinate to get code coverage. If you use a project that is inadequately covered, you're responsible for everything that goes wrong.

If you don't even know what libraries your project depends on, how could that ever be thought of as anyone's fault but your own?


> Coordinate to get code coverage.

This may be a central part of the issue. It's a coordination problem and creating common knowledge.

Each corporation might be willing to pay to have some bits of their dependencies audited as long as others cover other pieces. But to do that they need to be able to announce the audit result and scan their dependency trees for pieces that are not audited and pick from those. You'd also need some common standards what constitutes an audit and the lawyers would probably want some limits on liability so results should be considered as best-effort or whatever.

There are no conventions and social protocols in place to support this.


> The package maintainer should indeed have found someone to pass it on to (see The Cathedral & the Bazaar). And that doesn't mean handing it to the first person he's never heard of who steps up.

WITH NO WARRANTY.

Do you not understand what this part of the license means?

The maintainer doesn't have to do shit. That is the point.

You want to start putting arbitrary ethics and morals on these developers? The "Fuck you. Pay me." talk comes to mind.

FOSS works fine like this and has been for a long time.

We're seeing issues now because of Node's lack of a standard library - not trust issues, and certainly not issues with free and open source software.


Lack of a standard library only makes the trust issue manifest with more hilariously basic libraries, instead of more slowly with larger ones. We've still seen this in Python, the poster child for batteries-included.

I think it's been said elsewhere: this is true, but the frequency and severity are orders of magnitude lower in languages with good standard libraries.

When the entire ecosystem depends on left pad I think you have a problem.

When the entire ecosystem depends on express it is less of an issue because more eyes are invested in auditing it and changes are more widely reviewed.

You think a nefarious leftpad function would make its way into express? It could, but it is way less likely.

Updating the dependency to a newer version of left pad isn't going to raise an eyebrow. And that is the issue.


I might be missing your point, or simply have a hard time understanding this anti-FOSS rhetoric.

I'm reading that you'd pay a third party so you can trust open source code, and you think that FOSS somehow exposes commercial code to more risk in some kind of cyberwarfare? How is that not complete FUD? You already have the option to pay vendors like Red Hat for many open source software components if liability is your only concern; the same is true for many of the more complex libraries out there.

Closed source, on the other hand, would mean buying every single piece of code or paying in-house devs to write that code. I get the quality concerns raised here up to a point, but just because a company paid somebody to write something doesn't mean it's not effectively written by a solo dev under heavy time constraints. Except with FOSS you at least have the chance to go in and inspect/fix the thing yourself if need be.


Two things:

1) The PC software world did run for quite a few years on the model of predominantly commercial/proprietary software, most of it being closed-source, so it's not like it is some far-fetched idea that doesn't work in economic terms.

Personally, I prefer the commercial license/source-included model, with the emphasis on the author/company getting paid to ensure that the situations like the one described here are avoided. You can then have additional educational licenses for ensuring access to developer tools for educational purposes, but that's up to the author/company.

2) If you directly pay someone to write software, I would expect any such arrangement to include the source code as part of the work product, regardless of the ultimate visibility of the source code to outside parties.


The history of cryptocurrency in particular and business law in general shows that adding money to the system doesn't automatically result in trustworthiness. Even the giant corporations providing cloud computing do decide to abandon products and discontinue services, or dramatically raise prices.

As someone else suggested, maybe the way to go is to rely on foundations. Maybe individuals shouldn't be taking on the burden of maintaining software alone? Maybe JavaScript needs a more slow-moving organization like Debian to handle package integration, with all the bureaucracy that entails?


Absolutely - adding money doesn't automatically result in trustworthiness. What it does do, however, is make the transaction fall under legal commerce, which gives the purchaser/user rights and remedies that they do not have with free (as in cost) software.

With foundations or any other form of over-arching bureaucracy, you risk stultifying software developers and harming innovation. It's really, really hard to beat the self-organizing aspects of free markets combined with commercial legal frameworks.


Well okay, but these aren't opposites. Large businesses cooperate internally using bureaucracy. They cooperate externally using (and funding) foundations and other open source work.

There is market demand for stability and it can be a competitive advantage over innovative but unstable alternatives. (Consider why Go and Docker are so popular.)

And why do companies start and fund foundations? Because their customers have doubts. It's better for stability than a market that's not based on standards.


Do you have any rights with paid devices? A Kindle comes with 8 hours, 59 minutes of disclaimers:

https://www.youtube.com/watch?v=sxygkyskucA


>The PC software world did run for quite a few years on the model of predominantly commercial/proprietary software, most of it being closed-source, so it's not like it is some far-fetched idea that doesn't work in economic terms.

And now Microsoft uses Linux for the majority of their own cloud offerings. Open source beats proprietary software on economic terms a lot of the time. It doesn't matter that both can work in economic terms; it matters which one is better in economic terms.


I don't think this book has been completely written yet. I think we're just now starting to see some of the major issues with FOSS, so don't throw up that "Mission Accomplished" banner yet.

FOSS is very much like the internet, in general: it was great when it was a small group of technical, like-minded, dedicated individuals working towards common goals. It starts falling apart, however, once you introduce the rest of the world into the system because the world primarily works on the basis of ruthless self-interest.


FOSS was going nowhere until the rest of the world got introduced to it. It was of more or less purely academic interest for a very long time.

TANSTAAFL. You're literally getting the source code for free. I agree that it absolutely makes sense to pay a third party to audit your dependencies if you can't do so yourself. The alternative is to restrict yourself to code from a source you trust implicitly.

> Audit FOSS and sell your audit guarantees in a contract

Pay Red Hat enough and they will do that. Although you will be limited in what you can use.


Don't know why you're being downvoted, because Red Hat does indeed audit all of the code we ship in RHEL (and believe me it's pretty tedious). We don't ship much in the way of Javascript libs though.

> The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that.

Big businesses absolutely do that. Code quality review, security review, legal review. Every line of every 3rd-party dependency.

Of course, for the most part big business doesn't take 3rd party dependencies. If you have a big enough software org, you write everything above the std library in-house. Why do you think so many of the big open-source frameworks are vended by big 10 tech firms?


> Big businesses absolutely do that.

If that were true, somebody would have noticed this hack before. It was online for 2 months, and they only found out because of a deprecation message.

> Of course, for the most part big business doesn't take 3rd party dependencies.

I have worked for many big companies; they definitely use 3rd-party deps.

> std library in-house

And that contains 3rd party deps.

Big companies have some 3rd parties check the libraries, but it looks like they are not good enough, because they didn't catch this one.


Maybe this particular dependency just isn't used by many big firms?

I've had to go through code, security, and legal reviews at both Amazon and Facebook when desiring to import 3rd party libraries. They were fairly thorough.


> The idea that say a startup has time to audit every line of every dependency is absurd.

I kind of agree, but remember you are getting free software. Not a little, but a ton of free software, and you feel like somebody should guarantee it all works fine.

Choices:

- See what is going on in all your deps and waste a lot of time
- Risk it and use the software without knowing what it is doing
- Pay somebody to guarantee that the software is not malicious


> I’m not convinced FOSS is even a good idea at this point, but with the advent of widespread cyberwarfare we need to either introduce a sophisticated accompanying trust model, or exclude FOSS when working commercially.

So, you are saying that we should prefer code where it is impossible to have a look at the source because that solves the problem of having to trust the developers of that code?


No, the point is that in practice FOSS code is not any more open than closed source.

That's the point of this piece. For any non-trivial edit on a real project with real deadlines the source code is effectively useless, because no one has the time, the resources, or possibly even the inclination to fix bugs, do full-coverage testing, or make custom modifications.

So you have to take the internals on trust. Which is a ridiculous situation when so many packages are created as hobby projects with - literally - no guarantee of performance or reliability.

I realise it's hard for FOSS advocates to understand this, because it's a fundamental flaw with the FOSS philosophy. The benefits are "obvious" to crusaders, but the objective reality is that large swathes of FOSS are full of casual or hobby code that barely works, has gaping security vulns, and/or is nowhere close to being robust enough for production.

"Make software simpler" is a good goal, but hard to do. Other solutions are also possible. They're hard too. So it goes.

But there will be no solutions at all until the FOSS community starts dealing with professional reality instead of relying on free-to-tinker-without-consequences rhetoric - and understands that there are real problems that need real answers, and not just more "Clap Louder" and "At least we're not Microsoft".


Most widely used OSS projects are not hobby projects. Only small trivial projects tend to be hobby projects. Notably, Apache projects are not hobby projects, per foundation requirements.

Quite a lot of contributions are need-driven: a client project needs something, so someone digs into the code and fixes it. It is called "scratching your own itch". The reason for the popularity of OSS libraries was that commercial ones historically had low motivation to improve after the point of sale, especially when it comes to performance. That is why you see a lot of OSS targeted at developers and very little at the general public. Closed-source libraries were unable to compete not just because of price, but because the business of selling libraries does not value quality.


> No, the point is that in practice FOSS code is not any more open than closed source.

You seriously believe this? I mean, it's so obviously nonsensical I can hardly believe you are seriously making that point. With FOSS, you have the option to look at the code, with closed source you don't. And you are seriously saying that that is the same level of openness?

Also, even if it were true: How is that relevant? FOSS is just as bad as closed source, therefore you should exclude FOSS? How does that follow?!

> So you have to take the internals on trust.

So, I can not look at FOSS code? Like, you are telling me it is impossible for me to look at the code that I am in fact looking at when selecting code to use for something? I mean, really? That is your point?

And the solution then is to not use FOSS, because then you don't have to take the internals on trust?


Their point is, clearly, that to the end user, code they cannot understand is for all practical purposes no different than code they cannot view.

Code I cannot understand, I can ask for help with understanding. Code I cannot view, I cannot.

That's true. But most people using free software can't simply ask for "help understanding" millions of lines of code written in possibly multiple languages. Like a lot of the axioms of FOSS, the utility of that breaks down at the scale of modern software.

But for one that still doesn't mean that not being able to view the code is somehow better, as was claimed above, and which I objected to. At best that means that being able to view the code does not necessarily imply the quality of that code is better. Which really is just obvious.

And also, that does not at all mean that the utility breaks down. Back in the day, it was normal for devices to come with schematics. Like, if you bought a TV, the schematic was included. Almost no one who owned a TV could read schematics. But the schematics were still useful to the owner, because those schematics were what enabled you to take your broken TV to any independent repair business of your choice and have them fix it at a competitive price.

You can profit from the wide availability of knowledge without having to learn it all yourself. If there are ten competing car repair businesses in your city that all understand how to fix your car, that is better for you than the manufacturer having a monopoly on repairing your car, even if you don't have the slightest clue how your car works.


I think you have pretty much proved his point.

>I realise it's hard for FOSS advocates to understand this, because it's a fundamental flaw with the FOSS philosophy. The benefits are "obvious" to crusaders, but the objective reality is that large swathes of FOSS are full of casual or hobby code that barely works, has gaping security vulns, and/or is nowhere close to being robust enough for production.

>And the solution then is to not use FOSS, because then you don't have to take the internals on trust?

I think the point is to realize FOSS is not a utopia and has tradeoffs like everything else.


> I think the point is to realize FOSS is not a utopia and has tradeoffs like everything else.

But that wasn't what this thread was about. The statement that I was responding to above was this:

>> or exclude FOSS when working commercially.

I.e., that there isn't a tradeoff, but that the solution to shortcomings of some FOSS that aren't unique to FOSS in any way is to not use FOSS at all.

And that was then defended using equally nonsensical logic.

So, no one here is claiming that FOSS is utopia. But people are implying that proprietary software is. Which I am asking people to justify. So far, nobody has.


No, I think what people are implying is that commercial licensing of software solves the trust/responsibility aspect of software. With proprietary software, there are legal remedies to malfeasance.

In which case that is still both obviously nonsense and irrelevant on two counts.

It's irrelevant because this was about FOSS vs. closed source, not about commercial licensing vs. noncommercial licensing. Even if commercial licensing were the solution, that says nothing about whether the commercial license should be FOSS or closed source.

And it is also irrelevant because there are broadly the same legal remedies to malfeasance in all cases. If you are breaking the law, you are still breaking the law if you are publishing your source code, and you are still breaking the law if you are doing it non-commercially.

And in so far as you mean liability for defects rather than malfeasance, it is obviously nonsense that there are any generally applicable effective legal remedies against terrible proprietary code if you look at the real-world quality of products in the market. You might be able to put together a contract that helps with that, but (a) that is far from the norm and (b) is obviously still irrelevant to whether the code should be open or closed.


1. Commercial does not imply closed source.

2. It’s about legal responsibility and recourse.


> 1. Commercial does not imply closed source.

How is that relevant?

> 2. It’s about legal responsibility and recourse.

How is that relevant?


You pay for the software, you get the source. I gather this used to be the default distribution method, and I think it still gets used for game engines.

> you are saying that we should prefer code where it is impossible to have a look at the source

Just because not everybody has access doesn't mean that a person inside the gated area doesn't get to see it. You made that leap.

If you are building software on top of software that carries no guarantees then you are liable unless you also somehow make no guarantees. Are you able to sell software without guarantees? Maybe?


> Just because not everybody has access != a person inside a gated area doesn't get to see. You made that leap.

Well, I guess that was somewhat of a leap, but it doesn't really make a difference to the argument: The point is that they are suggesting that preferring code that fewer people can look at somehow solves a problem.

> If you are building software on top of software that carries no guarantees then you are liable unless you also somehow make no guarantees.

Well, yeah? But what does that have to do with FOSS? Neither does FOSS imply that there are no guarantees, nor does proprietary software imply that there are guarantees. Hence: What is the relevance?

The only difference between FOSS and proprietary software in this regard is that with FOSS you have the option to do an audit yourself and offer guarantees on that basis without creating a huge unknown risk for yourself, or you could possibly buy auditing services on the free market that come with some sort of guarantee from a third party. There is no option that you have with proprietary software that is somehow impossible with FOSS, which is why the suggestion that not using FOSS for commercial projects somehow solves a problem is strange at best.

> Are you able to sell software without guarantees? Maybe?

Well, given the tons of massively broken proprietary commercial software out there? Yeah, obviously you can?


Best line in the post:

Software must be made understandable. The essence of FOSS for me can be reduced to one fundamental computing right: the right to refuse to run, on my machines, code that I do not have the option to understand. That is it.


To me, those three sentences don't even fit together.

You've always had "the right to refuse to run, on my machines, code that I do not have the option to understand". Nobody is forcing you to run any random piece of code you found on-line. You do that of your own accord. And if you screw this up, and that screwup affects other people, it's your fault. Simple as that.


Not only have we always had this right, the rest of the article argues, correctly, that the option to understand the code you run does not solve the problem. It's like having the option to study before an exam.

Sounds good, but perhaps too general. With a binary you also have the 'option to understand'; it would just require a lot of work.

The call for a massive reduction in complexity reminds me of the VPRI "STEPS" project (Alan Kay et al) to build a complete end-user operating system in under 20,000 lines of code. Sadly, it looks like the project is dead or over, and the links I've found pointing to it on vpri.org are dead today.

Until there's some external stimulus, I don't think the industry is going to change. It's a lot cheaper to add new flashy things if you don't care about complexity (or the consequences of it, like bugs, and security). Getting a consumer to care about the complexity of the software in their computer or phone is like asking a Ruby programmer to care about the microcode in their CPU. It's not that we can't understand the problem but it's not a concern until it gets so bad it impacts my level of abstraction.

I'd love to see programs start putting little badges on their webpages that brag about how few lines of code they have, how low their cyclomatic complexity is, or how short their dependency tree is. I'm terrible at marketing but surely there's a way to make this sound appealing.


The project completed successfully and wrapped up. Final report (2012) http://www.vpri.org/pdf/tr2012001_steps.pdf

Discussion on HN (2016) : https://news.ycombinator.com/item?id=11686325

I think Bret Victor is one of the people driving the effort beyond that project: http://worrydream.com/


> I may be wrong, but in the millions of downloads of event-stream and it’s 1.5k dependent packages, there should be at least 5 companies involved which, had they spared 1 dev-hour per week each, should be more than enough to keep event-stream kosher.

This is a really good observation. If you know some important transitive dependency your job depends on is missing a maintainer, tell your nearest supervisor you'll be spending an hour a week taking care of that. Or a similar amount of time helping a direct dependency get rid of the broken transitive one, whichever makes more sense.


This might work for the most privileged class of software developers with the most clout, but I'm pretty sure that most devs don't have the luxury of informing their supervisor what they'll be working on.

I'm sure you may be correct in regards to some areas of the world, but where I am, software engineers are paid very well. That means low supply and high demand. That means you probably have more leverage than you think.

If your employer is dumb and doesn't realize that's why they pay you so much, you can probably exchange some of that high pay for more respect by going elsewhere, taking at most a slight cut to your pay.

If it's about personal insecurity (it used to be for me), think of it this way: all that money they give you? It's because they want you to be a professional, which involves informing management when you take corrective action based on your expert knowledge.

I'm not telling you to be dumb about it. Sometimes it doesn't make business sense to run on maintained software. But if it does, and your manager may not know that, they trust you to have the integrity to inform them. Any good manager, that is.


Agreed. More likely that management will say ‘don’t use that dependency’ or ‘we will write our own from scratch’

You need to find a better employer.

I wasn't talking about myself, but some colleagues I have elsewhere. I pretty much come and go as I please, define the technical direction of the product and get paid handsomely for it.

Devs are still treated like children in many companies.


Without assigning blame to anyone, a simple-to-follow best practice would probably be to just never hand over a (personal) project. Mark it as deprecated and let the community fork it. If you feel like it, point to the new, maintained fork.

Absolutely. I don't see why the author brushed off the maintainer's responsibility so casually.

> lets roll for a bit with the assumption that a small amount of extra care on Tarr’s part could have avoided this mess.

Are you kidding? The bare minimum of not just handing over a repository under your name to a random stranger is apparently a hard-to-fathom concept for this author.


Whilst this particular instance focuses on the JS ecosystem and NPM, realistically the general points apply much more widely.

The number of commercial services and pieces of software that are largely reliant on huge piles of OSS is significant.

In contrast to that we're not really seeing those companies focus enough on the security of the OSS that they rely on.

To provide two other well known instances, both Heartbleed and Shellshock sat as vulnerabilities in their respective OSS software for a large number of years. The packages were widely used by commercial software vendors, yet none of them discovered the issues...


It should be noted that in npm, examining the source of the package tells you absolutely nothing about whether you should trust the package - the actual blob that you download need not be generated from the same source.
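
For what it's worth, you can check this yourself with standard npm commands. A rough sketch (event-stream is used only because it's the example at hand; this assumes the package has no build step that rewrites files on publish):

    # the URL of the blob the registry actually serves
    npm view event-stream dist.tarball

    # download that blob and unpack it; contents land in ./package
    npm pack event-stream
    tar -xzf event-stream-*.tgz

    # now diff ./package against a checkout of the matching git tag;
    # anything present only in the registry copy deserves scrutiny

Of course almost nobody does this for every transitive dependency, which is rather the point of the article.
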

Support for tree shaking in JavaScript build tools is improving. This means that functions exported by a dependency are left out of the bundle when they are not used. This could be the foundation for a more trusted utility library.
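
For instance (a minimal sketch; file and function names are invented), named ES module exports are what make this statically analyzable:

    // utils.mjs -- each helper is a separate named ESM export
    export function chunk(arr, size) {
      const out = [];
      for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
      return out;
    }
    export function unique(arr) {
      return [...new Set(arr)];
    }

    // app.mjs -- only `chunk` is imported, so a tree-shaking bundler
    // (e.g. Rollup, or webpack in production mode) can drop `unique`
    import { chunk } from './utils.mjs';
    console.log(chunk([1, 2, 3, 4, 5], 2)); // [ [1, 2], [3, 4], [5] ]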

Such a library could be made by crawling npm stats to find out which kinds of small packages/functions are most used, and selecting a subset of these. The code could be lifted from the original packages (with proper licenses and attribution). The package maintainers could be approached to see if they are interested in helping to maintain this new collection package. The thesis is that such a package can be maintained more efficiently by a group than by each individual maintainer alone.
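
The crawling step is cheap. A rough sketch against npm's public download-counts endpoint (the candidate list is invented; needs Node 18+ for global fetch, run as an ES module for top-level await):

    // rank candidate micro-packages by last-month downloads
    const candidates = ['left-pad', 'is-odd', 'pad-left'];
    for (const name of candidates) {
      const res = await fetch(
        `https://api.npmjs.org/downloads/point/last-month/${name}`
      );
      const { downloads } = await res.json();
      console.log(`${name}: ${downloads} downloads last month`);
    }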

Once it exists, one can post PRs to common dependants to use the new package.

Could it work?


> Support for tree-shaking in Javascript build tools is improving.

I agree tree shaking could help a lot. But every time I have tried to implement tree shaking properly in a JS codebase, it has not been trivial. If one configuration option is not properly set up, the entire library gets inlined during compilation.

Furthermore, libraries like lodash have a lot of internal interdependencies, so using something like `get` could bring in a ton of stuff that is technically used but totally unnecessary.
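
Concretely, the import style is what decides how much you pull in (a sketch showing three alternatives, not one file; actual savings depend entirely on bundler configuration):

    // pulls in the whole CommonJS build; most bundlers can't tree-shake this
    const _ = require('lodash');
    _.get({ a: { b: 1 } }, 'a.b');

    // per-method path: only `get` plus its internal helpers
    const get = require('lodash/get');
    get({ a: { b: 1 } }, 'a.b');

    // ESM build, tree-shakeable in principle -- but `get` still drags in
    // lodash's shared internals, which is the problem described above
    import { get } from 'lodash-es';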


The common consensus is that building JS apps without an endless list of dependencies is so slow that you're going to get creamed in the marketplace due to inefficiency. Why should we spend $5m re-inventing the wheel when we could spend $2m re-purposing a bunch of other people's work?

Whether or not that's true is largely irrelevant, since the decisions are made based on what people believe, and since the same decision is made every time, there's not much data for the other side of the comparison. When was the last time someone deployed an app with less than 10 (total, not direct) dependencies? Hell, even 100? Some people have probably never built one with less than 1,000.


This just makes me think a primary design goal when writing software is to Optimise for Deletion.

Everybody says that they emphasise ongoing maintenance when writing code, but the best way to take that to extremes is to optimise the ability to remove (and replace) code en masse.
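
A minimal sketch of one way to do that (left-pad is just a familiar stand-in): funnel each dependency through one small module you own, so deleting or replacing it later touches exactly one file.

    // pad.js -- the only file in the codebase allowed to require this dependency
    const leftPad = require('left-pad');
    module.exports.padLeft = (str, len, ch = ' ') => leftPad(str, len, ch);

    // dropping the dependency later is a one-line rewrite here,
    // with no changes at any call site:
    // module.exports.padLeft = (str, len, ch = ' ') => String(str).padStart(len, ch);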


This looks somewhat similar to good old modularity and decoupling. Does "optimize for deletion" differ in some way from that?

Don't forget horrendous build systems. Ever try to build a Google product? I wanted to use Skia for my game's GUI - I never got it to build.

I have also evaluated Skia, and came to the same conclusion.

Have been using nanovg instead: somewhat lower level, but at least it builds, and it's also quite simple to use. The main downside is that it doesn't support ClearType for text rendering, even when using it with FreeType.


I agree, dealing with Google open source is never a pleasant experience.

I don't think that reading through millions of lines of code is a sane idea. Therefore I suggest an alternative to the author's opinion:

- select a Linux distribution one can trust, like Debian

- throw away your Node.js and switch to PHP. We have frameworks like Symfony, made by a company that can be trusted and that has proper code review processes, so you don't have to worry that tomorrow there will be a backdoor

Another alternative would be community code review. Everyone using OSS code could contribute some time to reviewing the projects they like, the same way they contribute code. Reviewing is even easier than writing code. Then they sign the lines they have checked. It would also help to find poorly written, difficult-to-understand code, and unreliable code that may contain vulnerabilities.

The companies using OSS code could be interested in this. They would prefer to use the code checked by other corporations rather than the code that has never been reviewed. And they probably are already doing the reviews privately so it wouldn't cost them anything.


So many people here seem to be missing the point by assigning blame to npm or node or whatever specifically.

This situation could happen with packages in any language.


I agree 100%. I don't happen to use npm/Node, but I'm terrified that this could happen in the language/package ecosystems that I do use.

I don't think that throwing wide-ranging functionality into the standard library is the answer, but I think with a bit of tooling it might be feasible to provide a "trusted" tier of packages that have been signed off on by code reviewers. Instead of every vendor needing to vet every dependency in their product, the work could be spread around. When a vendor finds they absolutely must have the functionality provided by a package in the untrusted "playground" (or whatever), they might be motivated to sponsor a review to upgrade it.
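
Purely as a sketch of the tooling side, the review metadata for a package might look something like this (every field name here is invented):

    {
      "package": "left-pad",
      "version": "1.3.0",
      "tarball_sha512": "<hash of the published blob, not the git repo>",
      "reviews": [
        { "reviewer": "alice@example.com",
          "verdict": "trusted",
          "signature": "<detached signature over the fields above>" }
      ]
    }

A package manager could then refuse to install anything below a configured number of independent reviews.
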


Could. Yes. But many orders of magnitude less likely.

This is why languages have standard libraries. It prevents the dependency hell we have with node.


Reduces, not prevents.

Prevents vs acts to prevent. Sorry, I misspoke.

Fact is, nobody builds a standard library to reduce dependency hell. They do it to eliminate it.

Standard library uptake, quality and efficiency are all intertwined, sure, and developers can always go shoot themselves in the foot anyway, but, wait, I'm sorry, did you have a point?


No standard library contains everything most people need, and sometimes the stdlib version is not the preferred one.

Look at Python, for instance. A lot of people use requests instead of urllib for an HTTP client. No one uses the HTTP server. Until 3.x there was no support for async IO or event loops, so we have Twisted/gevent/eventlet, mostly incompatible.

My point is only that there are pain points around dependencies even with a stdlib.


In, e.g., E, a module gets no authority to do anything but compute. You have to pass it the wallet explicitly.
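
A rough JavaScript rendering of the same object-capability idea (all names invented for illustration):

    // instead of the module grabbing ambient authority...
    //   const fs = require('fs');   // could read/write anything
    // ...the caller hands it only the one capability it needs:
    function makePaymentLogger(appendLine) {
      return (payment) => appendLine(JSON.stringify(payment) + '\n');
    }

    const fs = require('fs');
    const logPayment = makePaymentLogger(
      (line) => fs.appendFileSync('/tmp/payments.log', line)
    );
    logPayment({ amount: 42, currency: 'EUR' });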

This doesn't help as much as you'd think. Just as users who want to see dancing rabbits will accept everything and enter every password until they see dancing rabbits, devs will try everything, including passing the wallet to an untrusted package, to make the damn thing work.

I am really rooting for understandable software. I do not consider software without appropriate documentation to be free (as in freedom).

Lately I met many companies claiming they make "open source", when their software is in fact hardly usable as is, and you would anyway be required to upgrade to "pro" features or "pro" support to get anything useful done with it.

At this point I consider such a piece of code de facto closed source. Disclosing the sources or part of them does not in itself bring much value, and does not increase the number of careful eyes auditing the code.

I really wonder why they decided to open-source such a product to begin with. Is it marketing? Are there other advantages?


Open Source is not Free Software. It never was. Open Source is the development model where you open up your source so more people can see and debug it. In practice, it could mean you write commercially important software by employing a thousand monkeys with text editors and then dump the impossible maintenance task onto the "community".

> FOSS was never about trust in software owners.

I get what the author is saying, but I think these kinds of originalism arguments are always a bit tedious. Do we really think that the original utopian intentions of hackers in the 80s are going to provide us with magical insight into this problem? The context is completely different from anything they would have imagined.

Open source / free software isn't a gift from heaven that will work out if you just believe hard enough. The situation described is a new problem, and we have to think hard about how to solve it.


I get annoyed by languages/frameworks that want to download code from mysterious web sites, bypassing the distribution's package manager. Ruby seems to do this, though I might be wrong: that I can't easily tell where the code is coming from is bad in itself. I prefer to know that there's at least a Debian maintainer between me and upstream.

I definitely would notice if Debian itself got taken over!

Perhaps the rule of thumb should be: if it's not properly in Debian, then don't use it, unless you have a very good reason for doing so.


> Ruby seems to do this,

Ruby uses https://rubygems.org as the central repository for gems (something like node's modules). You can, however, host your own repository containing only inspected software. Either way, you know where the code is coming from, and you can download and inspect it before installing.

The idea (benefit) behind this is twofold: an easy way to have the same library in multiple versions (you often need that), and per-application isolated bundles of gems (again, it's easier to clean everything up afterwards this way).

Could all this be handled via the system package manager? Sure, but it would be much more work for the developers (since gems generally work even on Windows and other platforms; what package manager would you use on Windows without the `gem` command?).

PS: I don't argue that there is room for improvement in some aspects, but the general idea isn't bad imo.


Debian/Ubuntu etc generally have packages for a subset of programming language libraries.

The problem tends to be that users don't like the lag introduced by the packaging process.

It's definitely a valid strategy for getting "more trusted" libraries to restrict yourself to ones that are available in the OS package manager.

Of course you should understand the limits of that trust (for example, AFAIK there is no security code review involved in the Debian packaging process).


Additionally, packages from the Debian repos always install reliably, which is much more than can be said for the likes of Ruby packages.

Let's not forget how those trusty Debian maintainers packaged OpenSSL 0.9.8...

It was 10 years ago...

For people interested: https://www.isotoma.com/blog/2008/05/14/debians-openssl-disa...


Homesteading the Noosphere by Eric Steven Raymond seems relevant in this discussion. Originally written 20 odd years ago.

"I examine the actual customs that regulate the ownership and control of open-source software."

http://www.catb.org/esr/writings/cathedral-bazaar/homesteadi...


I've been saying this forever and it still feels like an unpopular opinion, but stuff like this is why I hate micro modules as opposed to frameworks. It's a lot easier to trust, say, RxJS (with just one small runtime dependency) than 40 different tiny modules that may depend on other tiny modules. Actively developed frameworks seem less prone to this kind of malice.

This has nothing to do with free software. It's just that the npm ecosystem and culture is horrible.

Ah, culture...

There is Hegel's law of the transition from quantity to quality (http://www.pnas.org/content/97/23/12926).

As soon as the number (e.g. of modules in the npm registry) reaches some point, the overall quality changes dramatically. It is a step function, from good to bad. From enthusiastic acceptance to full rejection of npm and similar code-sharing mechanisms.


That is a fantastic read, thank you. It's amazing how many structures and institutions have evolved from that simple concept.

OK, that whole talk about making software more understandable instead of churning out new features sounds good. But his rejection of reliance on trust looks like a pipe dream.

What is he proposing in concrete terms? How should one decide that their code is understandable enough? What to do with the recursiveness of libraries that use other libs that use other libs...? The only concrete advice he proposes is the two-weeks idea: """ is it possible for a new (but experienced) developer to understand 80% of the part of the codebase they’ll be responsible for within their first 2 weeks (80 hours)?

If not, this person is overworked from day 1, """

But is this practical? All code should be understandable in two weeks?


This is less about open source, and more about how people consume it. If you take open source as simply one of many available resources, read the code, and make deliberate decisions whether to use it, fork it, modify it, etc, then actively engage with its presence in your codebase, you are fine. On the other hand, if you "npm install <whatever>" as the easy answer to every question, you will get burned: At best by code bloat, at worst by malicious actors.

But no one has time for that, right?

This is what really blows my mind. Yes, Facebook is the main force behind React. But where are the other companies stepping up to maintain chunks of the JS ecosystem?

If you work at a company using FOSS software, please try to convince your managers that helping to maintain said software is a good use of (some fraction of) your time. It's in your own best interest.


Interesting: in theory, we get to trust the code because we can read it.

But in reality we trust the authors. So in theory we could build a dependency chain of signed commits, with a monetary payment going to those whom we trust.

So apart from Linus getting a Xmas bonus, it would directly encourage ... what?


Wow, beautiful!

Looking forward to the next chapter on trust by review. (And I don't mean 5 stars on some "app store" you've never used before by reviewers you've never met.) Big ups.


part of this issue is that it's difficult for maintainers to capture value for their work even when they're successful

i've written a "paid use" license that attempts to capture many of the freedoms of FOSS while allowing a revenue stream if a package is successful. would love any feedback

https://github.com/db4j/pupl


The problem is that people want open source software to be like their own software. That is, it is written like they would have written it, and in such a way that they can go through the source code and understand it and modify it, but without having written it themselves. Necessary for this, I think, is that the software be legally indistinguishable from software they wrote themselves.

Many people license their code under a "public domain license". What's interesting is that such a license has no requirements of placing a copyright notice anywhere, and allows someone to freely relicense without modification. This is precisely as it would be if you had written the software yourself.

Your license recognizes the value of freeware as a marketing tool - the more people with their hands on the software, the more likely some of them will stand up and pay for it. But I think the personal ownership aspect is more foundational.

In advancing that, the license I would write would be one that places the software under a free license after some fixed date. You might state, for software released in 2019, that starting in 2024 it will be licensed under the GNU GPL. Until then, it would be licensed under some proprietary license, perhaps the one you propose.

This forces people who use the software to pay for its development, under whichever terms the proprietary license requires. Simultaneously, it ensures that those paying for it ultimately own the software, and therefore any improvements they make to it.


mariadb's Business Source License (https://mariadb.com/bsl11) is consistent with what you describe, though without any assurances during the proprietary phase

possible that combining the two as you suggest would be the best of both worlds. i need to think about the business implications for the author


The event-stream story reminds me of the true/false story of the color logs: https://hackernoon.com/im-harvesting-credit-card-numbers-and...

The article reminds me of Soviet-era posters that I saw a lot of:

"Soviet developers, make your application simpler and readable!"

"Soviet developers, be responsible, do not pass the code to strangers!"

"Soviet developers, read the code of your dependencies!"

"Soviet developers, do all this with a joy and free of charge!"

Replace "Soviet" by "FOSS" above or vice versa for your taste.

While the basic ideas (e.g. the Communist Party Manifesto, like the Commandments it is derived from) are perfect, the merciless reality is not. Human nature, sigh.


Well, free software is communism, so it's not surprising.


