JS has needed a good stdlib for a long time now.
Everybody was using this package but apparently no one but an adversarial player stepped up to actually maintain it.
(And don't get me started on "let's always get the latest version of the package")
> You want to download thousands of lines of useful, but random, code from the internet, for free, run it in a production web server, or worse, your user’s machine, trust it with your paying users’ data and reap that sweet dough. We all do. But then you can’t be bothered to check the license, understand the software you are running and still want to blame the people who make your business a possibility when mistakes happen, while giving them nothing for it? This is both incompetence and entitlement.
Not surprising. Not surprising in the least. "Oh wow, somebody 0wned the package I needed." Maybe because JS projects have an order of magnitude more dependencies than a Python/Java/Go/etc. project. Maybe because, in the extreme opposite of NIH, people feel the need to import a module for every small thing they want to do? "Stack overflow programming": "how do I add 2 numbers using React, is there a module for that!?!?!"
However, developing robust software is possible in JS, the same way it was possible in PHP. You just need to take some time to vet dependencies a bit better and not jump on every fad when it surfaces. Common sense and experience help a lot. But it's incredible what can be built with modern toolchains and how maintainable it can be. Don't let the anti-JS feeling scare you away from trying React / Vue / ..., just don't forget that state management, separation of concerns, and similar concepts are still valid.
It's wonderful that programming is becoming a kind of "equalizer" on the job market. But without supervision from software engineers (i.e. people with the experience and aptitude to design software properly) you end up with a mess.
The fact that there's no panacea is a fact of life. There's a tradeoff, but it is not reflective of the people in the JS community.
In both cases you are expected to at least sort of understand the code you are copying/importing.
If anything, taking for granted that the JS ecosystem has a higher caliber of developers in it than PHP only serves to make the JS ecosystem look worse in contrast.
Edit: a culture-difference example:
A dev wanted to get the video length of an mp4, so he installed an npm package, but it did not work. The issue was that ffmpeg was not set up in the PATH, so he asked me to solve it. I suggested using the absolute path of the ffmpeg binary, but that was not an option in the package.
In the end I checked the package he had found and showed him that it just called ffmpeg from the command line, and that if we do that ourselves we can pass the exact options we want and don't have to install a third-party package.
As a PHP dev, when I have to do something like this my first step is to find out whether there is a Linux CLI app that does it, install that app on the server, and call it (like ffmpeg, wkhtmltopdf, an epub-to-mobi converter, etc.). I trust a CLI app that is packaged in Linux more than a random package.
PHP has Composer and we use that for third-party integrations; Dropbox, Facebook, and Amazon have official packages that we can grab and use.
This is only better because your distribution has a curated set of packages with trusted maintainers that (in theory) read through the source code to make sure it's not doing anything malicious.
I noticed that my colleagues who never used Linux are not familiar with CLI tools and the power these tools have, so they are not trained to think "hey, we can use this CLI tool to solve our problem"; instead they think "let's search SO, GitHub, or NPM for a solution on how to do X with node/PHP, etc."
So in my example the job was to get the video length, the dev tried to use https://github.com/eugeneware/ffprobe
If it had all worked it would have been faster, but at the cost of not knowing what is happening under the hood and of depending on this package, which itself seems to depend on 3 others (not sure whether those depend on yet more).
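For comparison, a minimal sketch of calling ffprobe directly from Node, roughly what the comment above says the package does anyway (assumes ffprobe is installed; the env-var fallback and file name are placeholders):
```js
// Get a video's duration by invoking ffprobe directly: no wrapper package,
// and the binary path is ours to control (an absolute path works fine).
const { execFile } = require('child_process');

const FFPROBE = process.env.FFPROBE_PATH || 'ffprobe'; // e.g. '/usr/bin/ffprobe'

function videoDuration(file, cb) {
  const args = [
    '-v', 'error',
    '-show_entries', 'format=duration',
    '-of', 'default=noprint_wrappers=1:nokey=1',
    file,
  ];
  execFile(FFPROBE, args, (err, stdout) => {
    if (err) return cb(err);
    cb(null, parseFloat(stdout)); // duration in seconds
  });
}

videoDuration('video.mp4', (err, seconds) => {
  if (err) throw err;
  console.log(`duration: ${seconds}s`);
});
```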
IMO browser vendors should check some statistics, see what helper libraries are most popular, and put those functions in the browser; similarly for node, see what is popular and include that. Also, the culture needs to change: when someone wants to explain how to create a thumbnail using ImageMagick, or some other task that is just a wrapper, they should show the code for doing it rather than creating a new NPM package.
I assume (but I could be wrong) that some developers think that having published npm packages, GitHub repos, and blog posts helps in getting better jobs, so a culture of "CV programming" appeared where a lot of packages, side projects, and blogs are created but quality suffers.
I think the biggest problem with the JS community is that the mindset is always "implement first, understand second", rather than the other way around. It's ok to use libraries but you should at least have a think about how you would implement it yourself first, and understand at a basic level what's happening under the hood.
Whenever I've brought up this idea the pushback in the real world is absolutely insane. Here on HN people are more receptive to the idea, but in most companies the mindset is just "shortest path to implementation at all costs".
The ironic thing is that bigger companies, rather than giving their devs the time to learn, will force them to spend 80% of their effort on processes and unit testing to try and account for the shit code being produced in the other 20% of their time.
No, both are terrible, garbage languages, languages that have been whipped into reasonable shape over a decade or two by good programmers forced to use them due to awful monocultures that arose through successions of largely arbitrary events. This reasonable shape means that the most recent versions can be used with relative pleasure if you ignore half of their syntax, clamp massive libraries to them to replace the other half, and assume that your end product will be as fragile as glass.
I have a checklist I've been building which is purely based on my experience and is subjective. Appearing on the checklist makes teaching someone else more difficult and looks bad for the language in general.
I believe the ecosystem is part of the language. You can't do much without running into npm if you use node but you can avoid it if you just use JS - are they separate? I treat them so. If you aren't a general purpose language being used as a general purpose language, that's partially the language's (including ecosystem) fault. eg don't make a UI with erlang.
* Language has a toxic ecosystem - node
* Language makes it easy to do the wrong thing - node, js, PHP, lua, Perl (notice no Python)
* Language makes it hard to do the right thing - node, js, PHP, Perl, erlang, Python, Haskell, Java
* Language is based on an esoteric design principle - erlang, Haskell, lua (meta-things)
* Language which is internally inconsistent - node, js (floats, time, etc), PHP (bifs), Java (type system)
etc. I think there's plenty of languages which have problems and few seem to be shoring them up because we still don't have a consensus on how dynamic typing should be implemented, so we build upon the sand of flawed languages and argue about triviality.
I've seen lots of bad python code, I don't think it's hard to do the wrong thing.
But sure, if one wants to classify incompetent developers into the PHP bucket, then everything is "the new PHP".
What? Start small and vanilla:
And I can guarantee you, from personal experience, that PHP is not the only language in which string concatenation with variables is the most common means of writing SQL queries.
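Node included; a minimal sketch of the difference, using the `mysql2` client purely as an example (connection details are placeholders):
```js
const mysql = require('mysql2');

const conn = mysql.createConnection({ host: 'localhost', user: 'app', database: 'shop' });
const name = process.argv[2] || "O'Brien"; // imagine this is user-supplied input

// String concatenation: one stray quote and you have SQL injection.
conn.query("SELECT * FROM users WHERE name = '" + name + "'", (err, rows) => {
  console.log('concatenated:', err ? err.message : rows);
});

// Parameterized query: the driver escapes the value for you.
conn.query('SELECT * FROM users WHERE name = ?', [name], (err, rows) => {
  console.log('parameterized:', err ? err.message : rows);
  conn.end();
});
```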
It is true that in both cases, eventually top notch engineers got involved in the projects’ core and cleaned things up to the extent possible. But it’s a heck of a lot easier to improve a spec than a culture.
Also, it is much faster than the suggested alternative (PDO).
I think in 2012 frameworks got popular that simplified the task of building JS apps as complex as apps in C++ or Java. Actually using these frameworks wasn't trivial when they came out. Now that's pretty easy, but the deps are crazy. I wonder what happens next; maybe some super-advanced package manager will come along...
I would hold out hope for Webassembly but I'm certain it's just going to be swallowed up by the Node monster along with everything else.
But in all other aspects I think JS is the new PHP.
Even with respect to the implementations: if you file a bug with any of the major browser vendors, you'll probably die of old age before the ticket is even triaged, let alone fixed.
To me this illustrates a fundamental problem that JS has to deal with that is virtually unique in the programming language space: multiple browsers across multiple browser versions. It is incredibly difficult to get all browsers to adopt a "standard library" and even then it takes years for all users to adopt those browsers that support the standard library. Even on top of that, not all browsers implement the standard library properly. It really is a nightmare grown out of competing browsers with users that do not update them enough.
> Maybe because js projects have an order of magnitude more dependencies than a Python/Java/Go, etc project.
This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.
Let's flip to Python. Let's say magically you can run python inside a browser starting tomorrow. The second you bring in a library like `numpy` you're looking at a bundle size of 40 MB, and that's just one dependency. In the JS world that is utterly unacceptable. All the languages you mentioned take advantage of the fact that they can download those libraries to the filesystem and forget about it. JS has to download libraries over the wire, it's a completely different game.
What I'm trying to say is that the JS ecosystem didn't invent a bunch of problems to solve or that the people running the ecosystem are script kiddies. There are very unique problems that need to be solved in this ecosystem that make it different, especially when referencing the three languages you mentioned in your post.
With all of these "JS sucks" arguments I see a severe lack in empathy or even remotely trying to understand why JS has the problems it does.
You don't need to. The standard library could just be a community curated project (with the help of major browser vendors) that ships as extra code. It could even be on npm.
If the browser has it included, even better, if not, it's referenced the usual way third party dependencies are.
The problem is having a package.json like:
* random lib 1
* random lib 500
* function 1 of well curated lib X
* function 2 of well curated lib X
* function 22 of well curated lib Y
* function 35 of well curated lib Y
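Purely as an illustration of that shape (every package name and version below is made up):
```json
{
  "dependencies": {
    "random-lib-1": "^1.0.3",
    "random-lib-500": "^0.2.1",
    "curated-x.function-1": "^2.4.0",
    "curated-x.function-2": "^2.4.0",
    "curated-y.function-22": "^5.1.2",
    "curated-y.function-35": "^5.1.2"
  }
}
```
Hundreds of tiny entries, each with its own author, release cadence, and attack surface, instead of one or two curated libraries.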
>This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.
That doesn't seem to be the case either. On major web pages, even from big companies, there are multiple versions of dependencies, even full deps like lodash and co. And people use all kinds of gigantic (web-wise) frameworks and third-party libs like moment.js with wild abandon.
Besides, even if that was the problem, there's nothing stopping you from having a modular set of libraries (like lodash), that you can cherry pick from the functionality you need and only load that.
The problem the parent mentions is not "JS needs to include big libraries and stop using small dependencies" but JS needs to stop using random small dependencies from here and there.
Using 100 dependencies from all kinds of crappy upstream places (e.g. some crappy leftpad implementation), is different than having a curated set of libraries and loading 100 small dependencies from that.
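For example, lodash already ships per-method entry points, so cherry-picking from a curated library can look something like this (rough sketch):
```js
// Cherry-picking from one curated, modular library instead of a pile of
// one-function packages from unknown authors. Only the referenced entry
// points (and their internals) end up in the bundle.
const debounce = require('lodash/debounce');
const cloneDeep = require('lodash/cloneDeep');

const state = { user: { name: 'ada' } };
const save = debounce(() => {
  console.log('saving snapshot', cloneDeep(state));
}, 250);

save();
```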
In supporting browsers, the resource would be downloaded once and kept in cache for a long period, maybe forever.
For example, `https://well-known.js/common-1.1/lib-1.2.js`. The browser can download just `lib.1.2.32.js` (redirected from lib-1.2.js) every time, or it can download the whole `common-1.1` bundle once and keep it in a separate long-term cache until it is deprecated. And when it is deprecated, the browser just reverts to the old behavior.
If tree-shaking isn't good enough, it needs to get better. Instead of having lots of individuals creating tiny packages with one function in them, there need to be fewer, broader libraries that are closely watched by larger teams.
The issue is NPM. NPM is a bad package manager. A package manager that allows duplicate versions of the same dependency is broken to begin with. People complain that fetching react-native results in hundreds of packages installed on their computers. How many freaking dupes? Of course nobody is going to audit all that crap. Conflicts should be resolved upstream, which would lead to more stable packages to begin with.
NPM was developed by people who were clueless about package management and are now profiting directly from that shit-show, and even Node.js's creator went on record saying that tying it to NPM was a mistake.
Npm allowing it is a good thing about npm.
Library authors. Please be extremely conservative in when to pull in dependencies. I will prioritize this higher for projects that I maintain.
The problem with thinking "these X packages are OK for this library" is how we got where we are. Yes, we could just avoid using trivial libraries, but even for the other ones the decision to add one into your lib has to be made in the context of the app, or lib, or lib's lib, that uses your library. If you only imagine your lib being used directly, your scales and judgements could be way off the actual cost/benefit.
I think it's precisely because it lacks a good committed (commercial) owner like Android and .NET have. I could see it going from bad to worse in the future, as Oracle continues to jettison responsibility for bits of the ecosystem.
Yes. Huge package ecosystems are hurtful. For the basics, language communities should come with a nice, batteries-included, standard library.
For the rest, people should only trust community projects with big following, processes, etc (like Apache stuff, Django, moment.js, postgres, etc), or open source projects supported by companies (e.g. nginx, mongo, React, etc).
The rest, sorry, but you got to write them yourself.
Downloading and using 1-person libs for trivial stuff like leftpad and such is madness.
Let's safely call it immature. The principal objectives here:
* obtain employment immediately
* prevent unemployment forever
How do you solve for the obvious emotional failure that comes from the described lack of preparation? You hide under layers and layers of abstractions. You make life easier to the point of hoping code exists that does your job for you. Unfortunately that easiness is the opposite of slim or simple. It's great until it isn't if you aren't in a hurry.
Or they should do what they appear to be incapable of which is to provide certification and verification of certain packages. If people really need a lib for inarray then npm should maintain and certify it.
It should not be difficult for npm to raise the funds needed to do that.
Actually, it is. That's why foundations such as Apache, Mozilla, Khronos, etc exist. Transfers of ownership, abandonment, and bad faith are not new. We need to trust not only in the software for today, but for tomorrow as well. Foundations step in because they're able to harness the financial clout to attract maintainers.
"We must make software simpler."
We must make software INTERFACES simpler. And that means opinionated solutions, preferably with a HEAVILY opinionated top layer API for the 80% and a less opinionated lower layer API for the 20%.
The Gradle version in the Ubuntu repos is too old? Just add a PPA from some random guy on the internet. The Gradle wrapper also exists, but you'd need to type "./gradlew" instead, which is apparently enough reason to use the PPA instead.
Need a Gradle plugin? Oh, yeah, just add Maven Central and JCenter as repos. No idea who (/if anyone) audits those, but we'll just trust them anyways.
Need a Docker image? Just go to Docker Hub or Quay and download one that looks good.
Don't quite like Eclipse and the company won't pay for the IntelliJ Ultimate Edition? Just install the Community Edition, with no idea if that Privacy Statement actually means they could phish your SSH keys, phone number etc.
Need a way to transfer files? Google Drive is a great way to do that.
Need an operating system? We have either Windows 10 or Windows 10 for you, which has been shown to transfer encrypted, undisclosed packages to Redmond, even in the Enterprise Edition.
Office suite? MS Office, for which the same is true.
Need to look up some specific issue with security critical software you use? Just type it into Google.
I even once saw someone who typed a password into the Chrome URL bar to show the guy next to them how it's spelled.
With the sheer disregard I frequently encounter for any sensible behaviour, I've really stopped wondering how hacking, industry espionage etc. work. I'm already quite content if it's not just all out in the open.
"This device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation."
Everything is backdoored, absolutely everything: your hardware, your firmware, the compiler used to build your software, your software, libraries used by your software, your network infrastructure, your crypto, your sources of entropy, the machines you communicate with, everything.
Still, this is not the main problem highlighted by this (yet another) NPM fiasco. I believe there are two and only two core problems that caused this issue, and that will enable future incidents like this:
Problem 1: some scoundrel violated the commons. The commons have no effective means of tracking them down and punishing them, so that a) they'll deeply regret what they've done, and b) other scoundrels will be deterred from trying. Lack of means of effective policing means various open source communities will keep having such problems.
Problem 2: people don't check their dependencies. Yes, I can already hear all startups screaming, "we can't afford it". Well, sucks to be you, but you'd better hustle and find a way. The licenses of almost all the software you use disclaim any responsibility for anything whatsoever, so if you expose users to that software and that software harms the users, it's your fault. You mishandled them. So find a way to vet your software, or buy some insurance against yet another NPM compromise. Or don't, and accept you're taking a risk.
To be clear, I'm not advocating a general "caveat emptor" attitude to software. We've built a civilization in part on systems and regulations that allow us to not vet everything we interact with, and yet be quite confident in our safety. But FOSS is not there yet (Problem 1). It's built on trust, but most communities have little means of protecting that trust. As for companies, I have little sympathy (Problem 2), just as I wouldn't have much for any other company in any other industry that said they can't afford to do their basic job right.
The tricky part is that lots of npm packages end up bundled into an app that runs in the browser. It's just a guess, but I'd guess that significantly more npm imports end up running in the browser than in Node. So even giving Node a standard library as extensive as Ruby's wouldn't help as much as one would hope.
The package maintainer should indeed have found someone to pass it on to (see Cathedral & the Bazaar). And that doesn't mean handing it to the first person he's never heard of who steps up.
BUT this applies to all package managers, maintainers, and OSS at some level.
The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that. The idea that you “don’t have to trust” the authors is untrue, in the current workflow. FOSS relies entirely on trust.
I’m not convinced FOSS is even a good idea at this point, but with the advent of widespread cyberwarfare we need to either introduce a sophisticated accompanying trust model, or exclude FOSS when working commercially.
This is a business opportunity. Audit FOSS and sell your audit guarantees in a contract. Offer services to audit more recent versions on the proviso that you can sell that audit elsewhere.
This will have the incidental benefit of encouraging clean software to be written in languages that minimise audit costs, as those projects will get used more.
Some commercial arbitration of FOSS now looks inevitable.
This may sound strange, but once tech companies actually had to either write or buy all of their software, and if they didn't have a contract in place that made someone else responsible for its quality, they were. So, basically, the world was absurd.
The way FOSS works is if users agree that a common good is important enough to invest in, and then they all benefit more than they would if they invested alone; it's anti-competitive. Free (and Open Source) software should be thought of as "free as in beer" i.e., the next round is on you. If you can't audit every line, audit some of them, or pay someone else to do it. Coordinate to get code coverage. If you use a project that is inadequately covered, you're responsible for everything that goes wrong.
If you don't even know what libraries your project depends on, how could that ever be thought of as anyone's fault but your own?
This may be a central part of the issue. It's a coordination problem and creating common knowledge.
Each corporation might be willing to pay to have some bits of their dependencies audited as long as others cover other pieces. But to do that they need to be able to announce the audit result and scan their dependency trees for pieces that are not audited and pick from those. You'd also need some common standards what constitutes an audit and the lawyers would probably want some limits on liability so results should be considered as best-effort or whatever.
There are no conventions and social protocols in place to support this.
WITH NO WARRANTY.
Do you not understand what this part of the license means?
The maintainer doesn't have to do shit. That is the point.
You want to start putting arbitrary ethics and morals on these developers? The "Fuck you. Pay me." talk comes to mind.
FOSS works fine like this and has been for a long time.
We're seeing issues now because of Node's lack of a standard library. Not trust issues, and certainly not inherent issues with free and open source software.
When the entire ecosystem depends on left pad I think you have a problem.
When the entire ecosystem depends on express it is less of an issue because more eyes are invested in auditing it and changes are more widely reviewed.
You think a nefarious leftpad function would make its way into express? It could, but it is way less likely.
Updating the dependency to a newer version of left pad isn't going to raise an eyebrow. And that is the issue.
I'm reading that you'd pay a third party so you can trust open source code and think that FOSS somehow exposes commercial code to more risk in some kind of cyber warfare? How is that not complete FUD? You already have the option to pay vendors like redhat for many open source software components if liability is your only concern, the same is true for many of the more complex libraries out there.
Closed source on the other hand would mean buying every single piece of code or paying in house devs to write that code. I get the quality concerns raised here up to a point but just because a company paid somebody to write something doesn't mean it's not effectively written by a solo dev under heavy time constraints. Except with FOSS you at least have the chance to go in and inspect/fix the thing yourself if needs be.
1) The PC software world did run for quite a few years on the model of predominantly commercial/proprietary software, most of it being closed-source, so it's not like it is some far-fetched idea that doesn't work in economic terms.
Personally, I prefer the commercial license/source-included model, with the emphasis on the author/company getting paid to ensure that the situations like the one described here are avoided. You can then have additional educational licenses for ensuring access to developer tools for educational purposes, but that's up to the author/company.
2) If you directly pay someone to write software, I would expect any such arrangement to include the source code as part of the work product, regardless of the ultimate visibility of the source code to outside parties.
With foundations or any other form of over-arching bureaucracy, you risk stultifying software developers and harming innovation. It's really, really hard to beat the self-organizing aspects of free markets combined with commercial legal frameworks.
There is market demand for stability and it can be a competitive advantage over innovative but unstable alternatives. (Consider why Go and Docker are so popular.)
And why do companies start and fund foundations? Because their customers have doubts. It's better for stability than a market that's not based on standards.
And now Microsoft uses Linux on the majority of their own cloud offerings. Open source beats proprietary software on economic terms a lot of the time. It doesn't matter that both can work in economic terms; it matters which one is better in economic terms.
FOSS is very much like the internet, in general: it was great when it was a small group of technical, like-minded, dedicated individuals working towards common goals. It starts falling apart, however, once you introduce the rest of the world into the system because the world primarily works on the basis of ruthless self-interest.
Pay Redhat enough and they will do that. Although you will be limited in what you can use.
Big business absolutely do that. Code quality review, security review, legal review. Every line of every 3rd party dependency.
Of course, for the most part big business doesn't take 3rd party dependencies. If you have a big enough software org, you write everything above the std library in-house. Why do you think so many of the big open-source frameworks are vended by big 10 tech firms?
If that were true, somebody would have noticed this hack before. It has been online for 2 months, and they only found out because of a deprecation message.
> Of course, for the most part big business doesn't take 3rd party dependencies.
I worked for many big companies, and they definitely use 3rd party deps.
> std library in-house
And that contains 3rd party deps.
Big companies have some 3rd party to check the libraries but it looks like they are not good enough because they didn't catch this one.
I've had to go through code, security, and legal reviews at both Amazon and Facebook when desiring to import 3rd party libraries. They were fairly thorough.
I kind of agree, but remember you are getting free software. Not a little, but a ton of free software, and you feel like somebody should guarantee it all works fine. Your options are:
- See what is going on in all your deps and waste a lot of time
- Risk it and use the software without knowing what it is doing
- Pay somebody to guarantee that the software is not malicious
So, you are saying that we should prefer code where it is impossible to have a look at the source because that solves the problem of having to trust the developers of that code?
That's the point of this piece. For any non-trivial edit on a real project with real deadlines the source code is effectively useless, because no one has the time, the resources, or possibly even the inclination to fix bugs, do full-coverage testing, or make custom modifications.
So you have to take the internals on trust. Which is a ridiculous situation when so many packages are created as hobby projects with - literally - no guarantee of performance or reliability.
I realise it's hard for FOSS advocates to understand this, because it's a fundamental flaw with the FOSS philosophy. The benefits are "obvious" to crusaders, but the objective reality is that large swathes of FOSS are full of casual or hobby code that barely works, has gaping security vulns, and/or is nowhere close to being robust enough for production.
"Make software simpler" is a good goal, but hard to do. Other solutions are also possible. They're hard too. So it goes.
But there will be no solutions at all until the FOSS community starts dealing with professional reality instead of relying on free-to-tinker-without-consequences rhetoric - and understands that there are real problems that need real answers, and not just more "Clap Louder" and "At least we're not Microsoft".
Quite a lot of contributions are need-driven: a client project needs something, so you dig into the code and fix it. It is called "scratching your own itch". The reason for the popularity of OSS libraries was that commercial ones historically had low motivation to improve after the point of sale, especially when it comes to performance. That is why you see a lot of OSS targeted at developers and very little at the general public. Closed-source libraries were unable to compete not just because of price, but because the business of selling libraries does not value quality.
You seriously believe this? I mean, it's so obviously nonsensical I can hardly believe you are seriously making that point. With FOSS, you have the option to look at the code, with closed source you don't. And you are seriously saying that that is the same level of openness?
Also, even if it were true: How is that relevant? FOSS is just as bad as closed source, therefore you should exclude FOSS? How does that follow?!
> So you have to take the internals on trust.
So, I can not look at FOSS code? Like, you are telling me it is impossible for me to look at the code that I am in fact looking at when selecting code to use for something? I mean, really? That is your point?
And the solution then is to not use FOSS, because then you don't have to take the internals on trust?
And also, that does not at all mean that the utility breaks down. Back in the day, it was normal for devices to come with schematics. If you bought a TV, the schematic was included. Almost no one who owned a TV could read schematics. But the schematics were still useful to the owner, because they were what enabled you to take your broken TV to any independent repair business of your choice and have them fix it at a competitive price.
You can profit from the wide availability of knowledge without having to learn it all yourself. If there are ten competing car repair businesses in your city that all understand how to fix your car, that is better for you than the manufacturer having a monopoly on repairing your car, even if you don't have the slightest clue how your car works.
>I realise it's hard for FOSS advocates to understand this, because it's a fundamental flaw with the FOSS philosophy. The benefits are "obvious" to crusaders, but the objective reality is that large swathes of FOSS are full of casual or hobby code that barely works, has gaping security vulns, and/or is nowhere close to being robust enough for production.
>And the solution then is to not use FOSS, because then you don't have to take the internals on trust?
I think the point is to realize FOSS is not a utopia and has tradeoffs like everything else.
But that wasn't what this thread was about. The statement that I was responding to above was this:
>> or exclude FOSS when working commercially.
I.e., that there isn't a tradeoff, but that the solution to shortcomings of some FOSS that aren't unique to FOSS in any way is to not use FOSS at all.
And that was then defended using equally nonsensical logic.
So, no one here is claiming that FOSS is utopia. But people are implying that proprietary software is. Which I am asking people to justify. So far, nobody has.
It's irrelevant because this was about FOSS vs. closed source, not about commercial licencing vs. noncommercial licencing. Even if commercial licencing were the solution, that says nothing about whether the commercial licence should be FOSS or closed source.
And it is also irrelevant because there are broadly the same legal remedies for malfeasance in all cases. If you are breaking the law, you are still breaking the law if you are publishing your source code, and you are still breaking the law if you are doing it non-commercially.
And in so far as you mean liability for defects rather than malfeasance, it is obviously nonsense that there are any generally applicable effective legal remedies against terrible proprietary code if you look at the real-world quality of products in the market. You might be able to put together a contract that helps with that, but (a) that is far from the norm and (b) is obviously still irrelevant to whether the code should be open or closed.
2. It’s about legal responsibility and recourse.
How is that relevant?
> 2. It’s about legal responsibility and recourse.
Just because not everybody has access doesn't mean a person inside the gated area doesn't get to see. You made that leap.
If you are building software on top of software that carries no guarantees then you are liable unless you also somehow make no guarantees. Are you able to sell software without guarantees? Maybe?
Well, I guess that was somewhat of a leap, but it doesn't really make a difference to the argument: The point is that they are suggesting that preferring code that fewer people can look at somehow solves a problem.
> If you are building software on top of software that carries no guarantees then you are liable unless you also somehow make no guarantees.
Well, yeah? But what does that have to do with FOSS? Neither does FOSS imply that there are no guarantees, nor does proprietary software imply that there are guarantees. Hence: What is the relevance?
The only difference between FOSS and proprietary software in this regard is that with FOSS you have the option to do an audit yourself and offer guarantees on that basis without creating a huge unknown risk for yourself, or you could possibly buy auditing services on the free market that come with some sort of guarantee from a third party. There is no option that you have with proprietary software that is somehow impossible with FOSS, which is why the suggestion that not using FOSS for commercial projects somehow solves a problem is strange at best.
> Are you able to sell software without guarantees? Maybe?
Well, given the tons of massively broken proprietary commercial software out there? Yeah, obviously you can?
Software must be made understandable. The essence of FOSS for me can be reduced to one fundamental computing right:
the right to refuse to run, on my machines, code that I do not have the option to understand. That is it.
You've always had "the right to refuse to run, on my machines, code that I do not have the option to understand". Nobody is forcing you to run any random piece of code you found on-line. You do that of your own accord. And if you screw this up, and that screwup affects other people, it's your fault. Simple as that.
Until there's some external stimulus, I don't think the industry is going to change. It's a lot cheaper to add new flashy things if you don't care about complexity (or the consequences of it, like bugs, and security). Getting a consumer to care about the complexity of the software in their computer or phone is like asking a Ruby programmer to care about the microcode in their CPU. It's not that we can't understand the problem but it's not a concern until it gets so bad it impacts my level of abstraction.
I'd love to see programs start putting little badges on their webpages that brag about how few lines of code they have, how low their cyclomatic complexity is, or how short their dependency tree is. I'm terrible at marketing but surely there's a way to make this sound appealing.
Discussion on HN (2016) : https://news.ycombinator.com/item?id=11686325
I think Bret Victor is one of the people driving the effort beyond that project:
This is a really good observation. If you know some important transitive dependency your job depends on is missing a maintainer, tell your nearest supervisor you'll be spending an hour a week taking care of that. Or a similar amount of time helping a direct dependency get rid of the broken transitive one, whichever makes more sense.
If your employer is dumb and doesn't realize that's why they pay you so much, you can probably exchange some of that high pay for more respect by going elsewhere, taking at most a slight cut to your pay.
If it's about personal insecurity (it used to be for me), think of it this way: all that money they give you? It's because they want you to be a professional, which involves informing management of when you take corrective action based on your expert knowledge.
I'm not telling you to be dumb about it. Sometimes it does not make business sense to run on maintained software. But if it does, and your manager may not know that, they trust you to have the integrity to inform them. Any good manager, that is.
Devs are still treated like children in many companies.
> lets roll for a bit with the assumption that a small amount of extra care on Tarr’s part could have avoided this mess.
Are you kidding? The bare minimum of not just handing a repository under your name to a random stranger is apparently a hard to fathom concept for this author.
The number of commercial services and pieces of software that are largely reliant on huge piles of OSS is significant.
In contrast to that we're not really seeing those companies focus enough on the security of the OSS that they rely on.
To provide two other well known instances, both Heartbleed and Shellshock sat as vulnerabilities in their respective OSS software for a large number of years. The packages were widely used by commercial software vendors, yet none of them discovered the issues...
Such a library could be made by crawling NPM stats to find out what kind of small packages/functions are most used, and select a subset of these. The code could be lifted from the original packages (with proper licenses and attribution). The package maintainers could be approached if they are interested in helping to maintain this new collection package.
The thesis being that such a package can be more efficiently maintained by a group, than each individual maintainer and package alone.
Once it exists, one can post PRs to common dependants to use the new package.
Could it work?
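As a rough sketch of the stats-crawling step, npm exposes a public download-counts API (Node 18+ for the global `fetch`; the candidate list below is just an example):
```js
// Rank candidate micro-packages by weekly npm downloads, as a starting
// point for deciding what belongs in a curated collection package.
const candidates = ['left-pad', 'is-odd', 'is-array', 'object-assign'];

async function weeklyDownloads(pkg) {
  const res = await fetch(`https://api.npmjs.org/downloads/point/last-week/${pkg}`);
  if (!res.ok) throw new Error(`lookup failed for ${pkg}: ${res.status}`);
  const { downloads } = await res.json();
  return downloads;
}

(async () => {
  const stats = await Promise.all(
    candidates.map(async (pkg) => ({ pkg, downloads: await weeklyDownloads(pkg) }))
  );
  stats.sort((a, b) => b.downloads - a.downloads);
  console.table(stats); // most-downloaded first: plausible candidates for the collection
})();
```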
I agree tree shaking could help a lot. But every time I have tried to implement tree shaking properly in a JS codebase, it has not been trivial. If one configuration option is not properly set up, the entire library gets inlined during compilation.
Furthermore, libraries like lodash have a lot of internal interdependencies, so using something like `get` can bring in a ton of stuff that is technically used but totally unnecessary.
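For reference, a sketch of the two things that usually have to line up before tree shaking works at all (ES-module imports plus a side-effects hint; `lodash-es` is used as the example here):
```js
// 1. The bundler needs ES-module imports it can analyze statically;
//    a CommonJS require('lodash') pulls in the whole build.
// 2. The library has to declare "sideEffects": false in its package.json
//    (lodash-es does), so bundlers such as webpack know unused modules
//    can be dropped safely.
import { get } from 'lodash-es';

export const readName = (obj) => get(obj, 'user.name', 'anonymous');
```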
Whether or not that's true is largely irrelevant, since the decisions are made based on what people believe, and since the same decision is made every time, there's not much data for the other side of the comparison. When was the last time someone deployed an app with less than 10 (total, not direct) dependencies? Hell, even 100? Some people have probably never built one with less than 1,000.
Everybody says that they emphasise ongoing maintenance when writing code, but the best way to take that to extremes is to optimise the ability to remove (and replace) code en masse.
Have been using nanovg instead: somewhat lower level, but at least it builds, and also it’s quite simple to use. The main downside it doesn’t support ClearType for text rendering, even when using it with FreeType.
- select a Linux distribution one can trust, like Debian
- throw away your Node.JS and switch to PHP. We have frameworks like Symfony made by a company that can be trusted and that has proper code review processes so you don't have to worry that tomorrow there will be a backdoor
Another alternative would be community code review. Everyone using OSS code could contribute some time to review the projects they like, the same way they contribute code. Reviewing is even easier than writing code. They would then sign the lines they have checked. This would also help find poorly written, difficult-to-understand code and flag unreliable code that may contain vulnerabilities.
The companies using OSS code could be interested in this. They would prefer to use the code checked by other corporations rather than the code that has never been reviewed. And they probably are already doing the reviews privately so it wouldn't cost them anything.
This situation could happen with packages in any language.
I don't think that throwing wide-ranging functionality into the standard library is the answer, but I think with a bit of tooling it might be feasible to provide a "trusted" tier of packages that have been signed off on by code reviewers. Instead of every vendor needing to vet every dependency in their product, the work could be spread around. When a vendor finds they absolutely must have the functionality provided by a package in the untrusted "playground" (or whatever), they might be motivated to sponsor a review to upgrade it.
This is why languages have standard libraries. It prevents the dependency hell we have with node.
Fact is. Nobody builds a standard library to reduce dependency hell. They do it to eliminate it.
The standard library uptake, quality and efficiency is all intertwined sure and developers can always go shoot themselves in the foot anyway but, wait, I'm sorry, did you have a point?
Look for instance at Python. Lots of people use requests instead of urllib for an HTTP client. No one uses the HTTP server. Until 3.x there was no support for async IO or event loops, so we had Twisted/gevent/eventlet, mostly incompatible.
My point is only that there are pain points around dependencies even with a stdlib.
Lately I have met many companies claiming they make "open source", when their software is in fact hardly usable as is, and you would anyway be required to upgrade to "pro" features or "pro" support to get anything useful done with it.
At this point I consider such a piece of code de facto closed source. Disclosing the sources, or part of them, does not in itself bring much value and does not increase the number of careful eyes auditing the code.
I really wonder why they decided to open-source such product to begin with, is it marketing? Are there other advantages?
I get what the author is saying, but I think these kinds of originalism arguments are always a bit tedious. Do we really think that the original utopian intentions of hackers in the 80s are going to provide us with magical insight into this problem? The context is completely different from anything they would have imagined.
Open source / free software isn't a gift from heaven that will work out if you just believe hard enough.
The situation described is a new problem, and we have to think hard about how to solve it.
I definitely would notice if Debian itself got taken over!
Perhaps the rule of thumb should be: if it's not properly in Debian, then don't use it, unless you have a very good reason for doing so.
Ruby uses https://rubygems.org as the central repository for gems (something like node's modules). You can, however, host your own containing only inspected software. Either way, you know where the code is coming from and you can download and inspect it before installing.
Idea (benefit) behind this is two-fold: easy way to have same library in multiple versions (you often need that) and having per-application isolated bundles of gems (again, it's easier to clean everything afterwards this way).
Could all this be handled via the system package manager? Sure, however it would be much more work for the developers (since gems generally work even on Windows and other platforms; what package manager would you use on Windows without the `gem` command?).
PS: I don't argue that there is room for improvement in some aspects, but the general idea isn't bad imo.
The problem tends to be that users don't like the lag introduced by the packaging process.
It's definitely a valid strategy for getting "more trusted" libraries to restrict yourself to ones that are available in the OS package manager.
Of course, you should understand the limits of that trust (for example, AFAIK there is no security code review involved in the Debian packaging process).
For people interested:
"I examine the actual customs that regulate the ownership and control of open-source software."
There is a law of the transition from quantity to quality by Hegel (http://www.pnas.org/content/97/23/12926).
As soon as the number (e.g. of modules in the NPM base) reaches some point, the overall quality changes dramatically. It is a step function - from good to bad. From enthusiastic acceptance to full rejection of NPM and similar code-sharing mechanisms.
What is he proposing in concrete terms? How should one decide that their code is understandable enough? What to do about the recursiveness of libraries that use other libs that use other libs...? The only concrete advice he proposes is the two-weeks idea:
is it possible for a new (but experienced) developer to understand 80% of the part of the codebase they’ll be responsible for within their first 2 weeks (80 hours)?
If not, this person is overworked from day 1,
But is this practical? All code should be understandable in two weeks?
This is what really blows my mind. Yes, Facebook is the main force behind React. But where are the other companies stepping up to maintain chunks of the JS ecosystem?
If you work at a company using FOSS software, please try to convince your managers that helping to maintain said software is a good use of (some fraction of) your time. It's in your own best interest.
But in reality we trust the authors. So in theory we could build a dependency chain of signed commits, with (in theory) a monetary payment going to those whom we trust.
So apart from Linus getting a Xmas bonus, it would directly encourage ... what?
Looking forward to the next chapter on trust by review. (And I don't mean 5 stars on some "app store" you've never used before by reviewers you've never met.) Big ups.
I've written a "paid use" license that attempts to capture many of the freedoms of FOSS while allowing a revenue stream if a package is successful. Would love any feedback.
Many people license their code under a "public domain license". What's interesting is that such a license has no requirements of placing a copyright notice anywhere, and allows someone to freely relicense without modification. This is precisely as it would be if you had written the software yourself.
Your license recognizes the value of freeware as a marketing tool - the more people with their hands on the software, the more likely some of them will stand up and pay for it. But I think the personal ownership aspect is more foundational.
In advancing that, the license I would write would be one that places the software under a free license after some fixed date. You might state, for software released in 2019, that starting in 2024 it will be licensed under the GNU GPL. Until then, it would be licensed under some proprietary license, perhaps the one you propose.
This forces people who use the software to pay for its development, under whichever terms the proprietary license requires. Simultaneously, it ensures that those paying for it ultimately own the software, and therefore any improvements they make to it.
Possible that combining the two as you suggest would be the best of both worlds. I need to think about the business implications for the author.
"Soviet developers, make your application simpler and readable!"
"Soviet developers, be responsible, do not pass the code to strangers!"
"Soviet developers, read the code of your dependencies!"
"Soviet developers, do all this with a joy and free of charge!"
Replace "Soviet" by "FOSS" above or vice versa for your taste.
While the basic ideas (e.g. the Communist Party Manifesto, like the Commandments it is derived from) are perfect, the merciless reality is not. Human nature, sigh.