Almost all programming languages now have similar standard package managers. I still remember back in the day when everybody would joke about how Maven downloads the whole internet on the first build. This is still true today for Maven, NPM, pip, Composer, you name it. You are downloading hundreds of packages whose maintainers you implicitly trust not to do anything shady. In the case of Maven - and likely most others - packages are not even digitally signed by the publisher, and on top of that they are binaries that you cannot easily inspect. I'd argue NPM is slightly better than that: at least it (often) includes the source code, though you still get illegible instructions in the form of minified/uglified code.
I think NodeJS already has a decent stdlib and there will always be the need to pull dependencies in for basic functionality (apache commons, guava etc. in the Java world). That's just how OSS works. There should be a simple way to display and verify the chain of trust though.
NodeJS has a feature-rich standard library, but it's let down by poor ergonomics. Compare, for example, the built-in http library[0] with request-promise[1]. I think the difficulty of using the standard library makes NodeJS devs feel better about downloading lots of little helper libraries.
I think a lot of it is NodeJS settling on its standard library a bit ahead of the larger language ecosystem itself. Promises were standardized shortly after Node bumbled into a callback-heavy design. The efforts on the ES2015 Module syntax occurred after Node settled for "CommonJS" which wasn't all that common at the time.
Node's HTTP library, as a specific example, particularly suffers from callbacks instead of promises. It also suffers in that it predates the Fetch standard now making its way into browsers.
Hopefully the Node standard library will continue to converge with the overall JS ecosystem and catch up to the work that the Browsers have been leading here.
> In case of Maven - and likely most others - packages are not even digitally signed by the publisher
Last time I explored the atrocious state of language-specific package managers, Maven Central was (and I'm guessing still is) the only language repo that requires that packages are signed [1][2].
Now, whether package signatures are verified on retrieval is another question... (they are not, unless you use a plugin such as pgpverify-maven-plugin [3]).
Obviously anybody with the private key can still introduce malicious code even if you verify your package signatures, but at least it's better than allowing any oppressive regime with a root CA trusted by Mozilla/Microsoft to MITM rust/python/npm/ruby/whatever packages downloaded by its residents.
I don't know if you're joking, or if the people downvoting you think that your idea isn't worth hearing, but it's actually a good idea (if we can look past our snobbishness about how over-hyped blockchain technology has become).
Part of the threat model around malicious software updates requires that we implement reproducible builds and store the hashes of builds in a distributed permissionless append-only database.
which implements public key transparency in the Ethereum blockchain, using smart contracts, and there are ideas for implementing the gossip feature of certificate transparency logs using blockchains:
In my view the core problem is the complete lack of standards and interfaces around what constitutes a “package”, “module”, “library” or even “framework”. (It’s an industry-wide problem, not just JavaScript.)
In npm’s single global namespace, “is-even” is a package. So is the Firebase SDK, even though the former is a single line of code and the latter is around 70 MB. But a package can also be something that’s not even JavaScript code: maybe a CSS “framework”, or that infamous dictionary text that gets replicated a dozen times in any large npm-dependent project because some popular package uses the text in its tests.
If hardware parts were distributed like this, there would be no standard fittings, nothing would be categorized. A hardware store would just be an enormous pile of jumbled parts and tools. “Oh, you need a bolt? No problem, John will jump into the pile with scuba gear and bring you some packages that look suitable. It’s going to take a few days, hopefully we’ll find something that fits.”
What are you looking for exactly? A category system? The argument seems kind of weird. A bit similar to how a .zip file might contain just a few things, but also a lot of things. I think it's kind of a good thing that the same delivery mechanism can be used for various things, but I'm guessing you have a problem with discoverability?
The problem affects not just discoverability of new dependencies, but also development and maintenance of settled dependencies.
When you see a line like this in package.json...
"dependencies": {
  "foobarfrobblizer": "^0.9.31",
  ...
}
There's no simple way to determine what foobarfrobblizer does, where it's actually used in the app, what side effects it may perform when called, what dependencies it pulls, why it was included, and whether it could be replaced with something else that offers a similar API.
For a single package, I can figure all this out manually. But projects these days are seriously dependency-binged. If I use "create-react-app" to make a Hello World web app, it gives me 150 MB of stuff in node_modules. I'm never going to look at even 1% of those packages to see what's actually in there.
I got started in programming when a dependency meant either a DLL file provided by someone else, or a static library that you manually link into your executable. Taking on a new dependency was a big decision: do I really want to rely on this DLL file being there for me, or having to carry this separately built binary blob in my project? I still get that same dread when I have to add an npm package to a project, but it doesn't seem like most people feel that way.
On top of those DLL files 25 years ago, there were systems like Microsoft COM that let you use them interchangeably through standard interfaces. You could make a plugin UI component that works seamlessly in Visual Basic by simply conforming to a COM interface. It even worked from multiple languages: that UI component could be written in C, C++ or Object Pascal if you wanted.
The web development ecosystem has a long way to go before it matches the usability of 1995 desktop development — and that's a painfully low bar.
> Taking on a new dependency was a big decision: do I really want to rely on this DLL file being there for me, or having to carry this separately built binary blob in my project?
I'd argue that the NPM ecosystem (although not perfect, hence OP) is a lot more evolved specifically because of how easy it is to include a wide range of dependencies.
The speed with which you can write software is an awful lot higher, and (if done properly) you can rely on safe dependencies that are tested and used by everyone else. Why reinvent the wheel in every project?
>There's no simple way to determine what foobarfrobblizer does, where it's actually used in the app, what side effects it may perform when called, what dependencies it pulls, why it was included, and whether it could be replaced with something else that offers a similar API.
Isn't this an application issue? The manifest for each project should tell you that; it should be possible to build the dependency tree as a preview and see all of the metadata in aggregate.
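A sketch of the kind of preview that's meant here: walking a package-lock.json-style dependency map and printing it as an indented tree. The `lock` object is made-up sample data, and `foobarfrobblizer` is the hypothetical package from the comment above:

```javascript
// "lock" is made-up sample data shaped like a package-lock.json dependency map.
const lock = {
  dependencies: {
    foobarfrobblizer: {                 // hypothetical package name
      version: '0.9.31',
      dependencies: {
        'is-odd': { version: '3.0.1' }, // version number is made up
      },
    },
  },
};

// Walk the map recursively and produce one indented line per dependency.
function treeLines(deps, depth = 0) {
  const lines = [];
  for (const [name, info] of Object.entries(deps || {})) {
    lines.push('  '.repeat(depth) + name + '@' + info.version);
    lines.push(...treeLines(info.dependencies, depth + 1));
  }
  return lines;
}

console.log(treeLines(lock.dependencies).join('\n'));
```

This shows the tree, but notably none of the *why*: purpose, side effects, and replaceability still have to be dug out by hand.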
Comparing is-odd with left-pad is a bit disingenuous.
Left-pad had a fairly trivial implementation but that implementation is easy to get wrong, so it's something you want to make sure you have covered by tests and you want to define the edge cases. It's something that should be (and as of ES2017 is) part of the language and not something you want to keep reinventing (alongside all the edge cases and potential for annoying bugs).
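For reference, the ES2017 feature in question:

```javascript
// String.prototype.padStart, standardized in ES2017:
console.log('5'.padStart(3, '0'));  // "005"
console.log('1'.padStart(4));       // "   1" (default pad character is a space)
console.log('abcd'.padStart(2));    // "abcd" (never truncates)
```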
The is-odd and is-even packages OTOH are blatantly ridiculous. I had a look in my dependency tree and it seems is-odd made its way into it via a package called "nanomatch" which used is-odd because the maintainer apparently was of the opinion that (to paraphrase) "if it's difficult enough to get wrong on the first attempt it's difficult enough to warrant delegating to a dependency" -- which he seems to have taken to the logical extreme.
That said, the package in question removed the dependency on is-odd after what seems to have been a lot of negativity about the decision not to inline the logic: https://github.com/micromatch/nanomatch/pull/11. So once that change propagates through the dependency tree, it should at least no longer show up for some people.
> Left-pad had a fairly trivial implementation but that implementation is easy to get wrong, so it's something you want to make sure you have covered by tests
...What? That makes no sense. `left-pad` is trivial to implement and test. There may be edge cases, but for most people, writing tests for the edge cases they care about makes more sense than pulling in a dependency just to handle something as simple as padding a string.
Not to mention, Node's near-complete lack of a standard library is at fault here, not developers, nor the ECMA technical committee.
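Rolling it yourself really is a handful of lines plus the tests you care about. A minimal sketch (`leftPad` here is an illustration, not the real left-pad source):

```javascript
// Hand-rolled left pad with the edge cases pinned down by assertions below.
function leftPad(str, len, ch = ' ') {
  str = String(str);  // coerce non-string input, e.g. numbers
  while (str.length < len) str = ch + str;
  return str;
}

// The edge-case tests worth writing once:
console.assert(leftPad('5', 3, '0') === '005');
console.assert(leftPad('abcd', 2) === 'abcd'); // shorter target: no truncation
console.assert(leftPad(42, 4) === '  42');     // number input is coerced
```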
You're not even contradicting me. I said it has a fairly trivial implementation and you concede that it has edge cases. You're just disagreeing on whether that warrants using an external dependency rather than writing it yourself (and writing all those tests yourself too).
> Node's near-complete lack of a standard library is at fault here, not developers, nor the ECMA technical committee.
So you think String.prototype.padStart should have been a Node built-in module rather than a language feature? Are we talking about the same language that also has String.prototype.italics and String.prototype.bold?
There's nothing Node could or should have done about the lack of string padding. Node's "standard library" is first and foremost concerned with enabling network and filesystem IO. JS on the other hand even lacks a built-in way to handle dates properly (the Date class is largely an afterthought based on Java).
String.prototype.padStart deserves to be a language feature and it should have been a language feature from the start. Most other languages provide this out of the box. The reason these things end up on NPM is not that JS developers are idiots but that they don't want to reimplement the same basic utility functions every other language would offer as a standard library or copy folders of code around.
Yes, this is absurd when it comes to things the language actually DOES offer (or basic maths like `value % 2 === 1`) but for the most part NPM is fulfilling JS developers' need for a standard library -- and that's entirely on not just Node but also the language itself (and by extension TC39).
EDIT: It's worth pointing out that left-pad has zero dependencies, as it should. Compare this to is-even (i.e. `value % 2 === 0` as a function) which depends on is-odd which depends on is-number -- all written by the same author and 100% serious.
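For comparison, the entire useful logic of those packages, inlined:

```javascript
// Inlined equivalents; Math.abs handles negatives, since -3 % 2 === -1 in JS.
const isEven = (n) => n % 2 === 0;
const isOdd = (n) => Math.abs(n % 2) === 1;

console.assert(isEven(4) && !isEven(3));
console.assert(isOdd(-3) && !isOdd(2));
```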
> There's nothing Node could or should have done about the lack of string padding. Node's "standard library" is first and foremost concerned with enabling network and filesystem IO. JS on the other hand even lacks a built-in way to handle dates properly (the Date class is largely an afterthought based on Java).
That's absolutely incorrect. Now, I might be using a slightly contrived example here, but take RPython and compare it with Python. Both use mostly the same syntax (like ECMAScript vs Node.js), but one is far more feature-filled than the other, because one targets general-purpose programming (Python) while the other is a special-purpose language used as a lower-level "framework", if you will (RPython). RPython has no need to implement something like `left-pad` (although, because it's a subset of Python, it's sort of already implemented).
With RPython, it's intended that you build things on top of it (which is how I view ECMAScript), whereas with Python (more like Node.js) you'd expect that to... exist.
The fact of the matter is, the language teams in these examples had completely different goals, and I personally believe that Node.js should have gone more the Python route and had an extremely strong standard library that handles mundane tasks like `left-pad` does. It disappoints me that the Node.js team (as opposed to the ECMA technical committee, which designs the language itself) does not think it should be responsible for this kind of simple tooling and instead passes it off to developers.
Is that really the idiomatic JS way of creating a string?
Jeez. Reading the code at first, I thought to myself: "this stinks. There has to be a better way", but apparently not. At least not if you want to support anything lower than ES6.
In this case, you could argue that it's not the lack of a standard library but the lack of language features (and thus the ECMA committee's responsibility), because in the meantime left-pad has been added as a language feature on the String prototype.
There is one separate JS context for every tab you have open right now, and every feature has its cost in terms of memory & CPU. That's a major reason why JavaScript has a very thin "stdlib".
Also, big stdlibs are a compatibility problem, so it's definitely better to have as much as possible in user-space.
BUT, we really should use a few big utility libraries like lodash instead of dozens of helper-function modules.
What disgusts me the most about the JS ecosystem these days is that even the most basic developer tools have tons of dependencies.
Let's say you want to use webpack and TypeScript on a project: Boom. 425 packages installed. And that's before you have written a single line of code. (It's probably worth noting that the offender is webpack. TypeScript seems to be dependency-free.) [1]
Every single one of those 425 packages could have a malicious postinstall script that deletes your entire home directory or sends your private ssh keys to a remote server.
npm makes it too easy to add dependencies to a library.
[1]: If you also want webpack-cli that's another cool 441 packages for you.
I remember trying to play a little with Node and Vue.js a few months ago (I'm a devops/backend developer by trade). Having seen the size of my node_modules directory (nearly 1 GB), I was kind of horrified. To be fair, I checked most of the testing options, so a lot of the libraries were coming from that.
What are the exact downsides of having lots of small packages rather than a few big ones? I keep hearing it's a complaint, but not one instance of why it is actually a bad thing.
It's interesting to compare with JVM ecosystem, where each dependency can only have one version, and you get version collisions if multiple packages list different versions of the same sub-dependency. I feel there is more thought in adding sub-dependencies, while in the node ecosystem because you won't have collisions, devs don't really seem to care how much they add.
This is a large part of why, in my opinion, language-specific package managers are generally bunk (though language-specific packaging tooling is not necessarily so). I install all of my Python packages system-wide through my distribution's package manager, and if no package is available I make one. Distribution packages are cryptographically signed and reviewed by a dispassionate human being before being published, and are available on mirrors around the world to mitigate the single source of failure language repositories tend to suffer from.
Of course, this is difficult for something like Node, whose package ecosystem is out of control.
Assuming we could avoid the legacy leverage, would it be possible to build (or use an already existing) package manager to do this, wrapping the specific logic for each language?
The human element is important imo. I don't think automating it away is wise. However, many distros have tools to speed up the work for particular languages.
What I'm interested in — most open source licenses such as MIT require that a copyright notice is preserved for each dependency, direct or transitive. Many licenses even require that such copyright notices for all dependencies are shown in an about menu in the software.
How do you do that with node.js projects? Do you just have an about screen with 15'000 licenses and copyright attributions in your software? Do you just violate the license?
For Node.js projects, the MIT clause should be covered by simply having the entire package installed, provided the author actually put the license file in the package.
However, when using a bundler, you are potentially violating the licenses of a bunch of packages.
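One way to stay compliant with a bundler is to generate a combined attribution text at build time. A rough sketch with made-up sample entries (a real version would collect them from each installed package's package.json and LICENSE file; "example-lib" and its copyright line are invented):

```javascript
// Sample entries; in practice you'd read these out of node_modules.
const deps = [
  { name: 'left-pad', license: 'WTFPL', copyright: '' },
  { name: 'example-lib', license: 'MIT', copyright: 'Copyright (c) 2018 Jane Doe' },
];

// Produce a single attribution text suitable for an "about" screen.
function attributions(list) {
  return list
    .map((d) => d.name + ' (' + d.license + ')' + (d.copyright ? '\n' + d.copyright : ''))
    .join('\n\n');
}

console.log(attributions(deps));
```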
I am absolutely flabbergasted that the `is-odd` package on NPM [1] was downloaded 3 million times last week. In any project, a check for an odd number would, half the time, not even get its own _function_.
Even an aggressive build server that redownloads and builds 50 times a day would require 60,000 separate installs...
Does anyone have any reasonable explanation for this?
Correctly checking if a variable is odd is not quite as simple in a dynamic, type-coercing language like ECMAScript as you might think. You have to consider things like: is the variable null or undefined? Does it have the Number type? Is the number an integer? Is the integer odd? In a statically typed language, the compiler would guarantee many of these things for you, but in ES you need to do it yourself.
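Those checks, sketched out (`isOddStrict` is a made-up name for this illustration, not the actual is-odd source, which also coerces numeric strings):

```javascript
// Validate before testing parity, mirroring the checklist above.
function isOddStrict(value) {
  if (value === null || value === undefined) throw new TypeError('expected a number');
  if (typeof value !== 'number') throw new TypeError('expected a number');
  if (!Number.isInteger(value)) throw new TypeError('expected an integer');
  return Math.abs(value % 2) === 1; // abs: -3 % 2 is -1 in JS
}

console.assert(isOddStrict(3) === true);
console.assert(isOddStrict(-4) === false);
```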
It's not rocket science, but it's tricky enough that you could perhaps make a mistake or overlook something. To be safe in these situations, when writing a small amount of "nearly trivial" code, programmers have generally resorted to a package manager called "Google / Stack Overflow".
This package manager works as follows: You Google for "safely check if odd in JavaScript", open the top Stack Overflow link, then copy and paste the answer (usually a single function) into your codebase. Then you add a comment saying "taken from https://stackoverflow.com...". Whenever you have another piece of "nearly trivial" code to write, you install it the same way.
ES developers thought that this workflow could be optimised a bit. To that end, they invented a package manager called "npm". Unlike the Google / Stack Overflow manager above, this lets you install functions like "is-odd" without having to manually copy and paste them from your browser. It also keeps track of where they came from, who authored them, and even lets them be updated, if necessary.
> You have to consider things like: is the variable null or undefined? Does it have the Number type? Is the number an integer?
This is a non-problem. If you really have to consider all these edge cases before testing if a number is odd or even that's a sign you need to seriously rethink your domain model, not reach for a library that has already thought about these edge cases for you.
Well written dynamic code should be mostly statically typed.
In that case, sounds like is_odd/even should be part of the language, given how widely used and basic it is. Useful methods get added to strings, arrays and objects each ES release.
Yes: the author of the package uses it himself in some of his other packages, which are popular and used in popular projects, e.g. [1]. So it isn't that millions of people think is-odd is a sensible package: just at least one person does, and he made it.
As a JS dev, this actually offends me. I wouldn't expect a specialised package to silently convert strings to numbers.
I think is-odd and friends are the aftermath of the microlibrary craze that followed the popularity of "toolbelt" libraries like jquery, underscore and lodash (which featured some equally inane methods but bundled them along with a load of far more complex helpers you wouldn't want to implement and maintain yourself).
The Javascript tradition is coercing '1' to 1, so it is a number for most practical purposes.
A language with more restrained type conversion rules would have a different, and more trivial, isOdd() function.
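For illustration, that coercion is what lets the naive version "work" on strings in the first place:

```javascript
// % coerces its operands to numbers, so numeric strings "just work":
console.assert('3' % 2 === 1);          // '3' is coerced to 3
console.assert(Number.isNaN('x' % 2));  // non-numeric strings give NaN, not an error
```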
>Have the large projects maintain their own dependencies? i.e. React, Webpack, Typescript, Express...
>Create and maintain your own packages?
>Review each package and save a copy in your own repo for re-use?
Yes? It is possible for things to be too convenient, IMHO.
>Upgrading would be a pain.
Upgrading should be at least a bit painful. Upgrading without pain means you don't really know what changes you're making to your codebase, and you don't care to know, you just want the magic black box to work and not bother you with the details. There should be at least more effort put into upgrades than the least amount possible.
Yes, so let's all keep reinventing the wheel, over and over again.
I know the author uses pretty extreme examples, but I still don't see what's inherently wrong with the mentioned packages.
- it does what you think it does:
`isOdd(2)` is a lot easier to understand than `value % 2`
- even though most of these are one liners, it's one line of code less that you have to maintain
- if the package is at least somewhat popular, it's highly likely it has measures for edge cases; stuff I wouldn't have thought of when writing it myself.
On top of that, mentioning left-pad is really cliche and weak at this point (bear in mind it happened 2 years ago). npm has taken measures and nothing similar has happened since then, and even then, it was still a fairly isolated event: the whole thing actually took 10 minutes.
I'd much rather take a rare, yet-to-happen-again chance that a package goes down, over having to rewrite simple utility functions over and over again, every month.
> Yes, so let's all keep reinventing the wheel, over and over again.
Writing basic code, like operator calls, is not reinventing the wheel.
> `isOdd(2)` is a lot easier to understand than `value % 2`
It's not, though; an operator is fundamental and everyone knows what it does. With the function, on the other hand, you have no clue and have to check the source to see what it does.
> On top of that, mentioning left-pad is really cliche and weak at this point (bear in mind it happened 2 years ago). npm has taken measures and nothing similar has happened since then, and even then, it was still a fairly isolated event: the whole thing actually took 10 minutes.
Mentioning left-pad because the culture has not changed since then; this is a culture problem, not a technical one, IMHO.
Shouldn't the main concern highlighted here already be taken care of with package-lock.json and yarn.lock? They record the dependency trees, so a rogue package update should not really affect you.
Assuming you audit what was locked, and audit whenever you update, then yes. I'd be worried a coworker would go on an update spree just because they can, and wouldn't audit the new code.
Yes but then you're no longer grappling with the limitations of the ecosystem. Introducing a rogue developer who updates dependencies irresponsibly is a danger in any language.
If you are hobby programming, then not really, but in most workplaces dependencies are usually pinned and not updated until necessary (bug fixes, mostly).
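The pinning policy can even be checked mechanically: flag any dependency whose version uses a range specifier instead of an exact version. A rough sketch (the manifest object is made-up sample data, not a real project):

```javascript
// Made-up manifest with one pinned and one floating dependency.
const manifest = {
  dependencies: {
    express: '4.16.3',  // pinned exactly
    lodash: '^4.17.5',  // a range: floats to newer versions on install
  },
};

// Flag anything using a range specifier (^, ~, wildcards, comparators).
function unpinned(deps) {
  return Object.entries(deps)
    .filter(([, version]) => /[~^*xX<>]/.test(version))
    .map(([name]) => name);
}

console.log(unpinned(manifest.dependencies)); // [ 'lodash' ]
```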
I say unto you: one must still have chaos in oneself to be able to give birth to a dancing star. I say unto you: you still have chaos in yourselves. FN
1. Over-reliance on tooling. JS was made as a scripting language; if you can't use it as such, its purpose is lost. And that tooling works against the betterment of the ecosystem: lib authors rely on tooling to fix issues of the platform rather than seeking a definite solution.
2. Any serious JS project is a long-term maintenance burden. The prevalence of "single line libraries" makes future-proofing of the code impossible. You don't only have to version-freeze your immediate dependencies, but dependencies of your dependencies. You have to manage their versions manually; it all becomes your own problem.
3. Authors of JS VMs swear on their firstborns that JS performance has been greatly improved, yet it sucks. For practical purposes, JS code is barely as quick as Perl or Python.
4. Too many evals in 3rd party code everywhere. The only way you can secure your front-end is to ensure a no-third-party-code policy on your page. When it comes to the back-end, things become more grave; there, you can say that there is no universal solution.
5. Zoo of standards on the front-end, browser APIs changing on a whim.
6. Zoo of sublanguages: CoffeeScript, LiveScript, ClojureScript, MoonScript, AtScript, TypeScript. Having to mix any number of them in a single project is, as you can guess, a supernightmare.
Yet, I don't completely hate people who want to move JS forward. TC39 may well be too heavily staffed with theoreticians, but they have the right priorities: standardization, having whatever goes into the standard thoroughly matured before it even approaches pre-standard status, and enforcing a semblance of synchronicity on the ecosystem.
What JS needs in my opinion:
1. HARD VERSIONING
2. Compliance certification through testing with a hard version
3. An even more strictly enforced decoupling of the paper standard from the implementation. While the majority of modern JS implementations are VM-based, this should not force the standard to accommodate design decisions dictated by a VM implementation.
4. Readiness to do breaking changes in between major hard versions, and no reservations for backward compatibility.
5. Many browser APIs are de-facto parts of the standard library as such. They need to be managed as a part of the standard.
There's some truth in your comment, but also a lot that's wrong.
> 1. Over-reliance on tooling
Everything has tooling, I defy you to find a single language you can use in a medium-size project without any tooling, good luck, I don't know any.
> 2. Any serious JS project is a long term maintenance burden. The prevalence of "single line libraries" makes future-proofing of the code impossible. You don't only have to version-freeze your immediate dependencies, but dependencies of your dependencies
It's the same for any language. Yarn has a yarn.lock for this exact purpose, for other languages you will have [your language package manager].lock, it's the same.
> For practical purposes, JS code is barely as quick as Perl or Python.
It's currently 3 times faster than Python and the gap is only widening every year. You picked the wrong languages to compare speed with; at least you could have picked Rust, Go or C++.
> 5. Zoo of standards on the front-end, browser APIs changing on a whim.
It's getting better with vendoring nowadays and the core features are there everywhere. You only need vendor-specific APIs if you build something specific.
> 6. Zoo of sublanguages

Only TypeScript is alive nowadays; all the others are more or less dead, so this isn't true. Also, MoonScript compiles to Lua, so it's not even related to JavaScript.
> 4. Readiness to do breaking changes in between major hard versions, and no reservations for backward compatibility.
Most features are not backward compatible, that's why you need to use some tooling. You cannot have no tooling and no breaking changes at the same time.
>Everything has tooling, I defy you to find a single language you can use in a medium-size project without any tooling, good luck, I don't know any.
From memory, large PHP and Python projects falling into that category are all over the place.
>It's the same for any language. Yarn has a yarn.lock for this exact purpose, for other languages you will have [your language package manager].lock, it's the same.
No it isn't. Heck, C, the prime example of a "fragile language", doesn't have these issues taken to such extremes even in the most complex projects, thanks to good developer culture, an omnipresent emphasis on API compatibility, and good API design. Gtk+ for example has a huge list of dependencies, yet not a single time has my code failed to run properly because Gtk's own dependencies somehow influenced my code. This is not due to a lack of data structures; objectized data is used all around modern C.
>It's currently 3 times faster than Python and the gap is only widening every year.
No, the figure you are giving is for synthetic benchmarks that do nothing more than arithmetic with primitive data types. I would like to see performance figures for a tree sort or a merge with complex data structures. The performance will suck equally.
>You cannot have no tooling and no breaking changes at the same time.
When you forego backward compatibility for the benefit of this approach, you don't then resort to perversions with tooling to bring it back, which only produces a worse result.
It isn't, because most other dependency managers don't allow dependencies to bring their own versions of other dependencies. Instead you're stuck with version conflicts preventing you from upgrading to a recent version of library X, because you're also using library Y and both libraries depend on library Z (but the new release of library X uses an incompatible version of it).
I was a Python developer in a previous life, and in any Django project this would eventually happen and require you to either replace the unmaintained dependencies or ignore their version requirements and pray for the best (which was usually a gamble, because semver wasn't a thing).
> From memory, large PHP and Python projects falling into that category are all over the place
You need at least a package manager in a large project. On PHP you will have Composer, on Python you will have pip. If your PHP project is web-based, you will likely have some tooling to set up your environment (SQL migrations, static pages...); that's also tooling.
> Gtk+ for example has a huge list of dependencies, yet not a single time has my code failed to run properly because Gtk's own dependencies somehow influenced my code
I've also been a GTK developer and I did have lots of issues with GTK incompatibilities between versions. It's not a coincidence that a lot of people moved to Qt recently; they experienced the same thing. You probably picked the worst library for backward compatibility in your example. Also, GTK is not a language but a library, so I don't see your point.
> No, the figure you are giving is for synthetic benchmarks that do nothing more than arithmetic with primitive data types. I would like to see performance figures for a tree sort or a merge with complex data structures. The performance will suck equally.
I've never seen any test where Python has the same speed as JavaScript, be it arithmetic, sorting or whatever else; it generally lags behind. All of that is due to the massive competition between JS engines and the massive amount of money people are throwing at it. Python can't match any of that; it's not 2003 anymore.
> When you forego backward compatibility for the benefit of this approach, you don't then resort to perversions with tooling to bring it back, which only produces a worse result.
I don't really understand your point. In Javascript you can't control the client who executes your code. That's why you have to use some tooling to make sure it works even for the crappiest browser you support.
The important thing is: the VM (or the interpreter, or whatever runs your code) is almost always backward compatible. If you write code now, it keeps on running. JS code from the pages coded a decade ago runs just fine. I can't say the same for (off the top of my head) Android, for example.
>JS code from the pages coded a decade ago runs just fine.
Far from the truth. A lot of breakage happens due to browser API security policy changes and the obsolescence of browser APIs.
In one case, I even saw code failing because the behavior of mathematical operators with floats had changed over time.
This is why there is little point in the JS drafters being hell-bent on backward compatibility to the point of absurdity (remember the pages-long typeof null flamewar).
>I didn't know about this. Do you have a source or explanation? (by the way, all numbers in JS are double-precision 64-bit format IEEE 754-2008 numbers so you mean they actually changed the operators?)
As I understand it, browsers' JS engines can, under some conditions, provide more precision than browsers gave in the nineties. So if you were reliant on if (1 - trickyFunction(0.000006464)) doSomething(), the condition will evaluate differently if the function returns a value just a little bit bigger than 1.0000000000.
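The failure mode being described can be sketched concretely. trickyFunction here is a made-up stand-in; the robust fix is to compare against a tolerance rather than testing for an exact value:

```javascript
// Made-up stand-in for code whose last bit of precision may differ
// between engines or engine versions.
function trickyFunction(x) {
  // May produce exactly 1, or 1.0000000000000002, depending on rounding.
  return 1 + x * 1e-16;
}

// Fragile: relies on the result being *exactly* 1.
const fragile = (1 - trickyFunction(2)) === 0;

// Robust: compare with a small tolerance instead of exact equality.
function nearlyEqual(a, b, eps = Number.EPSILON * 4) {
  return Math.abs(a - b) <= eps;
}

console.log(nearlyEqual(1, trickyFunction(2))); // true even if the last bit differs
```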
> In on case, I even saw a code failing because the behavior of mathematical operators with floats have changed over time
I didn't know about this. Do you have a source or explanation? (by the way, all numbers in JS are double-precision 64-bit format IEEE 754-2008 numbers so you mean they actually changed the operators?)
> A lot of breakage happen due to browser API security policy changes, obsolescence of browser APIs
Non-standard features, yes. Standard features? Rarely (like WebSQL, opening HTML data URLs directly from code, and so on). Standard features with wide adoption across browsers? I know of just one case, because of the recent CPU bugs. Standard features with wide community adoption? Never.
Ah yeah, it's true, there are very few features that are actually removed from the browser. I just remember some kind of iframe version of window.alert (can't remember the name now) which got removed a few years ago, but it wasn't used that much.
Kinda makes you think the person you're replying to is not a JS developer and hasn't touched the ecosystem at all, and yet they're here spewing nonsense.
Genuine question: does transpiling from Go, for example, make things any better? I've often wondered if I'd be doing my self any favors or just making things worse if I opted for that path. My goal would be to avoid even looking at JS at all whilst still being able to develop for the front end.
Thoughts?
Edit: no idea why you're being down voted. Perhaps you hurt the feelings of some JS developers? ;-)
>Genuine question: does transpiling from Go, for example, make things any better?
No, transpiled code is bad. Moreover, I see close to no point in transpiling code meant to run in the browser from another language. Instead of writing an in-browser app in Go, and then spending an amount of time equal to writing the original code fixing bugs introduced by transpilation, why not just write the thing in JavaScript from the start?
JS is not a complicated language.
>Transpiling isn't complicated or magic, you can get any C compiler to "transpile" your code into assembly and then compile that without any issues (same for C-LLVM "transpiling").
C is a very simple language. For a language/tool intentionally made to align with the target's capabilities, reliability is not that hard to achieve. An example of that is Vala.
The more feature-rich the languages on both sides of the transpilation, the more you lose in comparison to native code.
>Edit: no idea why you're being down voted. Perhaps you hurt the feelings of some JS developers? ;-)
I am a JS developer. But before that, I tried my hand at C, C with GLib, electronics engineering, and other, more normal, languages. I've worked in tech since 2007, and it was only after 2013 that the job market steered me completely into webdev.
My rants about JS exist precisely because I saw the language from the outside, see its deficiencies, and have a clear idea of what to compare it to.
> No, transpiled code is bad. Moreover, I see close to no point in transpiling code meant to run in the browser from another language. Instead of writing an in-browser app in Go, and then spending an amount of time equal to writing the original code fixing bugs introduced by transpilation, why not just write the thing in JavaScript from the start?
Transpiled code is not inherently bad. Transpiling is still compilation, we just give it a different name to distinguish "high-level" to "high-level" compilation from the more usual "high-level" to "low-level" compilation (whatever high/low-level exactly means).
Compilers exist to support nicer/more high-level syntax on a common platform, Babel is no different than Java, GCC or LLVM in this regard. A nice side-effect of compilation is certain checks for correctness, and compile-time checks don't get magically erased because you're transpiling to a different high-level language.
Note that when he says "write the thing in Javascript from the start", that's not the whole picture. JavaScript now has several versions: how do you target older ones, or, if you're targeting older ones, how do you take advantage of newer concepts? You'll either be using a load of shims or, more likely, transpiling JavaScript. Babel is pretty sane as far as the JavaScript ecosystem goes, and has some great upsides. IMO it's better to use Babel so you get to use the newer ECMAScript features (which is future-proof for when browser adoption progresses), instead of infesting your codebase with cancerous "helper" libraries like underscore that are painful to remove or replace with modern ECMAScript equivalents.
So, if transpiling helps you write more maintainable code, don't be put off by this. Assembly isn't that practical, neither is really old Javascript.
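To make the point concrete, here is a hand-written illustration of what a transpiler does. The ES5 version below is one plausible translation of the ES2015 one (written by hand for illustration, not actual Babel output); both behave identically, which is the whole point:

```javascript
// ES2015 source: arrow function, default parameter, template literal.
const greetModern = (name = "world") => `Hello, ${name}!`;

// A plausible ES5 translation a transpiler could emit.
var greetLegacy = function (name) {
  if (name === undefined) { name = "world"; }
  return "Hello, " + name + "!";
};

console.log(greetModern());       // "Hello, world!"
console.log(greetLegacy("Node")); // "Hello, Node!"
```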
Not my experience. While GopherJS (a Go-to-JS transpiler) has its warts (blocking activity in callbacks, file size), none of them is "the transpiled code is more buggy than it should be".
The code that GopherJS produces is in my experience very reliable and errs on the side of being slow rather than incorrect.
Transpiling isn't complicated or magic, you can get any C compiler to "transpile" your code into assembly and then compile that without any issues (same for C-LLVM "transpiling").
It also means we get that "sweet" "run same code in client and server" feature that Node.js always rave about.
First of all, no it wasn't (it was, however, marketed as such so as not to cannibalize Java). Secondly, by that logic you probably shouldn't use AJAX to transmit anything other than XML documents either; technologies evolve beyond the niches they were originally invented for.
No, but if flask or pip or some other widely-used piece of the python ecosystem pulled in unmaintained packages, trivial packages, or very large package trees, it might make the python ecosystem chaotic and insecure.
I'm not the author of the article, but could you expand on why it's click bait?
Personally, I think NPM is a crazy, mental concept. As a Go developer (so mostly server-side work), I can't fathom the number of deps that are pulled from NPM when I'm working with or watching a JS code base in use. I assume server-side NodeJS code bases are the same as their front-end cousins in terms of deps pulled(?)
Hi, appreciate the followup, apologies for the previous short comment.
I feel like it's clickbait, not worth reading, for these points:
1. Even if the argument is valid, the points made in this article have been expressed, shared, (whined about?) probably 9+ times in my own reading. It's really time to move on.
2. There is nothing inherently wrong with using small/simple modules. Once you start to reuse code in many projects, the value becomes clear. Some are juicy and complex, some are eye-bleeding simple. It's better than copying and pasting code around folders. Use org modules/private modules and the problem is solved.
I absolutely believe that npm is ripe for *potential* abuse, although I have yet to have any issues.
No one has time for Breitbart-esque fearmongering, or the regurgitation of stale points while an ample selection of real world problems exist to be solved.
> apologies for the previous short comment.
> Jeez that reads harsh. Apologies, it's late.
All good, friend.
> ... It's really time to move on.
But is it? I fear that if this is a genuine issue, why would I want to move on from it? Essentially npm is enabling people to build bridges in the same way steel is helping people to build actual bridges. If there was a flaw in some steel production process, I wouldn't ignore it. People could suffer.
Extreme(?) examples aside, if it's an issue, why not aim to solve it? This seems to pop up a lot, so perhaps it's time for a (long-term) solution?
> Use org modules/private modules and the problem is solved.
Yes! This is where I personally see the long-term solution: pick your top-level deps, (permanently) cache them locally and validate them, then implement/use them. Only update the cache when you're confident the remote, official copy has been changed in a manner that's safe for you and or your users.
> I absolutely believe that npm is ripe for *potential* abuse, although I have yet to have any issues.
How do you know you've not had any issues? If I was attacking you via some sort of "side channel" attack, like breaching and injecting bad code into an NPM module I know you're using, how would you know it's happened?
Are you using a CI/CD process? The chances are high that you are, and therefore, unless you're doing version/commit pinning, it's likely you're blind to those deps being updated during your build process, enabling me to perform said "side channel" attack.
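The mechanism behind that blindness is ordinary semver ranges: a caret dependency such as ^1.2.3 lets the resolver silently pick up any later 1.x release at install time. A deliberately simplified sketch of the caret rule (assuming major > 0 and ignoring prerelease tags):

```javascript
// Does `version` satisfy a caret range anchored at `base`?
// Simplified: assumes major > 0 and no prerelease tags.
function satisfiesCaret(base, version) {
  const [bMaj, bMin, bPat] = base.split(".").map(Number);
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  if (vMaj !== bMaj) return false;       // ^1.2.3 never matches 2.x
  if (vMin !== bMin) return vMin > bMin; // any later minor is accepted
  return vPat >= bPat;                   // same minor: patch must not regress
}

console.log(satisfiesCaret("1.2.3", "1.9.0")); // true  -> pulled in without warning
console.log(satisfiesCaret("1.2.3", "2.0.0")); // false -> needs an explicit bump
```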
> No one has time for Breitbart-esque fearmongering, or the regurgitation of stale points while an ample selection of real world problems exist to be solved.
If it's being regurgitated so much, perhaps it is a "real world" problem to be solved.
> If it's being regurgitated so much, perhaps it is a "real world" problem to be solved.
I don't think most people deny that it's a problem or that it needs to be fixed. But blowing it completely out of proportion with Breitbart-esque fear-mongering is something that needs to stop. People still talk about "leftpad" as if it wasn't fixed the same day, sometimes as if it were never fixed, sometimes even as if it isn't possible to write software in NodeJS at all. I haven't seen a fair and factual comparison chart of npm vs pip vs crate vs composer, and I'm calling BS that the majority of this "concern" is anything more than trendy JavaScript-hate until I see one.
Having written my fair share of tiny trivial modules, I think tree-shaking may actually help with the "everything's a package" craze that followed the early size concerns. Lodash and date-fns for example are similarly feature complete and "big" as underscore and moment.js but their structure allows using them as "a la carte" collections of tiny helper functions without bloating the frontend bundle.
The fearmongering does get on your nerves but I think it's undeniable that npm's version conflict resolution (i.e. "have your cake and eat it too") has led to more complex dependency trees than e.g. in Python where the need to avoid version conflicts creates an inherent aversion to adding too many dependencies. However I think it's also undeniable that this has in turn enabled a lot of growth and innovation.
I wouldn't say it's similar to front-end JS. With front-end dev you're dealing with programming logic as well as the chaotic world of browsers, DOMs, 'standards' (or the lack thereof), and other quirks which manifest themselves in complex ways. Server-side JS is more focused in what it does, though you still get the dependency trees.
Sure, you may consider npm crazy; consider what it's providing to the developer ecosystem. Contrary to your opinion, coming from C#, Java, Node... I would consider `go get` just as puzzling, as well as community members providing multiple package managers (with go dep only now being worked on). There are definitely good reasons for those, no doubt, just as there are good reasons for npm's solution.
I see npm as "crazy, mental" because of HOW it's used, not for its sheer existence as a "thing." Technically it's a good solution. Very solid.
For me, it's more the mentality around just accepting a dependency tree made up of thousands of modules, with little to no validation or vetting, versus doing something to make sure what's being pulled is safe and even worth it.
With a Go application, I would 'dep ensure' to vendor what I needed and then actually look at what was pulled. Once I know what's been brought down and I'm happy with the code, I "off-line" (cache) those dependencies and never update them again unless I need to (security, bugs, or big features). Also, now that they're vetted and offline, I can reuse them over and over with confidence.
> Don’t trust package managers, every dependency is written by some random developer somewhere in the world.
How are we to take that seriously?
Ever heard of not invented here syndrome?
npm has problems, I won't deny it... but they're not unique to JavaScript developers (which is why your comments come off as extremely offensive), and it's difficult to take seriously given the lack of any meaningful alternative offered.
It's just a rant... and people love to hate on JavaScript... so... clickbait.
(also, submitted what, 3 times previously? And here I was hoping it would just wash away without triggering a JS rant this time too...)
> npm has problems, I won't deny it... but its not unique to javascript developers (which your comments about come off as extremely offensive), and its difficult to take seriously given the lack of any meaningful alternative offered.
But it's not about npm as a technology, but about the mentality around how it's used. It enables laziness.
Pardon my ignorance, but the last time I checked, Go deps were a mess: you list deps as GitHub repos always pointing to latest master. The concept of version control didn't exist. I've written some Go and write a lot of JS. I'd choose npm over that every day.
> I assume server-side NodeJS code bases are the same their front end cousins in terms of deps pulled(?)
Most frontend code bases actually use very few dependencies; the big blob of (dev) dependencies is in the build pipeline: you need to transpile, compile, optimize, bundle, etc. the codebase. You only need those if you want to compile the frontend.
In order to run Go code you need the compiler, which is a binary of at least 100 MB. You can of course compile manually on your machine and push an executable, but you can do the same with a frontend (if your NODE_ENV is production, npm will not download all the compile deps).
I do a lot of go development and while it's not the most sane environment atm, there are solutions and the ecosystem makes it less of a problem.
Breaking API compatibility is something that happens very rarely (I think I had to fix up applications of mine about once a year because some dependency changed), most packages with active development are very mature.
Additionally, we have tools like dep and (soon) vgo that let you vendor and pin dependencies at specific versions (you can do it manually too: /vendor/ is where Go looks first for any dependency, so you can replicate "dep init" by copying all dependency paths from your GOPATH into /vendor/).
There is also less of a "make a package for everything" mentality; packages tend to have a much broader feature scope. is-even and is-number would most likely become part of a bigger package concerned with mathematical operations and helpers, or a data-structure package.
> is-even and is-number would most likely become part of a bigger package concerned with mathematical operations and helpers or a datastructure package
NPM does have an extra use case: web development. Tree shaking in JS-land is still far from perfect, and people want to (more like, are asked to) send as little as possible over the wire. So, different use cases, different trade-offs.
Yes, but Go can do tree shaking and will not include packages in the binary that aren't used (that also means parent packages, if a subpackage doesn't reference them).
I'm mostly pointing out that despite vanilla go's complete lack of version pinning, the community has (had to necessarily) step up and solve this by engineering.
Tree shaking in JS should be trivial if the parent package implements no code, and imports should be easy to analyze if they aren't dynamic (i.e., "require('package/' + variable)"). I'm not sure what is stopping NPM or Yarn from doing this, at least in the packaging step.
> I'm not sure what is stopping NPM or Yarn from doing this at least in the packaging step
You had already answered that on your previous sentence :)
> go can do tree shaking and will not include packages in the binary that aren't used
Again, different languages, different trade-offs. JS isn't statically compiled. This is a much harder problem to solve for JS. I'm talking from personal experience when I say that it still isn't completely solved (it requires ES2015 module syntax and makes many assumptions). Even Go has a hard time optimizing away some code because of its limited type system.
Go isn't limited by the type system per se. IIRC from the various papers on the matter, the only real way to make Go stop dropping unused functions is to use the runtime and reflection packages, since then Go can't tell which functions might be called and which fields accessed. Though, also IIRC, the Go compiler is decent at localizing this instead of stopping all optimization.
>You had already answered that on your previous sentence :)
Yes, but NPM/Yarn should be capable of at least dropping packages that can be statically proven not to be imported. That means if a package imports require("xy-" + variable) then you'll have to be clever, but if a package consists only of static imports then you can shake a lot.
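That distinction can be shown with a toy analyzer: string-literal requires are discoverable just by scanning the source, while a computed require defeats the scan. This is a deliberately naive regex sketch, nothing like a real bundler, which would parse an AST:

```javascript
// Naive sketch: find require() calls whose argument is a string literal.
function staticRequires(source) {
  const re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  const deps = [];
  let m;
  while ((m = re.exec(source)) !== null) deps.push(m[1]);
  return deps;
}

const analyzable = "const a = require('left-pad'); const b = require('lodash/isEqual');";
const dynamic = "const c = require('xy-' + variable);"; // invisible to static analysis

console.log(staticRequires(analyzable)); // [ 'left-pad', 'lodash/isEqual' ]
console.log(staticRequires(dynamic));    // [] -> the bundler must keep everything
```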
Go didn't have any significant track record until glide came along. But it still doesn't have a central repository, so every update check means querying all the individual repositories; it sometimes chokes on hg repositories and takes tens of seconds even with a few dependencies...
I also don't like how everyone became an "X developer". We are software developers. I don't think anyone who can code JS would have difficulty switching to Go or vice versa (unless you are a mostly non-technical person, probably from marketing, who learned just enough jQuery, something which I also respect; I never learned just-enough marketing).