But I usually have to do some re-exporting shenanigans for the one or two third-party libraries I really need. This is a great step away from that - thank you!
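(For anyone wondering what I mean by re-exporting, here's a rough sketch with made-up module names: one file that pins and re-exports the few third-party dependencies, so the rest of the codebase never imports them directly.)

```ts
// deps.ts — hypothetical convention, package names are made up.
// All third-party imports live here; everything else in the project
// imports from this file, so swapping or pinning a library is a
// one-file change.
export { parse, stringify } from "some-third-party-lib";
export { default as helper } from "another-third-party-lib";
```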
- Distributing modules that work both in the browser and on the back-end (see the sketch after this list)
- Working with and developing an ecosystem for a language that saw essentially no development for years, was still missing quite a bit of functionality, and then suddenly gained a lot of traction
- That language having to catch up on years and years of developments in computer science, and having to do so in a backwards-compatible way
These are real problems that have influenced and caused a lot of the perceived idiosyncrasies in the Node/npm module ecosystem, and they do not have solutions that can simply be copy-pasted from other languages or ecosystems.
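To make the first bullet concrete, here's a rough sketch of one common way packages ship both browser and back-end builds. Note that "module" and "browser" are bundler conventions rather than npm standards, and tool support varies:

```json
{
  "name": "my-isomorphic-lib",
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js",
  "browser": "dist/index.browser.js"
}
```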
You could say that they are two different runtimes; think CPython / PyPy.
That said, even if those runtimes are as different as Node and the browser are, it's the interplay of that restriction with the other characteristics I mentioned that makes this a new challenge in its own right.
Not intentionally, you see. This is completely anecdotal, so take it with appropriate grains of salt, but I have observed that the majority of people in the JS ecosystem are not formally trained in computer science/software development. Most are either self-taught or come from fast-paced bootcamps. So they lack the historical knowledge of software engineering and even a passing familiarity with other ecosystems.
Now, for most of us who have spent time studying the field, it is easy to identify the problems and at least remember that some solutions exist. Most people in the JS community do not. So they go through the same process others have gone through before and land in the same mess.
There is nothing anyone can do about it.
The main issues are that browser tech has advanced rapidly and is (or was) a non-homogeneous runtime environment. Also, the JS ecosystem is built from open-source software by an enormous pool of developers.
"Hmm, uses the term 'modern' but the README has limited image macro memes... seems like an early 2019 release."
For reference: https://github.com/facebook/jest/issues/6694
I've got about 200 integration tests too, with a package that builds itself into an OS-dependent state, so I'm stuck running things in a VM. It would probably be easier to mark up the tests so I can run just the unit tests most of the time. That should make things bearable.
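One way to do that mark-up (a sketch, file names made up): give integration tests their own suffix and keep two Jest configs, so the default run stays fast and the VM run is opt-in.

```js
// jest.config.js — default run: unit tests only.
// Integration tests live in *.int.test.js files and are excluded here.
module.exports = {
  testPathIgnorePatterns: ["/node_modules/", "\\.int\\.test\\.js$"],
};

// jest.integration.config.js — opt-in run inside the VM:
//   jest -c jest.integration.config.js
module.exports = {
  testMatch: ["**/*.int.test.js"],
};
```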
C++ has a similar issue. There's a very distinct difference between C++11 and what came before. And C++20 could be another big shift (especially with modules). "Modern" has a useful meaning, even if it's fuzzy and temporal.
Edit: apparently it is not a package repository, my bad.
Could someone please explain to me what exactly it is and what its use cases are? Is this supposed to replace some of the features of npm?
> Anyone can create a different website to aggregate its results.
Yes, I know, and it is fine. Just wondering.
Specifically, I would think 99% of the stuff being shipped today is bundled with tools like webpack.
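For anyone who hasn't touched that tooling, "bundled" means roughly this: a minimal, illustrative webpack config that walks the import graph from one entry point and emits a single file (paths are made up):

```js
// webpack.config.js — minimal illustrative config.
// webpack follows every import/require from the entry module and
// bundles the resulting dependency graph into one output file.
const path = require("path");

module.exports = {
  mode: "production",
  entry: "./src/index.js",
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "bundle.js",
  },
};
```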
The consequence of that is that projects ended up with hundreds of tiny dependencies (and sub-dependencies) which increased the attack surface and introduced their own bugs and/or vulnerabilities.
I think that the Node.js community is wiser now. Vulnerability detection tools like Snyk.io have been useful in encouraging module authors to remove unnecessary dependencies from their modules.
Now the trend seems to be to use fewer modules which offer more functionality, more closely matched to the use case.
Smaller dependencies are easier to maintain, test, and understand. Rust also has a relatively small standard library, so you tend to rely on packages (some produced by the Rust project itself) for things you might use the stdlib for in other languages.
Both approaches have their advantages. I'd say that for security and reliability, you really need to know what packages you are running. Often you can delegate the responsibility to bigger upstream projects/groups.
For example if Facebook works with and on React, you can put a good lower bound on the reliability/security of React and the packages it pulls in. I'd be a lot more suspicious of packages which are rarely used by significant other projects.
Contrary to your statement, this is a pure disadvantage.
"For example if Facebook works with and on React, you can put a good lower bound on the reliability/security of React and the packages it pulls in."
I don't think this is true. You could easily depend on something that React pulls in which they later drop, months before it turns into a vector for malware.
I don't see how trust translates down the dependency graph AT ALL.
Nothing is perfect. NPM and PyPI try to mitigate this problem with security audits and notifications. NPM checks your project for known vulnerabilities at every install.
If you're paranoid, you just don't upgrade packages unless you really need to, and you audit stuff yourself. That comes with its own costs. As does writing the software all by yourself. Or buying it from commercial vendors, with similar trade-offs applying.
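For concreteness, on the npm side that boils down to:

```
npm install          # prints an audit summary of known vulnerabilities
npm audit            # full report for the current dependency tree
npm audit fix        # apply semver-compatible upgrades where available
```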
The reason not all packages support this, besides legacy, is that your runtime environment also has to support it for you to benefit. Put differently, this is useful when you're targeting modern browsers. When a package can potentially also be used in Node projects, or in projects that need to support still relatively widely used browsers such as Internet Explorer, supporting this module system might not be possible or worth the effort.
In other words, it has absolutely nothing to do with it being too easy to publish to npm.
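(For readers who haven't hit this: the module system in question is ES modules, which modern browsers and newer Node versions understand natively, versus the CommonJS that Node has historically used. A minimal sketch of the two syntaxes, with a made-up package name:)

```js
// CommonJS — what Node has historically understood:
const { helper } = require("some-lib"); // hypothetical package

// ES modules — the same import in ESM syntax; native in modern
// browsers and newer Node, but never supported by Internet Explorer:
// import { helper } from "some-lib";
```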