NodeJS/npm is in no way unique here. Apt, yum, gem, Homebrew, Docker Hub, etc. are just as bad if not worse, not to mention "git clone" followed by "./configure" or "make." Any time you bring down code onto your machine and execute it you are... well... bringing down code onto your machine and executing it.
I've been increasingly thinking that this could be a very fertile area for AI research. Security is really an AI-hard problem. There is no combination of sandboxing, permissions, auditing, formulaic static code analysis, firewall hacks, etc. that will yield a system that is both (a) usable/convenient and (b) secure.
Take 'npm' for instance. It's amazingly convenient, but it (and every other package manager like it) is a security nightmare. Operating securely would require one to set up a shadow mirror of the entire NodeJS ecosystem and then have someone ($$$$$) manually audit every single thing in there that you are going to use and every single change that comes down from above. That's not tenable for anyone but the most lavishly funded organizations, and anything like that is universally reviled by developers since it slows them down. I've worked in environments like that before (government), and we had people quit because it was just "impossible to do my work."
An "Open File" dialog box would let the user see exactly what they're picking, but just return an opaque & reusable handle to the program, meaning there's no change to the user experience. There would be more permission requests to the user in other circumstances, but that's the result of distrusting code that's running on your box.
I wouldn't expect an AI to preempt all the problems of a blacklist-style system; rather, a whitelist as above would be far more manageable.
Maybe it's not too late to do the same on desktop!
Actually, you don't even need to follow it with ./configure or make. An ext:: Git URL passed to clone will execute arbitrary code:
https://en.wikipedia.org/wiki/Object-capability_model contradicts this, or anyway offers an alternative to the pile of hacks. It's true that none of the systems listed there are familiar; it's very hard to supplant the basic assumptions we've built on and built on since the 1970s.
Unforgeable references can be created using cryptography. In a way, things like Bitcoin addresses or cryptographically authenticated network endpoint addresses might qualify.
So why did I bring up the less general model? Because the parent comment claimed usable security is impossible, even at the local level. If local machines have no security, the distributed case isn't going to be any better.
The complexity explosion I'm seeing is in the bail-and-patch approach.
Security isn't a separable concern; the epicycles grow out of trying to treat it as separable -- you have your program, and then you have your rules restricting the program. ('By admonition' in the paper.)
Here's a modern example of the alternative: https://sandstorm.io/how-it-works#powerbox "Notice how in this example, the application never gains the ability to send spam. And yet, the user experience is no worse and arguably better than before. The user is never prompted with any sort of security questions, yet the app is only able to email them with their consent."
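A toy sketch of that powerbox pattern (all names invented; real powerboxes involve the platform UI, not a direct call): the app never holds a general-purpose mailer, only a narrow function minted after the user's out-of-band choice:

```javascript
// Platform-side: the user has consented to "this app may email me."
// The app receives only a function bound to that one recipient.
function powerboxRequest(user) {
  const address = user.address; // chosen by the user, never by the app
  return (subject, body) => ({ to: address, subject, body }); // stand-in for real sending
}

const emailMe = powerboxRequest({ address: "user@example.com" });
const sent = emailMe("Weekly digest", "...");
// The app can call emailMe(), but it has no API at all for picking a
// different recipient -- "can't send spam" by construction, not by rule.
```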
P.S. if my remarks came across as combative, I didn't mean them to. I'm just offering links about how some of us think we're not stuck with the current untenable situation -- life can get better.
If this were possible, something like this could infect quite a few npm modules.
> The post install script can be like any other script the user can run. There's no sandboxing so it can access anything the running user can access.
Wow. This just seems wrong that the script has such far reaching privileges.
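To make the concern concrete (package and script names here are invented): any package can declare an install-time hook in its package.json, and npm runs it with the installing user's full privileges:

```json
{
  "name": "innocuous-lib",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./collect-env.js"
  }
}
```

Simply running `npm install innocuous-lib` would execute collect-env.js, which could read ~/.ssh keys, ~/.npmrc tokens, or anything else the user can access.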
Once the package is installed, it is already too late for a code review or any other mitigation. A well-written worm will never be detected.
It's getting 30k+ downloads a month.
The actual package people are looking for is here: https://www.npmjs.com/package/uglify-js
If you run `npm install` on a project, you're simply installing its dependencies (and actually running any prepublish hooks too, for some stupid reason).
1) Legal -- copyright violation in the code itself, patent infringement, full text of a novel somehow found its way into the docs, etc.
2) Bugs or security holes, particularly if a bug is being actively exploited in the wild.
Hopefully, neither of these comes up often, but when they do, you need the mechanism to be available.
Is there something analogous for npm?
> If you wish to lock down the specific bytes included in a package, for example to have 100% confidence in being able to reproduce a deployment or build, then you ought to check your dependencies into source control, or pursue some other mechanism that can verify contents rather than versions.