Do not use NPM 5.7 (github.com/npm)
544 points by jguimont on Feb 22, 2018 | 225 comments



Excuse me, but what the fuck?

Looks like the line responsible checks if the npm binary is run as sudo and then uses the UID and GID of the invoking user when chowning the directory. [ https://github.com/npm/npm/blob/latest/lib/utils/correct-mkd... ]
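
Roughly, the pattern being described looks something like the sketch below (my paraphrase of the idea, not npm's actual source). When a command is started via sudo, the invoking user's ids are exported as SUDO_UID / SUDO_GID, and the "corrective" code chowns back to them:

    // Paraphrased sketch of the behaviour described above -- not npm's actual code.
    const fs = require('fs');

    function chownBackToInvokingUser(dir) {
      // Running as root, but started via sudo? Then SUDO_UID / SUDO_GID
      // identify the user who typed the command.
      if (process.getuid && process.getuid() === 0 && process.env.SUDO_UID) {
        const uid = parseInt(process.env.SUDO_UID, 10);
        const gid = parseInt(process.env.SUDO_GID, 10);
        fs.chownSync(dir, uid, gid);
        // Applied recursively to the wrong paths, this is the failure mode
        // reported in the linked issue.
      }
    }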

I feel like screaming, who thought this was a good idea? If I invoke something as sudo, why does anyone think it should try to detect that and do anything about it? I want to run as the user sudo has set, not my own user, OBVIOUSLY.

Don't try to be smart about sudo, you will break stuff.


The entire JS ecosystem is a case study in trying to be too clever on a well-solved problem.


That's an emergent property, a complex behaviour of a collective arising from interactions at scale, and is presumably an unintended consequence. What interests me is whether this arose by chance, or from some aspect of language design and/or ecosystem initial conditions and subsequent context.


Honestly, I feel this would happen to most systems that have the following properties:

- One size fits all (clients can only run one language, JS, so it has to fit many use cases).

- Popular (JS seems to be the most popular language at the moment, having beaten Java by most measures in the past year).

- Not owned by a corporation (whilst probably a good thing, this does lead to "design by committee" issues).

- Backwards compatible (in all this time, only a minimal amount of breaking changes).


Of course. Try running npm on a shared fs (fusehgfs). Most things don't work. This is just shit software and the people who make it need to concern themselves with fixing their software rather than trying to prevent imaginary security problems they shouldn't be trying to prevent anyway (I don't know what their security delusions are exactly, but I do know they are delusions). I'm an engineer and a sysadmin and I know when something should be run as root or not. That should be my choice and my choice alone. I shouldn't be nagged about it and I certainly shouldn't be prevented. But this is npm. That's why people created yarn, I think.


I completely agree with you.

But in fairness, I can't count the number of times that I've needed to fix things after people treated `sudo npm` as Simon Says[1].

I'm sure they struggled a lot with that issue before coming to this solution. Was it the right solution? Absolutely not. But that's not the point I'm trying to make.

It's all too easy to tunnel vision on a particular solution. I've done it plenty of times, and I'm thankful to those who have helped me to see other alternatives in time.

[1]: https://xkcd.com/149/


One would be forgiven for thinking that npm run as root would install packages to a system-wide location, where multiple applications may utilize them without having write privileges to their code. That's what pretty much every other package manager does. Not hose your system and irreversibly render it inaccessible.


I think the easy solution here would be to disable global installs. Pip does the same stuff, and it is also known to get people's computers into quite advanced states.

Ideally npm should simply set up a dedicated directory in /opt or /usr/local/ (e.g., /usr/local/node/bin or /opt/node/bin) in which it dumps all the global stuff. That way you can easily set permissions for a user and/or contain any damage to that folder. If npm blows up that way it doesn't murder the entire system, and you'll still be able to SSH in. (That is, unless you use an SSH agent based on node.js, in which case: "why?")

Once npm has implemented such a location it should refuse to run with sudo and demand the user set up the correct permissions within the node folder (maybe set up a group "npm-manage" during install?)


> easy solution here would be to disable global installs

I think that's not optimal. Having packages installed "globally" (as in available on your PATH) is nice. You can install `yarn` by doing `npm install --global yarn`.

The trouble is how people set up their node/npm installation. Instead of having global packages set up under the home directory, people use the default, which requires root access.

Instead, the default installation should be in a user-accessible place, and running npm with sudo should exit without doing anything.


> The trouble is how people set up their node/npm installation. Instead of having global packages set up under the home directory, people use the default, which requires root access.

No, like you say in your next sentence, the trouble is that it's the default. This is NPM's fault, not the user's.


I don't get what the issue is with requiring a "global" installation to be added to $PATH; a lot of other tools ask for that during installation and it's by far the safest way to do it.


I basically want what you mentioned later but also available to other users.

Home folder installation only works when A) you only want to install for a single user, B) the user you run under exists on disk and has a home folder, and C) the user can be set up to perform updates.

A lot of the time I run tools that will run as a service. The user it runs under might not have a home folder, probably not even a login or shell. I still need access to the tool as root and as a normal user for maintenance.

So ideally, I add the path into /etc/profile and install the stuff into /usr/local/node/bin where it is perfectly isolated from the rest of the system.


I think you're saying the same thing. Later in his comment, he mentions installing "global" stuff to `/usr/local/node/bin` or something like that, which is easy to add to your PATH (and can be scripted / documented to be easy to set up). Essentially do what homebrew does, which never seems to run into problems and IIRC completely refuses to run as root nowadays.


It seems not to be entirely without problems, or at least hassle: https://stackoverflow.com/questions/41840479/how-to-use-home...


I don't remember exactly which tool does this, but one package manager (it may be Homebrew, though it's been a long time since I used Homebrew) warns users if you are running it as root, since it should install packages as the user.

I think npm could implement a similar strategy and educate users on how packages should really be installed.


Homebrew it is!

$ sudo brew

Error: Running Homebrew as root is extremely dangerous and no longer supported. As Homebrew does not drop privileges on installation you would be giving all build scripts full access to your system.


I really like that message. It used to be something different, which was vague, but this actually tells you why it refuses to run as root.


Bundler does something similar - it complains if you run the command with sudo.

Ignoring the warning can result in exciting permissions errors later, which is what I'm guessing the NPM code is trying to avoid.


Several Arch Linux AUR helpers (pacaur and trizen, for example) refuse to run as root. Instead, they invoke sudo to escalate only during the necessary phases.


That's because makepkg rightly refuses to run as root.

    ==> ERROR: Running makepkg as root is not allowed as it can cause permanent,
    catastrophic damage to your system.


mpirun will also throw an error if run as root:

    --------------------------------------------------------------------------
    mpirun has detected an attempt to run as root. Running at root is strongly
    discouraged as any mistake (e.g., in defining TMPDIR) or bug can result in
    catastrophic damage to the OS file system, leaving your system in an
    unusable state.

    You can override this protection by adding the --allow-run-as-root option
    to your cmd line. However, we reiterate our strong advice against doing
    so - please do so at your own risk.
    --------------------------------------------------------------------------


npm first needs to fix the issue that by default, if you want to do anything globally, you have to use sudo. Homebrew warns against using sudo, but it's also possible to install things globally without using sudo.
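
For anyone who wants that today without waiting on npm: the usual workaround is to point npm's global prefix at a directory you own via ~/.npmrc. A rough Node sketch of the setup (the `~/.npm-global` location is just an example path):

    // Rough sketch: make `npm install --global` work without sudo by using
    // a user-writable prefix. "~/.npm-global" is an arbitrary example.
    const fs = require('fs');
    const os = require('os');
    const path = require('path');

    const prefix = path.join(os.homedir(), '.npm-global');
    if (!fs.existsSync(prefix)) fs.mkdirSync(prefix);
    // "prefix" is a standard npm config key; npm reads it from ~/.npmrc.
    fs.appendFileSync(path.join(os.homedir(), '.npmrc'), 'prefix=' + prefix + '\n');
    // Remaining manual step: add <prefix>/bin to PATH in your shell profile.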


I confess, I've resorted to `sudo npm` in a desperate attempt to get something to work at all, when trying to install frontend assets and not being able to make sense of the errors or get a helpful answer out of the frontend team.


Why is such code even necessary??


I'd go further and say that chown should be "considered harmful".

There are 3 use cases I can think for chown(2):

- Implementing the chown command (or other tools whose purpose is explicitly and only to manage permissions)

- Implementing a file copy/archive command that preserves permissions

- For package managers that set up a daemon user for a package, and want to set up a writable area of the fs for use by that user

In other words, the ownership of files is something that should be totally up to the user, and not something implicitly done by a tool on their behalf.

I can't think of a single other place where trying to automatically manage file ownership is warranted. Files I touch should be owned by me, files root touches should be owned by root, and the correct way to make sure new files are not owned by root is to not be root. Doing literally anything else with chown is being overly clever and is a guaranteed landmine.


This is really horrific.

The idea that correctMkdir() exists at all seems to me to be so wrong-headed.

This comment from the source says a lot:

    // annoying humans and their expectations!
Good UX is an important, oft-overlooked consideration, but there is definitely such a thing as taking it too far. If your humans are expecting this level of hand-holding, it's because you've trained them to expect it by pandering to them up until now. This is the kind of problem that should be handled with good, detailed error messages when users don't get the result they expect, not "fixed" with over-reaching magic.

I'm not sure I'd trust anything put out by the npm team in general from here on in if they genuinely thought creating the correct-mkdir.js file in the first place was a reasonable idea. Is it? Genuinely open to a counter-argument.


You want to know how this happened?

It all started with Ubuntu keeping $HOME pointed at the invoking user's directory when running sudo. This was done out of convenience, to have gedit and other Xorg apps work when run with sudo...

Then there is also the terrible fact that ~/.local/bin didn't exist as a "standard" at the time. Which means your only sure-fire non-complicated way to install local bins guaranteed to work for the user was to put them in /usr/local/bin which meant running sudo.

But if you create a package cache dir in $HOME during sudo on Ubuntu, it's created with root permissions! Then you get errors when you run npm without root and it tries to manipulate the cache. How do we fix this? By changing the cache dir permissions, of course. https://github.com/npm/npm/commit/ebd0b32510f48f5773b9dd2e36...

Through a series of refactorings, this became correctMkdirp, which is a non-descriptive name for a mkdir that (among other things) changes permissions. And with a name like that, it eventually was used in the wrong context and did the wrong thing.

I call this death-by-10-small-missteps. But I would pin the biggest problem on a missing omnipresent `~/.local/bin` standard (at the time). It doesn't cost much if anything, and it would single-handedly obliterate the need for users to muck around with their paths (bad usability) or run sudo to install command line tools for personal use (clearly not the best idea).
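
To make the original misstep concrete: under a sudo that preserves $HOME (as Ubuntu's did), cache-creation code along these lines runs as root but writes into the invoking user's home directory (paths and names are illustrative, not npm's actual code):

    // Illustrative only. Under `sudo`, getuid() is 0, but on Ubuntu at the time
    // $HOME still pointed at the invoking user's home directory.
    const fs = require('fs');
    const path = require('path');

    const cacheDir = path.join(process.env.HOME, '.npm');  // e.g. /home/alice/.npm
    if (!fs.existsSync(cacheDir)) fs.mkdirSync(cacheDir);  // created owned by root
    // Later, plain `npm` run as alice hits EACCES on her own cache -- the
    // problem the chown "fix" was originally introduced to paper over.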


> But if you create a package cache dir in $HOME during sudo on Ubuntu, it's created with root permissions! Then you get errors when you run npm without root and it tries to manipulate the cache. How do we fix this [...]

<pseudocode>

    if not exists(dir):
      mkdir(dir) && chown(dir, uid, gid)
    else if dir has correct ownership:
      traverse
    else:
      // our code chowns correctly on create, so the user must have
      // changed ownership independently; better NOT to mess with it
      throw "helpful, descriptive message"
</pseudocode>

mkdirp creates OR traverses recursively based on whether each directory already exists or not. This is why correctMkdirp() is an insane idea: the "correct"-ing chown step should never be internal to mkdirp because it should never occur on traversal (i.e. when a pre-existing directory is encountered).


What about tarball extraction and native module build artefacts being produced? They'll still have the wrong permissions.

This was not originally about "mkdirp". It was about managing the cache when running with sudo on Ubuntu. It only became a "general" mkdir through a series of refactoring steps.


FWIW the comment you're calling out here is four years old: https://github.com/npm/npm/blame/d3095ff20b8ea01e7fbf93a4a69..., before npm inc was formed.

The correctMkdir change seems more recent, but not really related to that specific comment.


This could be my inner grumpy old man speaking, but as a general rule of thumb, I look very poorly on editorializing in code comments. Originally because I didn't want my junior devs embarrassing the company when our clients received control of the code we wrote, but that also transferred into my perception of open source.

That comment should not have survived 4 years. Again, inner grumpy old man showing through.

Edit: to be clear, such comments are treated as reflective of the people and organization behind them.


I can see your point (particularly with regard to deliverables), but I suspect the practice is quite widespread - comments often end up being used as a sort of brain dump.

For instance, see this article (from 2004) on comments in the Win2k source code: http://atdt.freeshell.org/k5/story_2004_2_15_71552_7795.html


That's a fair enough point, especially when directly employed working on closed source / proprietary code. You're essentially stuck in an echo chamber, and professional standards are more difficult to maintain when you don't have the whole of the world looking on.

I also imagine that the mental strain of figuring out edge cases and poor documentation in a system as complex as a windows OS would be enough to make anyone at least a little salty.

However widespread it may be, that does not mean that I have to like it :D


I think there should definitely be limits to this—some brevity/levity can be positive—so I would always try to err on the side of acceptance, but in general I agree. In this particular case at least, this comment seems to betray some hint of an anti-user sentiment.


An appropriate limit is, as I mentioned, editorializing. To be precise, your clients, peers and users should not be the target of your feelings expressed in comments.

An additional litmus test should be professional discipline: express dissatisfaction with a TODO (ideally referencing a bug or discussion issue URL or identifier). Without that reference, it acknowledges an issue without indicating any motivation to solve or remediate the original cause, which is (IMHO) indicative of a careless and lazy attitude.


I think it's even more harmful: "there's a comment saying the code does X, so the code does X" (or in this case, an implicit hint that the code fixes non-X) - in other words, wishful thinking.


Ah, yeah, you're right. I was looking at it in diff[0] and hadn't noticed it lost context.

With the full comment, it seems they're instead bemoaning having to adhere to a user's config. Not sure which is worse...

[0] https://github.com/npm/npm/commit/94227e15eeced836b3d7b3d2b5...



I initially parsed that comment as "this library exists to annoy humans" rather than "we wrote this to satisfy humans, who are annoying in this respect".


There appear to be no unit tests for their entire lib/utils folder. Which includes things like this (misguided) chown utility. https://github.com/npm/npm/tree/release-next/test - and note the lack of testing in the commit linked in the bug report.

I had an inkling that NPM was cancer, but not like this.

Yarn, by contrast, has everything you would expect of a Facebook-engineered library: https://github.com/yarnpkg/yarn/tree/master/__tests__/util

Will be closely evaluating a switch to Yarn for our live apps. This is simply sad.


"everything you would expect of a Facebook-engineered library"

So it collects your personal information, even when not using it, and uses it for profit?


At the risk of troll-engaging: there's a huge difference between using an independently-auditable, multiple-contributing-entity, open-source library that happens to have been originated by a social network's engineering team, and using the identity-tracking public APIs of a closed social network. And both can be useful in certain situations. Be wary, but don't close oneself off to good technology just because it's associated with technology you disapprove of.


You are right that their OSS doesn't spy on you, but...

> don't close oneself off to good technology just because it's associated with technology you disapprove of

I disagree, if you think Facebook is evil, don't use their libraries. Using them gives Facebook positive publicity and good will.


OSS can of course spy on you. You just have a reasonable way to audit that software, and find that out for yourself.

If you can audit it yourself, why not use it? I won't follow you on your quest to rid yourself of things created by corporations that I find to be evil.

It's your opinion, sure, but using a corporation's tech does not ensure positive publicity or good will.


I don’t like Facebook much, but their engineering is very good


Only when your needs align with theirs. Which, truth be told, is fairly common. But they have a tendency to ignore or give low priority to other people's needs, like certain features that don't meet "Facebook scale" or, just, documentation (see: Relay).


I beg to differ [1]. I know people at Facebook, and from what I’ve heard, people just kind of... do stuff; there’s very little organization. Maybe that’s not true anymore, though.

[1] https://news.ycombinator.com/item?id=10066338


Not only does it not have any regression tests, it also fails the CI check, and it's already merged to the next branch.

https://github.com/npm/npm/pull/19889

This kind of thing disintegrates my confidence in npm as a project.


It, in fact, did pass the CI testing. The commit in question with the red X (7dff9d6) was pushed as a branch and then passed here [1].

After passing the test, the PR was made and merged, and the PR test failed because its branch was already merged and Travis CI has races around that.

[1]: https://travis-ci.org/npm/npm/builds/344892198?utm_source=gi...


It's been almost 2 years since the great left-pad debacle[0]. The last major npm issue[1] was less than 2 months ago. While the underlying npm registry security issues will remain for a while (and other languages don't seem to have these issues with their package managers), there doesn't seem to be much I can do other than use yarn. And hope an alternative registry will appear.

Since I 'vote' with my code - this migration page has been helpful today - and I hope it will help others: https://yarnpkg.com/lang/en/docs/migrating-from-npm/

It took me ~5 mins to migrate all of my code from npm to yarn. But I don't have complex CI tasks either.

I use ncu to check updates every couple of days, sometimes more frequently. To further distance myself from npm, can anyone comment on the pros/cons of github repo paths instead of package names in package.json?

[0] https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos

[1] https://github.com/npm/registry/issues/255

*edit: formatting


Github paths can change way more easily than npm packages, users can rewrite git history and break your stuff, and versioning when using them with npm is horrible. NPM also now protects projects from namesquatting and prevents you from deleting them when multiple projects depend on them.


How does NPM protect from namesquatting now?



The left-pad debacle was a registry issue but the current issue is an npm client issue. In my experience, NPM 5 client versions have been shaky and unreliable, and there are problems with popular ecosystems like react-native [0]. I always roll back to npm 4, even on node 9.

[0] https://github.com/facebook/react-native/issues/14209


The quality of what's being delivered since version 5 leaves a lot to be desired. They should really add more people to their team, and better ones, since a lot of people depend on this code. Also, what's up with them not understanding how semver works and releasing pre-release code as a regular version? That's pretty basic, man, and it's rare for someone in OSS to fuck up this badly.


dist-tags/release channels are far superior to classic semver pre-releases.

What is the difference between an unstable and a stable version? Testing by users. So as soon as enough users have tested / enough time has passed without issues, an unstable release becomes stable. In the best case nothing about the code needs to change, the release just needs to be promoted to the stable channel / dist-tag. That is pretty common practice for a lot of software and especially packages on npm.

With dist-tags, you can still make the versions meaningful. Between unstable versions, you can still express in semver terms what type of release it is (patch/minor/major). In classic prereleases, that is not possible. There is no semantic relationship expressed between a -alpha.1 and -alpha.2. Was it a bugfix? Does it add a feature? You don't know.
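
You can see this with the `semver` package itself (the version numbers here are made up):

    // Prerelease identifiers order correctly but carry no patch/minor/major meaning.
    const semver = require('semver');

    semver.gt('1.2.0-alpha.2', '1.2.0-alpha.1');    // true -- but was it a fix or a feature?
    semver.diff('1.2.0-alpha.1', '1.2.0-alpha.2');  // 'prerelease' -- no more detail than that
    semver.diff('1.2.0', '1.3.0');                  // 'minor' -- meaningful by contrast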

The issue is that npm did not mention in their blog post that 5.7 was released on the unstable `next` channel. The issue is not release channels in general.


I agree with most of what you said.

> That is pretty common practice for a lot of software and especially packages on npm.

This is mostly what I have a problem with. Almost everyone in other ecosystems does not do something like this, and there is a very good reason for it. Why not just follow a known and standard process for releasing things? https://twitter.com/maybekatz/status/966730802187792386

Even the NPM devs acknowledge that the current process is faulty and are considering switching to pre-release tags.


and that using npm upgrade -g npm ignores the @latest and @next release channels.


I can also recommend using NSP [0] to check your .lock files for security issues, for what it's worth.

[0] https://disjoint.ca/til/2017/11/10/managing-package-dependen...


> I use ncu to check updates every couple of days

You can use "yarn outdated" for that.


and "yarn upgrade-interactive --latest"


As far as I know it's not feasible to use Github repos instead of npm, at least for client-side code. Most packages are compiled from ES6 (or typescript, jsx etc) to ES5, and the generated code (which is what you want in your project) is usually not included in the git repo.


I just can't feel sorry for folks when I see comments like this one:

> This destroyed 3 production server after a single deploy!

I do think that the developers have a duty to do some testing of their software before putting out releases/updates. However, users also have a duty to perform sufficient testing before they push new versions to their production environments.

In my opinion, it's kinda like losing data because you didn't make and/or test your backups. It's a really crappy way to have to learn a lesson but at least they've finally learned it -- and if they haven't, well, then maybe they will the next time it happens.


What you say is 100% true, but I would go one step further and not have npm installed on any production server.

And it's not specific to npm, I would do the same with gem, pip, cpan, etc. Not to mention curl http://ex.io/install.sh | sudo bash.

Call me old school, but personally, I would avoid installing anything from language-specific package managers. I would instead either build an rpm/deb package for each dependency or, if it's too complex, bundle the dependencies and the application in one package which deploys the bundle under /opt/.

That way I only have one source to check in order to see what is installed on my systems. Also, rpm and dpkg tend to be far better at managing what is installed by each package, and far better at uninstalling everything during cleanups.

Also, mixing a language specific package manager and a distribution package manager can have unforeseen side effects as the two can step on one another (for example, I ran into issues recently with a pip install python-consul overlapping with a yum install salt-minion as both of them download python-requests as a dependency).


The ironic part here is, part of npm's core design - installing all deps to `./node_modules` - makes it extremely easy to "build" on one machine and then zip up the whole project directory which only needs `node` to run.

This is in fact way easier than options for python, ruby (and probably many others) which tend to install versioned dependencies to some shared directory and then add them to the path at runtime. So you're very right, it's trivial to not need npm at all, ever, in production.


Unless you have native modules and are building on a machine that is different (windows, etc) from your deployment target.

And sure, there are ways around that too, but it's not always as simple as copying and pasting the dependencies.


> Unless you have native modules and are building on a machine that is different (windows, etc) from your deployment target.

I really don't understand why people do this. If you deploy on Linux, develop on Linux (and if you deploy on Debian, develop on Debian); if you deploy on Windows, develop on Windows (and you have my sympathy). It just makes all of life so much easier.

As a side note, I'd argue very strongly against picking the deployment environment based on the development environment: choose the best deployment environment, and then specify that developers use it to develop. IMHO there's no deployment environment today which is unsuitable for development (with the exception of Windows, ba-dump-ts), and production is what makes the money and keeps the customers happy.


This is true. Thankfully node-pre-gyp helps here because you can build the native modules once for each target arch/platform. I actually have a node app that uses several native modules, and `npm install` it for an ARM target from an x86 host. It also works cross-OS (e.g. build on MacOS for linux.) The key flags are `npm install --production --target_arch=arm --target_platform=linux`.


This is actually why I like working with C and Go. The compilers are on the build server and I just publish the resulting binary to a barebones production server after the build passes. In the past, working with Django, people always gave me shit for vendoring python packages as part of the build, but it's to avoid having needlessly complex production environments and to remove the need for package managers.


I like this idea. Can you elaborate a little bit on how/where you would fetch the gem package without the language based package manager, and how this is linked to system deps? For dynamic web application deploys the self-contained binary seems ideal, but I've fallen back on Ansible to properly configure the server dependencies, along with gems that may have system dependencies (e.g. psql).


> [...] how/where you would fetch the gem package without the language based package manager, and how this is linked to system deps?

You constrain yourself unnecessarily. You don't need a language-specific package manager at all for deployment. For building a binary package you probably need it, but not connected to the network. And then you need it with network access for downloading source tarballs to include in the source package (SRPM or similar). Note that the source package is an important step, as you want to host all necessary code yourself, without relying on randomly changing policies of package registries like NPM.

The sad part is that language-specific package managers cram together downloading, building, and installing, instead of providing them primarily as three separate steps. (You usually can run each separately, but crippled in some way, e.g. you don't get proper dependency solving for download, or you need to manually order the building of the dependencies.)

> [...] how [binary packages] is linked to system deps?

Normally. Your application requires libpq.so? You mention it in Depends: (or allow the build scripts to detect that). You need sloccount? You mention it in Depends:. You need crontab entries? You put them in /etc/cron.d and add cron to Depends:.


I've done that for a number of languages (ruby, python, perl, java, haskell, elm, c, erlang, lua, tcl, etc.) and their inevitable "yet another package manager" via nix.

Making them all use hashed fixed input sources without allowing any networking or file access during their build and install phases brings me great joy. That's not only useful for deployment but also so you don't end up with development environments that have subtle differences.

This enables me to focus more on my developer job. Getting things done faster and with confidence. I'm sure at some point, just like with programming languages, people will start to ask for more immutability and reproducibility in their OS as well.


Or use a container.


I am the one who reported this ;) In fact it was a single production server that I tried to reinstall 3 times before catching that it was not really one of the commits that was doing this. No data or connectivity was lost (as long as you do not reboot it), you just lose any ssh connection/login.

Should I have done this on a staging server? Sure, but that does not change the fact that I would have had to rebuild the whole server there too. It is not expected that updating npm will kill the complete system it is on... It would be expected to have some deploy failure of some sort.

As previously noted, `npm update -g npm` pulls in version 5.7.0. Version 5.6 is still the latest, but for some obscure reason, if you have this update anywhere in your deploy script you are screwed.


> It is not expected that updating npm will kill the complete system it is on...

I don't disagree with you at all on that. The reality is, however, that sometimes "shit happens".

I'm more of a sysadmin than a developer and I learned many, many years ago that even the smallest little updates can "go wrong" and take the rest of the system with it. After getting burned a few times, even a baby will learn to stop touching a hot stove.

> Should I have done this on a staging server? Sure, but that does not change the fact that I would have had to rebuild the whole server there too.

Yes, but in that case your production servers would still be humming along just fine, no?


The real fix is to not run npm with sudo. Why would you do that in the first place? npm runs install-scripts when you fetch packages, so you basically open up root access for all the packages you download.


This 1000 times. Running npm as sudo is a terrible terrible idea. I remember creating a slack channel in our team called 'never run npm with sudo' and ranting in dramatic fashion to try and overcome the effect of the printed advice which npm used to output in most failure situations to 'try re-running the command with sudo.' This tended to cause developers new to the ecosystem to re-run the command with sudo and create lots of problems for themselves -- in addition to being an extremely bad security practice.

Honestly -- I think npm should be updated to exit without doing anything if it detects it's run with root privileges ...
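
Something like the following guard near the top of the CLI entry point would be enough -- a sketch of the idea, not a proposal for npm's actual code (the NPM_ALLOW_ROOT override is made up for this example):

    // Sketch of a refuse-to-run-as-root guard, similar in spirit to Homebrew's.
    // NPM_ALLOW_ROOT is a hypothetical escape hatch, not a real npm option.
    if (process.getuid && process.getuid() === 0 && !process.env.NPM_ALLOW_ROOT) {
      console.error('Refusing to run as root. Use a user-writable prefix instead.');
      process.exit(1);
    }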


True, but then npm can theoretically still e.g. install a keyboard logger for the current user, so you should remember to never run sudo as that user again. Of course, that's doable, but probably too much to ask from most users.


How is this different than running apt or yum or pacman or most other package managers as sudo?

Somebody has to install system software.


npm is not for managing system software and has not been developed as such. It's a javascript package manager. apt and pacman (and probably yum too, but I've never used it so I can't speak about it) have active maintainers for most packages and the mirrors are well taken care of.

npm is basically a giant array that anyone can add packages to.

I'm using them both accordingly.


I believe that node is installed with npm and both are installed in system directories. I think they should by default point the npm global directory to the user dir and not a system dir.


Depends on how you install them, think that's how the default install works but nvm (which I use) puts everything under user directories. Agree with your thinking though, should be changed in the default install for sure.


> npm is not for managing system software

Debatable but irrelevant.

I'm not saying npm is a good or bad system package manager, just that running arbitrary scripts for requested packages and their dependencies is hardly unique.

It's oblivious to single out npm as a package manager that allows you to be pwned by packages in whatever repo you pull from.


It's not unique, but apt/pacman does not run arbitrary scripts. It runs what has been reviewed by others, while npm packages are often not reviewed by anyone except the author; that's the difference.


So you're saying it's not the tool or packaging format. It's the curation of the repositories that npm/apt/pacman users tend to consume.


System package managers typically have a significantly stronger trust model — the packages are built and signed by a “trusted” entity who typically takes on a role in verifying that the packages they sign and distribute meet some standard of sanity.

npm gathers sources from a central registry which anyone can upload packages to — and furthermore package references don’t even have to be references to entities in the registry but can also be links to arbitrary git repos ...

Furthermore the set of dependencies to actually be downloaded is quite a bit more dynamic with npm I think because of the version compatibility satisfaction algorithm employed by npm — so it’s inherently harder to statically analyze the set of packages a given npm install execution will install vs rpm/apt.


> It is not expected that updating npm will kill the complete system it is on...

Yeah, but that is why you test your deployments BEFORE deploying them.

Hell would be had had any developer at my company run any such command on a production server. The notion of even running a command at the terminal on a production server is scary.

Things like this should be done on build servers, which are in general throwaway. Your build server should produce an artifact that can then be deployed to your staging servers and, if all is well, THEN production servers. npm is a build tool and should not be installed or run on production servers -- for many reasons beyond just stupid stuff like this.


This response annoys me, because it's essentially victim-blaming.

Yes, ideally you have some automation and staging in your server setup. We're grown-ups. We understand this. But it ignores many other dangerous possibilities here.

Not everyone is blessed with working in a mature, well-funded environment full of experts. Maybe we're talking about a new or small organisation that simply doesn't have the resources and/or knowledge to isolate things with containers or VMs and related admin tools.

Maybe even taking out a staging server is still going to waste significant time resetting everything, blocking other development/deployment jobs in the meantime.

Maybe we're not talking about a server at all, but a developer's personal development workstation where they just use NPM to install a few Node-based tools.

It's all very well saying npm shouldn't be run on production servers, but that doesn't really address the fundamental problem. Do we also ban system package managers, and say the only way to deploy anything is via some sort of imaging tool? What if there's an equivalent screw-up in that orchestration tool and it bricks all 100 servers at once?


I'm not sure that reviewing what went wrong and how to prevent that in the future is victim-blaming. Problems happen, and sometimes you need to change the way you do things to prevent problems in the future. Victim-blaming would be telling a victim to change when they really shouldn't need to. It's always a trade-off between security and usability, and in the case of the OP, he should have leaned more towards security. Did OP cause this? No, but OP could have prevented this. Is that victim-blaming? I don't think so.

>[T]hat doesn't really address the fundamental problem.

The fundamental problem of human error is unfixable. Human error can be mitigated through more robust systems, such as separate staging and production environments. Is encouraging more robust protection victim-blaming? I don't think that it is.


> This response annoys me, because it's essentially victim-blaming.

No. The victim is the end-user who suffered from the production outage. jguimont is a professional who has an obligation to his clients.

Adopting a third-party tool or library does not absolve you of the responsibilities that you have to your users. You choose your tools and your libraries.

Both npm and jguimont screwed up here. Mistakes happen, and I certainly wouldn't judge anyone harshly for the occasional learning experience. But, the first step to learning from your mistake is admitting that you made one. jguimont has done that, and I respect him for it.


> This response annoys me, because it's essentially victim-blaming.

I really really dislike this comparison, and it frankly feels intellectually dishonest to see it come up.

Victim-blaming, as it's used in usual discourse, implies that there was a malicious actor that intentionally did something bad to someone else, and that you're telling the victim that they could have avoided malicious actors by modifying their behavior in unreasonable ways that reduce their freedom of movement/expression/etc.

This issue is a result of human error, something you cannot hope to globally eliminate. It's always easy to point fingers as someone who's screwed up, but we all make mistakes. All of us, without exception. That doesn't absolve the npm developers of their responsibility in this, but it is prudent, as a user of the software, to put process in place to ensure that the damage to your systems is limited (or if possible, eliminated) in the face of these kinds of human error.

Running npm on a production server is foolish. Running npm as root on a production server is... worse.

> Do we also ban system package managers, and say the only way to deploy anything is via some sort of imaging tool?

Why not? If your risk tolerance is that low, and you've identified the package manager as a large enough risk to your business, then yes, you do this.

> What if there's an equivalent screw-up in that orchestration tool and it bricks all 100 servers at once?

Again, if your risk profile thinks this is a problem, then you don't do in-place upgrades. You boot new servers with the new software version and swap them in, with the ability to back them out if there's a problem.

It's all a cost/benefit trade off. If the cost of what you believe is a likely failure in any of these elements is higher than the cost of building tooling and process to mitigate the risk of it affecting you, then you do it.

Certainly people have varying levels of maturity in their development and deployment pipeline. That doesn't mean that there isn't always room for improvement. At the end of the day, it's about outcomes: someone in that GH thread lost 3 production boxes due to this issue. They didn't have to if they practiced better hygiene, and I bet because of this, they're going to change their process. And that's great! Sure, blowing away a build box, staging server, or a developer's laptop sucks as well, and requires time and effort to fix, but at least in those cases no customers would be affected.

If you as the "victim" are just going to be a cowboy, then you should expect things like this to happen from time to time. If you want to reduce the risk and incidence of it happening, you change your process so you don't do risky things on production servers. Suggesting that people improve their deployment process isn't "victim blaming"; it's pushing people toward better engineering practices.


I think there is a big difference here. I am responding to a reply that wanted to remove any responsibility for running such nonsense as npm on a production server -- note I did not reply to "crap I hosed my stuff".

But let's take a closer look at your comments.

> Not everyone is blessed with working in a mature, well-funded environment full of experts. Maybe we're talking about a new or small organisation that simply doesn't have the resources and/or knowledge to isolate things with containers or VMs and related admin tools.

These are not excuses for not knowing your trade. And the size and funding of your environment should not stop you from practicing your trade well.

> Maybe even taking out a staging server is still going to waste significant time resetting everything, blocking other development/deployment jobs in the meantime.

I fundamentally disagree. Having a staging environment will always cut costs and can't EVER be considered to "waste significant time". It can only save time and improve your product. It's these types of attitudes that result in your service going down, loss of real revenue, and ultimately the failure of the project. Taking time to set up proper staging environments always pays back in spades.

> Maybe we're not talking about a server at all, but a developer's personal development workstation where they just use NPM to install a few Node-based tools.

Yeah, maybe we are talking about a developer's personal workstation? Nope, we are talking about production servers. Nuking a developer's workstation is not even on the same scale as nuking a production system. And had somebody complained about nuking their dev environment, my reply would have been about not running tools as root.

> Maybe we're not talking about a server at all, but a developer's personal development workstation where they just use NPM to install a few Node-based tools.

Again, I am okay with a developer's system being nuked, at least it was not production!

> It's all very well saying npm shouldn't be run on production servers, but that doesn't really address the fundamental problem. Do we also ban system package managers, and say the only way to deploy anything is via some sort of imaging tool? What if there's an equivalent screw-up in that orchestration tool and it bricks all 100 servers at once?

I would not advise running system package managers on production servers either -- not unless your staging environment had passed such a test first. But that being said, I am a big fan of fresh install and migrate -- where the migration code is something I own and can test to ensure it works before using it. If you have 100 servers then you should have the resources to handle setting up testing environments to ensure your production rolls out. You should also not update 100 servers at the same time.

Production is production is production is production is production! You don't run things for the first time ever in production. If you want your product, company, whatever to succeed, then there really is NO excuse for not having good practices when building and deploying software. You can come up with 2^64 what-ifs, but if you are running something for the first time and it nukes your system, you are at fault. Things like testing environments or staging environments weren't created just to dream about and talk about when things go bad after deploying directly to production. These things came about because they bring real value to a project. The notion that these things are a waste, or cost too much, is just nonsense.

Anybody in this industry of deploying software to servers needs to stand up to the idea that these good practices are too costly. These are the ideas and notions I expect from executive teams who have never coded a line of code, accountants trying to save money, and managers who only care about the next quarter. I don't expect to find these ideas on sites like HN or from peers in the industry, but when I do, I think it is important to take a hard line and not let the notion of bad programming and deployment practices go without rebuke for fear of hurting somebody's feelings. So while I clearly toed a hard line in this reply, Silhouette, please do not take this as a personal rebuke or attack on you. I am upset with the ideas, and with the notion that we have to settle for less and get results like production servers falling on their faces, when we as an industry already know the answers to the problem and have the solutions to minimize downtime and provide truly awesome software to others.


> These are not excuses for not knowing your trade. And the size and funding of your environment should not stop you from practicing your trade well.

It's all very well saying "know your trade", but the reality is that most organisations aren't running state-of-the-art orchestration tools. Heck, not so many years ago, many of these modern tools didn't even exist yet, and they've had plenty of problems of their own that make keeping up with the bleeding edge dangerous in itself.

So, while it might not be ideal compared with modern management tools, I think it's neither unusual nor unreasonable in many real world environments for someone to expect to deploy a standard set of packages on a production server using the normal deployment tools and a controlled configuration file, and expect it to work without destroying that world.

Speaking of funding, that affects everything in an environment like a bootstrapped startup or a small non-profit, even things like whether you can afford physically separate machines to run each level of testing/staging/whatever, or whether you can afford to hire someone who understands the recent generation of tools that deploy a snapshot in one form or another instead. It's totally unrealistic to expect this sort of organisation to have mature, state-of-the-art configuration management and deployment systems in place from day one.

Hopefully even in the early stages you would still have some sort of staging set up, and I think you misread my comment there; I was in no way advocating not having staging servers. I was only observing that even if you take out staging catastrophically rather than production, it can still be a pain to set everything back up, just less of a pain than losing production while you're doing it.

> Again, I am okay with a developer's system being nuked, at least it was not production!

You're OK with a developer's entire workstation being taken out, at best losing everything they've done since last night's backup and then probably taking another half-day to restore from backups if everything goes smoothly?

I'm not OK with that, and somehow I doubt most developers would be either.

> If you have 100 servers then you should have the resources to handle setting up testing environments to ensure your production rolls out. You should also not update 100 servers at the same time.

Right, but how many organisations have 100 production servers? If you've reached that scale, you're already probably in some sort of 1% group, and obviously you might have far more resources available to deploy management infrastructure around those servers.

> Anybody in this industry of deploying software to servers needs to stand up to the idea that these good practices are too costly.

That philosophy might be something you can afford once you're no longer operating in small/early mode, if you get that far. But while you're still worrying about say getting from MVP to ramen profitability in your startup, everything is too costly, and you never have the luxury of doing the ideal thing everywhere right now. Hoping for basic staging isn't out of the question. Hoping for a full-time ops person to deploy the best-in-class orchestration tools that came out last week because you can't trust running apt to install security updates on your production Debian servers without destroying them is probably beyond your wildest dreams.

It's not that I disagree with you on the ideal situation. I just see that an ideal is what it is. Many, many organisations will not have the luxury of doing everything ideally, because they lack the time, people, budget, knowledge or omnipotence to do it all at once. That's the nature of running businesses. It's not unreasonable to expect that when you have to prioritise, the risk of your basic package management tools nuking your entire system should be negligible, and I still think it's unfair to criticise the victims of such a spectacular screw-up until you've walked a mile in their shoes and seen what they would have had to give up somewhere else to get that extra level of protection against something that obviously should never have happened.


I get the feeling you think I am okay with this bug. I am not. I am not okay with any system getting hosed, but I am very not okay with production servers being destroyed.

Your workflow is like a good set of armor. You have different stages where things will fail -- and they will fail. The goal of your armor is to prevent failure of the most important thing -- and that is your production servers. The thing that brings in money, customers, users, whatever: the reason you are here.

So yes, if I had to choose between a developer's workstation getting destroyed or a production server, I would 10 times out of 10 pick the developer's workstation.

> It's all very well saying "know your trade", but the reality is that most organisations aren't running state-of-the-art orchestration tools. Heck, not so many years ago, many of these modern tools didn't even exist yet, and they've had plenty of problems of their own that make keeping up with the bleeding edge dangerous in itself.

I am not suggesting any such thing. Nobody needs state-of-the-art orchestration tools. If you ever happen to bump into any of my other posts, you will see I argue against most things like Kubernetes. The problem at hand is a very well known problem, and the solutions for preventing production server failures -- or at least minimizing them -- have been around for at least as long as, if not longer than, the web itself. Maybe part of the problem is that we have wrapped ourselves in these tools to make things seem easy, and have lost basic system administration skills, because the things you are describing make it sound like I am asking you to be Elon Musk and land rockets on floating barges. I am not. I am asking for simple and free tools to be used to automate the building of artifacts that can then be deployed to simple VMs or servers and verified to not cause adverse effects, and then for the same artifacts to be deployed to your production servers. All of the tools needed to do this are free. All of these notions are things that should have been taught in school or in on-the-job training. This is apparently not the case, and this is why posts like mine are being made to point out how it should be done, so maybe somebody reading this will learn something new.

> Hopefully even in the early stages you would still have some sort of staging set up, and I think you misread my comment there; I was in no way advocating not having staging servers. I was only observing that even if you take out staging catastrophically rather than production, it can still be a pain to set everything back up, just less of a pain than losing production while you're doing it.

Please see my comments about armor above. It's okay, and it will happen: something will fail. The goal is to make sure it is not your production server.

> Right, but how many organisations have 100 production servers? If you've reached that scale, you're already probably in some sort of 1% group, and obviously you might have far more resources available to deploy management infrastructure around those servers.

You introduced the 100 servers number, and that is why I used it. If you have 100 servers, then your first argument about not having "tools", even though I find it faulty in its own right, is blasted away by anybody with any real number of servers.

> Right, but how many organisations have 100 production servers? If you've reached that scale, you're already probably in some sort of 1% group, and obviously you might have far more resources available to deploy management infrastructure around those servers.

100 servers is really not that many. But I feel this highlights my point even more. If you are a small organization and have a few servers, that means each server represents a larger % of the workload and business. This in turn really means you can't afford not to have good practices in place to avoid downtime on servers that represent a much larger % of the workload should one go down.

> That philosophy might be something you can afford once you're no longer operating in small/early mode, if you get that far. But while you're still worrying about say getting from MVP to ramen profitability in your startup, everything is too costly, and you never have the luxury of doing the ideal thing everywhere right now. Hoping for basic staging isn't out of the question. Hoping for a full-time ops person to deploy the best-in-class orchestration tools that came out last week because you can't trust running apt to install security updates on your production Debian servers without destroying them is probably beyond your wildest dreams.

You can't afford not to do these things. So you get to MVP and your service crashes, and now you are a big zero because you lost all your initial clients. Please don't gamble with both the investors' money and the developers you hire to work for you. Writing software is not pulling a lever on a slot machine. It takes real skill and attention to detail to pull off. There is no point in putting on your best tuxedo top only to enter the ballroom without pants on. You will look good from the car, but be the laughingstock of the event.

> Many, many organisations will not have the luxury of doing everything ideally, because they lack the time, people, budget, knowledge or omnipotence to do it all at once,

These are not luxuries. They are a must. If you can't do these things, then you don't have a product, a budget, or people that are suitable for the job at hand. You must build your foundation on rock, and if you can't afford that rock then you are not ready to start building anything other than a hobby.


Actually, you're answering your own question: rebuilding a staging server is nothing compared to having an issue in production.

It's a lesson and a reminder to everyone out there: be careful.

When the dev env is broken, alpha is skipped, staging is unusable, and you then test in production, you sure like to live in hell.


But you should have still tried it on the Staging server to begin with.

That's the responsible method. The fact that you'd have to rebuild your staging server is exactly why you should have tested it there.

Sure, NPM shouldn't have broken this, but any number of things can cause issues during deployment and it's your job to check for them before pushing it out.


> Sure, but that does not change the fact that I would have had to rebuild the whole server there too.

   chef-client -z -r 'my-cookbook::npm_web_server'
Obviously the behavior of NPM absolutely sucks and is a total mess here, but "I had to rebuild the server", in 2018, is not nearly the material complaint it was a decade ago.

The tradeoff of using rapidly-evolving tools with minimal oversight from the people creating them is that sometimes stuff blows up and not even always for good reasons. It is incumbent upon you, as the recipient of this enormous, jaw-dropping raft of free stuff that occasionally explodes, to write code and operate your systems defensively. Part of which is implementing those systems to be repeatable and quickly reinstantiated.

If you do not like this tradeoff, you have other options as well.


> It is not expected that updating npm will kill the complete system it is on... It would be expected to have some deploy failure of some sort.

"It is not expected that" is the definition of unexpected behavior, which is the very reason why we use staging servers. So your message is essentially "I didn't use a server meant to check for unexpected behavior, because I didn't expect that behavior to happen". Well, yeah, that's the point.

Also, I'm really not sure what your smiley is trying to convey here, and of all the possibilities I can't see one that's positive and contributes to the conversation. It's really unneeded; please refrain from doing that.


If he'd tested it beforehand the comment might instead be "this destroyed my laptop".

You have a responsibility to test before releasing to production, yes. But the amount of fucked up your program has to be for `sudo ___ --help` to wreck the operating system, the unexpectedness of that result... IMO attention should be focused here on the irresponsibility of the npm team, not their users.


The fact that --help can actually DO something makes me quite upset.

That means npm is causing side effects even before reading what the user wants, or is blatantly ignoring the user's request.


This may be the only time I'll see accessing --help docs result in borderline malware (it's debatable that this 'sudo commandeering' was intentional by design, but as shortsighted as it gets...)


To play devil's advocate, this was a dev version, without any UA or E2E testing, tagged as a major release. This after there have been major bugs for several releases. This isn't 0.7.0.

Perhaps the problem is that the stability of the world's fastest-growing development platform hangs on the implementation of best practices by a two-person developer team.

NPM needs to step up its game or we need to make something like yarn the standard.


Yarn is already the standard in every large organization I know of that uses JS. It is still faster and more reliable than even the latest NPM versions, so I don't see why anyone is waiting to switch.


What strikes me as odd is that there are a lot of immature comments in that thread.


More than the clueless “+1”, I have also noticed an uptick in basically troll comments on GitHub. And while I understand that this particular bug is really bad, GitHub for me has always been about getting shit done and solving problems, a nice escape from the “normal” Internet with all its drama. The fact that people seem to be actively enjoying the drama and are even trying to fuel it—even if this is currently limited to super high profile bugs like this—is concerning.


There is nothing odd about it. The JS community today attracts the most script kiddies.

During the bubble, it was the same with PHP, and some of us were part of it.

Youngsters must start to code somewhere.


I hope nobody ever finds the posts I made on php.net as a teenager circa 2005.


That situation can be easily fixed in RPM-based Linux with rpm --setperms and rpm --setugids.

Correction will be a little harder on Debian derivatives, and the whole incident could have been completely prevented on Solaris with file-mac-profile.
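
For anyone cleaning up after this on an RPM-based box, a rough sketch of that recovery (it can only repair files the rpm database knows about, and resetting ownership before permissions avoids chown clearing setuid bits):

    for pkg in $(rpm -qa); do
        rpm --setugids "$pkg"   # restore the owner/group recorded in the package
        rpm --setperms "$pkg"   # then restore the recorded permission bits
    done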


Good lord, when I try to follow the link I get the Unicorn error page with the message 'This page is taking way too long to load. Sorry about that. Please try refreshing and contact us if the problem persists.'

Has this issue provoked so much outrage that GitHub can't handle the constant stream of angry emojis on the issue comment thread?


I had the same issue. Internet archive link: https://web.archive.org/web/20180222160101/https://github.co...


I can't even get to that page as it times out as well.

EDIT: I opened the original link in incognito mode and the page seemed to load fine.


Yeah that entire thread is a dumpster fire.


Maybe Github was deploying with npm.


Log out of Github and then reload the page. WFM.


Does not work for me and I am not logged into github.


Apart from this being a horrific bug, why are people running npm as root? Why don't they install it somewhere below $HOME and modify $PATH? npm works fine without root permissions.

Everything is super dangerous as root; one should avoid using root at all costs unless there is no other way.


`sudo npm install -g` is one of several examples of the normalization of deviance rife in the NodeJS community. Most command-line utilities distributed through NPM recommend running as root (implicitly—because they all suggest installing them as global packages). Here's[1] Microsoft's instructions to install the TypeScript compiler, for example.

NPM's awfulness notwithstanding, it's trivial to write a shell script to do what you say and add a symlink to ~/bin. But everyone on StackOverflow will tell each other "just run it with sudo", and they do, and then quickly move on with their lives (presumably to be followed with "and break things"). Instead of doing the right thing, raising their hackles about how poorly NPM is designed, and holding its community leaders accountable.

1. https://github.com/Microsoft/TypeScript/blob/b29e0c9e3ab2471...


~/bin is not in the PATH for many systems. So, if you want beginners to be able to use your program, you'd have to provide instructions on how to change the PATH on each platform that requires it. Then you have to hope that they don't accidentally screw anything up in their profile scripts while making the changes, as I did back when I was new to Linux.

It's extra-frustrating writing those instructions, because not only are they platform-specific, but they are different depending on what the user has already done to their system. If some other tool told them to create ~/.bash_profile or ~/.bash_login, the more shell-agnostic option of modifying ~/.profile will no longer work.

Figuring out where to change the PATH is also confusing, and you might come across solutions that seem to work, but cause weird errors later down the road. For example, the tool being unavailable when invoked remotely, because you only changed the PATH for interactive shells.

It's understandable that people use sudo when they don't see it causing any obvious problems. Installing user-local packages should be one simple command, and it's a failure of operating systems and package managers that it's not. As it stands, correct usage is much harder than incorrect usage, and this is the result.
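
For what it's worth, the minimal version of those instructions on a stock bash setup is something like this (a sketch only; as noted above, the right profile file varies by shell and by what is already in the user's dotfiles):

    mkdir -p ~/bin
    # append the literal export line to the login profile
    echo 'export PATH="$HOME/bin:$PATH"' >> ~/.profile
    # takes effect at next login, or immediately in the current shell:
    . ~/.profile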


Newer distributions seem to have standardized on always having ~/.local/bin on the PATH, so this particular problem should be solved in a few years.


You can just install packages as dev dependencies. You can then run them manually from `node_modules/.bin` or with npm-run, and most tools will pick up the local deps. No need to install anything globally with NPM.
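
Roughly like this, using typescript purely as an example package:

    npm install --save-dev typescript
    ./node_modules/.bin/tsc --version
    # or add "scripts": { "build": "tsc" } to package.json and let
    # npm run put node_modules/.bin on PATH for you:
    npm run build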


If I run npm without -g, it complains about missing files, fills my home directory with node_modules junk, and creates a package-lock.json I'm told to "commit". Why would I want to commit it, and where?

I'm sure you are now going to tell me there is an easy way to fix that too, and I'd be happy if there was, but for me I just want to use npm to install a program or two.


> for me I just want to use npm to install a program or two.

I don't think you're introducing new information. When I wrote my original comment, it was intended to fully acknowledge that this is what's at play. And rereading it, seems like it does that well enough, but I might be wrong.

But in any case, "I'll run it with sudo then" is absolutely the wrong thing to do, regardless of the bad choices on NPM's part—on par with "I just want to use my online banking, so I'll click through this certificate error", or "I just want to take some notes, so I'll grant this mobile app the full permissions it's asking for".

There's a reason I used the phrase "normalization of deviance". It's a phrase that came out of post mortem investigations into simple process failures at NASA that led to proper (catastrophic, life-ending) failures, and an urge to find an answer to the question, "how in the world did we get here?"


The correct way is to use `npm install -g --prefix` to install it into a directory on your path which is writeable by your current user.

I don't think I've ever seen the install instructions for a npm-packaged tool actually say to do this.
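
For the record, it looks something like this (assuming ~/.local/bin is already on your PATH; the package name is a placeholder):

    npm install -g --prefix ~/.local some-cli-tool
    # the executable symlink lands in ~/.local/bin,
    # the package itself under ~/.local/lib/node_modules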


Thanks, I'll give that a try in future.


I'm mostly a Windows user, so maybe I'm misunderstanding *nix stuff here, but I don't see how recommending npm i -g package is remotely the same as recommending sudo npm. Could you clarify?


On Linux you can’t install globally without sudo. Unlike on Windows, global means system-global, not user-global. Such a thing simply does not exist for Linux’s npm.

If they fixed that, 99% of these issues would go away. This is actually an example of something from the node universe working better on Windows.


> On Linux you can’t install globally without sudo.

That's not right; you've just got npm set up wrong. https://docs.npmjs.com/getting-started/fixing-npm-permission...


I know a lot of inexperienced devs who run things as root to try to fix problems. It’s the first thing some of them do (after restarting). I think it’s a lack of awareness due to missing knowledge. When you don’t know much about how your system works, you can’t effectively troubleshoot it.


Because sometimes you are writing software that interacts with hardware at a root level. This is really annoying advice you're giving, since it's absolute and without context. No, not everything is "super dangerous" with sudo. Get that FUD outta here!


Running npm as root is _super dangerous_ - full stop. npm install can run a large amount of arbitrary code downloaded from the internet via postinstall script hooks.

It's absolutely banana-pants crazy to run `npm install` as the root user in any circumstance.


It's banana-pants crazy to run npm at all. Even given all the wisdom about running as sudo, best practices etc., this team released an update where `sudo npm --help` breaks the operating system. The recklessness and confusion of ideas that indicates... postinstall hooks, I don't even want npm running. This isn't even the first such shenanigans.


If it runs untrusted code from the Internet, surely it doesn't matter much if you do it as root for most practical purposes? It could still run that spam relay, botnet software, exfiltrate your secrets and install that keylogger.


Is your argument that one is better off to run $(potentially dangerous command) with sudo privileges because it’s also risky to run it without?


NPM does more than just npm install... Not full stop. How can you talk about a tool you don't understand?


Why would you want to run any npm command with root privileges ...?

‘npm run’ seems like it would be the most tempting, but it still seems like a bad idea... It seems to me that for most problems where people are likely to use ‘npm run’ as part of the solution, the developer should be able to arrange things such that they don’t need to run those npm workflows with root privileges... for other tasks where they can’t get away from the need, it still seems dangerous; it’s always dangerous to run even trusted tooling with elevated privileges...

If you do use ‘sudo npm’ with pretty much any command (even run), you are more likely to have to run additional npm commands with sudo at some point in the future as well, which is also bad, because it means you are probably not using the tools in the best way.


Yeah, it would really suck if those files installed by my operating system--the ones that are trivially verified and easily replaced as they are literally the same on every single computer--were to be damaged. Things are much safer if I run them as the user which owns all of my data and which I spend all my time logged in as, right? I mean, at least I haven't stupidly added any part of my home directory to my path, so I can trust that the software I am running was installed by someone running as root... oh wait :/.

The only reason root even exists on a computer that has two users (root and the user that owns all the data) is to make sure that no software is installed on the system except by root. If you have things set up to also let you install software as the user that isn't root, then you have somehow missed the entire point of privilege separation and should just log in as root and do everything as root, as that is at that point fully equivalent.


This isn’t true with the security model employed in eg macOS.

Many places where my most important private data is stored (keychain for example) are not accessible without privilege escalation by processes running with my uid.

There are of course local root exploits and plenty of holes, but for many information stores on the system I definitely want there to be an additional privilege-escalation requirement for any semi-trusted code I choose to run. Furthermore, over time I want more of these personal information sources stored in a way that requires escalation, and I am definitely not going to defeat those future advancements before their release by running code with maximum system privilege for the sake of the defeatism of a previous age. It is also ridiculous to suggest that I am better off running untrusted code with more privileges rather than less; giving untrusted code maximum privileges makes everything about trying to be as practically secure as I can manage orders of magnitude harder. Running semi-trusted code with maximum privileges makes it even easier for nefarious code to exploit me in ways I will never detect, simply by leaving more of the easy-to-write, hard-to-detect exploit vectors completely unimpeded.


npm should never interact with hardware; its job is to install and manage packages. I could understand that you have to run nodejs as root, since it actually can use the hardware.

But using npm with root user? I can't think of a single usecase.


I am not a node guy but as far as I understand nodejs is a webserver, no? _Never_ run any webserver as root. This is just bad practice.


No, Node is a runtime for javascript code, using the same V8 engine from the Chrome browser. It is similar to the JVM runtime for Java code and the CLR for C#, although of course there is no intermediate compilation step for javascript.

A webserver is one of many things that can be run using Node+JS, the point being that it's an entire runtime and can do pretty much anything any other language can do.


Well, think harder. Npm runs scripts from package.json. Most folks wouldn't think twice about running sudo npm start as a replacement for sudo node. I sure wouldn't think npm would start mucking with file permissions.


I'm sorry, but people not being able to figure out where to put `sudo` is not a use case for using sudo...

Instead of running `sudo npm start`, have `scripts.start` have the value `sudo node index.js` if you want.

But then again, I'm not "most folks", I try to think when I am the root user and don't run third-party code willy-nilly when I am.


The reason raw hardware access is limited to root is usually because it's "super dangerous", i.e. the consequences of your actions can be more far-reaching than usual and mistakes might have you lose more than just time.


Right, but that means your service is running with elevated privileges; that does not mean your build tool needs to as well.

Furthermore, if you do have an application that requires root-level access, then the parts that do should be isolated from the parts that don't. You don't just get a blank check to run as root because you need to bind to a low port.


http://blog.npmjs.org/post/171169301000/v571

  Thankfully, it only affected users running `npm@next`, which is part of our staggered release system

  #STOPUSINGPRERELEASEWITHSUDO
Really now?

  #ANGRYORANGEWEBSITE #PEOPLEGOTMAD
:)


This is just absolutely unprofessional.

All tags:

#ANGRYORANGEWEBSITE #PEOPLEGOTMAD #STOPUSINGPRERELEASESWITHSUDO #CLIHOTFIX #WEGOTUBB #LITERALLYKILLEDGITHUB

Author:

FEBRUARY 22, 2018 (9:53 AM) @MAYBEKATZ

https://web.archive.org/web/20180222201315/http://blog.npmjs...


I guess they're just refusing to acknowledge that upgrading npm installed the "prerelease" version?


Reminds me of a recent Yarn problem, overwriting which(1). https://github.com/yarnpkg/yarn/issues/4205


Both of these issues seem like a timely reminder that everyday Linux desperately needs a proper application management and security model.

Installing software where your options are

1. running as a regular user, and the install script can put whatever it wants within your user's directories

or

2. running as root, and the install script can do literally anything to anywhere on your system

is not fit for purpose, when the risks from malice and incompetence are both reaching new heights almost daily.

These are systems we use for real work, but even smartphones and their toy app stores do better now. How do we still not have controls so applications can always be installed/uninstalled in a controlled way, can only access files and other system resources that are relevant to their own operation, and so on?


> everyday Linux desperately needs a proper application management

You mean something that won't allow two packages to own the same file? Something like rpm or apt?


You probably meant rpm and dpkg... Or you'd have to compare yum, zypper, apt, pacman and whatever else is out there.

But, I'm certain the parent didn't mean that. Dpkg and rpm both allow packages to overwrite files from each other and, more dangerously, allow fully authorized post-install scripts. And those are often necessary for sane package management (create a user, initialize a database), but they could be exploited to wreak havoc on the system.


Yes, I meant dpkg.

Not sure about dpkg, but rpm does not allow two packages to own the same file. If you try to install a package that contains a file owned by another, already installed package, the installation will fail (you can try that by installing an amd64 package that owns something in /usr/share and then trying to install the i386 version). Yes, post-install scripts are dangerous, and the rpm folks are taking small steps to phase them out: https://www.youtube.com/watch?v=kE-8ZRISFqA


dpkg will throw a fit in the same way.


It will throw an error message, which the user will probably ignore before installing anyway.

This will ultimately cause errors down the line. Maybe not right now, but eventually problems will occur.

Showing a warning is great, but not needing that warning would be preferable.

But most distributions are already working on solutions to that. Ubuntu is working on Snaps [0], for example, and I remember hearing about something similar from Red Hat as well.

[0] https://snapcraft.io/


No, it will not install the package at all:

    $ mkdir root/bin/
    $ (echo '#!/bin/sh'; echo 'echo "hi"') > root/bin/ls
    $ chmod +x root/bin/ls
    $ fpm -s dir -t deb -n bad-ls -v 1.0 -C `pwd`/root .
    Created package {:path=>"bad-ls_1.0_amd64.deb"}
    $ dpkg-deb -c bad-ls_1.0_amd64.deb
    drwxrwxr-x 0/0               0 2018-02-22 10:44 ./
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/share/
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/share/doc/
    drwxr-xr-x 0/0               0 2018-02-22 10:44 ./usr/share/doc/bad-ls/
    -rw-r--r-- 0/0             142 2018-02-22 10:44 ./usr/share/doc/bad-ls/changelog.gz
    drwxrwxr-x 0/0               0 2018-02-22 10:44 ./bin/
    -rwxrwxr-x 0/0              20 2018-02-22 10:42 ./bin/ls
    $ sudo dpkg -i bad-ls_1.0_amd64.deb
    Selecting previously unselected package bad-ls.
    (Reading database ... 837129 files and directories currently installed.)
    Preparing to unpack bad-ls_1.0_amd64.deb ...
    Unpacking bad-ls (1.0) ...
    dpkg: error processing archive bad-ls_1.0_amd64.deb (--install):
     trying to overwrite '/bin/ls', which is also in package coreutils 8.28-1
    Errors were encountered while processing:
     bad-ls_1.0_amd64.deb
    $ ls -l /bin/ls
    -rwxr-xr-x 1 root root 134792 Oct  2 10:51 /bin/ls*
It doesn't just warn and leave things in a bad state.


This is strange. I recently ran into an /etc/ file conflict at work.

The dpkg error message gave me the flag I could use to do it anyway. It was basically just

  !! double click -> middle click


Actually, it will throw an error - on which the higher-level (libapt) tools above dpkg will abort, and going directly to dpkg with a --force-whatever is not quite as easy as clicking "yeah, just do it already". Not to mention that I have needed that twice in a decade, in rather obscure cases.

But yeah, containerizing the apps is probably a way forward, which sidesteps whole classes of issues.


dpkg won't allow one package to overwrite a file from another unless you pass it --force-overwrite, which is not the default.


> You mean something that won't allow two packages to own the same file? Something like rpm or apt?

No, not really.

For one thing, package managers are only useful on packages supplied by the distro (or otherwise bundled using that convention), and we need something that allows for installing (and uninstalling, and backing up configurations for, and...) software safely and systematically in the general case.

For another thing, even packages installed with a distro's own package manager can typically dump whatever files they want wherever they want, rather than having the OS restrict them to a controlled environment.


> For one thing, package managers are only useful on packages supplied by the distro (or otherwise bundled using that convention), and we need something that allows for installing (and uninstalling, and backing up configurations for, and...) software safely and systematically in the general case.

There's nothing that limits rpm/deb to the distribution. Anyone who publishes a tarball with software can publish an rpm/deb as well. Many do.

> For another thing, even packages installed with a distro's own package manager can typically dump whatever files they want wherever they want, rather than having the OS restrict them to a controlled environment.

The list of files in the manifest is checked beforehand, and if there's a conflict with an existing package, the installation is aborted.


> There's nothing that limits rpm/deb to the distribution. Anyone who publishes a tarball with software can publish an rpm/deb as well. Many do.

Hence my "or otherwise bundled..." note.

But you're still only thinking in terms of packages that are bundled and installed via the system tool. Anything not installed via that tool can typically do whatever it wants if its scripts run as root, and anything that is installed via that tool typically won't be aware of anything that wasn't and will happily write all over it with no mechanism for backing up what was there before or reverting a breaking change.

The point is that relying on some voluntary convention like this isn't good enough. A modern OS should enforce mandatory restrictions on all installed software. We should be able to do things like checking exactly what is installed, or uninstalling something unwanted with or without also uninstalling any now-unused dependencies or any configuration data, and we should be able to do these things reliably, safely, and without any requirement for the software itself to be "well behaved" in any particular way.


No, they do not have to be bundled. The vendor of a given piece of software has to support it.

Vendor A, supporting system B with its packaging system .xyz, makes deliverables available as a .xyz package. Everything is fine, stuff works as it should.

Vendor C makes its deliverable a self-extracting installer that happens to run on system B and needs your permission/credentials to install on your system. If you do that without any auditing, it's your problem if it overwrites something. You did give the permission (you had to type in that password) and didn't insist on proper packaging.

Because the system provides the facility to achieve what you want; you just chose to override it. You own all the consequences of that.

If you want a modern OS to enforce mandatory restrictions on all installed software, modify your sudoers file to only allow running rpm/yum or dpkg/apt, because packages installed via those means fulfil the conditions that you describe.


> If you do that without any auditing, it's your problem if it overwrites something.

I don't know whether you're genuinely missing my point or just trolling, but this doesn't seem to be a very productive discussion so this will be my last comment here.

Your argument seems akin to saying that you could choose to install only open source software, and to personally audit every line of code in that software including all its dependencies, so if you don't do that then it's your own fault if something bad happens. If you're both a world class programmer and a security expert, and yet bizarrely you have ample free time available and nothing better to do with it, that might work. In the real world, it's totally impractical, and a much better solution is to operate according to the principle of least privilege, enforced at the level of the OS, without having to rely on conventions and/or good will.

> If you want a modern OS to enforce mandatory restrictions on all installed software, modify your sudoers file to only allow running rpm/yum or dpkg/apt, because packages installed via those means fulfil the conditions that you describe.

No, they don't, as I've repeatedly tried to explain. At best, even if packages are available and properly constructed, your method keeps track of where files go and can remove them again afterwards. It doesn't enforce any systematic use of the filesystem to contain packages within specific areas; it doesn't manage related issues like configuration files that you might want to back up or preserve across software changes; it doesn't restrict access to files, networking or other system resources that the software has no business touching; it doesn't scale to the many-small-dependencies model prevalent with tools like NPM; and at this point there are already so many fundamental problems with basic robustness and security that anything else is probably moot anyway.

I leave you with a question, which brings us back to where we came in. Given that this broken version of npm exists and that it was made available via at least one production channel that should not have included it as a result of presumed human error by the maintainers, how would anything material have changed today if people had been installing it via an official package and their package manager as you suggest, rather than via npm update?


> I don't know whether you're genuinely missing my point or just trolling, but this doesn't seem to be a very productive discussion so this will be my last comment here.

I'm afraid it is you who is still missing the point.

No matter what the system does, if you use your root privileges, all bets are off. You are the god of the system, you can do whatever you want, the system has no way to stop you. That includes destroying the system, whether directly, or by scripts run on your behalf.

The only way for the system to enforce anything is to take away root from you. There is and will be no system in existence that can both give you unlimited power AND hold your hand. That's the law of the objective reality we live in. To quote: "Ils doivent envisager qu’une grande responsabilité est la suite inséparable d’un grand pouvoir." (They must consider that great responsibility follows inseparably from great power.)

> It doesn't enforce any systematic use of the filesystem to contain packages within specific areas

That's right, because it has no knowledge of what your specific areas are or what they are allowed to contain.

> it doesn't manage related issues like configuration files that you might want to back up or preserve across software changes;

Configuration files are app-specific; "the system" cannot have knowledge of their internal structure or of your intent. What it can do (and does) is show you the old and new versions, optionally the diff between them, and leave the final decision to you. It will never overwrite your configuration without your consent (see the first part of the answer).

If you want full SCM power over your config, put your config into SCM. Not everyone wants that, but those who do have the option available. Others may prefer other ways of management, in the gamut from "none" to "full-blown provisioning system".

> it doesn't restrict access to files, networking or other system resources that the software has no business touching;

To the software, or to its installer? It pretty much does for the software when it is being run. To the installer? See the first part of the answer.

> Given that this broken version of npm exists and that it was made available via at least one production channel that should not have included it as a result of presumed human error by the maintainers, how would anything material have changed today if people had been installing it via an official package and their package manager as you suggest, rather than via npm update?

It boggles my mind why anybody would run npm as root. The only thing they achieve is writing files where they otherwise can't, and risking exactly what happened now.

They _could_ run npm as a normal user, which happens to own the target directory, and it would be without the risk of damaging the system.

So the problem is not npm bugs; the problem is people not realizing what they are doing and refusing to take responsibility when it goes wrong.


If you assume that conventions don't work because people will just run whatever crap as root, I don't think you can solve the problem without taking that right away from the user (as is customary on mobile devices).

At that point, solving the problem comes at too high a cost. A few messed up npm installs seem to be the lesser evil here.


> If you assume that conventions don't work because people will just run whatever crap as root...

That's not really the issue, I think. Literally everything you install, however legitimate the source and however well-intentioned the people providing it, is "whatever crap" for the purposes of this exercise. What happened here could also have happened using just about anything else you installed on a typical Linux system today, whether from an official distro package repository, or some other source of packaged files, or side-loaded with one of those horrendous "Sure, I'll download your arbitrary script from the Internet and pipe it through sh as root to install your software without even checking it, as you recommend on your web site" things.

There is no reason that our systems should trust arbitrary installation scripts to do arbitrary things, whether they're running as root or not, but especially if they are. I'm stunned at the opposition I'm seeing from so many people on HN to the idea of making a system more secure, even while we're discussing a demonstrated, system-destroying bug in widely used software that was apparently unintentionally rolled out through at least one official channel when it wasn't ready.


This is governance issue.

Build all your software into packages appropriate for the OS you use and then put them in a company repo. Install from there.

If you're just dumping whatever "stuff" you want on a machine in whatever location with no control, you're gonna have a bad time.


Unless you are going to systematically and reliably audit literally everything that any installer in any of those packages does as root, this is not a solution to the real problem, it's just trying to reduce the risk a bit.


Yeah people like to hate on Microsoft/Mac App Stores, but at least they don't let programs vomit files across the disk.

The Linux solution I suppose is Nix/Guix or Flatpak/Snap or Docker a la RancherOS. Perhaps more restrictive SELinux profiles could work as well.


> everyday Linux desperately needs a proper application management and security model.

We already have the necessary tools to do it, eg. firejail. We only have to make every binary run in firejail by default (and write firejail profiles for more binaries).


> Both of these issues seem like a timely reminder that everyday Linux desperately needs a proper application management and security model.

I agree. For instance, on recent macOS versions you cannot modify most system directories even as root, unless System Integrity Protection (SIP) is disabled [1]. SIP can only be disabled by the user by booting into the recovery OS. Just making these directories read-only prevents accidents and malice.

AFAIK in Fedora Atomic Host/Server some system directories are also read-only [2]. Moreover, Fedora Atomic uses OSTree as a content-addressed object store, similarly to git, where the current filesystem is just a 'checkout'. So, you can do transactional rollbacks, upgrades, etc.

[1] https://support.apple.com/en-us/HT204899

[2] https://rpm-ostree.readthedocs.io/en/latest/manual/administr...


FWIW, "new npm broke Æeeeverything? Meh. Destroy the docker container, force version <= 5.6.0, rebuild" has now saved me from a bigger disaster. This is the 1.5th option, IMNSHO: npm gets its root(-ish) access, host computer is somewhat protected.
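
In shell terms the recovery was roughly this (container and image names are made up, and the version pin is whatever you trust):

    docker rm -f app
    docker build -t app .     # Dockerfile pins the client, e.g.: RUN npm install -g "npm@<5.7.0"
    docker run -d --name app app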


Yup, but HN didn’t freak out over that. So most people probably won’t have heard about it.


While it is a pretty big issue, maybe people didn't freak out about it because it was in a new codebase and was fixed in under 2 months. Meanwhile, npm, Inc and their CLI dev team have outstanding issues going back 2+ years regarding the core functionality of their product not working correctly: installing.


Except that for 99.999999% of use cases, what you wrote is absolute rubbish.


Doesn't really surprise me when you have other issues like this (https://github.com/npm/npm/issues/17929) that have persisted for a long time. NPM 5.x in general hasn't been very stable.


> NPM 5.x in general hasn't been very stable.

Indeed. Another odd thing it's been doing lately: when I run some NPM scripts on one of our machines, it starts shouting about some sort of update not working (why was it updating anything at all just because I ran `npm run something`?) and gives me instructions on how to fix it from the Linux shell (on a Windows box). The depth of failure implied by that message is disturbing on several levels.


Wow, the toxicity on that thread is appalling. I feel like I need a tool for hiring that automatically shows me a candidate's GitHub comments with the most reactions.


Wayback link; if someone has a better mirror please post: http://web.archive.org/web/20180222170341/https://github.com...


From: https://github.com/npm/npm/releases/tag/v5.7.1

"Thankfully, it only affected users running npm@next, which is part of our staggered release system, which we use to prevent issues like this from going out into the wider world before we can catch them. Users on latest would have never seen this!"

If you are updating to the latest pre-release of something within mere hours of it dropping, and you are updating production systems (presumably ones that have some business value) with no prior testing, then the consequences of that aren't on the devs; they are 100% on you. And you don't deserve to call yourself an IT (or Ops or DevOps or what-have-you) professional; that is amateurish behavior in the extreme.


My personal opinion is that the root cause of the issue is the ability of a language package manager to mess with system files at all (i.e. do a global install of anything). Shards, the Crystal package manager, makes the sensible design decision to only install libraries into `$PWD/lib` and binaries into `$PWD/bin`. Everything is local to your project. If you want a binary on your PATH, you can create an installation method that works for your command-line tool's specific use case. Hopefully a distro/homebrew package.

I wrote about this in longer form here: https://github.com/crystal-lang/crystal/pull/3328#issuecomme....


npm is one of the few tools that I am afraid to have on my laptop, because unlike most tools I have used, when npm does something wrong, it ruins not just itself but a lot of other directories on my PC, which is annoying to fix.


Oh well, I remember fondly that one time I had an important deadline whooshing by (with that lovely sound Douglas Adams knew) and I happened across this cute little bug:

http://appleinsider.com/articles/09/10/12/snow_leopard_guest...

(Yeah, it's that much-vaunted Snow Leopard.)

I do remember scrambling to recover my backups. Back then, I didn't make full-disk backups, so I had to assemble my user folder from various places. Everything else that transpired that night and the day after remains a haze.


Why do people feel the need to update, especially on production servers? Shouldn't production servers be updated only when necessary?


Good question. You would think that some people would have QA/pre-prod servers with a pipeline that would catch this.


I find it interesting that nobody noticed this before public release. And apparently this version is a pre-release? But that isn't specified on the blog post?


What's more, while "npm install -g npm" correctly installs 5.6.0, "npm update -g npm" installs this apparently pre-release 5.7.0 version.
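
In other words (version numbers as reported in the thread; pinning an exact version sidesteps the ambiguity):

    npm install -g npm         # installs 5.6.0
    npm update -g npm          # pulls in the 5.7.0 pre-release
    npm install -g npm@5.6.0   # explicit pin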


And even worse, 5.6.0 to 5.7.0 is, by semver, one minor point release to another minor point release - no breaking changes, no major bugs. 5.7.0-pre would raise some flags.


Uhh. Does semver actually say anything about bugs?! o_0


I'm pretty sure you don't release pre-release versions without -pre or -beta or -rc tags in the end.


Technically you don't have to with semver, it's just a good practice. From https://semver.org/:

> A pre-release version MAY be denoted by appending a hyphen and a series of dot separated identifiers immediately following the patch version.

Note that it says "MAY" and not "MUST", so it's optional.


I'm pretty sure that "we're now changing permissions willy-nilly" is both a breaking change (which would warrant a major version bump, as per semver), and a bug (even though it's presented as an improvement by the authors). I should have been more clear.


As a semi-outsider to the frontend and Node development worlds, it continues to surprise me that a viable alternative to npm still hasn't come along. Not trying to pile more hate on npm, but there have been many years of complaints about instability, horrid UX, a bad security model, user hostility, etc. Yarn was just a first step. If there were a system with half the features that made sense and was secure, I think the community would shift very quickly.


Yarn is actually a very good alternative to the NPM CLI. While there have been some issues on the package-hosting side as well, by far the biggest issues were/are on the client side, and practically all of them are solved by Yarn.


I swear NPM has some absurd showstopping bug every month.

With something that has as many people using it, it's just... I dunno, it's disheartening.

Edit: oh well, this was a @next release only. Not as bad. Still scary.


I really wish node would ship with Yarn instead of NPM. Every serious js project these days already uses it.


Does yarn run npm behind the scenes? Or does it even replicate the bugs in its attempt to be fully compatible? I used yarn to install global packages and see the packages in `/usr/lib/node_modules` with the permissions of my user rather than root.


Yarn uses the npm registry behind the scenes.


CI/CD does not mean deploying code to production by fetching source code from GitHub onto a server used by your customers and then compiling or downloading NPM dependencies.

That is a recipe for disaster.


Looks at correct-mkdir. Sees "cb = dezalgo(cb)". https://www.npmjs.com/package/dezalgo

"Contain async insanity so that the dark pony lord doesn't eat souls"

Just... What. I feel like when you need to reach for tools to "contain insanity", you might want to backup and ask someone who has written to a filesystem before... The linked blog about "preventing the release of Zalgo" and the linked https://blog.ometer.com/2011/07/24/callbacks-synchronous-and... seem completely erroneous. The entire point of callbacks is to _surrender_ control to a function - here is a piece of code to run when you are ready - now, sometime, or never, or maybe many times, as you see fit. Waiting until the next process tick seems so completely unnecessary... This strikes me heavily as "a solution in desperate search of a problem" - although I have that feeling with a _lot_ of NodeJS code I read...

The author of the blog linked on the dezalgo project seems to, at the end of the post, imply the purpose is for performance? By deferring work until a later date?

"The basic point here is that “async” is not some magic mustard you smear all over your API to make it fast. Asynchronous APIs do not go faster. They go slower. However, they prevent other parts of the program from having to wait for them, so overall program performance can be improved."

Other parts of the program _other than the work we've asked it to do_? What if we're only "correctly making" one directory? So we intentionally make our code slower... So that "other code" can run? He continues:

"This makes the API a bit trickier to use, because the caller has to know to detect the error state. If it’s very rare, then there’s a chance that they might get surprised in production the first time it fails. This is a communication problem, like most API design concerns, but if performance is critical, it may be worth the hit to avoid artificial deferrals in the common cases."

So it's slower -and- more complicated, and we're gonna hide it behind a meme. Gotcha.


Deferring until next tick is one way to get around call stack problems. If you create a really big series of callbacks which will call other callbacks which call other callbacks... you can run out of call stack.

The other issue is let's say you have some code like...

    var f = 1
    doSomeOperation(function done(){
        console.log(f)
    })
    f = 5
If doSomeOperation calls done() sometimes synchronously and sometimes asynchronously, it will sometimes log 1 and sometimes log 5. If doSomeOperation always works one way, it's more consistent. It's not a perf thing, it's just consistency.


I wish fixing the npm global directory permissions were part of the npm install page (https://docs.npmjs.com/getting-started/installing-node), or at least mentioned there. My first few npm setups always left me in permissions hell, as I'd just use the install page and ignore the next steps.



Do you really need google analytics for this?


need is a strong word. Do I need the entire site - no. The mentality was more like "oh haha joke site?.... I wonder how many people are actually looking at the site and from where.. man I wonder if there is a solution for that... oh right google analytics."


> I wonder how many people are actually looking at the site and from where.. man I wonder if there is a solution for that... oh right google analytics.

And I wonder if it's possible to make a 2 dollar website without tracking your users or reporting to google.

I expect most people who dabble with technology use an adblocker anyway which blocks requests to google analytics.

I wouldn't mind a self hosted analytics solution, but with all the captchas and the mails, I feel we give google enough information as it is.


wellllllllll when you make a website as a joke you can choose which if any analytics solution you want. And deal with the critics that use analytic blockers to complain about your analytics solution.

The site collected 10k visits over 3 days from around the world.


A user of NPM that needs to use `sudo npm` simply did not properly install nodejs into their user directory. NPM is packaged with the node version you are running, so if you installed node with the root user, or into a directory that requires root access, you will need sudo to use `npm`. But if you properly install node under your user you will never have an issue. Anyone that does `sudo npm` did not install nodejs under their user. This may be confusing to people because a lot of tutorials tell you to use `sudo npm`. NPM is a piece of software that is consumed by millions of people and different devices. It is crazy to think there will be no side effects when people use something in a way it was not designed for.


Running npm as root is bad. Either install the npm package from your distribution (apt, pacman...), or, to use `npm install -g`, edit `.npmrc`, add `prefix=/home/<me>/.node` to it, and add `~/.node/bin` to your PATH.
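
Spelled out, that setup is roughly the following (the directory name is only an example, and some-cli is a placeholder):

    npm config set prefix "$HOME/.node"   # writes prefix=... into ~/.npmrc
    export PATH="$HOME/.node/bin:$PATH"   # persist this line in your shell profile
    npm install -g some-cli               # now works without sudo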


It's bad, but at the same time it's hard to blame people too much for doing it when it's literally in the npm documentation: https://docs.npmjs.com/troubleshooting/common-errors


I am not blaming people, maybe my comment wasn't formulated properly. What I meant is "don't do it, there is an alternative".

I think no documentation should ever include sudo in its commands. You should put a note like "depending on your environment, some of these commands might require root privileges" or something.


Reading this and the comments here really makes me feel sorry for the npm people.

If you are reading this: You are doing great work, I wish you the energy and strength to ignore the trolls.


Switching to yarn is not going to fix this. However, this raises some concerns about the npm CLI:

- We are relying on a two-person team for our applications.
- The maintainer doesn't seem to care much about this horrific bug: https://imgur.com/a/v4Ndb


> Switching to yarn is not going to fix this.

Yeah, but if you had switched to yarn beforehand, then you would not be facing this issue.


IIRC yarn had a bug regarding the `which` CLI which is similar to this.

Bugs are bound to happen; that's part of software development. However, the size of the npm CLI team and the way they have reacted to this incident are what concern me more.


About a week ago, I attended a tech talk by a Google employee in a senior position, who said, if I remember correctly, that their testing effort uses the most hardware resources in all of Google. Software testing can be difficult and challenging, but it is a critical part.


Why would the reporter run sudo npm???


sudo yum, sudo apt-get, sudo pacman.

Why wouldn't you naively assume sudo npm was safe if you wanted a global package? (I know the behavior of npm...)

This is blaming the victims. If the user can blow their foot off, it's not the user's fault.


Title should be changed to 5.7.0 as newly released 5.7.1 fixes the bug.


Maturely tagged with `#STOPUSINGPRERELEASESWITHSUDO`

When their official blog makes no mention of 5.7.0 being a prerelease [0], and it is semantically versioned as a stable release. The thread also later details that running `npm upgrade -g npm` instead of `npm install -g npm` will get 5.7.0 instead of 5.6.0 [1].

Is it standard practice for an `upgrade` command to pull a pre-release or beta? When I upgrade Firefox I don't get put onto the Nightly branch...

[0] https://vgy.me/LkvBKS.png

[1] https://github.com/npm/npm/issues/19883#issuecomment-3677268...


This is relevant to the information I posted in my comment here how?

It is not.



I just looked at my /usr/lib/node_modules directory and it's no man's land in there, and I'm on npm 5.6.0. How could this go unnoticed for so long?


Destroyed my local packages. Can't resist yarn any longer. Switched.


Remind me again why there are language-specific package managers...



