Many packages suddenly disappeared (github.com)
749 points by xxkylexx | 492 comments



PSA: Please be cautious, because this is an excellent opportunity for malicious people to take over packages and inject malware.

Example: https://www.npmjs.com/package/duplexer3, which has 4M monthly downloads, just reappeared, published by a fresh npm user. They have published another two versions since then, so it's possible they initially republished the unchanged package but are now messing with the code.

Previously the package belonged to someone else: https://webcache.googleusercontent.com/search?q=cache:oDbrgP...

I'm not saying it's a malicious attempt, but it might be, and it very much looks like one. Be cautious, as you might not notice if some packages your code depends on were republished with malicious code. It might take some time for npm to sort this out and restore the original packages.
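
To illustrate the kind of check being suggested, here is a minimal sketch (TypeScript, Node 18+ for the global fetch) that pulls publish timestamps and the maintainer list from the public registry, so a core package whose latest version suddenly appeared under an unfamiliar account stands out. The field names (`dist-tags`, `time`, `maintainers`) follow the registry's usual packument format; treat this as an illustration rather than a vetted tool:

    // Print when the latest version of a package was published and who currently
    // maintains it, so a suspiciously fresh re-publish stands out.
    async function inspectPackage(name: string): Promise<void> {
      const res = await fetch(`https://registry.npmjs.org/${name}`);
      const doc = await res.json();
      const latest = doc["dist-tags"]?.latest;
      console.log(`${name}@${latest} published at ${doc.time?.[latest]}`);
      const maintainers = (doc.maintainers ?? []).map((m: { name: string }) => m.name);
      console.log(`current maintainers: ${maintainers.join(", ")}`);
    }

    inspectPackage("duplexer3").catch(console.error);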


I just tested, and it definitely looks like a troll / hack.

> duplexer3@1.0.1 install /Users/foo/Code/foo/node_modules/duplexer3
> echo "To every thing there is a season, and a time to every purpose under the heaven: A time to be born, and a time to die; a time to plant, and a time to pluck up that which is planted; A time to kill, and a time to heal; a time to break down, and a time to build up; A time to weep, and a time to laugh; a time to mourn, and a time to dance; A time to cast away stones, and a time to gather stones together; a time to embrace, and a time to refrain from embracing; A time to get, and a time to lose; a time to keep, and a time to cast away; A time to rend, and a time to sew; a time to keep silence, and a time to speak; A time to love, and a time to hate; a time of war, and a time of peace. A time to make use of duplexer3, and a time to be without duplexer3."

To every thing there is a season, and a time to every purpose under the heaven: A time to be born, and a time to die; a time to plant, and a time to pluck up that which is planted; A time to kill, and a time to heal; a time to break down, and a time to build up; A time to weep, and a time to laugh; a time to mourn, and a time to dance; A time to cast away stones, and a time to gather stones together; a time to embrace, and a time to refrain from embracing; A time to get, and a time to lose; a time to keep, and a time to cast away; A time to rend, and a time to sew; a time to keep silence, and a time to speak; A time to love, and a time to hate; a time of war, and a time of peace. A time to make use of duplexer3, and a time to be without duplexer3.



I’m pretty sure they’re referencing the Byrds song and not the Bible directly:

https://m.youtube.com/watch?feature=youtu.be&v=pKP4cfU28vM


No. The Byrds song, which is itself an excerpt/paraphrase of Ecclesiastes, does not have phrases like "and a time to pluck up that which is planted".


And neither Solomon nor the Byrds said anything about "A time to make use of duplexer3, and a time to be without duplexer3."


Not to mention, it's a Pete Seeger song; the Byrds just covered it. I may be wrong, but I think Seeger wrote it for Judy Collins to sing.

Edit: ok nope, Seeger didn’t “write” it for Collins, she’s just another one to cover it. Here they are both doing it if you’re interested: https://youtu.be/fA9e-vWjWpw


Because the Bible version isn't subject to copyright takedowns.


Start posting large parts of, say, the New International Version, let me know how that goes for you.

IOW, unless it's the King James, it is likely very much subject to takedown notices. Though I'm guessing a malicious troll is much more likely to know the Byrds than the Old Testament.


That is, in fact, the King James Version.


The Byrds, Turn! Turn! Turn!


I got this today as well! WTF? It showed up today and is preventing me from using npm, node, etc...


And all this is happening just after the public release of a serious exploit which allows malicious code to do all sorts of nefarious things once it is somehow installed on the target machine. Hmm.

Given that there are hints, at least, that the problems were caused by some particular developer's actions, I wonder about the security model for package-managed platforms altogether now. If I were a big cybercrime ring, the first thing I'd do would be to get a bunch of thugs together and knock on the front door of a developer of a widely used package: "help us launch [the sort of attack we're seeing here] or we'll [be very upset with you] with this wrench." Is there a valid defense for a platform whose security relies on the unanimous cooperation of a widely scattered developer base?


With cases like the current one, or the leftpad incident in 2016, I'm surprised package registries still allow recycling old package names after a package was deleted. Really seems like deleted packages should be frozen forever - if the original author never recreates it or transfers ownership, then people would have to explicitly choose to move to some new fork with a new id.

But your point about pressuring or bribing package authors still stands as a scary issue. Similar things have already happened: for example, Kite quietly buying code-editor plugins from their original authors and then adding code some consider spyware (see https://news.ycombinator.com/item?id=14902630). I believe there were cases where a similar thing happened with some Chrome extensions too...


> With cases like the current one, or the leftpad incident in 2016, I'm surprised package registries still allow recycling old package names after a package was deleted.

CPAN requires the old author to explicitly transfer or mark it abandoned-and-available-to-new-owner.

For all the things wrong with perl5 (and I love it dearly, but I've spent enough time with it that I can probably list more things wrong with it than the people who hate it ;) it's always a trifle depressing to watch other ecosystems failing to steal the things we got right.


This happens all the time. The new generation creates something cool because what our parents created isn't cool any more, only to fail at exactly the same spot as our parents. Only, it was already solved in the parents' last version. This goes for clothing design, cars, houses, kitchenware and so on, as well as software. Just look at the microwave oven discussion earlier...


Genuine question... What happened with the microwave oven?


I think the GP is referring to this: https://news.ycombinator.com/item?id=16089865

Modern microwave ovens have all adopted impractical and quirky new UIs, when the old concept of knobs was simple and worked fairly well in the first place.


My oldest one was just two dials. The second one, 15 years old, had loads of buttons and stuff, really stupidly spread out: you had to press watts, minutes, seconds, start, and the start button was not in a corner, not in the top or bottom row, nor in any other logical place, so you had to search for it every time. I glued a rubber piece to it so I could find it again without having to bend down and search.

Since then I have made sure the microwave has two dials: one for time, one for power.


Remember the electric kettle that had just an on/off switch?

Then came one with an option button for 80 or 100 degrees (176 or 212, in freedoms). I never knew I needed that, but it changed my life and I cannot do without it. Reason: 80-degree water is hot enough for my needs and saves time.

Our latest has 3 buttons with different possibilities, beeps like a maniac when ready (an option which cannot be turned off) and can do things I never knew anyone would need (like keeping the water at x degrees for y minutes).

I guess it is like evolution: you experiment, keep what works and get rid of all things unfit.


Packages / projects being frozen. AFAIR that's how SourceForge works/worked. I remember a few years back being baffled that I couldn't delete my own project.

But it makes sense, other projects might depend on it, so it's archived.


It's just npm that's broken. I've never used a package manager for any other language that had these kinds of issues. It's exacerbated by the massive over-reliance on external packages in JS too. `left-pad` really shone a light on how dependencies in JS land are brought in without much thought.


You could sign packages and record their signatures along with the version, which, coincidentally, is basically what https://teapot.nz does, e.g.: https://github.com/kurocha/geometry/blob/master/development-...

Although I've never considered this in the case of an actual attack. It would make sense to fingerprint the entire source tree and record that somewhere too, so when you build it you know you are getting the right thing. Teapot basically defers this to git.
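
As a rough illustration of "fingerprint the entire source tree", here is a minimal sketch in TypeScript (using Node's built-in `crypto`, `fs` and `path` modules): hash every file path and its contents into one digest, record that digest alongside the pinned version, and refuse to build if a later checkout hashes differently. This is the idea sketched out, not how teapot actually implements it:

    import { createHash, Hash } from "crypto";
    import { readdirSync, readFileSync, statSync } from "fs";
    import { join } from "path";

    // Walk the tree in a deterministic (sorted) order, feeding paths and file
    // contents into a single running hash.
    function addTree(dir: string, hash: Hash): void {
      for (const entry of readdirSync(dir).sort()) {
        const full = join(dir, entry);
        if (statSync(full).isDirectory()) {
          addTree(full, hash);
        } else {
          hash.update(full);              // include layout, not just contents
          hash.update(readFileSync(full));
        }
      }
    }

    function fingerprintTree(dir: string): string {
      const hash = createHash("sha256");
      addTree(dir, hash);
      return hash.digest("hex");
    }

    // e.g. compare against the digest recorded next to the pinned version:
    // console.log(fingerprintTree("node_modules/duplexer3"));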


> Is there a valid defense for a platform whose security relies on the unanimous cooperation of a widely-scattered developer base?

The defense is staged deployment and active users. This obviously depends on the bluntness of the malicious code.

If I may assume easily noticed effects of the malicious code: a dev at our place (using Java with Maven) would update the library, and his workstation would get owned. This could have impacts, but if we notice, we'd wipe that workstation, re-image from backup and get in contact with Sonatype to kill that version. This version would never touch staging, the last step before prod.

If we don't notice on the workstation, there's a good chance we or our IDS would notice trouble on either our testing servers or our staging servers, since staging in particular is similar to prod and subject to load tests similar to prod load. Once we're there, it's back to bug reports with the library and contacting Sonatype to handle that version.

If we can't notice the malicious code at all due to really, really smart activation mechanisms... well, then we're in NSA conspiracy land again.


> If we can't notice the malicious code at all due to really, really smart activation mechanisms... well, then we're in NSA conspiracy land again.

What about really dumb activation methods? I.e., a condition that only triggers malicious behavior several months after the date the package was subverted. You don’t have to be the NSA to write that.

What’s scary here is that there are simpleminded attacks that, AFAIK, we don’t know how to defend against.


Mh, I have a rather aggressive stance on these kinds of incidents, no matter whether they are availability- or security-related. You can fish for them, you can test for them, and there are still entire classes of malicious code you cannot find. For everything you do, Turing-complete code can circumvent it. There's a lot of interesting reading material in the space of malware analysis regarding sandbox detection, for example.

So stop worrying. Try to catch as much as feasible before prod. Then focus on detecting, alerting on and ending the actual incident. If code causes an incident, it's probably measurable and detectable. And even then you won't be able to catch everything. As long as a server has behavior observable from the internet, it could be exfiltrating data.


What if it encrypts user data?


You have your tested backups, yeah?


Tested restores with at most 59 minutes of data loss for prod clusters within 90 minutes after order. 30ish minutes of downtime. We could even inspect binlogs for a full restore afterwards on a per-request basis for our big customers.

Cryptolocker on prod is not my primary issue.


Sounds like good hygiene, though it seems burdensome if everyone must do it or seriously risk infection. Ideally there would be at least minimal sanity checks and a formal process before a package can be claimed by someone else.


On top of that, the way countless packages are used everywhere is potentially exploitable: https://medium.com/@david.gilbertson/im-harvesting-credit-ca...


In case anyone was considering sending him $10, no, his hypothetical code would not be running on the Google login page. Google does not pull in external dependencies willy nilly like that.


I'd be surprised if they ran a thorough security audit on all the code they import, but I'd like to believe they do.


At Google's scale you quite certainly want to do that. Not just for security, but for legal reasons. You really don't want to end up using, for example, AGPL-licensed stuff in the wrong places, and if you just blindly pull stuff with dependencies from a package manager, this could easily happen.


Sure, a legal audit is standard and usually much simpler than a full source audit for security, which has a complexity proportional to the project size.


One of the recent True Geordie podcasts features the "YouPorn Guy" who talks about finding it near impossible to get lawyers not on a retainer from Google to fight them.


That's actually even more scary than what's going on now... At least most of us are noticing and can check what's going on...


Wouldn't you need to install those packages as root for the code to have privileges to take advantage of that exploit?


I've known plenty of developers whose automatic response to `packagemanager install packagename` failing is `sudo packagemanager install packagename`.


I sincerely hope all modern package managers, when invoked with sudo, immediately spawn a very-low-privilege process that does most of the work sandboxed to /tmp/whatnot, and the root process just copies files to the right place and runs some system calls to update databases etc.
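
For what that could look like in practice, here's a minimal, hypothetical sketch (TypeScript, Node's `child_process`): run the package's install hook as an unprivileged user inside a scratch directory, and let the privileged process only copy vetted results into place afterwards. The uid/gid values and the hook command are placeholders, and as the reply below points out, real install hooks are Turing-complete, so this is far from a complete defense:

    import { spawnSync } from "child_process";

    // Hypothetical install hook taken from the package's metadata.
    const installScript = "node ./postinstall.js";

    const result = spawnSync("sh", ["-c", installScript], {
      cwd: "/tmp/build-sandbox",  // work entirely inside a throwaway directory
      uid: 65534,                 // "nobody" on many systems (placeholder)
      gid: 65534,
      stdio: "inherit",
    });

    if (result.status !== 0) {
      throw new Error("install hook failed; nothing gets copied into the system");
    }
    // ...the privileged parent would copy vetted artifacts out of the sandbox here.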


Most package managers I know support Turing complete install hooks. How would a package manager detect what parts of those require/are safe to run with root?


No. Packages would not need to be installed as root. Additionally, many possible ways to use the exploit in GP could run as unprivileged users.


Wouldn't the package need to be executed as root, though? Or does spectre/meltdown not require privileged access?


No, that's the entire point. They need almost nothing at all but the ability to run code fast in a loop with memory accesses. The entire point is that they bypass privilege checks.


An IPFS model would help. People would use a strong hash of the package or something.


I'm not sure it would help much. That means you either have to have users be able to recognize and eyeball-validate hashes ("sure, this is left-pad-5ffc991e; that's what I want! Wait, shit, it's actually left-pad-5ffd991e, never mind; wrong package"), or you need pre-existing databases of trusted hashes (which either puts you right back at a registry a la npm, or leaves you reliant on a package-lock file or similar, which doesn't cover many common use cases for secure package signing).


I just meant as a solution to the fact that people can typosquat or jack a name when a package is deleted.

If the developers can't get the hash right then there's not much that can be done.


That's a scary scenario, and all too possible.


A detailed description of what you could do with a malicious npm package is currently on the front page: "Harvesting credit card numbers and passwords from websites"

https://news.ycombinator.com/item?id=16084575


Am I the only one who thinks this could be more than a coincidence?


Hey, I wrote that article :) - yes, it was pure coincidence. I just decided, with all the security stuff going on this week (Spectre/Meltdown; I hadn't heard about the npm stuff), that I'd write an article about it.


I didn't think of it. But it is a coincidence. Good one.


I find it hard to believe, but never say never, of course.


NPM doesn't make the package names unavailable after removal???

EDIT: That would be a massive security problem!


maybe it's time to push for adding signed packages to npm

long discussion here: https://github.com/node-forward/discussions/issues/29


I am very surprised that a package manager of this calibre and impact abstains from best practices when it comes to authentication through code signing. Other package managers are miles ahead of npm. For example Nix, which uses immutability and hashing to always produce the same artifact, regardless of changes to the sources.


So I know RPMs and debs are signed, as I've set up repos for both. Docker repositories require a valid SSL key (or you have to manually allow untrusted repos). But do Python packages and Ruby gems have signature verification? How do PyPI/pip and gem deal with validating that a package is what it claims to be?


Ruby gems can be signed but the percentage of gems authors taking advantage of that is low.

At least we’ve got most people using https to transfer gems now!


PyPI (which is what Pip uses) at the very least does not require authors to sign their packages. I can't say whether it supports signing though.


Traditional python packages support GPG signing: https://pypi.python.org/security

There's new experimental signing in wheels: https://wheel.readthedocs.io/en/stable/#automatically-sign-w...

and the signing defined in PEP: https://www.python.org/dev/peps/pep-0427/#signed-wheel-files


Comparing distro package managers is very different from comparing free-for-all spaces like Packagist, RubyGems, PyPI, npm, etc.


You have a point, but we need to take into account that the technology has been around for a long time, the risks are well known and documented, and the safety concerns of most of these package managers have been raised with their maintainers.

The example in the article has come to light accidentally, but we must seriously ask ourselves how many incidents are currently unidentified.

Besides, you can use Nix for 'normal' development. It is suitable for more things than just a distro package manager.


Signing won't help unless the end user specifies the signature or certificate that they expect (signing would only help ensure package upgrades are from the same author).

If you're going to have clients specify a signature anyway, then you don't need to sign packages; you just need a strong one-way hash function, like SHA-1024 or something. The user executes "pkg-mgr install [package name] ae36f862..."

Either way, every tutorial using npm will become invalid.


"npm install packagename" could record the public key in package.json (or package-lock.json) on first save, and only accept installs (or upgrades) matching the same public key. Just like how android app code signing works, or similar to ssh known_hosts trust-on-first-use.

Granted it wouldn't save those adding a new package to a project the first time, but it would save the bacon of anyone re-running "npm install" in an existing project, for example during a deploy, or when trying to upgrade to a newer version of a given package.
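
A minimal sketch of that trust-on-first-use idea, in TypeScript (hypothetical; npm does not do this today): remember the publisher's public-key fingerprint the first time a package is installed, and refuse later installs or upgrades whose key differs, much like ssh's known_hosts warning when a host key changes. The store path and fingerprint format are made up for the example:

    import { existsSync, readFileSync, writeFileSync } from "fs";

    type TrustStore = Record<string, string>; // package name -> pinned key fingerprint

    function checkPublisherKey(pkg: string, fingerprint: string,
                               storePath = "publisher-keys.json"): void {
      const store: TrustStore = existsSync(storePath)
        ? JSON.parse(readFileSync(storePath, "utf8"))
        : {};

      if (!(pkg in store)) {
        // First use: pin whatever key the package is currently published with.
        store[pkg] = fingerprint;
        writeFileSync(storePath, JSON.stringify(store, null, 2));
        return;
      }
      if (store[pkg] !== fingerprint) {
        throw new Error(
          `${pkg}: publisher key changed (${store[pkg]} -> ${fingerprint}); refusing to install`);
      }
    }

    // e.g. checkPublisherKey("duplexer3", "SHA256:abc123...");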


Would that mean a package with multiple authors would have to share the private key with each other in order to publish a new version?


> Granted it wouldn't save those adding a new package to a project the first time

Right, that's the real problem.


An independent site that maps packages to author certs, which npm uses for verification at install time?

Also, this is a problem that every package-management system faces. They alert on changes on upgrade, but there's a requirement at the end-user level to verify, at install time, that the cert being trusted is the right one.


I'm surprised there wasn't a global lock-down on new package registrations (or at least with the known names of lost packages) while they were working to restore them.


Didn't npm make some changes so that a published package name cannot be republished, at least not without npm intervention?


Yes, but the packages disappeared. That people can dupe these suggests that the database was modified.


yeah it looks like one user's packages just disappeared from their database.


I thought so too. I thought they did that after the left pad incident.


How does RubyGems handle a package being removed and replaced by a different (and maybe malicious) actor? Not allow a package to be deleted? Block the package name from being claimed by someone else?


From http://help.rubygems.org/kb/gemcutter/removing-a-published-r...:

> Once you've yanked all versions of a gem, anyone can push onto that same gem namespace and effectively take it over. This way, we kind of automate the process of taking over old gem namespaces.


There are also people requesting that this be changed: https://github.com/rubygems/rubygems.org/issues/1226


So basically--gem bundler beware?


Thank you Eric.


Shit. That's a good point. I downloaded the Heroku CLI during the attack and it uses duplexer3. I got a weird message that seemed "off" during postinstall.


Wait, they both say username = floatdrop [1] for me. What did they say for you?

[1] https://twitter.com/floatdrop


Hi folks, npm COO here. This was an operational issue that we worked to correct. All packages are now restored:

https://status.npmjs.org/incidents/41zfb8qpvrdj


Were any of the deleted packages temporarily hijacked? It seems strongly like this was the case. If so, please confirm immediately so people who installed packages during this time can start scanning for malware.

Even if the answer is “yes, 1+ packages were hijacked by not-the-original author, but we’re still investigating if there was malware”, tell people immediately. Don’t wait a few days for your investigation and post mortem if it’s possible that some users’ systems have already been compromised.


I would also hope for and expect this to be communicated ASAP from the NPM org to its users.

@seldo, I understand that you don't want to disseminate misleading info, but an abundance of caution seems warranted in this case as my understanding of the incident lines up with what @yashap has said. If we're wrong, straighten us out --- if we're not, please sound an advisory, because this is major.


Yeah, these were some core, widely used packages that were deleted. If they were temporarily hijacked, lots of dev machines (including mine) may have been compromised. There's a major security risk here; if there was any hijacking, now is not the time for information hiding and PR.


Seems like you should have frozen publishing instead of saying, "Please do not attempt to republish packages, as this will hinder our progress in restoring them" - especially to prevent even temporary hijacking.


Any chance of a technical write-up so that we can all learn from whatever happened?


Absofuckinglutely. It's being done as we speak.


Good luck explaining this

https://news.ycombinator.com/item?id=16087079

in the face of this

https://news.ycombinator.com/item?id=14905870

Literally nothing was done for 158 days. You yourself asked:

https://github.com/node-forward/discussions/issues/29#issuec...

"How would package signing prevent people from requesting the wrong package? The malware author could also sign their package."

And here is a perfect example. Someone replaced a legit package with a malicious one. Had the original author signed the package, then npm users could have defended against the new malicious author, because the new author's signing key would not be in their truststore.

Unsigned packages leave NPM package users defenseless. I hope that is crystal clear now.
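
To make that concrete, here is a minimal sketch (TypeScript, Node's built-in `crypto.verify`, available since Node 12) of what a truststore check buys you: if the original author's public key is pinned locally, a tarball signed by anyone else simply fails verification. The file names and the choice of Ed25519 keys are assumptions for the example; this is not an existing npm feature:

    import { verify } from "crypto";
    import { readFileSync } from "fs";

    const tarball = readFileSync("duplexer3-1.0.1.tgz");
    const signature = readFileSync("duplexer3-1.0.1.tgz.sig");
    // Pinned on first install; a new publisher's key would not be in the truststore.
    const pinnedKey = readFileSync("truststore/duplexer3.pub.pem", "utf8");

    // For Ed25519 keys, Node's crypto.verify takes null as the algorithm.
    if (!verify(null, tarball, pinnedKey, signature)) {
      throw new Error("signature was not made by the pinned author key; refusing to install");
    }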


You're taking flak for this, but you're right.

When I was doing pentesting, we had an interesting assignment. Our job was to pop a dev project. Then we'd tell them how to secure themselves.

One of our tactics was to set up fake Github profiles with very similar names, then try to get someone internal to the team to `git clone` and run our code. Boom, remote shell.

We didn't execute the plan. But it was thrown around as an idea.

When a package on npm can disappear, and a new package can appear in its place at a later version, by a different author, and there is no connection between those two people, then you're in a bad situation. Just because no one currently runs attacks like this doesn't mean you'll be safe forever. It's worth getting ahead of this.

I don't know whether package signing is the best solution. Maybe yes, maybe no. But the question is, if a package vanishes, what is the proper action to take?

The solution seems like a rollback. Let us have the latest previous version from the same author, by default. That will fix the builds and not require any heavyweight changes.

But package signing would definitely be nice, if it can be integrated in a lightweight and chaos-free fashion.


Yup. Publishing to Clojars requires GPG and is a bit of a pain compared to publishing to npm. I'd take Clojars' approach over this nonsense any day of the week, though.


[flagged]


That's entirely uncalled for.


Actually, I'm doing him a favor... I completely understand that people talk like that within companies. When emotions are involved, that's what happens. But when you're acting in any capacity as a spokesperson for a company (or, I guess, a government or non-profit too), a bit more decorum is called for. It's not just him; I've been feeling this for a long time. One thing I appreciated about Obama was that he was always dignified (not that I always agreed with what he was saying). Now that the POTUS posts uncouth tweets, maybe it's okay to put statements like that in your SEC filings too.

I got down-voted for calling out some of Kalanick's frat-boy behavior and speech. I'm sure it's not popular on a site dominated by twenty-somethings, but since I'm old, I'd prefer to be called old-fashioned or out-of-touch rather than simply being dismissed. If it helps... I'm sorry that I was so blunt; I should have typed these couple of paragraphs instead.


Well, you got personal out of the blue.

Speaking how he spoke is exactly what the situation called for, and shaming him like this might give people the impression that the community doesn't support it. People feel differently, but for me, it was a breath of fresh air. Finally, someone talking straight with a community! "We fucked up. Report incoming." Done, A+. We can all relate.

Maybe that's not professional enough for certain circles, but hopefully this mindset will permeate to them eventually. We could all stand to loosen up a bit.


+1


[flagged]


It's unprofessional in circumstances such as this imo, but to each their own.


"The less confident you are, the more serious you have to act."


It seems to make the opposite case here. Why the need to swear? It seems to indicate nothing but disingenuous tribal signaling of outrage. At whom?


What was the root cause of the issue?


Yes I'd be very curious to see a debrief on what the technical cause was. Thanks to the npm team for a quick weekend fix, at any rate!


We're working on a full post-mortem now. Until then we don't want to give out misleading/partial information.


Any update on the post-mortem? How long have the binaries been replaced? Is there evidence that malware was injected into the binaries?

Additionally, you should brush up on your code-signing implementation. Had you signed the binaries with a trusted code-signing cert, consumers could have verified that you produced them... and not a malicious user. Assuming they didn't have access to the private key material of your code-signing key.


Not sure if you saw but they did post this: http://blog.npmjs.org/post/169432444640/npm-operational-inci...


Or rather: what were the contributing factors of the issue?


Update: (this is not the post-mortem, this is just more detail) http://blog.npmjs.org/post/169432444640/npm-operational-inci...


> I was here.

> We made history! Fastest issue to reach 1000 comments, in just 2 hours.

> cheers everyone, nice chatting with you. 17 away from hitting 1000 btw!

> Is GitHub going to die with the volume of comments?

Kind of disappointed the npm community is turning GitHub into Reddit right now.


There's probably a large overlap between the two communities.


Considering almost every human I know uses Reddit in some capacity (technical and non-technical), that's pretty likely.


You're in a really self-selecting crowd then. Less than half the people I know use it, mostly because my social group is outside of the tech world.


Reddit is in the top 10 most popular websites according to Alexa. I'd venture to say most reddit users aren't people in the tech world.


I know exactly one person who uses it. To me it always seemed like 4chan-light.


There's a difference between people who come across or read Reddit, and those who actually post and participate on Reddit. The Average Joe is usually part of the former.


Perhaps not all tech, but my point is that it goes unused by many. People in tech are far more commonly users.


NPM is extremely vulnerable to typosquatting. Be cautious with what you install; the install scripts can execute arbitrary code. The NPM team's response is that they hope malicious actors won't exploit this behaviour. According to my tests, typosquatting 3 popular packages allows you to take over around 200 computers in the 2 weeks it takes their moderators to notice it.



That's okay, but it's not enough - it's easy to swap two letters or make similar substitutions to fool many users. If a package is downloaded 10,000 times every day, surely once in a while someone will misspell the name somehow.

Other than that, their reaction to similar incidents was to wait for somebody on Twitter to notify them, ban the responsible users, and hope that it won't happen again. It's still extremely exploitable, and there are surely many other novel ways of installing malware using the repository that we haven't even heard of yet. The NPM security team is slow to act and sadly doesn't think ahead. They're responsible for one of the largest software ecosystems in the world; they should step up their game.


They could (should?) implement edit-distance checks on all new packages against existing package names. If the name is too similar to an existing package name, it requires approval.
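
A rough sketch of that edit-distance idea in TypeScript: flag a new name for manual review if it is within a small Levenshtein distance of an existing popular package name. The threshold and the list of names to compare against are placeholders; a real registry would also need to handle scopes, homoglyphs, and so on:

    // Classic dynamic-programming Levenshtein distance.
    function levenshtein(a: string, b: string): number {
      const dp = Array.from({ length: a.length + 1 }, (_, i) =>
        Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
      );
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          dp[i][j] = Math.min(
            dp[i - 1][j] + 1,                                  // deletion
            dp[i][j - 1] + 1,                                  // insertion
            dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
          );
        }
      }
      return dp[a.length][b.length];
    }

    function needsReview(newName: string, existing: string[], threshold = 2): boolean {
      return existing.some((name) => name !== newName && levenshtein(newName, name) <= threshold);
    }

    // needsReview("duplexerr3", ["duplexer3", "lodash"]) -> true (distance 1)
    // needsReview("duplexer3",  ["duplexer3", "lodash"]) -> false (exact match excluded)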


Yup. The best answer I can come up with given their constraints (some self-imposed) is to force all new packages to be scoped.


How many typosquats on scope names will there be, I wonder.


Why assume they’ve already seen it? They probably just haven’t


typical JavaScript engineering


JavaScript is a very handy language; it's held back by all the gymnastics it needs to do to get over browser/www limitations, and an influx of low skill developers with no diploma.


> it's held back by all the gymnastics it needs to do to get over browser/www limitations,

I suppose, but I think it's JavaScript's "nature" (dynamic typing, along with the scripting-style ambition of being a "Swiss Army knife" that solves all problems). JavaScript, like Perl and even C, gives you a lot of rope to hang yourself. And like Perl and C, JavaScript initially seems simple and easy, and it deceives novices into thinking they know what they are doing.

> and an influx of low skill developers with no diploma.

That's true of all languages though. Plenty of incompetent developers at all levels and all languages. I don't think it's a javascript issue.


> Plenty of incompetent developers at all levels and all languages. I don't think it's a javascript issue.

While that's potentially true, I do suspect that there's a lot fewer, say, Haskell, Clojure, or Elixir developers than there are for some other languages. Not that they don't exist, but it seems unlikely that you'd cross paths with them.


There are orders of magnitude fewer developers and jobs available for those languages. I'm only not an Elixir developer because there are almost zero Elixir jobs.


Just realized I meant to say "a lot fewer, say, incompetent..."


Well, both versions are true, the latter mostly following from the former. With a lower absolute number of developers, the number of incompetent programmers is going to be lower in absolute terms, despite the ratio staying the same as in other languages.

But I believe there's a difference in the ratio, too, due to the way Haskell, Erlang, Lisp, etc. programmers learn these languages. Basically, they learn the languages not because someone wants them to (e.g. Java, C#, etc.) and not because they have to just to be able to do something they want to do (e.g. JavaScript, SQL, etc.). Instead, people learn such languages because they themselves want to, which makes them more likely to delve deeper, learn more and acquire more important skills.

Well, that's a conjecture I can't prove and I may be completely wrong on this, it's just what my anecdotal experience suggests.


Totally agree!

I think it somewhat comes back to the "Python Paradox": http://www.paulgraham.com/pypar.html

Another part of it is that someone who's incompetent in, e.g. JavaScript, is likely to not make it very far trying to do Clojure. And, again some conjecture, I'd bet that someone who is great at Clojure would write very nice JS.

When I was learning Python back in 2006 or so, I remember someone stating "You can write Java in any language". This was referring to people who wrote Python code with huge class hierarchies that inherited from stuff all over the place, when a "Pythonic" solution would have just involved a couple of functions.


Would you rather they be low skill developers with a diploma?


Hey, I’ve got no diploma, just 30 years of commercial development. But even I know that all the unit tests in the world can’t paper over the flaws of a typeless scripting language.


Yikes, what is it about node/npm/javascript that makes it feel like a house of cards?


> Yikes, what is it about node/npm/javascript that makes it feel like a house of cards?

I think the (short) answer is "node, npm, and javascript".

The longer answer has something to do with the automatic installation of dependencies, and the common use of shell scripts downloaded directly off the internet and executed using the developer's or sysadmin's user account.

I used to use CPAN all the time. CPAN would check dependencies for you, but if you didn't have them already you'd get a warning and you'd have to install them yourself. It forced you to be aware of what you're installing, and it applied some pressure on CPAN authors to not go too crazy with dependencies (since they were just as annoyed by the installation process as everyone else.)

These days I use NuGet a lot. It does the dependency installation for you, but it asks for permission first. The dialogs could be better about letting you learn about the dependencies before saying they're ok. (In general, NuGet's dialogs could be a lot better about package details.)


> CPAN... forced you to be aware of what you're installing

I think CPAN is pretty sweet for variety/wide reach of packages available, but this is flat-out wrong.

CPAN is not a package manager; it is a file sprayer/script runner with a goal of dependency installation. That's perfectly sufficient for a lot of use cases, but to me "package manager" means "program that manages packages of software on my system", not the equivalent of "curl cpan.org/whatever | sh".

CPAN packages can (and do by very common convention) spray files all over the place on the target system. Then, those files are usually not tracked in a central place, so packages can't be uninstalled, packages that clobber other packages' files can't be detected, and "where did this file come from?"-type questions cannot be answered.

Whether CPAN or NPM "force you to be aware of what you're installing" seems like the least significant difference between the tools. When NPM tells you "I installed package 'foo'", it almost always means that the only changes it made to your system were in the "node_modules/foo" folder, global or not. When CPAN tells you "I installed package 'foo'", it means "I ran an install script, written by whoever named this thing 'foo', that might have done anything; hope that script gave you some verbose output and told you everything it was doing! Good luck removing/undoing its changes if you decide you don't want that package!"

There are ways around all of those issues with CPAN, and plenty of tools in Perl distribution utilities to address them, but they are far from universally taken advantage of. CPAN is extremely unlike, and often inferior to, NPM. Imagine if NPM packages did all of their installation logic inside a post-install hook; that's more like a CPAN distribution.


I had very limited contact with CPAN some years ago but I imagine it was slightly more sane in terms of granularity of dependencies.

Whereas a lot of npm modules are relatively small - some tiny - and have their own dependencies. So a simple "npm install blah" command can result in dozens of packages being installed. Dealing with that manually would, in fairness, be a giant chore.

Now of course there's a discussion to be had about whether thousands of weeny little modules is a good idea or not but, to be honest, that's a religious debate I'd rather steer clear of.


Whether it's a good idea or not, that's what JS' lack of stdlib produces.


> I used to use CPAN all the time...

CPAN has a setting that force-feeds you dependencies without asking, but I don't think it's on by default. Also, CPAN runs tests by default, which usually takes forever, so users get immediate feedback when packages go dependency-crazy. The modern Perl ecosystem is often stupidly dependency-heavy, but nothing like Node.


Also, in CPAN there was a culture of having comprehensive unit tests. If something broke, you would likely see it at installation.


I have recently taken over an Angular project (with a C# backend, thankfully) at my job. It took two hours to get it to even compile correctly because some dependencies were apparently outdated in package.json and it just ran on the other dev's machine by accident. I don't understand why I need over 100 dependencies for a simple Angular Single Page App that pulls JSON from the backend and pushes JSON back. Meanwhile, the C# backend (a huge, complicated behemoth of software) ran on the first click.


Three developers on my team spent the last 4 years pushing for angular. Four years ago, I was 50/50 on it vs react, so whatever, but if my team's really for it, let's do it.

Fast forward to angular 2, and we're down to two developers who are still for it.

Fast forward to today, I'm down to one angular dev who's still for it, and two of the original three have left for react jobs. Meanwhile, I'm left with a bunch of angular 1 code that needs to be upgraded to angular 2, and a few testing-out-angular-2 projects that are dependency hell.

The only reason I ultimately embraced Angular 1 to begin with (above reasons aside) was that it was so opinionated about everything that I could throw it at my weaker developers and say "just learn the Angular way to do it", and there was very little left they could meaningfully screw up. Angular proponents on the team would see it as a point of expertise to teach the "Angular way" to more junior devs, and everyone left the day feeling good.

When it comes to JavaScript, 95% of the difficulty of writing good, maintainable code is ensuring that your team is all writing to a very exact and consistent quality and style, since there are so many different ways you can write JS, and so many potential pitfalls. And if the team all wants to embrace Google's Angular standard, that works for me. It's far easier to be able to point to an ecosystem with an explicit, opinionated way of writing code than it is to continuously train people on how to write maintainable code otherwise.

But with Angular 2, if you haven't been drinking the Kool-Aid for a while now, it requires so much knowledge just to get running that I can't even have junior devs work on it without a senior dev who's also an Angular fanboy there to make sure everything is set up to begin with. It's absurd. And I'm supposed to sell to the business that we need to migrate all my Angular 1 code to this monstrosity? And then spend time again every 6 months making the necessary upgrades to stay up to date? Get real.


I don't understand. We've started a new Angular 2+ project and our junior developers managed to roll into it quite easily. Our designers (who know jack about JavaScript) got excited when they discovered that our project uses .scss, and the results have been spectacular.

Seriously, I REALLY REALLY don't get this hate for Angular 2+


Just wait until Angular 2 hasn't been cool for a while and you can't find any JS developers who are interested in maintaining your software rather than rewriting it in xyz_latest_fad_framework.


Curious, what would you rather do instead? Is there an opinionated React framework you could use?


>Curious, what would you rather do instead?

Moving away from SPAs seems like a dream at this point.


Mark, is that you?

Kidding - but we had exactly the same problem, except with a React app rather than an Angular one just before Christmas.

No joke with this statement though: every time we have a time-consuming build issue to deal with, it comes down to some npm dependency problem. Honestly, if there were a way we could realistically ditch npm (NO! YARN IS NOT ANY BETTER - to preempt that suggestion - it's simply an npm-a-like) I'd happily do so, but sadly there isn't.


The basic explanation is that the dependencies for the angular app are much smaller, but I’m not sure which bit is confusing you. You don’t understand why an incorrectly written program required work to run when a bigger but correctly written program was easy?


> incorrectly written program

In principle programs shouldn't stop working just because they are old.

Yes, no language completely realizes this. But there's a world of difference between C's "it was written only 40 years ago, why did compilers break it?" and Python's "yes, you are expected to review your code every 3 or 4 years", and another world of difference again to the faster JavaScript frameworks that practice "your code is 6 weeks too old, you loser!"


No JS framework does that because they version things. Run the same versions and it works.

If you’re not pinning versions correctly that’s hardly JS’s fault.


This is a cultural thing, where developers decide when to invest in developing their library against the old version and when against the new version. For stable languages like C, or distro-supported packages, it's years - just look at Debian or Red Hat for an ecosystem that values stability.


Sure, and at the scale & purpose of Debian or Red Hat that’s extremely important.

But that’s nothing to do with bad versioning practices and everything to do with product priorities.

Also, C versions have the same issue. Try to build a C11 project with a C89 compiler. Hell, I've had C89 code not work in clang...

Versions affect everything.


You are describing bad development practices.

Not sure why you’re stuck on the number of deps either - as long as they’re small who cares?


Does that backend use nuget for dependencies?


No. It needs nothing but the .NET Framework (not .NET Core) because .NET is already providing everything we need.


Thankfully we now have package-lock.json
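
For context on why the lockfile helps here: npm 5's package-lock.json records an "integrity" hash (an SRI string such as "sha512-<base64 digest>") for every resolved tarball, and npm verifies downloads against it, so a silently republished tarball with different contents should fail to install from an existing lockfile. Below is a minimal sketch of what that check amounts to (TypeScript; the package name, tarball path and the npm-5-era `dependencies` layout are assumptions for the example):

    import { createHash } from "crypto";
    import { readFileSync } from "fs";

    // Compare a tarball on disk against an SRI string like "sha512-<base64 digest>".
    function matchesIntegrity(tarballPath: string, integrity: string): boolean {
      const [algorithm, expected] = integrity.split("-", 2);
      const actual = createHash(algorithm).update(readFileSync(tarballPath)).digest("base64");
      return actual === expected;
    }

    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
    const entry = lock.dependencies?.["duplexer3"]; // npm 5-era lockfile layout
    if (entry && !matchesIntegrity("duplexer3-1.0.1.tgz", entry.integrity)) {
      throw new Error("tarball does not match the integrity hash recorded in the lockfile");
    }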


How about the idea that Node has been a hack from day one?


Node was a very interesting thing back when it started. It was a hack, but a nice kind of hack. You could write some efficient servers with it. But then a community formed around it, and with it the project went berserk.


A bit like PHP in that way?


Well, kind of. Node was not a general-purpose tool as conceived initially; you would write some I/O-bound servers in it. And PHP too is not a general-purpose tool; it is for writing interactive web pages (in the pre-Web-2.0 sense) easily. Though Node.js was far more deliberately designed. I don't know much about PHP, but there's lots of literature (see https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/).


I really wish that people would stop referencing that "Fractal of Bad Design" article. It's outdated and mostly irrelevant now (it's from April 2012, when PHP was at 5.4; it's at 7.2 now). It's not that I want to defend PHP; I just think people should judge PHP for what it is now instead of what it was several major versions ago.

Besides, the author seems to misunderstand a great many things about PHP and languages in general. Here's a short rebuttal (also from April 2012): https://blog.ircmaxell.com/2012/04/php-sucks-but-i-like-it.h... that explains some of the misunderstandings.


Hmm are you sure? I've read the fractal of bad design many times.

Some issues might be "fixed", but could they fix the actual *fractal of bad design*?

Isn't it still the inconsistent, left-associative, horribly broken mix of C-style and Java-style language it always was?

I always thought the bugs were anecdotal backing for the main point: PHP is a badly designed non-programming language for non-programmers, who suffer Stockholm syndrome from all the PHP abuse...


> Hmm are you sure? I've read the fractal of bad design many times.

Yeah, I'm sure. And so have I. Maybe you should stop reading it to reinforce your prejudice and instead take a look at PHP 7.2?

> non-programming language for non-programmers, who suffer Stockholm syndrome from all the PHP abuse...

Hating PHP is almost like a bad meme. Obviously it's doing something right, otherwise it probably wouldn't be as popular as it is. (The same can be said for JavaScript, I guess.)

Your personal feelings about the language are pretty much irrelevant. The Fractal of Bad Design article, however, is actually spreading misinformation yet people with an axe to grind keep referencing it because it fits their agenda, hence why I react whenever I see it referenced.

Here are just a couple of examples of where it's flat out wrong and/or completely outdated. There are plenty more.

He's left in things that were fixed long before he published the article — e.g. the new array syntax — but that doesn't stop him from saying stuff like "Despite that this is the language’s only data structure, there is no shortcut syntax for it; array(...) is shortcut syntax. (PHP 5.4 is bringing “literals”, [...].)" Keep in mind, 5.4 was already out when he wrote it...

Not to mention the whole section on "missing features", where he basically enumerates things that most certainly don't belong in a language's core but in separate libraries or as part of a framework, and — surprise! — those are all available in libraries, frameworks, extensions, etc.

"There is no threading support whatsoever." pthreads have been stable since 2013: http://pecl.php.net/package/pthreads


I wonder when people will stop quoting this 4+ year old article. Most of the things that are actually issues have long since been fixed. https://php.vrana.cz/php-a-fractal-of-not-so-bad-design.php


When it's invalid, maybe. The article you link says in the first three paragraphs:

---8<---

Whether you like PHP or not, go and read the article PHP: a fractal of bad design. It's well written by someone who really knows the language which is not true for most other articles about this topic. And there are numerous facts why PHP is badly designed on many levels. There is almost no FUD so it is also a great source for someone who wants to learn PHP really well (which is kind of sad).

I am surprised that I am able to live with PHP and even like it. Maybe I am badly designed too so that I am compatible with PHP. I was able to circumvent or mitigate most problems so the language doesn't bother me.

Anyway, there are several topics which are inaccurate or I don't agree with them. Here they are with no context so they probably wouldn't make much sense without reading the original article:


What hasn't?


Quite a few things. E.g. ssh definitely was not, Rust was not, TeX was not. But these were mostly second-thought projects of the "let's now finally do everything right" kind.


They certainly weren't the cores of other projects, ripped out and made into a standalone thing.


Good point



The npm repository is the largest package repository in the world. A lot of the major incidents they've had could have happened to other ecosystems (e.g. PyPI allows a user to delete packages that other packages depend on), but they either haven't happened or haven't had as large an impact. When npm breaks, everyone notices, because everyone either uses npm or knows someone who does.


Largely because JavaScript is so broken by default that you are almost required to depend on a whole slew of dependencies for functionality other languages contain in their built-in standard libraries. Furthermore, npm dependencies are broken down into stupidly small units, versioned rapidly, and very little consistency is enforced among transitive dependencies.

Other languages and package management systems don't encourage this kind of insanity.


I’m impressed. Not one thing you just said is accurate.


I'm impressed, because everything they said is accurate.


I know HN is basically an industry wide joke for JS discussion but no, none of that is accurate.

Most of it is plain wrong, though some of it is misapplied frustration to the wrong target.

But again, I guess I shouldn’t expect more from HN.


What exactly is wrong?


This in particular is a huge trust failure - working with mutable/replaceable libraries is like working with mutable/replaceable APIs.


Well, they aren't mutable/replaceable, at least not since the left-pad incident, when npm announced new rules to prevent package unpublishing. It seems this was an operational bug at npm inc.


I wonder how much damage needs to be done with JS/Node before the madness is seriously put to rest. It is absolutely necessary to break backward compatibility and rebuild JS from the start. With WebAssembly this is doable (no excuses!), and we already have a nice tag to declare which script language to use.

This is not possible, you ask?

In fact, JS/CSS is the most viable of all the stacks to move forward. Let's use the "advantage" that any JS library/ecosystem dies fast and put out enough hipster propaganda declaring the ultimate solution.

Is it too hard? JS is so bad that fixing it is easy. You only need more than the week it originally took to build it.


As a counterpoint, couldn't any sufficiently complex structure be called a hack and a house of cards, when you really dig down into how it's put together? Mm, maybe not any - as some complex systems are well-tested with solid architecture - but just some, or most..


"Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?" --George Carlin

I think the software version of this is: any system with more structure than your program is an over-engineered monstrosity, and any system with less structure than your program is a flakey hack.


A "house of cards" implies that you don't have to dig to topple it. If you have to really dig down into how it's put together in order to start pulling it apart it isn't really a house of cards.

I don't use npm or node for anything serious, and I don't really have any knowledge of how NPM works, but this isn't the first time I've read this story of a whole bunch of packages disappearing and everybody's builds breaking. If everything is a house of cards, then why don't I hear the same stories about PyPI or gems or crates?


I can't speak for PyPI, but I know RubyGems has had vulnerabilities in the past. A quick DuckDuckGo search will probably suffice to demonstrate that. I'm not saying NPM is a great system, but it does seem to me that most systems have flaws, and any system that is as heavily used as NPM is likely to have them surface faster than other systems.


Because when you’re on top, everyone loves seeing you fail.

A quick CVE search pulled up 18 vulnerabilities in RubyGems, including remote code execution.


It reminds me of this article, Everything is Broken. Perhaps a house of cards becomes solid architecture through the test of time..?

https://medium.com/message/everything-is-broken-81e5f33a24e1


The issues with the NPM Registry are not technical but management issues and decisions not properly thought through.


As of this writing, aren't we still waiting to see what the problem was inside NPM that caused this user's packages to disappear?

It might well have been technical. It might well have been managerial. It very likely involved elements of both. But don't you think it's best to save the Monday morning quarterbacking for Monday morning, when all the facts are in?


> If everything is a house of cards, then why don't i hear the same stories about PyPI or gems or crates?

npm is roughly twice as big as PyPI, RubyGems and crates.io together.


Disappearing packages due to a deleted user is not an issue of scale.


Almost all problems I have with JS projects have to do with npm. It got better with lockfiles, but it seems they're inventing new problems...


Left-bad, I mean, the left-pad fiasco should have been the wake up call.


How do you not feel embarrassed using such low quality insults..?


It was just a bad joke, but in all seriousness, that was a big wake up call for a lot of people about the tangled web of npm dependencies.


Disagree, it was mainly a wake up call that npm shouldn't allow package deletion, a policy they changed as a result.

Every other 'these kids and their dependencies' opinion over the left-pad incident was highly subjective.


Because it is... It's Molochian complexity heaped on top of layers of excrement and duct tape, and we have collectively entered a state of mass Stockholm syndrome about the situation.

I really would love to ditch web dev and all its myriad tendrils, and go back to native desktop software.


Somehow I imagine a native C desktop dev and a web developer meeting in no man's land, each party escaping from its own nightmare with that line on their lips, starting with "Don't run in this direction-"


At my job we do native C and C++, some Java, some C#, scripting in Shell, Python, and Perl. When the left-pad incident happened someone said something to the room about it, we all looked it up, and spent a good 15 minutes mind-boggled, laughing and being grateful we weren't web devs. "Wait, you're telling me these people need NPM and GitHub to deploy? Seriously?"


>these people need [their package manager] and [their source code management tool] to deploy? Seriously?

Not really sure I understand what you're implying there


I'm not the poster you're replying to, but I think I understand it.

npm is not just their package management tool... the way most people use it, it depends on someone else's package registry/repository to deploy to your own servers.

And github is someone else's source code management tool/server.

As a matter of policy, if I can't have something on my own server (or one my org controls) I don't get to rely on it to deploy/run my application.

So I think I get the parent's comment... it's a really foreign situation, to me, to depend on the availability of stuff like this on servers I (or my org) don't control in order to deploy my application.

I'm sure the people who depend on these things look at me and say "Wait. You have to set up your own package repository and source control before you can deploy instead of using all this nice stuff that's available in the cloud? Seriously?"


Yeah. I've been on both sides of this coin. If I'm deploying cloud software (which I am, these days), then I have no problem relying on cloud software to make that deployment smoother. But if I ever go back to writing native applications, I sure as hell won't be reliant on the internet in order to manage intranet deployments. These are two different paradigms, and what works well in one doesn't make any sense in the other.


A public package manager and a public source code management tool, both of which are outside of your control. You should be able to deploy from a local [verified and audited] cache of your dependencies.


That's a good goal to strive for, but it isn't necessary or practical for everyone. Maintaining local/hosted artifact caches, verifying them, and auditing them is a big hassle, and unless you make something (e.g. fintech, healthtech) that might need such an audit or emergency release, it might not be worth the trouble.

Itty bitty company making a social website on a shoestring budget/runway with very few developers? Might just be worth postponing a release a day or two if NPM or GitHub are having issues.


virtualenv makes it trivial. It's not like it's strictly enterprise-grade tech.


How does virtualenv make maintaining, auditing, and using a local mirror of dependencies trivial? Seems to me I can download a poisoned package into a venv cache just as easily as I can download it with wget, and unless I take the time to check, I'm none the wiser either way.


I was referring specifically to not being able to deploy due to a package manager being down. Of course there are still issues that can crop up with using virtualenv.


I haven’t dug too much, but I believe at my work, we run a server that hosts all our jars, and is the source of truth for all our builds. Nothing that’s been checked in goes straight to the Internet (you can add new dependencies to uncommitted code). And we’re only ~30 devs.
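
For anyone curious what that usually looks like on the Maven side, it tends to be a mirror entry in settings.xml pointing at an internal Nexus/Artifactory-style server (the URL below is a placeholder):

    <settings>
      <mirrors>
        <mirror>
          <id>internal-repo</id>
          <mirrorOf>*</mirrorOf>
          <url>https://repo.internal.example/maven/</url>
        </mirror>
      </mirrors>
    </settings>

With mirrorOf set to *, every dependency request goes through the internal server, which proxies and caches Maven Central, so builds keep working even when Central is unreachable.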


"Wait, you're telling me these people rely on their ISP and the telco infrastructure being operational to deploy? Seriously?"

"Wait, you're telling me these people need an internet-available Ubuntu mirror to install their development environments?"

"Wait, you're telling me these people need their users to have specific, updated browsers in order to run the deployed software?"

"Wait, you're telling me these people need their users to have a patched, up-to-date operating system in order to run your desktop app?"

"Wait, you're telling me these people just assume users won't switch off their computers before saving changes?"

"Wait, you're telling me these people depend on the power grid being available to deploy?"

"Wait, you're telling me these people assume their users have fast, low-latency internet connections to play their real-time multiplayer game?"

You get the idea.


And? Most of these are things you absolutely should be thinking about.


"You should be thinking about" and "You need this? Seriously?" are very different statements. Of course you should be aware of dependencies.


And you should also be aware of what it takes to rebuild your stack, and have something in place if that disappears. If you think it's OK to rely on external tools like that to build your system, you deserve all the fallout you get when it fails.


Eh, I like desktop development and I've been making desktop apps for 20+ years. Before I got Windows 95 I was even trying to make my own DE for DOS in Turbo Pascal, and before that in GW-BASIC :-P. I love the desktop.

Web stuff, on the other hand, can die a fiery death; as far as I am concerned, together with mobile stuff it is the source of everything wrong with the desktop today :-P.


Btw. for those who don't know:

Yarn (which is an alternative to npm) uses a global cache [1] on your machine which speeds things up, but probably also protects you from immediate problems in cases like the one currently in progress (because you would probably have a local copy of e.g. require-from-string available).

[1] https://yarnpkg.com/lang/en/docs/cli/cache/
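
As a rough sketch of how to lean on that cache deliberately (yarn 1.x; the mirror directory name is just a convention):

    # see where the global cache lives
    yarn cache dir

    # optional: also keep tarballs next to the repo so they can be checked in
    yarn config set yarn-offline-mirror ./npm-packages-offline-cache

    # install without touching the network, failing if anything is missing
    yarn install --offline

--offline only works for packages that are already in the cache/mirror, so it helps with outages like this one, not with brand-new dependencies.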


Already counting down the days before yarn is considered old and broken and people are recommending switching to the next hot package manager/bundler...


yarn is one of those things coming out of the JS world that is actually really well made. yarn, typescript, react; say what you want about js fatigue, these are rock-solid, well-tested projects that are really good at what they do.

A major reason for the high tool churn in that ecosystem is how many of those tools are not designed from the ground up, don't quite solve the things they ought to, or solve them in really weird ways (partly due to the low barrier to entry). But that doesn't mean all of it deserves that label.


> yarn, typescript, react; say what you want about js fatigue, these are rock-solid, well-tested projects that are really good at what they do.

I wish webpack was on that list.


I mean, there's plenty more that could be on that list, it's not exhaustive.

Webpack though I'm really not sure should be. It's certainly improving, but it's nowhere near the same league as the other ones.

Edit: Ah, I see what you meant :)


That's exactly my point. I wished it deserved to be on the list.


Can't say anything about react, but yes: yarn and typescript are good.

This is coming from a long time Java programmer who still likes Java and Maven but now might have a new favourite language.

This is made even more impressive by the fact that it is built on the mess that is js. (Again: I'm impressed that it was made in three weeks; I just wish a better language had been standardized.)


It baffles me that technologists commonly complain about new technology. As far as I can tell your complaint boils down to “people should stop making and switching to new things”. I find it hard to understand why someone with this attitude would be a technologist of any kind, and I find the attitude really obnoxious.


I take it that you've never had to work at a big organization? When you have multiple teams in different offices, it's incredibly difficult to constantly "herd cats" and point everyone to $latest_fad. And when you DO by some miracle get everyone (devs and management) to switch to $latest_fad, it's a huge pain to go back through and bug test/change every process to accommodate the new software.

I don't think "people should stop making and switching to new things" is a fair distillation of the parent comment, as it seemed like they were just expressing frustration at the blistering pace the Javascript community is setting.


Isn't this a case for microservices, etc.?

Independent teams providing business capabilities through APIs would mostly eliminate the need to keep consistent technologies as long as the interface design follows shared guidelines.


Most companies of any size are allergic to "pick your own toolchain" development strategies. The infrastructure team has to support them. Someone has to be responsible for hiring. Security needs to be able to review the environment. Employees should be able to be moved between teams. And so forth.

Sure, I suppose devops can mitigate the infrastructure support problem, but overall most companies strongly prefer standardization.


No. My complaint is that things never get fixed properly. The complex problems around software distribution (which proper package managers have made a good stab at solving for decades) are ignored in favour of steamrollering over them with naive solutions and declaring that everything "just works", only for the wheels to come off a few years later when they run into a dead end which many of us saw coming from miles off.

This is particularly true for package/dependency management, but the attitude is found more broadly.

For what it's worth, the javascript world isn't alone here. Python, with its new Pipfile/pipenv system is on its, what, fifth, sixth? stab at solving package management "once and for all" and it's all truly dire and not something I depend on when I have the choice.

Nix solves pretty much all of these problems and a few more, but I expect it to be a decade or so before people realize it.

I'm not complaining about new things. These aren't new things. They're about a decade behind the curve.


Because each thing has a constant price in learning effort, namely familiarizing yourself with its idiosyncrasies, which you have to pay even if you're experienced in the domain. When tools constantly get replaced instead of improved, you keep paying that price all the time.


> Because each thing has a constant price in learning effort

That's not, in my experience, how it works. Learning your first tool (or language) takes a lot of time. Learning your second is quicker. By the tenth, you're able to learn it by skimming the README and changelog.

It works like this for languages too, at least for me. My first "real" language (aside from QBasic) was C++ and it took me 3-4 years to learn it to an acceptable degree. Last week I learned Groovy in about 4 hours.

It still "adds up", but to a much lower value than you'd think.


But it does, you're just focusing on the other component of learning.

Put another way, for a new tool, learning cost is a sum of a) cost of learning idiosyncrasies of that tool, and b) cost of getting familiar with the concepts used by it.

You're talking about b), which is indeed a shared expense. But a), by definition, isn't. And it's always nonzero. And since new tools are usually made to differ from previous ones on purpose ("being opinionated", it's called), even though they fix some minor things, this cost can be meaningful. And, it adds up with every switch you need to do.

Some of it is a normal part of life of a software developer, but JS ecosystem has taken it to ridiculous extremes.


My argument is that the a) part's cost is indeed non-zero, but - contrary to what you say - trivial in a vast majority of cases. It's just my personal experience, but it happened every single time I tried to learn something: learning "what" and "why" took (potentially a lot of) time, but learning "how" was a non-issue, especially if a "quick reference" or a "cheat sheet" was available. I also disagree that the a) part is never shared between tools: there are only so many possible ways of doing things, but a seemingly infinite supply of tools for doing them. The idiosyncrasies are bound to get repeated between tools and, in my experience, it happens pretty often.

As an example, imagine you're learning Underscore.js for the first time. It's a mind-blowing experience, which takes a lot of time because you have to learn a bunch of crazy concepts, like currying, partial application, binding, and others. You also have to learn Underscore-specific idiosyncrasies, like the order of arguments passed to the callback functions and the like - mostly because you are not yet aware which things are important to know and which are just idiosyncrasies.

Now, imagine you know Underscore already and have to learn Lo-dash or Ramda.js. As the concepts remain very similar, you only need to learn a few conventions, which are different in Ramda. But! Even then, you don't have to really learn all of them to use the library effectively. It's enough to keep the diff of the Underscore and Ramda conventions in mind: learning that, for example, the order of arguments passed to callbacks differs is enough; you can then check the correct order in the docs whenever you need. You know where to find that piece of information, you know when it matters and, by extension, when it's not a concern. There is no need to waste time trying to learn trivia: not doing something is always going to be the fastest way of doing it. By your second library, you start to recognize trivia and are able to separate it from information that matters. Learning prelude.ls afterward is going to take literally 30 minutes of skimming the docs.
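
To make the Underscore/Ramda example concrete, this is roughly the kind of "diff" being described - same concept, different convention:

    // Underscore / Lo-dash: data first, callback second
    _.map([1, 2, 3], function (x) { return x * 2; });      // [2, 4, 6]

    // Ramda: callback first, data last (and auto-curried)
    R.map(function (x) { return x * 2; }, [1, 2, 3]);       // [2, 4, 6]
    var double = R.map(function (x) { return x * 2; });     // partially applied
    double([1, 2, 3]);                                      // [2, 4, 6]

Once you know to expect a difference like this, checking which convention a given library uses takes seconds.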

This is just an example, but it worked like that for me in many cases. When I switched from SVN to Bazaar, for example, it took quite a bit of time to grok the whole "distributed" thing. When I later switched from Bazaar to Git it took me literally an hour to get up to speed with it, followed by a couple more hours - spaced throughout a week or two - of reading about the more advanced features. Picking up Mercurial after that was more or less automatic.

I guess all of this hinges upon the notion of the level of familiarity. While I was able to use bzr, git and hg, it only took so little time because I consciously chose to ignore their intricacies, which I knew I wouldn't need (or wouldn't need straight away). On the other hand, you can spend months learning a tool if your goal is total mastery and contributing to its code. But the latter is very rarely something you'd be required to do; most of the time a level of basic proficiency is more than enough. In my experience, the cost of reaching such a level of proficiency becomes smaller as you learn more tools of a particular kind.

That's the reason I disagree with your remark that that cost is "constant". It's not; it's entirely dependent on the person and the knowledge they've accumulated so far. Learning Flask may take you a week if you're new to web development in Python, but you could learn it in a single evening if you worked with Bottle already. On a higher level, learning Elixir may take you months, but you could also accomplish it in a week, provided that you already knew Erlang and Scheme well.

So that's it - the cost of learning new tools may be both prohibitive and trivial at the same time, depending on the prior knowledge of the learner. The good thing about the "prior knowledge and experience" is that it keeps growing over time. The amount of knowledge you'll have accumulated in 20 years is going to be vast to an extent that's hard to imagine today. At that point, the probability of any tool being genuinely new to you will hit rock bottom and the average cost of switching to another tool should also become negligible.

To summarize: I believe that learning new tools gets easier and easier with time and experience and - while never really reaching 0 - at some point, the cost becomes so low that it doesn't matter anymore (unless you have to switch really often, of course).


> Last week I learned Groovy in about 4 hours

How well did you "learn" Apache Groovy? Just enough to change a small Gradle build file?

And did you already know any Java beforehand? If so, then there's a lot less Groovy that needs learning.

Did you write enough Groovy code to stumble across some of its many gotchas, or did you skim some docs and just learn what Groovy should be?


> How well did you "learn" Apache Groovy?

I'm not sure. I did it because of Jenkins Pipeline DSL; I learned enough to write ~400 loc of a build script from scratch. I was able to de-sugar the DSL and wrap raw APIs with a DSL of my own design (I'd say that I "wrote a couple of helper functions", but the former sounds way cooler...). I did stumble upon some gotchas - the difference between `def` and simple assignment when the target changes, for example.

EDIT: I wonder, is that level of proficiency enough for you to at least drop the scare quotes around "learn"? I feel that putting the quotes there is rather impolite.

> did you skim some docs and just learn what Groovy should be?

As I elaborate on in the comment below, there are different levels of proficiency and I never claimed mastery - just a basic proficiency allowing me to read all of the language constructs and write, as mentioned, a simple script from scratch, with the help of the docs.

> And did you already know any Java beforehand?

Well, a bit, although I didn't work with it professionally in the last decade. However, knowing Java wouldn't be enough to make learning Groovy that fast - I have another trump card up my sleeve when it comes to learning programming languages. You might be interested in a section of my blog here: https://klibert.pl/articles/programming_langs.html if you want to know what it is. To summarize: I simply did it more than 100 times already.


> the scare quotes around "learn"? I feel that putting the quotes there is rather impolite

When I say I've learned (or learnt) a programming language, I mean more than a 4-hour jump start to basic proficiency level. Perhaps I was letting off some steam over the wild claims many programmers make regarding their PL expertise.

Did you know that Jenkins Pipeline cripples Groovy so that not all of its features are available, specifically the Collections-based methods that form the basis of many DSLs?


> Did you know that Jenkins Pipeline cripples Groovy

Yes. I've run into some limitations; first because of a Pipeline DSL, and when I ditched it in favor of normal scripting I ran into further problems, like Jenkins disallowing the use of isinstance (due to a global configuration of permissions, apparently - I don't have administrative rights there) and many other parts of the language. It was kind of a pain, actually, because I developed my script locally - mostly inside groovysh - where it all worked beautifully and it mysteriously stopped working once uploaded. A frustrating experience, to say the least.

> over the wild claims many programmers make regarding their PL expertise.

I believe I'm a bit of a special case[1] here, wouldn't you agree? Many of the languages on that list I only learned about; many of them, however, I actually learned, having written several thousand lines of code (on the low end) in them. It's got to be at least 30, I think? I'd need to count.

Anyway, I argue that such an accumulation causes a qualitative difference in how you learn new languages, allowing for rapid acquisition of further ones. It's like in role-playing games: if you buff your stats high enough, you start getting all kinds of bonuses not available otherwise :)

[1] If I'm not and you know of someone with the same hobby, please let me know! I'd be thrilled to talk to such a person!


Yes, I agree. I changed my outlook on programming after I spent about 2 years playing with Clojure as a hobby, then 1 year on Haskell.


It’s a drop-in replacement CLI tool. Let’s not be dramatic.


The problem isn't with that one tool alone. The problem is with the entire ecosystem, in which all the tools get regularly replaced by "better" ones. It all adds up.


To be precise, new tools are continuously created to address the weaknesses of other tools. This happens in other languages, just more slowly due to smaller community sizes.


"new tools are continuously created to address the weaknesses of other tools, instead of fixing those weaknesses in those other tools" - FTFY.

> This happens in other languages, just more slowly due to smaller community sizes.

Yeah, my point is that there is a cost to learning a new tool; the faster new tools replace the old ones (instead of someone fixing the old ones), the more often you have to pay that cost.


What ideally should be happening is that existing tools get incrementally upgraded to fix issues and add improvements rather than scrapped and replaced as if they're disposable.


To be completely fair, it isn't exactly drop-in. There are new commands for a bunch of things, mainly around adding new packages locally and globally. I led the yarn switch effort on my direct team and had people coming to me weeks later asking how to do X because of the different commands.


I suspected that someone would mention this, but the fact of the matter is both systems are mostly interoperable. The switch from npm to yarn would be nothing like migrating from Gulp + Browserify to Webpack.

To switch to yarn, I printed out a one-page cheat sheet and taped it to my wall. I’ve had one blunder in the time I’ve used it (misunderstanding what `yarn upgrade` did x_x), but it was easily reverted.


I think you're making TeMPOraL's point, though.

Even in this relatively close case, it's not a zero-overhead transition. There are some changes. There are some new behaviours. You still need to know which things really work exactly the same and where the differences come from even if those differences are only minor. You always need due diligence about whether a new tool is reliable, future-proof, trustworthy, etc. And that's all after finding out about the new tool and deciding this one is actually worth looking into.

Multiply all of that by the absurd degree of over-dependence and over-engineering in the JS ecosystem, and it's entirely fair to question whether the constant nagging overheads are worthwhile.


Right, and that non-zero overhead is part of being a software developer.

It’s also laughable that npm is accused of being a hack and yarn is accused of being over-engineered.


It _baffles_ me that _technologists_ (whatever that means) dismiss others' writing without actually reading it. It's not us, the detractors, complaining about using new technology because it's "new". For one, it's not new; it's the n-th undeveloped iteration of a technology 20 years old. We're not complaining about you using technology; we're complaining about you ignoring the advances that could buy alcohol in the US by now.


Technologists value "good", not "new." Sifting through all the "new" to find "better" is fun, so long as expectations are properly tempered.


The JS ecosystem is pretty well known for changing very fast compared to other mainstream languages. This is a fair point; NPM could implement a local cache without (hopefully) breaking anything.


From my understanding they’ve always had one, but until npm@5 it wasn’t safe for concurrent access (side note: Maven still isn’t) and was prone to corruption. I think they’re making their way toward true offline caching à la yarn, if they haven’t done so already.
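
For reference, the relevant bits on recent npm versions (flag names as of npm 5.x; behaviour has been shifting between releases):

    # check the integrity of what's already in the local cache
    npm cache verify

    # prefer cached packages and only hit the registry when something is missing
    npm install --prefer-offline

It's still not the checked-in offline mirror that yarn offers, but it softens registry outages considerably.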



We are talking about tools here. Standards are a different beast.

For example, it is cool to have multiple tools doing the same thing because you have the choice to use what fits your need (e.g. different web servers).

On the other hand, having multiple competing standards for the same job is just technological cancer and mostly the result of some commercial competition (or the attempt to fix a standard by replacing it).


I’ve seen this.


Yarn was the only thing that made npm get off their collective asses and do something about their dog-slow issue-ridden CLI and services.


And yet yarn's changes directly led to npm making significant improvements of their own...

Do you also insist that Chrome and Firefox shouldn't exist because IE does the job adequately?


Already counting down the days before yarn is considered the new de facto standard package manager...


It's useless in cases like this though, where the package is already invalidated in the yarn cache, which is the case right now for many packages.


You should be using the --frozen-lockfile flag in any production build system.
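
i.e. something like this in CI (yarn 1.x):

    # fail the build if yarn.lock would need to change, instead of silently updating it
    yarn install --frozen-lockfile

That way a build never pulls in a newly-published (or newly-republished) version that isn't already pinned in the lockfile.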


I find it so silly that this isn’t the default behaviour.


That’s it. I’m using yarn.
