Hacker News
The Node.js Community is Quietly Changing the Face of Open Source (caines.ca)
134 points by apunic on Apr 16, 2013 | 101 comments

Obviously if Node doesn't come 'batteries included' then you have to re-invent-from-scratch all the tools like package management, dependency management, MVC libraries, etc.

That doesn't mean the community is amazing and wants to contribute! ...it means that those features didn't exist, and someone had to build them (and yes, it's really great that they're all coming out open source, but it's not revolutionary).

Any language in that situation will have a strong initial growth as people port their favorite toys across.

I'd be much more interested to see the rate of change over-time, year by year.

Is it still growing? Is it slowing down now there are some mature frameworks?

That's interesting stuff. "Changing the face of open source?" --> self-important gasbagging.

I honestly think it might be (changing the face of open source). I am not part of the node.js community, but I do look at the way NPM works with a bit of jealousy.

Born at the right time, defaulting to the "right" (according to me) license, with the right mix of tools that make it easy to publish and share -- and bluntly I hope other communities (like Erlang, one close to my heart) follow suit.

NPM is amazing. I used to use Python, which had the right idea with its modules but got so much of the implementation horribly wrong, and it has multiple package managers. Node, however, got modules right. Files and modules are in completely separate namespaces, and npm is a single package manager that "just works". The fact node isn't "batteries included" doesn't matter, because requiring modules is easy and dependencies "just work". You stop worrying about setting up dependencies, so you stop worrying when you want to add another dependency. It's great.

Can anybody tell me what NPM does that e.g. Maven / CPAN / Gems et al. don't?

I haven't seen anything in NPM so far that goes beyond Maven's capabilities so I'm really curious what NPM's big advantage is.

Let's say your application depends on two libraries A and B. And let's say that these 2 libraries depend on different versions of library C. In Maven if library C is not backward compatible, you're stuck. Only one version of library C can exist in the classpath and so one of your 2 libraries might fail. However, in node, library A and B use their corresponding versions of library C. There are no conflicts.

This is amazing. I'm actually surprised NPM is one of the few that gets this right. I guess we are still in the early days of software engineering.

No conflicts, but a tradeoff of loading two versions of library C. I assume at some point, the cost of multiple versions of everything (in terms of memory usage, for example) becomes an issue?

That's the tradeoff, with the crux being that adding more memory for a bit of JIT-compiled bytecode is preferable to having an app that just won't work.

There is something to be said for requiring developers to make things work right rather than work around their screwups, however.

npm publish lets you create something new and share it trivially and in a useful manner.

This might end up being a negative in the long run, but for now it seems like a huge positive.

If it's trivial to create packages, I suspect we'll end up with a lot of trivial packages.

Which is perfectly fine. I mostly want to be able to trivially use packages which solve my trivial problems, so I can move on to something more fun.

The issue eventually becomes one of search; when I'm looking for a package, how do I know which are maintained, which never picked up broad adoption, etc.?

Between the npm registry[1] and node-toolbox[2], there's already decent visibility into it. I often find the output of 'npm search' daunting to the point of mental paralysis, though.

[1]: eg, if I need a redis client, I can see the redis project is active and broadly used: https://npmjs.org/package/redis

[2]: similarly, http://nodetoolbox.com/categories/Redis

Now, take that observation and go back and read the original post.

I'd observe that the new languages all seem to have been accelerating their rate of package production over time. This is probably because over time the set of easily-copyable packages goes up. It's much easier to release a simple Rails clone after Rails has made the idea popular. It seems like every year there's another Brilliant HTML Templating Idea, and a new language can rapidly develop clones of the entire history of HTML templating ideas, whereas one that was new ten years ago had a much thinner pool of copyable ideas.

This isn't a criticism of Node (which I am certainly prone to, but this is not one of those times); this is an observation that each new language seems to build up packages faster than the last one did, and this probably isn't actually an attribute of the language.

Do you know of a wicket-like approach to "templating" in python (or indeed anything other than java)? I'm currently in the process of building my own clone in python, and I feel really bad about doing so, but it's so much better than all the approaches to html generation in python that I could find that I'd rather do this than struggle with vanilla django templates, or even genshi.

I like the fact that Node comes with a small core. I'm tired of vast, bloated, incomprehensible libraries that are going to take me more time to learn how to use than they'll be worth.

But because it comes with a small core, the time you saved will be spent searching and learning userland modules, written with wildly varying degrees of stability and coding styles. Unless, of course, you just use your own, then your time has been spent there. Much more of it.

I'm not trying to put node down, I'm a fan (switched to Go for most of my needs about a year ago, though), but this argument (small core so less to understand) just isn't valid if you're going to do anything more involved than learning the language and writing hello worlds.

I'm sure eventually node will have its very own Tornado/Twisted/Boost/etc that you can then apply to your argument.


Obviously if linux doesn't come "batteries included" then you have to re-invent-from-scratch, or borrow, all the tools like compilers, web servers, package management, archive utilities, etc.

That doesn't mean the community is amazing and wants to contribute! ...it means that those features didn't exist, and someone had to build them (and yes, it's really great that they're all coming out open source, but it's not revolutionary).

Any OS in that situation will have a strong initial growth as people port their favorite toys across.

I'd be much more interested to see the rate of change over-time, year by year.


If you have a point, make it clearly instead of hiding it.

It's pretty obvious. Your criticisms would have been exactly as justified with regard to linux. That doesn't mean node.js is guaranteed to be as wildly successful as linux, but it does mean that merely dismissing it because so much of the work on it is "just porting things" is misplaced.

Oh the sweet irony. This attitude is exactly what I'm talking about.

You're right, node is great. :)

So you are saying it is no different than other open source projects?

node.js is like any other open source language in this case.

It's not so much like Linux (implied thesis: everyone will be running things on top of node.js like they are run on top of Linux)

> you have to re-invent-from-scratch all the tools

If they are in a standard library, they had to be invented at some point too! The difference is that there isn't a standard solution (hence the name), and anyone can offer his/her own solution to the problem.

Package and dependency management (with npm) are two things that do come included with the node distribution these days. I would also argue that npm is one of the primary reasons why the node ecosystem is thriving in the way that it is.

It's hardly surprising that there's a huge burst of growth when a language and its package repository first takes off. I'm sure if we looked at the 4 year periods where each of Perl, Python, and Ruby were hitting their stride we'd see similar numbers.

It's nice that node.js is succeeding, but it's hardly changing the face of open source. It's treading a very well-trodden path blazed by Perl starting about 28 years ago.

Looking at the numbers posted in the article, there are almost as many npm packages published in four years of Node than there are Python packages published in 22. I think just saying "hey, they're new, it'll level off" is a great disservice to the Node guys who made this happen.

Truth is, npm is growing (in terms of number of packages) faster than any comparable community ever has.

True, but this may also be a "technology generation" issue. Compared with the '90s, there are far more people interested in, and with hands-on experience of, software development than ever before.

And this is all tied to the internet boom. It happens that the technology of the moment is JavaScript, and consequently node.js, so it's natural that a great part of this bigger labor force ends up doing node.js work.

To see these numbers clearly we need the net effect: the size of the whole development community, for comparison.

It's not fair to compare the 2010s with the '90s. What was the audience for Perl back then? What's the audience for node.js and all the other players now?

There are other factors at play here. Node has access to a bigger audience, but it's an audience that already has mature toys like Python/Perl/etc. Node not only has to create a great ecosystem, it has to create a better one than what already exists (which is the central premise of the article).

Simply saying "Welp, there's more people on the internet now" is ignoring the excellent competition for hackers that Node (or any other new tech with grandiose ambitions) has.

There need to be more packages, because you are starting at a lower level than a "batteries-included" language.

> It seems to me that monolithic frameworks like Rails don’t have a clear notion of what does not belong in Rails, and so they tend to expand indefinitely to include everything an application could need.

Rails 4 will see _eight_ different things that used to be in core extracted to a gem. The diff from 3.2.13 to 4.0.0.beta1 was +110,000/-100,000 lines, so it's grown about 10% in the next major version.

Rails is obviously 'full stack' but it's not like it's a black hole that sucks in all Ruby code.

I was about to comment about this as well. I don't particularly like Rails. One of my most linked posts on my blog is a rant about what I don't like about Rails. But that was written when Rails 2 was current, and while I don't particularly like Rails still, I'm very happy to see that Rails has steadily moved towards splitting more and more stuff into separate gems that are increasingly useful in non-Rails scenarios.

So while Rails certainly is "batteries included" and "full stack" in many ways, I don't see how it contradicts the "tiny module aesthetic" that this guy mentions: A typical Rails deployment is big, sure, but it's big by pulling in a ton of, in many cases quite small, modules, more and more of which can be replaced.

I still have issues with the level of coupling in a typical Rails deployment (e.g. a number of gems still pretty much expect a full "default" Rails stack, so replacing or avoiding specific components is often tricky), but the landscape has changed drastically.

To speculate a little about the huge rate of package growth (none of this should be taken as critical of the article, which makes no claims about this):

1. Reuse - a lot of npm packages also work in the browser. There are good reasons to make your browser code run in Node too; it makes command-line-driven testing easier, and it takes little additional work to add support for a whole platform. And it should be obvious why there is so much JavaScript in the world; in fact, one of Node's biggest advantages is the JS monoculture.

2. Take-off time. Node itself has gotten really popular really fast (there are good reasons for this too), which means its ecosystem needs to do some catching up. It has half the packages, but Node developers still need to get the same things accomplished. I've contributed more to the Node ecosystem than the Ruby one even though I write more Ruby in general, mostly because more needed to be done. You look around for a good existing solution, and if it's not there, you roll your sleeves up and you write it.

> mostly because more needed to be done

I agree, and think your comment is insightful, but let me add a little nuance to it.

I've been doing a lot of work with Node lately. In my experience:

1. I generally don't have trouble finding a module (library) that addresses my particular need. (I.e., when I go look for a library that does X, I generally find it.) It does happen, but a lot less often than I expected.

2. I generally don't have the opposite problem either. While in some ecosystems one finds so many libraries to address a particular need that it is hard to choose among them, in my experience that isn't (yet) a big problem with Node. It does happen, but less than it does in, say, Ruby (or for that matter emacs-lisp) and it is generally pretty easy to identify the "market-leader" as it were.

3. The problem that I do run into in Node (with greater frequency than most ecosystems I've worked with) is that I'll stumble across minor bugs in the modules I'm using. These tend to be moderately easy-to-fix issues that crop up in moderately-rare edge-cases, but they are definitely there. (I suppose this speaks to the relative maturity of the ecosystem.)

Counter-intuitively, I think this may actually be a good thing for the Node ecosystem. I'm reminded of the advice to open source developers to essentially "leave something broken" (or incomplete) so that potential contributors can make small-scale contributions [1]. A library that is too complete and well-designed is hard to contribute to (without investing a lot of time in understanding the architecture and direction of the project overall), but if you've got a library that does 95 percent of what you want, but fails on that one small edge case, more contributors are both willing and able to roll up their sleeves to scratch their particular itch.

[1] I can't remember or google-conjure the source of this advice right now. For some reason I think it might have been Marc Andreessen, but it sure sounds like something ESR would say.

I certainly agree about the maturity/lots of small patches to existing packages part. I would guess that LOC growth in Node is (proportional to, say, Ruby) even higher than the package growth for exactly this reason: not only are there a lot of new packages, but a lot of churn in existing ones. I'll add that the lots-of-small-packages thing makes it easier for us to parachute in, grok the code base, fix the issue, and submit a PR. That's part of what makes Node fun, I think.

It's also the case that Node modules are sometimes inspired by those in other ecosystems, like Rails.

It's just like technology transfer into a booming (developing) country. Happens rapidly until they've caught up to the frontier.

Ah, yeah, that's a good addition. A lot of not needing to reinvent the wheel conceptually.

I have never known Node.JS developers to be quiet.

Quite the opposite. With a single thread, gaping security holes if you want large memory support, and until about 3 weeks ago nothing that even came close to resembling compliance with HTTP standards, the only thing Node is changing in the open source community is that a crowd that would have been called script kiddies 5 years ago can now put that on a resume and for some reason think that should earn them 6 figures.

Changing the Open Source community (for the better) requires doing more than hacking together a few lines of code that dupes the functionality of someone else's lines of code. Node.JS doesn't have a single innovative project that is moving all of computer science forward. If node wants to be taken seriously it needs to prove that it can play big data, or science, or linguistics with the rest of the communities.

Every language has a niche where it dominates in resources for a given field. Node has none.

And yet it succeeds. I think it's simply that async IO is in javascript developers' blood - sure, everything you can do in node you've been able to do for five years with twisted, but if you start trying to do something in twisted all that wonderful python ecosystem is suddenly useless to you, and no-one wants to talk about creating replacements because they've all moved on to tornado or gevent or incompatible-framework-du-jour.

Because javascript started in the single-threaded execution environment of the browser, the whole ecosystem has had to be nonblocking; browser APIs were callback-oriented so the whole ecosystem has been written in callback-oriented fashion, and all the libraries play well with each other. Single-threading is a hair shirt that results in better code in the long run, like laziness in haskell. And async I/O is so much more performant than blocking (in modern application stacks) that that's making node dominate there.

I do wonder whether other languages could have ended up the same way if threading hadn't been invented. Perl always had these unreliable bodged threads, and so there were some really interesting event-driven libraries for it - but the threads were good enough for many practical uses, and AFAIK the ecosystem never converged on a single approach. Did I hear of a PEP attempting to standardise a compatible API for doing these things in python?

In web languages threading is often not the win that people think it will be. True web scale is serving thousands of requests per minute, if not per second. Each of those users is a "thread" in Python, Java, etc. So enabling multiple threads per user robs Peter to pay Paul. For this reason the async model doesn't offer the huge performance gains in code deployed at enterprise scale that it offers to single users trying to build fast one-off projects.

You seem confused. The whole point of the async model is to allow one thread to serve many users, and it's precisely on large scale projects that this becomes useful.

Perhaps you could enlighten us about these security issues or how any language "plays big data and science". I'm genuinely curious about what you mean by security issues, even though the rest of your comment is bizarrely bitter.

Security: In order to exceed the heap limit, which is quite small in node, you have to override the soft limit. Doing so enables several overrun attacks. Not doing so severely limits what you can do, as large-scale apps pretty routinely need more than the 1.4 GB limit for even simple stuff like tracking user sessions.
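For context on the limit being described: the ~1.4 GB figure matches V8's default old-generation heap cap on 64-bit builds at the time, and raising it means passing a V8 flag when you start node (a sketch; the 4096 MB value is just an example):

```shell
# Raise V8's old-space heap cap to ~4 GB for this process.
# (Default at the time was roughly 1.4 GB on 64-bit builds.)
node --max-old-space-size=4096 -e 'console.log("heap cap raised")'
```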

Plays Big Data / Science: Python is used in a lot of science fields, Perl is huge in data mining, Java is used in a lot of astronomy. Every niche of IT and CompSci has a language that is more prevalent, even if that is FoxPro for accounting. Node doesn't have a niche. (Sure, they have chat servers; that's not really a science or industry.) That is what I mean when I say it doesn't have a field it is contributing to the advancement of.

Your comment does not seem fair or balanced. And now there are two useless comments on this thread.

Strange that in a blog comparing package managers there's no mention of CPAN, which "currently has 120,446 Perl modules in 27,328 distributions, written by 10,567 authors" [0].

It's a lively community, sure. But "changing the face of open-source" is a stretch -- "quietly" even more so.

[0]: http://www.cpan.org/

There is a huge difference in the barrier to entry between contributing a module to CPAN and contributing a module to NPM.

You might consider the barrier a feature, but it is a wild difference.

Well, if your argument is that it's easier to contribute to NPM, then you should consider that Perl has been managing the same output for 18 years that the Node community has for the past four: about 6700 modules/year.

For what it's worth, uploading to CPAN is not very difficult. Getting a PAUSE account is easy and uploading is no problem. It's really no harder than the PyPI system.

Step #1 is "apply". That is a hell of a barrier to entry. You also need to explain what you want to do with the pause account.

As I said in another comment, it might end up being to the detriment of node, but now it feels like a feature.

So, you want to compile LessCss. You go to npm and have a look at the options. Which package to use?

assemble-less, baidu-less, buildr laessig, less, less-bal, less-clean, less-cluster, less-context-functions, less-features-connect, less-less, lesscompile, lesscw, lessup, lesswatcher, lessweb, style-compile, styles, watch-lessc, wepp

Number of packages is far from a perfect measure of how much interesting and important work is going on in a community.

Or you could go to the official lesscss.org website and follow the very clear instructions to 'npm install -g less'.

Only the "less" package is the one you care about. The rest of them aren't alternatives, they're extensions. Isn't that... a good thing?

Well, there are two hard problems in computer science, cache invalidation, naming things and off-by-one errors.

Extensions aren't bad, but you need to find them in the first place, and if all you've got is a single, non-hierarchical identifier... I'm aware that more elaborate namespaces like CPAN's can also cause confusion, i.e. if they're not followed and you end up with modules all across the tree, but with a bit of community support the benefits could still outweigh this.

Imho, npm really isn't very good for finding libraries. I generally google for them instead and check out the first few hits on github. Perhaps that's why the writer of the article was so excited about the github monoculture.

Just what I want in a framework. Thousands of libraries with functional overlap, each tested in a small number of real use cases and having a large number of undiscovered bugs, none of which is flexible enough to solve my problem or has a large community to maintain it.

In time, node users will realize the pain of abandoned library dependencies and the overhead of researching alternatives and grinding through bugs and functional gaps in poorly planned libraries.

I like node, but the library situation is a disaster. I'll take monolithic, well designed, tested, and popular over a huge choice of one off libraries any day.

I'll take monolithic, well designed, tested, and popular over a huge choice of one off libraries any day.

Been there, done that. And, it's terrible. Monolithic things try to do too many things. It's fine if you grow along with it but a few years down the line - a beginner will find it very difficult to get started and manoeuvre around.

Small, composable things are better. If something is popular, it will be forked and maintained. Also, monolithic things tend to slow down over time (in terms of keeping up) as they start accruing so much baggage.

Small and composable things that are popular and well tested is great. That is not what you get from something like the wild-west of node. Perhaps your ideal is Linux, with small composable commands. How many of those commands copy or move a file? Pretty much one - that's because lots of unexpected things go wrong, and we need one good implementation instead of 500 shitty ones.

Rails is actually moving towards being a pre-installed collection of small composable things, much like a *nix distribution.

Small and composable is great. Everyone implementing cp is not.

Isn't Node also coming 10 years late to the party? It's always easier to join the race at a later time when all you have to do is copy the efforts of others and catch up. Especially since Node has quite a few users and a lot of early days hype to sail on. That being said, good on them and good on the variety it offers people.

I just wouldn't say it's "changing the face of open source" though...

It came late (but at the perfect time) -- it lowered the barrier to entry -- it changed the default licensing -- it leveraged existing systems (github).

In short, it didn't do anything amazing -- but by simply combining the right tech at the right time, the result is something IMHO much better than pip or gem or X.

What do you mean by "it changed the default licensing"?

Lots of other communities tend to default to GPL.

Name them.

You might be surprised. "Perl Artistic License" is not GPL. Python is largely a slightly tweaked BSD, I think, certainly it's not GPL. Both tend to release packages under the same license as the implementation. And so on.

This strikes me as another instance of the Node community conveniently rewriting history so they can tell each other how revolutionary they are, instead of looking around at what really has already been done. I don't know of any other community so prone to that, consistently and persistently, even after being corrected.

That doesn't answer the question.

Also, the MIT license was essentially universal for Ruby projects, popular in other ecosystems, and very widespread on GitHub before node's existence and subsequent growth.

I assume half of the packages are outdated plugins for grunt.js. If we then subtract all the packages without tests (a.k.a. the ones that are almost useless for open source), I guess the numbers would be about even. Doing such comparisons reminds me of the days when LOC was a measure of productivity.

While I see the point the OP is trying to make, I feel the article is not written in a neutral way, with whatever little knowledge of node.js I have.

Isn't Node.js a platform? And JavaScript the language? I looked it up and JavaScript has been around since 1995 (which makes it contemporary to Ruby and Python).

It's not about language, it's about the module ecosystem. Less Node.js vs Ruby vs Python more npm vs gem vs pip

pip didn't exist in 1991 though. CPAN got mature faster (and it has 120k modules), but such a comparison isn't satisfactory without a curve showing the yearly growth rate.

While I understand the benefits the author presented about having many small packages, I still wonder what kind of effect this has on resolving overlapping dependencies.

EX: I have package X that depends on A, B, and C and I decide to use package Y that depends on C, D, and E.

D and E may very well be an alternative implementation of A and B, given the large number of packages and a finite solution space.

On the one hand it's easy to think that the best libraries for a particular task will rise to the top, but as a newer node dev, having to reinvent so many wheels for use in node is not very attractive. (However, this is almost counteracted by having to only write in one language).

I think the modules which tackle very common functionality do naturally rise to the top, and end up being well-maintained and fairly standard throughout the community. A good example is mikeal's request (https://github.com/mikeal/request), which I suppose is a urllib2 equivalent.

Modules like this are constantly referenced in other modules, and in tutorials etc. By scrolling down this list (https://npmjs.org/browse/depended) you can get a sense of which modules now form the community curated 'standard library' for node - if you were a new node dev looking for a very common piece of functionality for your project, this might be a good place to start looking.

That's actually a solved problem (in node.js at least). npm builds a node_modules tree where each package gets its own copies (with the correct versions) of its dependencies. Gone are the days when people took on dependency hell to save a few MB of disk space.

By analogy, bacteria introduced to a freshly sterilised petri dish are changing the face of existing, thoroughly colonised petri dishes.

Bully for Node, but to compare apples to apples, you should count from when the packaging system was released, not from when the language was released. Node's numbers wouldn't look so good if divided by the number of years since JavaScript was released.

Rubygems dates back to 2003, in which case Ruby's 5,439 packages per year still trails Node's pace but not so dramatically.

It's ridiculous to compare the take-up of an execution environment to that of new programming languages. Of course it takes MUCH longer for a language to get traction than for a simple piece of infrastructure using a language that had been around for 14 years (at the time of node's inception).

Many packages in npm are just code which had been around for quite some time before npm existed, or come from the client side and were repackaged for node (underscore, backbone, jquery ...). So to make the comparison fair, let's divide 26,966 by 18 years of JavaScript: about 1,498/year. Not bad, but certainly far from game changing, considering the inclusion of JS in all major browsers gave that language an initial boost that Ruby and Python never had...

Um, I doubt most of the packages were existing JS code. Many, sure, but not most.

Seems a bit unfair to show a rate of packages per year based on the full life of each language, as the principal package repos haven't been around as long as the packages/year figure implies.

Also the rate of package creation probably isn't uniform over the lifetime of the language.

Paging Newton and Leibniz.

Newton and Leibniz to the HN thread, please.

I don't understand this reference. I have to say, I agreed. I don't see a lot here that is unique to node.js. If anything it's the culmination of timing that reveals the current state of advancement and collaboration in today's OSS/software world. But I think Go or Rust are emerging examples of a similar phenom.

> I don't understand this reference.

Newton and Leibniz independently invented calculus.

I'm sorry, but it's not a good thing when there are multiple packages to do a binary search.

PostgreSQL has a somewhat similar philosophy of extensibility and moving things out of core. Consider something like PostGIS: it's an entire first-class geospatial system done entirely as an extension (I don't think any other database can claim that).

It's still basically a batteries-included distribution, but it is very extensible and getting more so. I expect to see many more domain-specific extensions (you can already see a lot at http://pgxn.org ).

However, I will say that it should be a simple core but not too simplistic. Especially for something like a database, some things are better done in the core (perhaps not inherently, but it's in research-project territory). That's really the challenge: it's not "pro-modularity" versus "anti-modularity"; it's about what constitutes a good base from which to build.

He lost me at the point where he divided the number of packages by the number of years (hell, I did not have internet in 1991, so why would there be packages???)

ruby: 54,385 packages / 18 years = 3022 packages per year

Gems / rubygems haven't been around 18 years. The first public release was in 2004. So 9 years or 6042 packages per year is probably more realistic.

And, most of those gems have been written after Rails became prominent (somewhere around 2006/7). Denominator should be 6 or 7.

The metric the OP uses (total # packages divided by the # years the language has existed) is misleading.

Node.js is new, a lot of people are trying to solve a lot of problems that aren't solved yet. Once they're solved and have a more or less standard solution the rate at which new packages are released will drop, simply because most problems will have been solved.

I find the quality of these packages to be quite low, in general. :/

Irrelevant, but I must say it: Node.js's forced async IO is its strength, but programming it is a pain in the butt.

I used to think so too, then I stopped trying to use it for everything and started treating it as a DSL for just writing IO-heavy asynchronous web services, and now I'm much happier.

This sounds interesting. Can you elaborate on what you mean? Do you mean using node in conjunction with another backend language/[micro-]framework?

Basically. My general setup (to the extent that it is reasonable) is to set up my web apps as a collection of freestanding APIs that simply send and receive JSON. The only component in common between the APIs is the back end database(s). This way I can write each API using the framework/language that makes most sense for the task and simply chuck Nginx in front of the whole thing to have it route requests to the right back end.

Check out caolan's async module. It changes how you think about writing nontrivial js.
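For anyone who hasn't seen it, the core pattern async gives you can be sketched in a few lines. This is an illustration of the idea, not caolan's implementation: run async steps in order without nesting callbacks, collecting results and short-circuiting on the first error.

```javascript
// Sketch of the async.series pattern (not the real library): each task
// is a function taking a node-style callback(err, result).
function series(tasks, done) {
  var results = [];
  (function next(i) {
    if (i === tasks.length) return done(null, results);
    tasks[i](function (err, result) {
      if (err) return done(err); // first error stops the chain
      results.push(result);
      next(i + 1);
    });
  })(0);
}

// Usage: two "async" steps run in order, no pyramid of doom.
series([
  function (cb) { setTimeout(function () { cb(null, 'one'); }, 10); },
  function (cb) { setTimeout(function () { cb(null, 'two'); }, 10); }
], function (err, results) {
  console.log(results); // ['one', 'two'], in order
});
```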

Node is designed for async IO and is fast, but not that fast when it comes to heavy sync calculations, so I guess it isn't for everything. But since most webdevs already know JavaScript, it's the perfect buddy for any other solution out there.

Great post. I'd also add to that list the benefits of local-by-default packages that live in your project folder. The approach of tools like easy_install, pip, bundler, etc. is to hide all your dependencies in some dot-prefixed folder outside your project directory. This promotes an out-of-sight, out-of-mind mentality. The npm approach, on the other hand, puts them in your project, which makes it much more likely that you'll explore the source of your dependencies, more likely that you'll fix bugs in those dependencies instead of working around them, and most importantly, you'll treat that folder like part of your own project. This last point means you're likely to develop one of your own generic modules to the point where it's good enough to submit back to npm as a public module.

"I’ve only very rarely seen a node.js project that wasn’t on github."

To an extent, that's due to npm's git integration and lack of strong free (for open source) git hosting alternatives. You can specify a git endpoint for a module import.

> You can specify a git endpoint for a module import.

But you lose some of the magic that comes from expressing dependencies through semver versioning. You can publish multiple versions of a module sourced from git, of course, but that takes a lot more manual effort for publishers than it does to publish on npmjs.org.
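For reference, the two styles side by side in package.json (the "some-fork" name, user, and tag are made up; the git URL form is npm's, with an optional #committish suffix):

```json
{
  "dependencies": {
    "express": "~3.1.0",
    "some-fork": "git://github.com/someuser/some-fork.git#v0.2.1"
  }
}
```

The semver range lets npm pick up compatible patch releases automatically; the git dependency is pinned to whatever ref you name.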

Java has more than 100k packages in 16 years... more than 6k per year as well.

The author uses the term 'mono culture' but I think the better term might be 'network effect'. There are a lot of positive network effects of all these projects hosting on github.

I don't agree with the idea that there should be no standard library. There is a lot of benefit to having a big standard library with code for common tasks like manipulating strings, performing network I/O, or formatting text. Sure, you might be able to improve on the standard library, but the next person who has to maintain your code probably won't thank you for using something weird. And if your new thing is awesome enough, it might be added to the standard library in time.

Things like this were a big problem in C++. In the early days of C++, literally nothing was standard. Everyone ran around using his own string class-- "std" string was named aspirationally, not to reflect reality. They all behaved slightly differently. It has taken literally decades to get to the point where std::string appears in most new projects in the UNIX world. (I think that Windows is still hosed, due to the continuing UCS-2 train wreck on that platform... but I digress.) Similarly, there are 31 different flavors of smart pointer, and even a master C++ programmer won't know them all. So much pointless non-standardization, so much cognitive overhead.

I'm also reminded of Perl's "there's more than one way to do it" and the "enhanced job security" that ended up providing for anyone who managed to sneak Perl into production.

There is value in modularity, but only when the choice you're giving to the library user is a valuable one. Choosing whether to use a MySQL database connector or a Postgres database connector is good diversity. Choosing which string class to use, or what we're calling BigInt this month, or which of 15 Regex libraries we're using today... that kind of diversity just gives you headaches and maintenance pain, nothing more.

All of the things you listed are within the scope of node.js's standard library - except for BigInt.
