That doesn't mean the community is amazing and wants to contribute! ...it means that those features didn't exist, and someone had to build them (and yes, it's really great that they're all coming out open source, but it's not revolutionary).
Any language in that situation will have a strong initial growth as people port their favorite toys across.
I'd be much more interested to see the rate of change over-time, year by year.
Is it still growing? Is it slowing down now there are some mature frameworks?
That's interesting stuff. "Changing the face of open source?" --> self-important gasbagging.
Born at the right time, defaulting to the "right" (according to me) license, with the right mix of tools that make it easy to publish and share -- and bluntly I hope other communities (like Erlang, one close to my heart) follow suit.
I haven't seen anything in NPM so far that goes beyond Maven's capabilities so I'm really curious what NPM's big advantage is.
This might end up being a negative in the long run, but for now it seems like a huge positive.
The issue eventually becomes one of search; when I'm looking for a package, how do I know which are maintained, which never picked up broad adoption, etc.?
Between the npm registry and node-toolbox, there's already decent visibility into it. I often find the output of 'npm search' daunting to the point of mental paralysis, though.
E.g., if I need a Redis client, I can see the redis project is active and broadly used: https://npmjs.org/package/redis
Similarly: http://nodetoolbox.com/categories/Redis
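As a quick sketch of what the next step looks like once you've picked it (this assumes the older callback-style API of the redis module, so treat the exact calls as illustrative rather than canonical):

    // after `npm install redis`
    var redis = require('redis');
    var client = redis.createClient();

    // set a key, then read it back (callback-style node_redis API)
    client.set('greeting', 'hello', function (err) {
      if (err) throw err;
      client.get('greeting', function (err, reply) {
        if (err) throw err;
        console.log(reply); // "hello"
        client.quit();
      });
    });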
I'd observe that the new languages all seem to have been accelerating their rate of package production over time. This is probably because over time the set of easily-copyable packages goes up. It's much easier to release a simple Rails clone after Rails has made the idea popular. It seems like every year there's another Brilliant HTML Templating Idea, and a new language can rapidly develop clones of the entire history of HTML templating ideas, whereas one that was new ten years ago had a much thinner pool of copyable ideas.
This isn't a criticism of Node (which I am certainly prone to, but this is not one of those times); it's an observation that it seems like each new language rapidly builds up packages faster than the last one, and this probably isn't actually an attribute of the language.
I'm not trying to put node down, I'm a fan (switched to Go for most of my needs about a year ago, though), but this argument (small core so less to understand) just isn't valid if you're going to do anything more involved than learning the language and writing hello worlds.
That doesn't mean the community is amazing and wants to contribute! ...it means that those features didn't exist, and someone had to build them (and yes, it's really great that they're all coming out open source, but it's not revolutionary).
Any OS in that situation will have a strong initial growth as people port their favorite toys across.
You're right, node is great. :)
It's not so much like Linux (implied thesis: everyone will be running things on top of node.js like they are run on top of Linux)
If they are in the standard library, they must have been invented before too! The difference is that there isn't a standard solution (hence the name), and anyone can offer their own solution to the problem.
It's nice that node.js is succeeding, but it's hardly changing the face of open source. It's treading a very well-trodden path blazed by Perl starting about 28 years ago.
Truth is, npm is growing (in terms of number of packages) faster than any community like this ever has.
To see these numbers clearly we need to see the net effect: the size of the whole development community, for comparison.
It's not fair to compare the 2010s with the '90s. What was the audience for Perl back then? And what's the audience for node.js and all the other players now?
Simply saying "Welp, there's more people on the internet now" is ignoring the excellent competition for hackers that Node (or any other new tech with grandiose ambitions) has.
Rails 4 will see _eight_ different things that used to be in core extracted into gems. The diff from 3.2.13 to 4.0.0.beta1 was +110,000/-100,000 lines, so it's grown about 10% in the next major version.
Rails is obviously 'full stack' but it's not like it's a black hole that sucks in all Ruby code.
So while Rails certainly is "batteries included" and "full stack" in many ways, I don't see how it contradicts the "tiny module aesthetic" that this guy mentions: a typical Rails deployment is big, sure, but it's big by pulling in a ton of (in many cases quite small) modules, more and more of which can be replaced.
I still have issues with the level of coupling in a typical Rails deployment (e.g. a number of gems still pretty much expect a full "default" Rails stack, so replacing or avoiding specific components is often tricky), but the landscape has changed drastically.
2. Take-off time. Node itself has gotten really popular really fast (there are good reasons for this too), which means its ecosystem needs to do some catching up. It has half the packages, but Node developers still need to get the same things accomplished. I've contributed more to the Node ecosystem than the Ruby one even though I write more Ruby in general, mostly because more needed to be done. You look around for a good existing solution, and if it's not there, you roll your sleeves up and you write it.
I agree, and think your comment is insightful, but let me add a little nuance to it.
I've been doing a lot of work with Node lately. In my experience:
1. I generally don't have trouble finding a module (library) that addresses my particular need. (I.e., when I go look for a library that does X, I generally find it.) It does happen, but a lot less often than I expected.
2. I generally don't have the opposite problem either. While in some ecosystems one finds so many libraries to address a particular need that it is hard to choose among them, in my experience that isn't (yet) a big problem with Node. It does happen, but less than it does in, say, Ruby (or for that matter emacs-lisp) and it is generally pretty easy to identify the "market-leader" as it were.
3. The problem that I do run into in Node (with greater frequency than most ecosystems I've worked with) is that I'll stumble across minor bugs in the modules I'm using. These tend to be moderately easy-to-fix issues that crop up in moderately-rare edge-cases, but they are definitely there. (I suppose this speaks to the relative maturity of the ecosystem.)
Counter-intuitively, I think this may actually be a good thing for the Node ecosystem. I'm reminded of the advice to open source developers to essentially "leave something broken" (or incomplete) so that potential contributors can make small-scale contributions. A library that is too complete and well-designed is hard to contribute to (without investing a lot of time in understanding the architecture and direction of the project overall), but if you've got a library that does 95% of what you want, but fails on that one small edge case, more contributors are both willing and able to roll up their sleeves to scratch their particular itch.
 I can't remember or google-conjure the source of this advice right now. For some reason I think it might have been Marc Andreessen, but it sure sounds like something ESR would say.
Quite the opposite. With a single thread, gaping security holes if you want large memory support, and until about 3 weeks ago nothing that even came close to resembling compliance with HTTP standards, the only thing Node is changing in the open source community is that a crowd that would have been called script kiddies 5 years ago can now put that on a resume and for some reason think that should earn them 6 figures.
Changing the Open Source community (for the better) requires doing more than hacking together a few lines of code that dupes the functionality of someone else's lines of code. Node.JS doesn't have a single innovative project that is moving all of computer science forward. If node wants to be taken seriously it needs to prove that it can play big data, or science, or linguistics with the rest of the communities.
Every language has a niche where it dominates in resources for a given field. Node has none.
I do wonder whether other languages could have ended up the same way if threading hadn't been invented. Perl always had these unreliable bodged threads, and so there were some really interesting event-driven libraries for it - but the threads were good enough for many practical uses, and AFAIK the ecosystem never converged on a single approach. Did I hear of a PEP attempting to standardise a compatible API for doing these things in python?
Plays Big Data / Science: Python is used in a lot of science fields, Perl is huge in data mining, Java is used in a lot of astronomy. Every niche of IT and CompSci has a language that is more prevalent, even if that is FoxPro for accounting. Node doesn't have a niche (sure, they have chat servers, but that's not really a science or industry). That is what I mean when I say it doesn't have an area where it is contributing to the advancement of a field.
It's a lively community, sure. But "changing the face of open-source" is a stretch -- "quietly" even more so.
You might consider the barrier a feature, but it is a wild difference.
For what it's worth, uploading to CPAN is not very difficult. Getting a PAUSE account is easy and uploading is no problem. It's really no harder than the PyPI system.
As I said in another comment, it might end up being to the detriment of node, but now it feels like a feature.
assemble-less, baidu-less, buildr laessig, less, less-bal, less-clean, less-cluster, less-context-functions, less-features-connect, less-less, lesscompile, lesscw, lessup, lesswatcher, lessweb, style-compile, styles, watch-lessc, wepp
Number of packages is far from a perfect measure of how much interesting and important work is going on in a community.
Extensions aren't bad, but you need to find them in the first place, and if all you've got is a single, non-hierarchical identifier... I'm aware that more elaborate namespaces like CPAN's can also cause confusion, i.e. if they're not followed and you end up with modules all across the tree, but with a bit of community support the benefits could still outweigh this.
In time, node users will realize the pain of abandoned library dependencies and the overhead of researching alternatives and grinding through bugs and functional gaps in poorly planned libraries.
I like node, but the library situation is a disaster. I'll take monolithic, well designed, tested, and popular over a huge choice of one off libraries any day.
Been there, done that. And it's terrible. Monolithic things try to do too many things. It's fine if you grow along with it, but a few years down the line, a beginner will find it very difficult to get started and manoeuvre around.
Small, composable things are better. If something is popular, it will be forked and maintained. Also, monolithic things tend to slow down over time (in terms of keeping up) as they start accruing so much baggage.
Rails is actually moving towards being a pre-installed collection of small composable things, much like a *nix distribution.
Small and composable is great. Everyone implementing cp is not.
I just wouldn't say it's "changing the face of open source" though...
In short, it didn't do anything amazing -- but by simply combining the right tech at the right time, the result is something IMHO much better than pip or gem or X.
You might be surprised. "Perl Artistic License" is not GPL. Python is largely a slightly tweaked BSD, I think, certainly it's not GPL. Both tend to release packages under the same license as the implementation. And so on.
This strikes me as another instance of the Node community conveniently rewriting history so they can tell each other how revolutionary they are, instead of looking around at what really has already been done. I don't know of any other community so prone to that, consistently and persistently, even after being corrected.
Also, the MIT license was essentially universal for Ruby projects, popular in other ecosystems, and very widespread on GitHub before node's existence and subsequent growth.
E.g., I have package X that depends on A, B, and C, and I decide to use package Y that depends on C, D, and E.
D and E may very well be an alternative implementation of A and B, given the large number of packages and a finite solution space.
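To make that concrete, here's a rough sketch of how npm's nested layout would put that on disk (all of these package names are just the placeholders from the example above):

    my-app/node_modules/
        X/node_modules/   (A, B, C)
        Y/node_modules/   (C, D, E)   <- D and E may duplicate what A and B do

So even within a single app you can end up shipping two different solutions to the same problem without ever choosing to.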
On the one hand it's easy to think that the best libraries for a particular task will rise to the top, but as a newer node dev, having to reinvent so many wheels for use in node is not very attractive. (However, this is almost counteracted by having to only write in one language).
Modules like this are constantly referenced in other modules, and in tutorials etc. By scrolling down this list (https://npmjs.org/browse/depended) you can get a sense of which modules now form the community curated 'standard library' for node - if you were a new node dev looking for a very common piece of functionality for your project, this might be a good place to start looking.
Rubygems dates back to 2003, in which case Ruby's 5,439 packages per year still trails Node's pace but not so dramatically.
Newton and Leibniz to the HN thread, please.
Newton and Leibniz independently invented calculus.
It's still basically a batteries-included distribution, but it is very extensible and getting more so. I expect to see many more domain-specific extensions (you can already see a lot at http://pgxn.org ).
However, I will say that it should be a simple core but not too simplistic. Especially for something like a database, some things are better done in the core (perhaps not inherently, but it's in research-project territory). That's really the challenge: it's not "pro-modularity" versus "anti-modularity"; it's about what constitutes a good base from which to build.
Gems / rubygems haven't been around 18 years. The first public release was in 2004. So 9 years or 6042 packages per year is probably more realistic.
Node.js is new, a lot of people are trying to solve a lot of problems that aren't solved yet. Once they're solved and have a more or less standard solution the rate at which new packages are released will drop, simply because most problems will have been solved.
To an extent, that's due to npm's git integration and lack of strong free (for open source) git hosting alternatives. You can specify a git endpoint for a module import.
But you lose some of the magic that comes from expressing dependencies through semver versioning. You can publish multiple versions of a module sourced from git, of course, but that takes a lot more manual effort for publishers than it does to publish on npmjs.org.
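For what it's worth, here's roughly what the two styles look like side by side in a package.json (the "some-fork" name and URL are placeholders): the first dependency gets semver range resolution against the registry, the second is pinned to whatever that git ref serves up.

    {
      "dependencies": {
        "express": "~3.1.0",
        "some-fork": "git://github.com/someuser/some-fork.git#v0.2.1"
      }
    }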
Things like this were a big problem in C++. In the early days of C++, literally nothing was standard. Everyone ran around using his own string class-- "std" string was named aspirationally, not to reflect reality. They all behaved slightly differently. It has taken literally decades to get to the point where std::string appears in most new projects in the UNIX world. (I think that Windows is still hosed, due to the continuing UCS-2 train wreck on that platform... but I digress.) Similarly, there are 31 different flavors of smart pointer, and even a master C++ programmer won't know them all. So much pointless non-standardization, so much cognitive overhead.
I'm also reminded of Perl's "there's more than one way to do it" and the "enhanced job security" that ended up providing for anyone who managed to sneak Perl into production.
There is value in modularity, but only when the choice you're giving to the library user is a valuable one. Choosing whether to use a MySQL database connector or a Postgres database connector is good diversity. Choosing which string class to use, or what we're calling BigInt this month, or which of 15 Regex libraries we're using today... that kind of diversity just gives you headaches and maintenance pain, nothing more.