“Node.js is one of the worst things to happen to the software industry” (2012) (cat-v.org)
624 points by behnamoh 455 days ago | 552 comments



Disclaimer: I worked with Node.js full-time for about 14 months, ending about 9 months ago. I'm willing to admit that the following complaints may or may not be out of date, especially with ES6 and newer Node versions. Proceed with grains of salt at the ready.

There are a lot of things to like about Node.js, but the primary thing that bothers me about it is the obsession with async. It's a language/framework designed to make pretty much everything async.

In the real world, almost everything is synchronous, and only occasionally do you really want async behavior. By that I mean, you almost always want A, B, C, D, E, F, G in that order, and only occasionally would you say that you want async(H,I). But with Node, it's the other way around. You assume things are async, and then specify synchronous behavior.

Instead of assuming A; B; C; D; E; F; G;, you wind up with a ton of code like A.then(B).then(C).then(D).then(E).then(F).then(G);

...I know that's a contrived example, and I know you don't really need to do it that way, but so many people do, and it really illustrates the point. In Node.js, you are explicitly synchronous / implicitly async. Most other coding paradigms (including Go) better match what I consider reality, which is that everything is implicitly synchronous, and you specify async behavior when you need it.
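(A minimal sketch of what I mean, using invented no-op steps, every name here is hypothetical: the `.then` chain spells out sequencing explicitly, while the `await` version reads like the synchronous code you'd write in most other languages.)

```js
// Hypothetical async steps; each records its name into `log` and resolves.
function makeSteps(log) {
  const step = (name) => () => { log.push(name); return Promise.resolve(); };
  return ["A", "B", "C"].map(step);
}

// Explicitly sequenced with .then:
const thenLog = [];
const [A, B, C] = makeSteps(thenLog);
A().then(B).then(C).then(() => console.log(thenLog.join(" -> "))); // A -> B -> C

// The same ordering with async/await reads like the synchronous version:
async function run() {
  const log = [];
  const [a, b, c] = makeSteps(log);
  await a(); await b(); await c();
  return log;
}
```

Both produce the same ordering; the difference is purely in how much ceremony the sequencing costs you.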

Basically, I think it's backward. But perhaps like the OP, I just can't wrap my head around it.

The NPM stuff... well, I think all ecosystems have their pros and cons. I'm not a huge fan of NPM, but it does the job for the most part, and I'm curious as to how people would actually improve it, rather than just complain about it all the time. I don't really have any good ideas (knowing nothing about how package management actually works under the hood).


Well.... no, not really. Processors really are fundamentally asynchronous. They are state machines which react when external signals impinge upon them. Synchronous behavior is a useful illusion, a convenient reasoning tool for some kinds of tasks, and our traditional notations for describing computational processes are all based on this model of how computing might be done - but it hasn't actually been done that way since the batch-processing era.

I learned this as I dove into low-level hardware work - drivers and embedded systems. When you're dealing with hardware, life is fundamentally driven by interrupts. Processors spend most of their time asleep. When something happens, the signal level of an input pin changes, and the processor reacts: it wakes up, it starts executing the interrupt handler for that signal, it does things, it sends messages, and then - its job done - it goes back to sleep.

So what, we're talking about normal computers here, you say, surely they're different, right? But no, they really aren't; it's the same process of signal, interrupt, handler, return, all the time. Every context switch, every packet receipt or transmit completion, every timer tick, all of it, it's all driven asynchronously; the system is a hierarchy of event handlers waiting for their opportunity to respond to some external signal.

If you're trying to build an IO system, then, the synchronous model is an expensive illusion. You can force it, but it isn't reality. You spend time waiting, you spend time blocking, you build complex APIs that pretend to poll arrays of connections in parallel; and all you're doing is forcing that illusion of synchrony onto a fundamentally asynchronous world.

It's backwards, all right, but it's backwards because our traditional way of describing computation is backwards, especially the languages which follow the imperative tradition. Functional languages have gained popularity in part because the mental model works better for asynchrony; instead of describing a series of steps to follow, and treating the interrupt as the exception, you describe the processes that should be undertaken in certain circumstances; you effectively construct a state-machine instead of describing a process, and then you leave that state-machine to be poked and prodded by whatever external signals come along to drive it.


Mainstream processors are synchronous. That is why a global clock is needed to synchronise all (or most of) the chip's parts. I remember there was some research on asynchronous CPUs, but I don't think it ever yielded anything commercial.

EDIT: Wikipedia link: https://en.m.wikipedia.org/wiki/Asynchronous_circuit

What you are saying is that they react to interrupts asynchronously, but this has nothing to do with the synchronous nature of a CPU. Moreover I think that if you have a CPU that doesn't respect the barriers and reorders the instructions to execute them in parallel, then it's a bugged CPU.


That's beside the point.

The things that are synchronous within the processor are also synchronous in JavaScript.

What is async in JS and Node (unless you use the sync alternatives) is any kind of IO (disk, network). That is also async for the CPU.

And even the things that you would expect to be synchronous inside the CPU are really NOT for modern, out of order CPUs. They do a lot of complicated things to make code run faster, use branch prediction to predict the code path taken and execute (apparently synchronous) instructions in parallel.


But that only happens when the result is the same as if the instructions were run in program order, and synchronization primitives are still needed at the end of these sections.

Even if they weren't, the abstraction layer that you work at with Node is far, far above the point where you care about instruction reordering. If synchronicity is an inherent property of the process that you're implementing (e.g. you want to insert something in the database and then log an event) then no amount of shuffling will alleviate the fact that the logging has to happen after the database insertion.
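A tiny sketch of that point, with invented stubs standing in for the DB driver and the logger; the ordering constraint lives in the program itself, so no reordering below this abstraction level can break it:

```js
// Invented stubs standing in for a database driver and a logger.
async function saveAndLog(row) {
  const calls = [];
  const insertRecord = async (r) => { calls.push("insert:" + r.id); };
  const logEvent = async (msg) => { calls.push("log:" + msg); };

  await insertRecord(row);              // the insert must complete first...
  await logEvent("inserted " + row.id); // ...only then may the log happen
  return calls;
}

saveAndLog({ id: 42 }).then((calls) => console.log(calls.join(", ")));
// insert:42, log:inserted 42
```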

Edit: the fact that deep down they reorder instructions is a bit academic, but if we are to go into these details, it's worth remembering that even the instruction reordering happens synchronously, and that even asynchronous events, like input from an off-chip peripheral, are handled synchronously. The turtles are synchronous all the way down :-).


Haha yeah, sure. Reading the input from an IO device once it's been delivered is synchronous.

But IO is not handled in a synchronous fashion. If CPU cores just sat idle until a hard drive finished fetching data (for example), our computers would be very slow indeed.

That was my main point.


Ah, certainly. In a similar manner, what I meant is that oftentimes it's the process itself that's synchronous, and if the semantics of a language or a toolkit make it difficult to manipulate this sort of flow, people complain.

I've done too little work with Node to have a constructive opinion on it, but I've occasionally had similar complaints about Qt, which I've used a lot more. It uses a signal-slot paradigm where you basically say "when this event is triggered, this function gets called" (in a callback-like, but somewhat more powerful and flexible fashion). It's great for a lot of things, but makes it hard to read and reason about synchronous operations that have to be fit into this form.

Ultimately, though, people cope with it and write working software with it. I imagine it's the same with Node :-).


> What is async in JS and Node (unless you use the sync alternatives) is any kind of IO (disk, network). That is also async for the CPU.

The same statement is true of Erlang and Elixir, yet they manage to handle non-blocking async code without either of the daft callback/promise code styles of Javascript.

Even within the confines of async-capable languages, Node is one of the worse options.


I agree that message passing, whether the actor model in Erlang/Elixir or CSP in Go, is way better for handling async behaviour.


Or Clojure, whose ClojureScript flavor compiles down to JS.


As I said in my previous post, a CPU that doesn't respect barriers and reorders the instructions anyway is a bugged CPU, not a modern one.

If I need the database result before doing anything else, then my code is inherently synchronous and has a hard dependency on the database result. In that context, making the DB call asynchronous and waiting for it is more complex than making a synchronous call and proceeding with the normal program flow. The most natural way to write an algorithm is to have everything synchronous and recover a quasi-synchronous view of the asynchronous program flow using an "async monad". And sadly I normally use C# 4.0, which still doesn't have the async construct borrowed from F#, so I'm stuck with tasks and continuations.


> And sadly I normally use C# 4.0 that still doesn't have the async construct borrowed from F#, so I'm stuck with tasks and continuations.

Do you know that the async/await constructs are just sugar for the Task class you have access to in C# 4.0?


The C# implementation is a wrap/unwrap of the Task class, in F# it is using computation expressions to lift the synchronous code to an Async monad. The async construct is much easier to reason about, especially with nested tasks.


We are talking about different kinds of asynchrony. Yes, the CPU marches to the beat of a single drum, but even that "advance" signal can be seen as an action taken in response to an external stimulus: the voltage change incoming from the clock crystal.


Nope. When the clock signal arrives, it means that the components in the critical path have delivered their output to the destination, so the state of the chip is coherent and it starts another step, synchronous across the whole chip like the previous one. By your definition, I could say that a block of procedural code that starts on a single thread as soon as an HTTP request is received, processing it and blocking the thread until the end, is asynchronous because it is "an action taken in response to an external stimulus", to use your own words. But that is simply not true. The mechanism that dispatches the call to start the subroutine is asynchronous; the code itself is synchronous.


I'll take your word for it; we are moving into architecture that sits below my experience now. But we are also below the level of abstraction relevant to the original question, which concerned the design of an API used for communicating with hardware outside the CPU.


Any system that shares a clock is synchronous. Two common communication methods for hardware are SPI, which has a shared clock, and is hence synchronous (https://upload.wikimedia.org/wikipedia/commons/thumb/f/fc/SP...) and UART, which has no shared clock, indeed it can be implemented with a single wire, and is hence asynchronous. Internally the receiver's clock needs to be fast enough to correctly sample data being transmitted to it. (Ed: or I guess you might be able to do something clever with an async circuit fed directly by the signaling line.)

Communicating with hardware outside the CPU doesn't have to be synchronous or asynchronous, it's a design decision depending on what you're trying to accomplish. Languages that don't support both methods are missing half of the design space.


I believe it's relevant to point out that most CPUs are synchronous in response to the claim that "Processors really are fundamentally asynchronous."


It does seem like I could have phrased that differently, since a number of people have taken it to mean something rather different than I intended.


Isn't this essentially saying that events need a catalyst, there needs to be some external stimulus for something to occur? That's fine. Synchronous programs have that stimulus. It's called "user action". The program, like the CPU, waits until the entity running the show tells it it would like something to happen. Then it does that something in the order specified.

So yes, the CPU waits for the program to ask for some work, just like the program waits for the user to ask for some work. When that work is requested, the program is executed synchronously. I really don't understand how this argument supports the mess that is Node.js.


Well, CPUs do use out-of-order execution to optimise some things: https://en.wikipedia.org/wiki/Out-of-order_execution

But, crucially, they can only do it when it still behaves as if they're operating in-order.

Either way, I agree that CPUs are explicitly synchronous in terms of how they work internally and how they interact with the outside world.


What really happens is in fact very similar to what happens in the javascript event loop.

The processor executes instructions one after another, synchronously, but whenever it has to use something external it makes the call and carries on with its normal life until the interrupt fires.

The JavaScript event loop works the same way: function calls run to completion on the stack, while Web API calls execute asynchronously and queue their callbacks for later.


The whole point of abstractions is for us not to have to worry about the underlying complexity. If I want to program in a way that is close to how the hardware works, I'll pick a language like C. However, when I choose a high-level language, I do not want to care whether everything my processor performs is async.

What difference does it make if from the processor's perspective I have:

```js
someOperation().then(function () {
  // do stuff
});
```

or

```js
someOperation();
// do stuff
```

In both cases my program will just wait for 'someOperation' to complete before doing anything useful.


In the second case, IO is not allowed to interrupt between 'someOperation' and 'stuff' and change your program's state.

In node, the first form is only needed when 'someOperation' has to perform IO to do its job, so concurrent operations are unavoidable.

If you're trying to reason about your program, this is huge. It very much matters where these points are. A concurrent thread being allowed to jump in and modify memory at any time at all, even halfway through a primitive math operation (the deranged C / Java / Python[1] memory model) means a correct program is a morass of locks and defensive programming and an obvious program is horribly broken.

Communicating sequential processes (Go, PHP+MySQL) give IO a modestly simpler synchronous syntax at the cost of making communication between I/O operations much more complex (sending a message to a port or performing some sort of transaction instead of just assigning to a value). It's a tradeoff.

node.js does assume you're writing a network program that needs to overlap multiple communicating I/O operations; if this isn't the case, there's no tradeoff to make and non-communicating sequential code is probably easier (think PHP without a database or sessioning).

[1] Python is modestly less braindead by having a very modestly larger set of operations guaranteed to be protected by the GIL. However, it's not composable at all, and it doesn't really make reasoning about nontrivial programs easier in practice.


> A concurrent thread being allowed to jump in and modify memory at any time at all, even halfway through a primitive math operation (the deranged C / Java / Python[1] memory model) means a correct program is a morass of locks and defensive programming and an obvious program is horribly broken.

This ignores that most threads, most of the time, operate on their private memory. When your request thread is querying the database and rendering some HTML based on the data from the database, then the only thing that needs to be guarded is the database access. The HTML rendering happens on thread-private data.


> If you're trying to reason about your program, this is huge. It very much matters where these points are. A concurrent thread being allowed to jump in and modify memory at any time at all, even halfway through a primitive math operation (the deranged C / Java / Python[1] memory model) means a correct program is a morass of locks and defensive programming and an obvious program is horribly broken.

This can be solved by functional programming. Since the original article was written, JS has gained `const` and `let`. Also, there are now numerous statically typed functional languages that compile/transpile to JS.

Also, if a developer understands exactly how JS is executed, the problem is mitigated:

    var mutableState = {}

    asyncFunc.then(nextAsyncFunc) // queued: runs only after the current synchronous code finishes

    syncFunc() // runs to completion first; cannot be interrupted by the handler above
If you ignore web workers and node.js threading libraries people have written, JavaScript always (by design) finishes executing synchronous code before executing any async handlers (your code is in a queue of handlers). This means that syncFunc will never be interrupted by nextAsyncFunc and have mutableState mutated during its execution. Of course, this also means that if you do any intense computation synchronously inside a web server, it will block all other requests until it completes, and it will only use one core.
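A minimal demonstration of that run-to-completion guarantee (plain Node, no libraries): the synchronous statement always finishes before any queued handler runs, and microtasks (promise handlers) run before timer callbacks.

```js
function demo() {
  return new Promise((resolve) => {
    const order = [];
    setTimeout(() => order.push("timeout"), 0);          // macrotask queue
    Promise.resolve().then(() => order.push("promise")); // microtask queue
    order.push("sync");                                  // current run: always first
    setTimeout(() => resolve(order), 5);                 // collect after both fire
  });
}

demo().then((order) => console.log(order.join(" -> "))); // sync -> promise -> timeout
```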


const does not mean immutability; it only makes the binding itself (the outermost reference) immutable. It is equivalent to final in Java. While that solves the issue with numbers changing state, it does not help with objects. For that you need something like Immutable.js from Facebook.
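A quick illustration of the distinction (plain JS, no library):

```js
const config = { retries: 3 };
// config = {};      // TypeError: a const binding cannot be reassigned...
config.retries = 5;  // ...but the object it points at is still mutable.

// Object.freeze gives shallow immutability without any library:
const frozen = Object.freeze({ retries: 3 });
try { frozen.retries = 99; } catch (e) { /* ignored in sloppy mode, throws in strict mode */ }

console.log(config.retries, frozen.retries); // 5 3
```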


"I do not want to care if everything my processor performs is async."

The point is not the processor. The point is that all IO operations take time. Processing the IO operations concurrently with the serial program logic is where the beef is. Writing asynchronous code in C or C++ is tedious to say the least, in my experience, with weird concurrency bugs looming everywhere.


That's not accurate. If you have any code after the line in the first example, it will be executed before the function in the "then" call. You could run several asynchronous operations (nearly) simultaneously. I/O will execute as fast as possible rather than wasting cycles running things sequentially.


I'm fairly sure his point was that if he wanted code to execute between the calls he would rather write that in his code directly than deal with the cognitive overhead of reasoning about constant async calls.

Async makes a lot of sense in cases where you are waiting for an event outside of your server's process, but async calls in the same code base seem a bit over the top.


The difference is that an async call takes a callback and the side effects of the callback, if any, are not immediately available.

But anyway the second call may as well be synchronous and blocking.


You're right, and you're also being chronically misunderstood. It's not about "asynchronous" in the sense of "unclocked"; it's more about how incredibly fast processors are if and only if they aren't waiting for something else.

The programmer would like to think of the series of operations involved on a per-connection basis. Whereas from the point of view of the processor a "connection" is not a thing; there is only a stream of incoming message fragments at occasional intervals from the network and I/O subsystem. It can request data from persistent storage by what is effectively snail mail: send a message and wait a few million cycles for it to come back.

So the software must consist of a set of state machines. We can push those up into the operating system and call them threads, or down into the user's code and call them callbacks and continuations. Each approach has advantages and disadvantages, and it's important to understand what they are.

(Possibly the language which does this best is Erlang, although it's not as easy to get started with as node.js. Theorists really overlook the vital metric of "time to hello world" in language learning.)


> the mental model works better for asynchrony; instead of describing a series of steps to follow, and treating the interrupt as the exception, you describe the processes that should be undertaken in certain circumstances

I could never understand how this works better as a mental model. Say you ask somebody to buy you a gadget in a store they don't know. What do you tell them:

a) "drive in your car on this street, turn left on Prune street, turn right on Elm street, the store will be after the second light. Go there, find the "Gadgets" aisle; on the second shelf in the middle there would be a green gadget saying "Magnificent Gadget", buy it and bring it home"

or:

b) when you find yourself at home, go to car. When you find yourself in the car, if you have a gadget, drive home, otherwise if you're on Elm street, drive in direction of Prune Street. If you're in the crossing of Elm street and Prune street, turn to Prune street if you have a gadget but to Elm street if you don't. When you are on Prune street, count the lights. When the light count reaches two, if you're on Prune street, then stop and exit the vehicle. If you're outside the vehicle and on Prune street and have no gadget, locate store and enter it, otherwise enter the vehicle. If you're in the store and have no gadget then start counting shelves, otherwise proceed to checkout. Etc. etc. - I can't even finish it!

I don't see how "steps to follow" is not the most natural mental model for humans to achieve things - we're using it every day! We sometimes do go event-driven - like, if you're driving and somebody calls, you may perform event-driven routine "answer the phone and talk to your wife" or "ignore the call and remember to call back when you arrive", etc. But again, most of these routines will be series of steps, only triggered by an event.


A list of steps to follow acts way more like the event model than procedural code.

The state of the world can change between steps, and the next set of instructions doesn't apply until you meet some criterion or... event, if you will.

> drive in your car on this street, turn left on Prune street, turn right on Elm street, the store will be after the second light.

Turn left on Prune street doesn't mean nothing else happened between you driving your car and reaching that street.

You could have stopped on the way to pick groceries, changed a tire, bought a snack ... you get the idea.

It doesn't mean you must drive straight to Prune or could get to the destination if you only drove straight there.

Getting to Prune street is an async call. When you reach it, regardless of how, a callback is fired and a new async task is given to you.


It's not "async call", it's a sequence of actions that you perform. Whenever it happens, it's still a sequence of actions. It is a natural way for people to describe a way to do something. The fact that one scenario can be interrupted for another does not change its nature.


> b) when you find yourself at home, go to car. When you find yourself in the car, if you have a gadget, drive home ...

This is basically exactly how it happens. I say "Can you buy me a thing". The rest is automatically performed by your brain's programming. There is no need to actually mouth the instruction "if you are at home without the gadget, go to your car" because "Can you buy me a thing" automatically links against the code which includes that state processing function.


I think a lot of the issues with JavaScript are around the movement of individuals from synchronous programming languages to the asynchronous JavaScript way.

For some time it resulted in callback hell (http://callbackhell.com/). This was addressed through Promises and more recently Observables, Generators, and async/await.

With transpiling, adoption of these asynchronous concepts has been possible quicker than would have been achieved otherwise.

I would suggest revisiting some of the modern JS constructs to support better use of the underlying cpu architecture.

I will admit, however, that getting my head around Promises took about a month before it really clicked how powerful this approach can be for supporting non-blocking IO.
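For illustration, the same hypothetical three-step pipeline in both styles (the step functions are invented; `setImmediate` stands in for real IO):

```js
// A hypothetical async step, first in callback style, then as a promise.
function stepCb(n, cb) { setImmediate(() => cb(null, n + 1)); }
const stepP = (n) => new Promise((res) => setImmediate(() => res(n + 1)));

// Callback style: every dependent step nests one level deeper.
stepCb(0, (err, a) => {
  stepCb(a, (err, b) => {
    stepCb(b, (err, c) => console.log("callbacks:", c)); // callbacks: 3
  });
});

// Promise style: the same pipeline stays flat.
stepP(0).then(stepP).then(stepP).then((c) => console.log("promises:", c)); // promises: 3
```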


> With transpiling, adoption of these asynchronous concepts has been possible quicker than would have been achieved otherwise.

You're basically admitting that Javascript is not good enough to write asynchronous code and you need to use a third party language that transpiles to Javascript ...


> This was addressed through Promises and more recently Observables, Generators, and async/await.

Parent is talking about ES2015 constructs (or ES2017, in the case of async/await). I don't think anyone will disagree that doing async stuff in ES5 is a massive PITA, but it's a bit disingenuous to claim that the newer Javascript specifications aren't doing enough to fix those problems.

That said, four years ago I would have agreed with you- Coffeescript seemed to be the only game in town making an effort to improve the state of Javascript. But I think the embrace of transpilers in front-end engineering has only helped Javascript thrive, by allowing the community (and core developers) to more easily determine what features will actually make the language better.

(On a completely unrelated note, nice username- DCSS reference?)


I've started using async/await and it is something of a game-changer in terms of code readability.

Promises were nice but also subject to nesting and readability problems which the new addition solves nicely.
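A sketch of what I mean, with invented stubs for two dependent lookups:

```js
// Invented stubs for two dependent lookups.
const fetchUser = (id) => Promise.resolve({ id, teamId: 7 });
const fetchTeam = (id) => Promise.resolve({ id, name: "core" });

// Promise style: the second call nests, because it needs `user` in scope.
function teamNameThen(userId) {
  return fetchUser(userId).then((user) =>
    fetchTeam(user.teamId).then((team) => user.id + ":" + team.name));
}

// async/await: identical execution, but it reads top to bottom.
async function teamNameAwait(userId) {
  const user = await fetchUser(userId);
  const team = await fetchTeam(user.teamId);
  return user.id + ":" + team.name;
}
```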


>You're basically admitting that Javascript is not good enough

Most languages go through iterations and improvements. Transpilers are enabling Javascript to 'fix' issues with the language quickly while being backwards compatible. This is a good thing.

Note that, by necessity, all languages compile down to machine code eventually. ;)


I wouldn't say that transpiling and compiling are the same.


Try doing some research before you post nonsense. Google isn't more than a click away...


Says the guy who has nothing relevant to say.


That layout made my eyes bleed: http://i.imgur.com/hwKqaUc.jpg


It's not about how the processor works. A developer using a language at JavaScript's abstraction level does not think about exactly how a processor works. It's about how easily a human can think up a program and put it into code. And that is mostly synchronous indeed.


I find that many misunderstandings of this sort arise from inadequate mental models of the underlying system, and I was trying to present the mental model I use, in which asynchronous IO patterns seem totally natural and normal. Clearly that introduced some unnecessary confusion, so I'll frame things differently next time.


How many JS-targeted VMs / container things run just one thread of this async-ness when more are available to that container / VM / context? I'd rather run mod_php under the prefork MPM and take advantage of all the cores. Is there something as easy and well-supported for running Node apps in things like Docker?


> In the real world, almost everything is synchronous, and only occasionally do you really want async behavior.

From an Ops perspective, I disagree. At a systems level, everything already is async, whether you like it or not. So then the options are either to embrace the admittedly hectic nature of async, or pretend that things are synchronous and impose those constraints to maintain a logical purity from a programmer's standpoint.

For example, in a synchronous setup, it's perfectly logical to service an incoming http request with a few database queries followed by some more logic and some more DB queries or API calls; wrap all the result and send it back. The same in node.js forces the code author to think about those initial DB queries (which may run simultaneously), about how to roll up the result when they are all done and pass it to callbacks.

But while thinking about those callback interactions, there's a motivation to reduce the number of steps, or to factor them out into their own module or microservice. This promotes thinking at a systems level instead of just at the level of "1 request serviced by 1 handler". Perhaps that big request can be broken up into several API calls and the web page can load those asynchronously. Perhaps the smaller microservices end up building up less memory usage per request (or spread it out in a way that makes clustering more efficient).

One may see async as a hurdle in the way of logical abstraction, or as a way to align thinking with the realities of network programming.


The advantage you're describing is concurrency, which doesn't necessarily require an asynchronous/event-driven programming model. Synchronous concurrency is built around threads to achieve concurrent execution (i.e. instead of waiting on events for several requests to finish in parallel, you fork off a thread to process each request synchronously).

Node came of age about a decade after epoll was introduced, when not having access to nonblocking IO was considered a big liability for a couple of dominant web programming languages, and they built their concurrency model around the semantics of epoll.

However, there are languages like Haskell, Erlang, and Go that IMO did the right thing by building a synchronous programming model for concurrency and offering preemptable lightweight processes to avoid the overhead associated with OS thread per connection concurrency models. These languages offer concurrent semantics to programmers, yet still are able to use nonblocking IO underneath the covers by parking processes waiting on I/O events. It's not the right tradeoff for every language, particularly I think lower level languages like Rust are better off not inheriting all the extra baggage of a runtime like this, but for higher level languages I think its probably the most convenient model to programmers.


>>So then the options are either to embrace the admittedly hectic nature of async, or pretend that things are synchronous and impose those constraints to maintain a logical purity from a programmer's standpoint.

Logical purity is important. It helps people reason about their code better, which in turn makes them more productive.

The reason we have frameworks and high-level languages is so that we can take advantage of abstractions and not worry about the underlying complexities of the system. If a framework is making you "embrace the hectic nature" of those complexities, why use that framework in the first place? Might as well use a language like C, which actually is a low-level language designed for developing hyper-optimized software. Node.js, on the other hand, is supposed to be a Web (read: high-level) framework. That's why implicit-async doesn't fit its nature, and in large codebases you see nested callbacks all the way down.


Ever heard of the Law of Leaky Abstractions?

Logical purity is completely useless if the chosen abstraction leaks like the Titanic after the iceberg.

And no, the alternative is not "just use C", but choosing an abstraction that better fits the nature of the underlying system.


I think what this really gets at is that callbacks are a shitty concurrency primitive.

They certainly make sense from an implementation perspective (I've got this interpreter and I want to do concurrency... I know, I'll just use it to trampoline callbacks).

But callbacks are, to some extent, too low-level to reason about easily; it's kind of like writing everything as a series of goto statements. Sure, you can do it, but good luck following it.

The advantages of asynchronous execution are lost. This isn't limited to nodejs, I've experienced similar problems with Python twisted...

Channels (such as in clojure/core.async) or lightweight processes and messaging (such as in erlang/beam), are much more enjoyable to work with.

This shouldn't really be a shock, however: they're higher-level abstractions based on real computer science research by people who really thought about the problem.


Callbacks are ugly when you have to deal with them directly. But a little bit of syntactic sugar to capture the current continuation as a callback - such as C# await, or the now-repurposed yield in Python - makes it very easy and straightforward to write callback-based async code.


Babel and regenerator let you use await in es/js too.


Python does have await from asyncio


Apparently, I wasn't paying as much attention to the state of async in Python 3.5 as I should have. Looks like they actually have the most featureful implementation of that right now, complete with async for all constructs where it can be sensibly implemented and provide some benefits (like for-loops and with-blocks).


Fortunately JavaScript is a flexible enough language that you don't only have to use the primitives it provides.

Promises (which were pure JS before they were ever added to the language) are a more-composable structure for reasoning about concurrency. I don't think I've put more than one line of callback-style code into production per month.
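To illustrate the composability point with a toy example (`step` is hypothetical): a promise chain stays flat with a single error path, while the equivalent raw-callback pipeline needs nesting and per-level error propagation.

```javascript
// Hypothetical async step: adds one, rejects on negative input.
function step(x) {
  return x < 0 ? Promise.reject(new Error("negative"))
               : Promise.resolve(x + 1);
}

// Promise style: one flat chain, one error path at the end.
function pipeline(x) {
  return step(x).then(step).then(step).catch(() => -1);
}

// Callback style: the same pipeline with nesting and
// per-level error handling.
function stepCb(x, cb) {
  x < 0 ? cb(new Error("negative")) : cb(null, x + 1);
}
function pipelineCb(x, cb) {
  stepCb(x, (err, a) => {
    if (err) return cb(null, -1);
    stepCb(a, (err, b) => {
      if (err) return cb(null, -1);
      stepCb(b, cb);
    });
  });
}
```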


And async/await is such a game changer, it's amazing.

I finally bit the bullet and tried it even though it's still not final syntax; it simplified most of my code while keeping the execution exactly the same.

It really is a night and day difference.


On the subject of nice higher-level concurrency abstractions, I'm very taken with the current fad for Promises/Futures (a la ES6 or Scala), which are basically a translation of the Haskell IO monad to plain English.


Isn't it closer to the Async monad?


I was a node developer back when it hadn't been packaged yet (yes, I'm a hipster). I left a few years ago for many reasons, including the fact that I think the community sucks.

I agree with everything you say, and I want to add to it. People compare Node to things like Go, PHP, Python, etc., which is a mistake. As someone who had to write a fairly complex native module for Node that a lot of people still use (5000 downloads this month, according to npm), NodeJS is a framework, not a language.

Node is not in control of the underlying code that implements the ES spec. Node is a bunch of C/C++ middleware wrapping V8 and implementing various server and process utilities. That is all it is.

Why is this important? Because it biases the community toward piling on feature after feature and moving at an insane pace, sometimes in ways that are outside of the community's control. Node is biased against stability, because they feel the need to keep up with V8. Contrast this with Go, which has been frozen as a language for 8 releases now, with each release fine-tuning the stability of the compiler/language, runtime, and libs.

That native module that I mentioned earlier, I have a bunch of open requests to make it work with nodeJS 6 (no idea what this is), which is the fourth breaking V8 interface change I've had to deal with. The native module clocks in at >2000 lines of C++. I implemented the same library in Go and it clocks in at 500 lines. I've only had to modify the Go version of this library once, and it was to take advantage of a new feature (i.e. I didn't have to update it).

Node's continual quest for features is going to keep it in the niche web development space it has come to dominate. There is simply no way I would architect a system with such an unstable platform.


Maybe it's because I learned to write code in JavaScript (browser and node) -- does that make me an async native? -- that I feel the reverse of you. Synchronous just doesn't feel right, and I'm happy with the way node assumes async. I'm one of those who finds the new await syntax sort of unsettling at a gut level.


Async isn't about ordered operations, it's about waiting. When you make a database call, your thread can sit there for 50 ms or do something useful. You can outsource rendering work to a child process. You don't have to let infinite loops block events.
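A sketch of the 50 ms point (the "database call" is simulated with a timer): two calls started together overlap their waits on the event loop instead of stacking up sequentially.

```javascript
// Simulated 50 ms database call; the thread is free during the wait.
const wait = ms => new Promise(res => setTimeout(res, ms));
async function query(name) {
  await wait(50);
  return name + ":result";
}

// Run sequentially this would take ~100 ms; started together, both
// waits overlap on the event loop and finish in ~50 ms.
async function overlapped() {
  const t0 = Date.now();
  const [a, b] = await Promise.all([query("a"), query("b")]);
  return { a, b, elapsed: Date.now() - t0 };
}
```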


That 50ms isn't necessarily wasted. When I get the results of that database call I want the results returned to the user ASAP, with async operations the thread may be too busy to do that.

We do this daily in real life. It's more efficient for me to have 10 minutes down time at work while I'm waiting for someone else than it is for me to start something else then either get interrupted or finish another task before I respond to the person I'm waiting on.


It depends on how fast you can switch context. If you can switch context in 1ms, then waiting 50ms is a waste of 48ms. But if context switch takes 100ms then sure, it's better to wait idling.


This seems absurd to me. What exactly are we going to sit there and do while waiting on the data we need to fulfill a query?

I work on services where any given endpoint may handle many thousands of requests per second. We don't care about a slight penalty to query the database, because these are very short, and our services return responses on the order of 100 ms.

Maybe these are things you care about in a language like Javascript, but in something statically typed, they just don't seem like a problem.


> What exactly are we going to sit there and do while waiting on the data we need to fulfill a query?

Fulfil more/other queries.

> ...in something statically typed, they just don't seem like a problem.

What has static typing got to do with asynchronous IO?


You could use the same thread and process to do something else instead of having to spin up another thread or process.

If you're running a web server, you can handle another request, then come back to the first one when the data arrives.


So shouldn't the only question ever be:

* Would we rather burn computing resources (i.e. on new threads) or calendar time+human resources (i.e. on code refactoring)?

Followed closely by:

* And does it even matter for our situation? (i.e. since for many situations getting every last drop of performance out of a system isn't the real obstacle; it's making it do something useful for the end-user or business)


and let's not kid ourselves that anyone trying to get maximum performance out of anything would be running JavaScript. If you need max performance, you write C, and sometimes raw ASM.


That's like saying if you aren't driving a Veyron you obviously don't care about speed so use a moped.


Having a successful async paradigm helps closer to the bare metal by setting a standard for how things should work. Now you can likely find more asynchronous libraries that attempt to copy the functionality of something familiar than ones creating something completely fresh.


In the client world, you definitely care about using all of your resources and never halting the rendering/main thread. There are often many things going on at once, and you usually need to keep main-thread halts under 16 ms.


> What exactly are we going to sit there and do while waiting on the data we need to fulfill a query?

On the data you need to fulfil one client's query. The answer is that you do stuff you need to fulfil other clients' queries, until you're ready to come back to the first one.

I don't see what type systems have to do with your comment either. Scala uses asynchronous concurrency primitives (Futures) and it's statically typed.


It's common in high performance code to use spin locks (also called busy waiting) to reduce latency. In some circumstances it's better to do nothing and be available almost immediately to do some work than to naively context switch away and pay the penalty of context switching back later. Check out how futexes are implemented in Linux for an example - hybrid mutex and spin lock.


If you're running a web server, you're processing possibly hundreds of requests at any given time. Especially since JS is single-threaded, you want to minimize blocking by awaiting, i.e. yielding so the event loop can keep processing other requests. Async minimizes time spent doing nothing.
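The interleaving is easy to observe with two toy "handlers" (the I/O is simulated with a timer): while the first awaits, the second starts, and both finish afterwards -- all on one thread.

```javascript
// Two "request handlers" sharing one thread: while one awaits its
// (simulated) I/O, the other gets to run.
const order = [];
const io = ms => new Promise(res => setTimeout(res, ms));

async function handle(id) {
  order.push(id + ":start");
  await io(10);              // yield to the event loop during "I/O"
  order.push(id + ":done");
}

const finished = Promise.all([handle("req1"), handle("req2")]);
```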


Doing things synchronously is much clearer and easier; most of the async patterns (like yield/await and promises) really just try to simulate a sync paradigm.

But there is one great reason why Node's async-first mentality is superior: when you want to do async in Node, you don't have to worry that some library you are using is going to lock up your thread.

In any other language, you have to painstakingly make sure everything you use isn't doing sync, or try to monkeypatch all IO operations (Python has something like that).

Frankly, for me, when I'm dealing with webservers I find myself needing to use async quite a lot.


In any other language

No; for example, in Go, every time the code does IO, the scheduler will re-assign the OS thread to another goroutine and asynchronously resume the first one when the IO finishes. It doesn't need any special support from the library.


That sounds like an implicit async/await, which sounds rather awesome.

I haven't really gotten around to trying Go yet, but there are definitely great things going on there.


It's just green threading, which has been around for a long time.

You can run millions of threads on a single machine with modern Linux kernels if you either have lots of memory or simply minimize your stack sizes and usage. The kernel doesn't care much. You don't really need any special language features. The main reason fibers/green threads/goroutines/etc have become popular lately is that common language runtimes and standard libraries don't resize thread stacks on the fly and tend to like very deep call stacks, so minimising your memory usage can be rather hard.


> The main reason fibers/green threads/goroutines/etc have become popular lately is that common language runtimes and standard libraries don't resize thread stacks on the fly and tend to like very deep call stacks, so minimising your memory usage can be rather hard.

Green threads and coroutines have been around for a long time; the recent popularity is probably due to the increased prevalence of people trying to solve the C10K problem.

Green threads are more efficient than kernel threads both in terms of memory consumption and context switching (even more so with compiler support).

The Linux kernel might be able to handle millions of threads, but the truth is one can do it much more efficiently in userland.


> Doing things synchronously is much clearer and easier; most of the async patterns (like yield/await and promises) really just try to simulate a sync paradigm.

I'm currently working on some lighting (DMX-based theatre lighting) software and the ability to have my 'effects' written in a sequential manner by using 'yield' is actually incredible. It's simplified my code a huge amount and made it a lot easier to reason about.

There is a function called every frame, which then calls each effect. Since each effect is a Python generator function, it can be written sequentially and just has to yield whenever the frame is computed.
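The comment describes Python, but the same shape works with JS generators; the names here (`fadeIn`, `runFrame`, the `light` object) are purely illustrative:

```javascript
// Each effect is a generator: it computes one frame's output,
// then yields until the next frame.
function* fadeIn(light) {
  for (let level = 0; level <= 100; level += 50) {
    light.level = level;   // this frame's output
    yield;                 // wait for the next frame
  }
}

// Called once per frame: steps every effect, dropping finished ones.
function runFrame(effects) {
  return effects.filter(e => !e.next().done);
}
```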


> when you want to do async in node, you don't have to worry that some library you are using is going to lock up your thread.

> In any other language, you have to painstakingly make sure everything you use isn't doing sync

I haven't found that to be the case, TBH. In most other languages I've used, I don't have to worry about foo() being synchronous or not. I can assume that it is. In other words, after control returns from the foo() call, I can assume that what foo() is supposed to do has been completed. In the case that foo() needs to do some IO operation, I can usually rely on it doing a syscall that will block the thread and put it to sleep, instead of doing something nefarious like busy waiting.

And that's it. The current thread will sleep while the IO operation in foo() completes, and the OS (or VM, if we're talking green threads) will switch the CPU to do something else in the meantime. And if it's a web server we're talking about, that something else may be another web server thread, serving a different request. Yay, preemptive multitasking!

In the case of node, you do very much have to worry about a function call being synchronous or asynchronous, as it defines the way it has to be called. In the former case, it can be called "normally":

  let a = foo()
  // use `a`
In the async case, some code gymnastics need to be involved:

  // Either callbacks:
  foo(a => { 
    // use `a`
  })
  // Or promises:
  foo().then(a => {
    // use `a`
  })
  // Or promises + await syntax:
  let a = await foo()
It may not seem a big problem at first, especially if using the `await` syntax is an option. But to me the biggest problem of this technique is that the "async-ness" of a function cannot be abstracted away. If a function `foo` that has always been synchronous, and that is used in several places, needs to be changed, and this change implies `foo` calling an async function `bar`, then `foo` will need to become async too, and all the places where `foo` is called will need to be changed. Even though this internal `bar` async call was an implementation detail of `foo` that shouldn't have concerned `foo` callers. And this propagates all the way up: if any of the places where `foo` was called was, in turn, a synchronous function too, then that function will also need to be refactored into an async one, and so on and so forth.
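The propagation is easy to show with a toy `foo`/`bar` pair (hypothetical names, matching the paragraph above): once `bar` goes async, `foo` and every caller of `foo` must be rewritten, even though nothing about `foo`'s contract changed.

```javascript
// Before: everything is synchronous; callers use plain expressions.
function barSync() { return 42; }
function fooSync() { return barSync() + 1; }
const resultSync = fooSync() * 2;

// After: `bar` starts doing async work, so `foo` must become async,
// and every caller of `foo` has to change too -- the async-ness
// propagates all the way up the call chain.
async function barAsync() { return 42; }          // now does "I/O"
async function fooAsync() { return (await barAsync()) + 1; }
const resultAsync = fooAsync().then(v => v * 2);  // caller rewritten
```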

And that's basically why I find node's distinction between sync and async functions so frustrating :(

PS: Cooperative multitasking does not imply this weird syntactic complexity of having two different kinds of function calls that cannot be freely intermixed. For instance, Erlang "processes" use cooperative multitasking, but the "yields" are managed implicitly by the VM, so you don't need to explicitly yield control of the current thread to let other stuff run concurrently.


> make pretty much everything async.

This isn't intrinsically bad. It just makes Node.js biased towards use cases that benefit from asynchronous I/O, which is a perfectly valid design choice. Sensible programmers already know that computationally intensive tasks that benefit from actual parallelism are best served by a different tool.

What's absolutely horrible about Node.js is how they make everything asynchronous: explicit continuation passing everywhere. In other words, the user is responsible for manually multiplexing multiple logical threads through a single-OS-threaded event loop. If it sounds low-level, it's because it is low-level. A high-level programming language ought to do better than this.

Other languages get this right: Erlang, Go, Clojure, Scala, Haskell, Rust, etc.


1) All data has a type.

2) All I/O (if you're doing it right) is asynchronous.

Node does things the right way around; it's just a shame that Linux (unlike Windows) doesn't have async I/O baked into the heart of the OS and well supported with useful kernel primitives.

Synchronous I/O by default means you're wasting your CPU cores. When running at Web scale, you ALWAYS want to leverage async I/O in order to not have the CPU idle blocking on some I/O operation, and you don't necessarily want the synchronization and memory overhead of multiple threads. That's why Node is designed the way it is.

By using the 'co' library and generators you can write async code almost as if it were sync. It works like the inline-callbacks feature of Twisted.
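This isn't the real 'co' library, but a minimal sketch of the idea it implements: drive a generator that yields promises, resuming it with each resolved value, so the generator body reads almost like sync code.

```javascript
// Minimal co-style runner: resume the generator with each resolved
// value; reject the outer promise if a yielded promise rejects.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    (function step(value) {
      const r = gen.next(value);
      if (r.done) return resolve(r.value);
      Promise.resolve(r.value).then(step, reject);
    })();
  });
}

// Async code that reads almost like sync code:
const total = run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(a + 1);
  return a + b;
});
```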


Synchronous I/O by default means you're wasting your CPU cores.

But that's the thing: synchronous code doesn't mean synchronous execution. The language/platform should be able to make it async for you, without forcing you to re-shape your code. And it's not a rare system: even plain old threads work like that - in fact, they're often a good solution: https://www.mailinator.com/tymaPaulMultithreaded.pdf


The code itself is not the problem. Data, specifically mutable state and dependencies are. Relying on the platform to figure those out for efficient async execution hasn't worked out so well in the past...


> it's just a shame that Linux (unlike Windows) doesn't have async I/O baked into the heart of the OS and well supported with useful kernel primitives.

What about epoll()?

> Synchronous I/O by default means you're wasting your CPU cores.

You can implement synchronous IO on top of an async backend using the above-mentioned epoll. I believe that is how read() and similar syscalls are implemented. They block the program's thread, but the core is free to run other threads while it waits for IO.


epoll doesn't change the fact that all of the well-supported I/O primitives under Linux are synchronous. Not necessarily blocking, but synchronous. There are aio_* system calls in the kernel but they suck. There are only a few, and the kernel devs seem to think that doing sync I/O in a different process/thread is good enough so they may not even last forever in the kernel.

Linux does not support I/O completion ports, or even a completion-based model for I/O at all. (Instead it uses the readiness model, requiring you to waste cpu cycles in a polling loop checking if an fd is "ready", that is if you don't want to spin off a new process.) In Linux you can't check an I/O buffer for data and schedule the call that would fill the buffer if it's empty in one system call, the way you can in Windows.

Windows was modelled after VMS, which was designed for reliability and throughput. Linux was modelled after Unix, which was designed so that Ken Thompson could play games on a scrapped PDP-7.


This sounds like something straight out of the Unix Hater's Handbook. :)

But epoll is not the fault of Unix. Linux gets it wrong, but others do not. There's a good (and wildly entertaining) discussion/rant about it in this BSD Now episode: https://www.youtube.com/watch?v=l6XQUciI-Sc


epoll (and kqueue) are products of Unix's approach to I/O, which is synchronous by design. You can build synchronous I/O routines out of asynchronous primitives, but not the reverse (without resorting to separate processes/tasks/threads).

epoll is more wrong and broken than kqueue, but neither are up to the level of capability the NT kernel offers.


> requiring you to waste cpu cycles in a polling loop checking if an fd is "ready",

epoll in edge-triggered mode does not require this loop to exist in userland. In edge-triggered epoll (or the similar kqueue construct), your work thread(s) will be asleep until the kernel chooses to wake one up.


> Synchronous I/O by default means you're wasting your CPU cores

When a process becomes blocked (like due to synchronously waiting for IO to complete), the kernel will context switch away to another runnable process. The process that becomes blocked essentially goes to sleep, and then "wakes up" and becomes runnable again once the operation completes. The CPU is free during this time to run other processes.

If you need to do multiple different of these things concurrently, then you can run multiple processes. Writing a single process with async code won't make that process faster. To do more things at the same time you can run multiple processes. Context switching between different processes is what the kernel scheduler is designed to do, and it does so very efficiently. There isn't much overhead per thread. If I recall correctly, Linux kernel stacks per thread are 8 kilobytes (with efforts under way to reduce that further [4] - also discussed in [1]), and the user stack space is something the application can control and tune. The memory use per thread needn't be much.

Using all available cores to perform useful work is the most important thing to achieve in high-throughput code, and both async and sync can achieve it. Async doesn't become necessary for high performance unless you're considering very high performance which is beyond the reach of NodeJS anyway [2]. Asynchronous techniques win on top performance benchmarks, but typical multithreaded blocking synchronous Java can still handily beat async NodeJS, since Java will use all available CPU cores while Node's performance is blocked on one (unless you use special limited techniques). There's some good discussion about this in the article and thread about "Zero-cost futures in Rust" [1]. The article includes a benchmark which compares Java, Go, and NodeJS performance. These benchmarks suggest that the other tested platforms provide 10-20x better throughput than Node (they're also asynchronous, so this benchmark isn't about sync/async).

Folks might also be interested in the TechEmpower web framework benchmark [2]. The top Java entry ("rapidoid") is #2 on the benchmark and achieves 99.9% of the performance of the #1 entry (ulib in C++). These frameworks both achieve about 6.9 million requests per second. The top Java Netty server (widely deployed async/NIO server) is about 50% of that, while the Java Jetty server, which is regular synchronous socket code, clocks in at 10% of the best or 710,000 R/s. NodeJS manages 320,000 R/s which is 4.6% of the best. In other words, performance-focused async Java achieves 20x, regular asynchronous Java achieves 10x, and boring old synchronous Jetty is still 2x better than NodeJS throughput. NodeJS does a pretty good job given that it's interpreted while Java is compiled, though Lua with an Nginx frontend can manage about 4x more.

I agree that asynchronous execution can provide an advantage, but it's not the only factor to consider while evaluating performance. If throughput is someone's goal, then NodeJS is not the best platform due to its concurrency and interpreter handicap. If you value performance then you'll chose another platform that offers magnitude better requests-per-second throughput, such as C++, Java, Rust, or Go according to [1] and [2]. Asynchronous execution also does not necessarily require asynchronous programming. Other languages have good or better async support -- for example, see C#'s `await` keyword. [3] explores async in JavaScript, await in C#, as well as Go, and makes the case that Go handles async most elegantly of those options. Java has Quasar, which allows you to write regular code that runs as fibers [5]. The code is completely normal blocking code, but the Quasar runtime handles running it asynchronously with high concurrency and M:N threading. Plus these fibers can interoperate with regular threads. Pretty gnarly stuff (but requires bytecode tampering). If async is your preference over Quasar's sync, then Akka might be up your alley instead [6].

> By using the 'co' library and generators you can write async code almost as if it were sync.

For an interesting and humorous take on the difficulties of NodeJS's approach to async, and where that breaks down in the author's opinion, see "What color is your function?" [3].

[1] https://news.ycombinator.com/item?id=12268988

[2] https://www.techempower.com/benchmarks/#section=data-r11&hw=...

[3] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...

[4] https://lwn.net/Articles/692208/

[5] http://docs.paralleluniverse.co/quasar/

[6] http://akka.io/


Node isn't interpreted, it runs on V8 which is a fairly advanced JIT compiling VM.

The main reason Node is so slow is that, as you point out, it doesn't do threading at all, and JavaScript is just inherently difficult to optimise into machine code, even though V8 makes a good effort.


The benchmark code uses multiple processes to utilize all CPU cores (standard way in Node.js land).

I would blame the highly dynamic nature of the language for (relative) slowness – the price you pay for flexibility / quickly-writable code. That being said, it's still fast for dynamically typed language.


What do you think a VM is?

It does not target your hardware architecture. Just because your code is in binary instead of text, it does not become less of an interpreter.


https://developers.google.com/v8/design

C-f for "Dynamic Machine Code Generation"

V8 compiles it to native machine code for execution. It's not an interpreter or a VM with its own bytecode.


Cool. There is no VM, just a runtime not much different from the ones from other compiled languages. It compiles your code piece by piece, and optimizes during that compilation.

This is a very interesting solution for JavaScript, but I think it's constrained by the language and could become better if it were fully compiled.


Perhaps for the use-case of node, being fully compiled could be a bit better. But I'm not convinced. Java has demonstrated (over two decades now) that initial compilation to machine code isn't essential for performant software (ok, less than two decades for hotspot JITs in Java). The cost for node is in startup time, but this is, like with Java, amortized over the life of the program. For a single-run executable, it's perhaps too costly (but we use truly interpreted languages for this as well, so depends on the task), but for a server, it's potentially pretty good.


Yes, that was for the specific case of Node. For web use, full compilation is a clear loss.

Java can make great use of JIT exactly because it's interpreted. When you make partial compilations, you lose a lot of JIT opportunities, and a lot of static optimization opportunities too. Yet, it looks like V8 attenuated this somehow, and got good performance anyway. Must have been a feat.


> 2) All I/O (if you're doing it right) is asynchronous.

Not true. Synchronous I/O is higher-throughput. If you need to do a large amount of high-latency I/O (e.g. web requests) then asynchronous can end up being lower-overhead overall, but making everything async is just as bad as making everything sync.


NPM would be hugely improved if:

1) `npm install` on two different machines at the same point in time was guaranteed to install the same set of dependency versions.

1b) shrinkwrap was standard.

2) it was faster.

3) you could run a private repo off a static file server.


Check in your modules.


In the past, I was involved with building a Scheme interpreter which was automatically asynchronous, without need for manually handling it with callbacks.

Basically, it was lightweight threads which would yield execution to other threads whenever waiting for I/O, or explicitly yielding. It allowed for a very straightforward linear programming style, no callback hell.

Coupled with a functional programming style, there was rarely a need for mutexes or other synchronization between threads.

When a thread yielded, the current continuation was captured and resumed when the thread was ready to continue. At the bottom of it all was a simple event loop where the Scheme interpreter dispatched I/O and timeouts. Scheme happens to have support for continuations built in to the language, so the implementation was actually quite simple.
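The Scheme internals aren't shown, but the core idea (capture where a "thread" is, resume it later from a simple loop) can be sketched with JS generators; everything here is illustrative:

```javascript
// "Threads" are generators; a trivial round-robin loop resumes each
// one until it yields again, exactly like a cooperative scheduler.
const trace = [];

function* worker(name, steps) {
  for (let i = 0; i < steps; i++) {
    trace.push(name + i);  // do a slice of work
    yield;                 // yield execution to the other threads
  }
}

function schedule(threads) {
  while (threads.length) {
    const t = threads.shift();
    if (!t.next().done) threads.push(t);  // re-queue if unfinished
  }
}

schedule([worker("a", 2), worker("b", 2)]);
```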


I've built something similar, but as a completely new programming language with backtracking as the major feature. The intended use is dynamic OS configuration, especially network interface configuration.

The interpreter is single-threaded, but there is simple concurrency between the "processes"; that is, a process yields when it has nothing more to do immediately.

Old online demo built through Emscripten (it's missing many newer features, such as functions): https://rawgit.com/ambrop72/badvpn-googlecode-export/wiki/em...

Docs currently still dying on Google Code archive: https://code.google.com/archive/p/badvpn/wikis/NCD.wiki

Source: https://github.com/ambrop72/badvpn


> Basically, it was lightweight threads which would yield execution to other threads whenever waiting for I/O, or explicitly yielding. It allowed for a very straightforward linear programming style, no callback hell.

How is this different from a normal multithreaded C program on a POSIX kernel?

When I call read(), my thread is suspended until the data comes back, and another thread can run.


If you think promise-based code with a lot of "then" is annoying, perhaps ES7 async/await will make you happier.


This is exactly what I was about to write. Also, the number of "callbacks, that's how it's supposed to be", "hardware is like that", and "programming is hard, deal with it" kind of replies for a readily solved problem is staggering.


I don't know about others but the imposition of try catch for error handling in async/await seems to cut against the syntax gains over promises.


If by promises you mean the `someFn().then(...).catch(function () { /* handle stuff here */ })` syntax compared to async/await, that is actually a very fair point, and it does make the two syntaxes similarly verbose.

However, it is still easier to follow a ladder of conditional statements using async/await than Escher-style stairs of callbacks inside callbacks.
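Side by side, with a hypothetical `mightFail`: the two error styles are about equally verbose, but the await version's happy path reads like ordinary sequential code.

```javascript
// Hypothetical `mightFail` used to compare the two error styles.
function mightFail(x) {
  return x < 0 ? Promise.reject(new Error("nope"))
               : Promise.resolve(x);
}

// Promise style:
function viaCatch(x) {
  return mightFail(x)
    .then(v => "ok:" + v)
    .catch(err => "err:" + err.message);
}

// async/await style: try/catch adds some ceremony, but the happy
// path is just straight-line code.
async function viaTryCatch(x) {
  try {
    const v = await mightFail(x);
    return "ok:" + v;
  } catch (err) {
    return "err:" + err.message;
  }
}
```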


Interesting. In which language is this "readily solved"?


Erlang, Elixir, & Go have nicer models.



It's a solution for a made-up problem. Sure, async/await makes it easier. But not as good as the normal sync code would be. And for what reason? The concurrency problems are already a solved issue in many many frameworks. Sure, if you do fully manual threads in something like C++ and code up everything yourself - you're in for the trouble down the road unless you are a good professional. But there are heaps of concurrency frameworks in all major languages that make this very easy. And what about performance, how does JavaScript justify putting virtually every single return value of every function into some Promise object? Talk about wasted resources.


Await lets you preserve the sequential syntax that normal programmers are used to. It's syntactic sugar, which is fine if you prefer to code in a way that most people are familiar with, versus callbacks that can be messier (callback hell).


JavaScript uses one thread, so not switching between threads, with all their context, saves a lot of overhead.


JavaScript being single-threaded is one of its biggest drawbacks. This is universally agreed on by both its proponents and critics. Yes, sure, some very few resources are wasted on each thread, but in the case of JavaScript, it wastes a lot more by not using all the cores, of which there are usually several in almost any modern computer that will run JS.


> The NPM stuff... well, I think all ecosystems have their pros and cons. I'm not a huge fan of NPM, but it does the job for the most part, and I'm curious as to how people would actually improve it, rather than just complain about it all the time. I don't really have any good ideas (knowing nothing about how package management actually works under the hood).

  - Remove the CouchDB dependency
  - Make login authentication a plugin or API-based
  - Integration with GitHub/GitLab for automated publishes
to name a few.


Yes, and on top of that, it is asynchronicity without real parallelism. Imagine one part of the code doing a long CPU-bound computation; the rest has to wait!

It's like the worst of both worlds.
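The blocking is easy to demonstrate: a 0 ms timer cannot fire until a CPU-bound loop lets go of the single thread, so "async" buys nothing here.

```javascript
// A 0 ms timer can't fire until the CPU-bound loop releases the
// single thread -- async-ness alone provides no parallelism.
const events = [];
setTimeout(() => events.push("timer"), 0);

const t0 = Date.now();
while (Date.now() - t0 < 30) { /* busy: simulated CPU-bound work */ }
events.push("cpu-work-done");

// The overdue timer only runs after the synchronous code finishes.
const settled = new Promise(res => setTimeout(() => res(events), 5));
```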


With due caution about @marssaxman's point that software is fundamentally interrupt-driven, I will agree with this.

I work on a real-time system that is controlled by RPC from a desktop. We claim we want to keep the real-time code small and simple.

Yet I find people pushing complexity down onto the RT system even when the real-time guarantees aren't needed, because the PC code responds to it using a callback-driven async model. It's easier to put a non-real-time for loop into the RT thread than to emulate a for loop on the PC.


I think the idea of having a powerful scripting language based on JS is a godsend in many ways. But like you, I also feel that async is a bad default for a language that has such widespread use.

I love being able to use one syntax to write 95% of my code(instead of having to switch between JS / python / php etc.) but some of NodeJS's programming and ecosystem realities are annoying to say the least.


Everything is async in node, because doing sync operations in the event loop is terribly slow.


No. That's nonsense. Everything is async in node because node is based on the V8 Javascript engine which is not thread safe. V8 is not thread safe because it was designed for web browsers, and browser rendering engines are also not thread safe.

People have tried to retcon node's choices as being based on some sort of deep technical justification, but that's not the case: its limits are the limits of the material it's made of, not a specific design choice.

If you look at more advanced VMs like the CLR or JVM then they support all modes of operation: synchronous/blocking (which is often what you want), synchronous/blocking with many threads, async with callbacks, async with continuations ... node only supports one. How is that better?


JavaScript was explicitly designed to use one thread and prevent all kinds of concurrency problems. It requires getting used to, but it's very powerful.

Look at this page and tell me the JVM is better: https://docs.oracle.com/javase/7/docs/api/java/util/concurre...


Javascript was never explicitly designed to use one thread: it used one thread because Netscape used one thread. This is back in the 1990s when multicore on the desktop didn't exist and JS execution speed was a non-issue.

The page you linked to shows a full and complete library of concurrency tools, which you don't have to use if you don't want to. How is this evidence of being worse?


Sure, powerful. As long as you're mainly I/O-bound and shared-nothing.

Oh, and your link actually DOES show that the JVM is better. It can give you everything node.js has and then some.

If you want to see something like node.js for grown-ups, take a look at Vert.x or Akka (& Frameworks based on it), for example.

(Sorry for the snark, it's been a long day...)


That library is probably one of (if not the most) powerful concurrency libraries in existence.


The universe is inherently concurrent. It only makes sense for programming languages (which are used to model problems in this universe) to follow along.


Your comments about the annoyingness of async are real.

BUT

Just because you want to do ABCDE does not mean it's feasible.

You can access a file, read it, compute something, and depending on that, access another file, read it, and write something via HTTP POST somewhere else.

BUT the nature of i/o is such that the problem is 'intrinsic' - it's not a 'node.js' problem so much.

Of course you know that you can do everything synchronously if you want, but then you get hangups and whatnot waiting for underlying resources, networking calls, etc.

The nature of a distributed system is inherently async, and so we have to live with it.

That said - maybe you have some examples of where async definitely should not be used?

The other huge problem with Node.js is the vast, crazy expansion of tiny libraries, supported by ghosts, that change all the time. A simple lib can depend on a dozen others, with varying versions, everything changes almost daily. A small change somewhere can really jam things up.

This is by far my #1 concern, although it can kind of be avoided if you're careful.


> Just because you want to do ABCDE does not mean it's feasible.

Every single language is able to do ABCDE, except for JavaScript. Yes, I guess that means it's impossible.

> The nature of a distributed system is inherently async, and so we have to live with it.

The nature of a distributed system is the one you impose onto it by design. Computer systems are not something that appears in nature for us to harvest.

There are plenty of synchronous architectures for distributed systems, from distributed logical clock ones to clock domain ones. People tend to avoid async ones anyway.


I usually ask people to explain why they picked or like (or dislike) a particular technology and that surprisingly tells quite a bit about their proficiency level.

At least in interviews I found it tells a lot more about their proficiency than, say, knowing how to invert a binary tree in under 20 minutes or solve a digital circuit diagram with object-oriented principles.

Node.js is a technology that raises red flags when someone advocates it. I've heard stuff like "it's async so faster", "it makes things non-blocking so you get more performance not like with threads", "you just have to learn one language and you're done", "...isomorphic something..." When digging in to discover whether they knew how event dispatching works, or how those callbacks end up getting called when data comes in on a TCP socket, there is usually nothing.

The other red flag is the community. Somehow the Node.js community managed to accumulate the most immature and childish people. I don't know what it is / was about it. But there it was.

Also maybe I am not the only one, but I've seen vocal advocates of Node.js steam-roll and sell their technology, often convincing managers to adopt it, with later disastrous consequences. As the article mentions -- callback hell, immature libraries, the promised performance guarantees that somehow vanish when faced with larger numbers of concurrent connections, and so on. I've seen that hype happen with Go recently as well. Not as bad, but there is some element.

Now you'd think I am a 100% hater and irrational. But one can still convince me that picking Node.js was a good choice. One good thing about Node.js is that it is Javascript. If there is a team of developers that just know Javascript and nothing else, then perhaps it makes sense to have a Node.js project. Keep it small and internal. Also npm does have a lot of packages and they are easy to install. A lot of them are un-maintained and crap, but many are fine. Python packaging for example used to be worse, so convincing someone with an "npm install <blah>" wasn't hard.


I've been developing public and internal APIs with node.js full-time for the past 3 years. I can see where you are coming from, but nothing you've said explains why the platform itself is not useful. Most of what you complained about is the caliber of developer and the ecosystem. That reminds me a lot of the complaints about PHP.

The truth is that there are good developers using node.js, there is good code in the ecosystem, and someone that's worked with it for a while has learned lessons.

I agree with your performance complaints. On my last project we had to spend considerable time reworking components of our application due to those components blocking the event loop with CPU intensive tasks.

I would say that node.js is probably selected more than anything else for speed of getting a project up and running. It's easy to find JavaScript devs. JavaScript doesn't require a compilation step so iterating and debugging small changes is much faster. There's a ton of pre-built frameworks for serving up APIs even with very little code for CRUD apps.

It's not that there's something you can do with node.js you can't do with other languages. There's just less of a barrier to entry.


This is an age old story.

There's _way_ more bad Perl in the world than good Perl (Matt's Script Archive, anyone?), but these days it's easy to find well written Perl and appropriate and useful Best Practices for Perl projects of any scale.

There's a _lot_ of bad PHP out there - but Facebook and HipHop clearly show that there are sensible, scalable, and well understood ways to write good PHP code.

Nodejs seems to me to be like the Perl world was in '95 or the PHP world in 2000 or the Python world in 2002 or the Java world pretty much forever ;-) There aren't enough examples of "Good Nodejs" yet, and all the Google searches show you is "typical" Nodejs code - and, as Sturgeon's law dictates, "90% of everything is crap" - so most of the Nodejs code that gets seen or discussed is crap. That will _probably_ change as we "forget" all the worst written Node, and upvote, link to, copy, and deploy well written Node.

There's more similarity to PHP than other languages in my opinion too, in that Node _is_ Javascript, and like PHP, it's a fairly easy route into "development" for an html-savvy web designer, which means there's a _much_ larger pool of novice Javascript/Node devs with little or no formal training. You don't need 3yrs of a CS degree to dive in and start "scratching your own itches" in Javascript - and in "that part of the industry" it's much easier to leverage "a great looking portfolio but no CS degree" into a job offer than in, say, an enterprise Java or DotNet shop (or a GooFaceAmaUber "don't even respond if they don't have a PhD in another field as well as their CS one" reject-a-thon...)


You're touching on the reason yourself. When 90% of the code you find is crap, it simply means that the language has a low barrier to entry. When languages like ML/Scheme/LISP variants, Haskell, etc. don't have that much crap, it's because the barrier to entry is higher.

And this goes not just for languages. Frameworks like Akka are the same. The idea of actors forming the system is simple and elegant, but it's far from simple to get started with.


For Perl and PHP - as well as the low barrier to entry, there was also the almost ubiquitous availability - at least across web hosting platforms in the late '90s and early 2000s. I was a Perl guy through and through back then, but even I wrote some PHP because getting web hosting companies to allow/enable Perl CGI (or even worse, FastCGI or mod_perl) was expensive or impossible, whereas PHP was "just there" and "just worked".

(But yeah, you don't get frustrated pixel-pushers from the coloring in department getting over ambitious and writing up a site backend in Rust or Go... Your typical weekend Haskell hacker is about 100% likely to be able to spell "algorithm" and at least handwave their way through a Big O discussion... No big criticism of talented designer types, but in general they've got as many holes in their understanding of the "S" bit of "CS" as most developers have in their aesthetic abilities...)


Python hasn't got a high barrier to entry (unless you are talking about deploying it to a web server). The code is generally better than PHP or JS code (though I have seen my share of crappy Python code).


IMO python has a much higher barrier to entry.

Hell, just getting pip installed can be a bear on Windows, remembering the damn --upgrade flag, and the whole concept of a virtualenv means that often the casual Python hacker doesn't have access to packages until they are much further along.

Contrast this with Node, where npm is always right there, and installs being local by default means the kid that just sits down with JS has access to every package out there and can even publish things in his/her first few days.

I'm obviously more comfortable with the NPM ecosystem, and I might just be biased, but it does seem much easier to work with and use.


My understanding is that Facebook has migrated most (all?) of its PHP codebase to Hack, which goes to show you: even the company that built a global empire in PHP can recognize it for the shitty, inadequate language that it is.


<devil's advocate> On the other hand, that shitty inadequate language got them up to their first billion or so users and way past Unicorn stage. If you're optimising your language choices and talent pool for your second or third billion users when you currently don't even have enough users to fund the ramen invoices, you're 100% certainly playing around with "the root of all evil"... ;-)


FB's primary reason for creating Hack was to add type-safety to PHP code. They spoke about how PHP was great for the fast development cycle, but would benefit from more structure if the created codebase was to become long-lived. So the new Hack code is the same PHP code, only with types (and the implied benefits such as static analysis, optimisations, etc).

Incidentally, this is also why FB created Flowtype - the lack of types is a massive boost during the code's youth, and a massive burden from early adulthood and onwards. Hack/Flow are a clever way to bridge the gap between typed and nontyped languages.


I was going to say the same thing about PHP. I was at a PHP shop who thought all their problems would be solved with NodeJS. They basically created the same problems in Node. The next place I went to was heavily biased to Erlang and OCaml, but the front-end is done in Node. But the code and architectural quality are like comparing night and day.


JS doesn't require a compilation step, but every project I look at has one anyway.


I've done quite a lot of C/C++ work. In my experience, getting JS up and running is more exhausting. You might have webpack running, which runs babel, and sass, and a few hundred node processes. It's just bloody insane today. And none of it plays nice at all.


>I would say that node.js is probably selected more than anything else for speed of getting a project up and running. It's easy to find JavaScript devs.

Writing JavaScript for the browser and writing it for Node.js are different beasts. It's "easy" to find JavaScript devs because most people have tinkered with jQuery and think that qualifies them. Furthermore, since the Node.js explosion in 2012, lots of posers have been trying to get into this scene.

>JavaScript doesn't require a compilation step so iterating and debugging small changes is much faster.

As another commenter pointed out, this is true, but most projects use a Grunt pipeline or something similar at this point, because they'd feel left out if they didn't. Just more groupthink from the Node.js camp.

This isn't a property unique to Node.js. Pretty much everything that isn't Java, C#, or mega-crusty pre-Perl CGI has it. However, compilation is actually pretty useful; it's not something you necessarily want to throw away. Java and C# devs seem plenty capable of training their Alt+Tab / F5 fingers to get tweaked code running fast.

>There's a ton of pre-build frameworks for serving up APIs even with very little code for CRUD apps.

There may be, but the JavaScript ecosystem changes so quickly and integrates so many esoteric, not-supported-anymore-after-next-Tuesday things that it's offputting. My experience with the code quality of a lot of common libraries has not been great either.

I personally detest the Node.js fad and can't wait for it to die out. I have never been able to find someone who can actually give me a good reason for it to exist. At least when RoR was the fad there was a sensible reason behind it: PHP sucked. I really don't know why Node.js even exists except to push buzzwords and execute a really terrifyingly bad idea of making everything JavaScript.

Even Eich admits that JavaScript was a one-week, publish-or-die project that did many things wrong. It's frankly embarrassing that we still use it as the primary language in our browsers 20 years later. I don't know who looks at that and says "Let's take this to the backend, baby!"

Personal ambition is basically the only reason I can think of for Node.js to exist, both in general and in any specific organization. People see it as a tool to colonize parts of their company. That's the only explanation, because they assuredly don't see any technical benefits in it.


> If there is a team of developers that just know Javascript and nothing else. Then perhaps it makes sense to have a Node.js project. Keep it small and internal.

If you have a team of developers that know just JavaScript and nothing else, you are probably dealing with 100% front-end developers that have little to no understanding of systems programming and what you need to do to be performant and secure outside of a browser.

This leads to a lot of issues with security in the Node.js ecosystem, and is one of the reasons I have a job.


Any developer that only knows one language is carrying around a horrible cognitive bias of how software should be written and they don't even realize it. They are dangerous; keep an eye on them, and for the love of god teach them something new.


Depressing thought of the day; most of the developers I work with only know 1 language and/or framework and have no ambitions to expand. I guess this is just the life of an "enterprise" employee. :(

Honestly, when I mention something like Golang or Rust or Cordova they look at me like I have 2 heads or something. Yes, I don't expect the layperson to know what those are, but these are long-term application developers (mostly C#) and they have never-ever heard of these things. At first I just thought it was the people I work with, but I recently changed jobs and it's the same thing there. I really am beginning to think that I'm an anomaly.


Ehh, this is usually just shortsightedness all around. A front-end developer for JavaScript is going to know HTML/HTML5/CSS/JavaScript and the quirks of all browsers, and quite possibly complex frameworks on top of all of this. The amount of knowledge it takes to be good at all of this far surpasses the "oh, they only know javascript" naivety that I hear from other developers.

I know Java developers and .net developers who don't know or want to know any other language because they're not just "java" devs, they know the frameworks and have experience to build things.

If all you do is "scripting" then sure, know a few scripting languages.


> The amount of knowledge to be good at all of this far surpasses the "oh, they only know javascript" naivety that I hear from other developers.

No, that's not it at all. I am friends with front end guys and I'm aware at how much work it takes to just keep up with and be good at the framework du jour, Backbone or Angular or React (then Flux, then Redux) and whatever's next from there.

Being good at markup and layout, and being good at a front-end framework, are not skills that are direct analogues to systems-level issues that plague Node.

For example, Node LTS's Buffer() is a Uint8Array in the browser, but on the server side it's backed by a malloc()-style allocation that may return uninitialized memory with leaked information from other parts of the process. libuv's event system is written in C. Socket programming on *nix is nothing like using a WebSocket library in Chrome, and the front end has no concept of filesystems outside of localStorage or limited WebSQL use. When was the last time a browser had you deal with malloc()?
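For concreteness, the Buffer behaviour being described maps onto the explicit APIs that later Node releases added (a sketch; Buffer is a Uint8Array subclass, so typed-array methods work on it):

```javascript
// Buffer.allocUnsafe() keeps the old `new Buffer(n)` behaviour: it takes
// raw memory from the allocator without zeroing it, so it can contain
// stale bytes from elsewhere in the process -- the leak risk above.
const fast = Buffer.allocUnsafe(1024);

// Buffer.alloc() zero-fills, trading a little speed for safety.
const safe = Buffer.alloc(1024);

console.log(safe.every(byte => byte === 0)); // true, guaranteed
console.log(fast.length); // 1024 -- but its contents are unspecified
```

A front-end developer who has only ever seen the browser's always-zeroed Uint8Array has no reason to expect the allocUnsafe() pitfall.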

How many front-end developers understand the performance of writing files to disk with different filesystems, master-slave or master-master replication of databases, building solid server-side deployment workflows, securing their code and their servers from all the things that can go wrong? This skillset generally doesn't exist; it's abstracted away by the Chromium sandbox, or the interpreter. Front-end generally does not require knowledge of the principles of the underlying operating system and architecture to get the job done.

On the backend, all of these problems become apparent. It takes a different way of thinking, and a different understanding of a computer system. My original comment to rdtsc is just that -- a "JavaScript" developer that works on front-end tech only doesn't have an adequate idea of how these things work to build good server-side architectures, and Node.js is not the tool to use for those types of things.

Frankly, if they only know JS, they haven't touched these things at all. Outside of Node and Electron, virtually nothing on a system outside of a browser is written in JS, so you wouldn't have picked up these systems skills in JavaScript.


This seems dishonest. Everything you said happens regardless of language. Being a Go developer doesn't mean you know more about how goroutines are handled by your operating system. The very same techniques you use to expose performance metrics and analytics in Go, Java, Ruby, Python et al. happen in Node, and I'd wager that Node/JavaScript has an abundance of profiling and performance tooling for people to embrace.

Sure, some developers may not understand the implications of everything they do, but if the app has implications that impact its performance, they will discover them.

Not everything needs to be over optimized, over engineered or overly complex.

As an Ops dude I find it infinitely easier to scale/manage Node.js apps than Java. For instance, I can solve some issues by scaling Node out wider across more machines rather than having one big optimized machine. An implication of going wider is that if some instances crash, the others handle the load; because I go far/wide, I've chosen to optimize availability/performance in different ways than you have.

I may know that a Node app crashes after x days from leaky memory and deal with it, because I have Mesos/Marathon watching liveness pages and killing/restarting the container, redeploying the app, or autoscaling it up and down as need be.

Replace Node with anything else and you have the same problems. We see hundreds of JVM releases a year, many versions of Go, many point releases of Ruby and Python... it's a never-ending battle that has to be fought a lot more deeply than any one person or one developer ever can.

And let's be honest: even the best Node app is there to serve the best front end, so letting front-end developers focus on being front-end engineers, and helping them solve any issues to fill in the blanks, is not a bad way to do business.

Expecting everyone to over-optimize, over-engineer, and over-theorize about how things should be is a huge waste of time.


I've never really understood the "one language" argument. I'm a good (not great) front-end JavaScript programmer, but I don't think that helps with writing node.js. Sure, the syntax is technically the same, but the bulk of a node.js program looks and is written quite differently from front-end JS (even JS library writing). Surely having similar syntax (which is basically C-like anyway) isn't that big a deal?


It's supposed to mean you can share object declarations. But it's javascript so that doesn't mean much. And it's not like the data format in the database and the GUI is ever the same by the end of a project anyway, so two formats end up being needed regardless; it's a bit of a pipe dream.

The one case I've found where node really makes a lot of sense is if you're trying to build a webapp with a native version using the same code. Then node-webkit is the only game in town that doesn't involve a transpiler. Then again, the component parts are flaky enough that maybe the transpiler isn't so bad after all.


The one-language argument, as far as I can tell, began when people tried to share code across the server and the front end. It sort of worked for one person's use case, but nobody else really had much success with the endeavor (at least that I'm aware of).

That argument is a pretty bad one and is the one most people like to assume people are talking about so they can make fun of it.

The real argument, at least in my mind, is that you can specialise and pretty much run your entire stack in JS, with only a bit of JSON and some HTML. Build tools, database access, deploy tools, dependency management, server-side code, client-side code: it's all JS.

For me, I don't agree that it matters, I don't even think having a diverse stack takes much longer to learn. But I do understand the fanfare for it.


"I've never really understood the "one langauge" argument. "

Being able to do front-end and back-end in js is really nice, especially passing data back and forth in js-object like format.

If you have not tried it, do so, I feel it does streamline things nicely.

Node.js is a little lighter than most things and if you use it correctly I think it has advantages.

I wouldn't build a bank on it however :)
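A concrete (entirely hypothetical) example of the streamlining being described: one validation module, bundled for the browser and require()d by the server, so the two ends can't drift apart.

```javascript
// validation.js -- hypothetical shared module. The browser build uses it
// to show errors before submitting; the server re-runs the exact same
// rules on the POSTed body, since client-side checks can be bypassed.
function validateUser(user) {
  const errors = [];
  // Deliberately simplistic email check, for illustration only.
  if (!user.email || !/^[^@\s]+@[^@\s]+$/.test(user.email)) {
    errors.push('invalid email');
  }
  if (!user.name || user.name.length < 2) {
    errors.push('name too short');
  }
  return errors;
}

module.exports = { validateUser };
```

In any other stack, those rules get written twice, once per language, and inevitably diverge.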


You can pass JSON using Python, or pretty much any server side language.


Of course you can. But in some languages it's a pain.

And you can't do python on both ends.

Using JS on both sides is nice if the architecture is suited for it.


It seems like an appeal to hiring manager to me. "You'll have all the developers you need! Look, it's all javascript and everyone knows javascript!"

The problem is assuming language proficiency is the defining characteristic of a developer. We really need to disabuse management of that idea.

I'd rather teach a good 'microservice developer' javascript than teach a good 'javascript programmer' what makes a good microservice.


I'm still surprised no one has created a JavaScript front-end for Erlang/BEAM, like Elixir or LFE. JavaScript is a decent language with a huge community and Erlang/BEAM could solve Node's deficiencies like async-everything APIs and single-threaded execution.


Probably because anybody smart enough to build that is also wise enough to leave the tire fire that is the Javascript community and language.


Maybe because JS is not really a decent language, just the pile of awfulness that we've had to grin and bear to do web front-ends for too long, and Erlang or Elixir is nicer to work with to start with.


That's because Erlang developers have good taste ;-)

But Elm is actually moving in that direction, the last release added a process abstraction inspired by Elixir, and I've read they're even considering BEAM as a compile target, in addition to the current JS target and plans for a WebAssembly target.


> JavaScript is a decent language with a huge community and Erlang/BEAM could solve Node's deficiencies like async-everything APIs and single-threaded execution.

While BEAM of course has facilities at the VM level that support that, you'd have to extend javascript to take advantage of them on top of building a JS->BEAM compiler.

At which point, the benefit of using JS rather than a language designed to deal better with concurrency and parallelism is dubious.


I've been a software dev for 10 years. When I started I saw the transition from Perl to PHP and a lot of snobbishness from the former towards the latter. Seeing the changing of the guard in web languages was pretty instructional and it's something I see again and again.

I think basic CompSci courses should really include a course or two on managing software projects and handling problems like "what framework do I use to build my new app?" Because fundamental language or framework decisions have both a technical and a business component, and even as a front-line programmer it helps to be aware of both.

Node.js is a great environment for getting a server side app going fast and it has very good tooling thanks to the rest of the JS community with additions like npm, gulp, bower, express etc. There's obvious benefit in having shareable libraries between client and server side and most importantly (to software companies) hiring coders who can work with it is far, far easier than say finding that rarest of unicorns - an experienced Haskell developer.

If (and it's a damn big if) you outgrow Node.js you're doing well. Then (and only then) look at the alternatives like Play Framework, Spring Boot, Vert.x or whatever else floats your boat.

Rants can be useful in giving a kick up the asses of the relevant community to go address certain bugbears. This rant though is so damn generic it reminds me of those Perl developers at college pouring cold water over the idea of using PHP because they felt threatened by it.


    I think basic CompSci courses should really have a
    course or two on managing software projects and handling
    the problems of what framework do I use to build my new
    software app
The problem (if it's a problem, depends on who's asking) is that undergrad CS courses mainly train you to be a CS graduate student (which in turn train you to be a CS academic), but most students choose to major in CS because they want to become professional programmers (aka Software Engineers).

I've done both bachelor and master level CS studies and, job-preparation-wise, would probably have gotten as much (or more) from 1-1.5 years (2-3 semesters) of vocational training than I did from 8 years of university.

Probably the most apparent perk my studies have gotten me career-wise was being invited to interviews at major tech companies like Google and Amazon.


I think this actually varies from school to school. My undergraduate university, where the CS program was part of the engineering school, focused on software engineering. My graduate university where the CS program was a part of the college of arts and sciences focused a lot more on theory. But my view here may be skewed, because I only remember one professor who taught most of our theory courses -- I took most of his classes.

From roughly the same number of years in university (8ish), I can say there were some clear professional benefits from having done a masters' program. From my advanced software engineering course I was able to talk to my teammates about Gang of Four patterns and better ways to go about problems than if-else chains. From my theory courses I was able to identify problems they were seeing, tell them which algorithms would probably be worth looking into, etc.

That is all stuff one can learn outside of university, but I learned it while I was there. I get giddy whenever I can try something new, and all the writing I did (I wrote a few academic papers) made me really comfortable with writing technical design specs and documentation. When I was in academia I had to document everything anyway, so it's become second nature to me.

I think 2-3 more years of working would have certainly looked better on a resume, but I think my skill set has added to my team as a whole and I'm glad I did it.


> most students choose to major in CS because they want to become professional programmers (aka Software Engineers)

Then why not major in Software Engineering? Is that not an option in American universities?


One problem is that it does not have the standing of a pure subject like math, engineering or CS.

It's more vocational, isn't offered by top-tier universities (in the UK) and therefore has lower status, e.g. it's like taking Media Studies at an ex-poly vs. reading English at Oxford.


It was very much seen that way back in the day at Birmingham University. We wrote an X11 window manager with audio and graphics processor apps for our first term in 2nd year. I saw academia start to go down the pan at college with the introduction of "ICT". To me, this was nothing but subsidised Microsoft training.


this was nothing but subsidised Microsoft training.

"Subsidized Microsoft training" is what most people actually want and need to get the sort of job they aspire to. People with this "Subsidized Microsoft training" are also what many companies are looking to hire. So why is it such a bad thing to offer people this option?


It's not a problem, unless you go to higher education looking for CS and end up with that instead.


Because we need people to program computers, not use Word.


There is a problem here, but to be clear, the solution is not to convert CS into vocational training. The problem with the way software engineering was introduced into US universities is that it was usually dumbed-down to broaden its appeal. I do not know if it is still like that.


Don't know about the US, but in Israel anything to do with programming is in such high demand and so overbooked (therefore requiring competitive grades to get in) that you get what you can.

I got into my bachelors in 2002, at the nadir of the dot-com crash (when demand for CS education was at its lowest) & my grades in high school were just good enough to get into the Math & CS combined program (both CS and SE had higher requirements).

The only other option would have been to go to a private college rather than a state university, which would have had (at least) double the tuition and (at most) half the prestige/employability-potential of a university degree.


Interesting. I'm from Germany and studied "Computer Science and Media" and it was rather overbooked, like Media stuff often is here.

But the regular computer science or software engineering degrees (like most technical degrees) aren't overbooked ever.

They let in everyone and throw 60% - 70% out after the first year.


Unlike Germany, in Israel how desirable a field of study is is directly correlated with how much money one can make working in it... So media/art stuff is very low on the totem pole.


Hehe.

I have the feeling most Germans (Millennials?) just study what they do in their free time.

Yes, there are the academic families that tell their kids to become lawyers or medical doctors or something like that.

But most tell me I was just lucky that I liked doing computer stuff, so I could study computer science and make "mad bucks" while they liked to draw or read and had to study something like languages or fine arts.


I have actually been living in Germany for the past 3 years, and spent 8 years in Austria before that :)

I find the non-materialistic (or at least less materialistic) mentality here preferable. But to be honest I think the only reason is that life in the German speaking countries is much easier - you can earn minimum wage or close to it and still live a decent life.

In Israel I had to earn twice as much for half the quality of life (slight exaggeration but only slight).


I like this mentality too.

But BAföG (the German student aid) can accumulate to more than 10k€ of debt (you have to pay back half of it), and some of my friends even took out loans for their (non-consecutive) master's degrees or because they didn't finish their bachelor's in the regular time. Now they are stuck with 10k - 40k of debt and no way to pay it back with the money they make.


Or my favorite: Business Administration with a concentration in Management Information Systems. This was my route, and it laid a great foundation in how business works (you know, the part of a company that PAYS for IT), as well as in business-focused IT - which does lag behind new uses of tech, but does cover being ready for fundamental shifts/disruptions, which have been around in the business world since the beginning.


Yes, it is an option (at least in America). Many colleges' business departments do offer an "information technology" degree, which typically combines some high-level programming skills with some business school classes (e.g. accounting, economics, management, marketing, etc.).

This is probably the best undergrad option for anyone who wants to be a "professional programmer" over here, if you are more concerned about the writing web apps side and not the optimize-that-algorithm nuts-and-bolts type of side.


Not sure about America, but in Australia it's more a lack of understanding.

Also, at the university I am studying at, the Software Engineering course takes 5 years, since that is the minimum study time needed to be accredited as an engineer, while the CS degree is 3 years.

The more messed up thing is that everything taught in the CS major is also taught in the Software Engineering major, but Software Engineering has more electives and a few general engineering units.


It's not common. Usually you can choose between Computer Science or Electrical Engineering, if you want to do things with computers. The problem is, Comp Sci tends towards an inordinate amount of proofs on blackboards and theoretical horse-hockey that you don't get to actually do in code, while EE tends towards playing with soldering irons and wiring breadboards, with a little embedded low-level programming.


Well, I would argue that universities are here to create academics, not job candidates. That vocational work is exactly what you need to be a software engineer. I think the issue is more that people don't understand what university-level CS is, or that it's NOT a "get a job" pass.


Well companies demand it. You can get a job on work experience alone, but if you're just starting out good luck applying to Google/MS/Facebook/Amazon/etc with only vocational training.


I don't disagree with most of what you say but I can't shake the feeling that for the most common types of web project - nothing touches the productivity possible with frameworks such as Rails or Django.

(I don't know enough about mature PHP MVC frameworks to comment on whether Laravel et al should also be in that list)

Is there anything for node that offers that wide range of functionality and a mature ecosystem for content-driven sites?


> If (and it's a damn big if) you outgrow Node.js you're doing well. Then (and only then) look at the alternatives like Play Framework, Spring Boot, Vert.x or whatever else floats your boat.

I couldn't disagree more. If you like the concept of type safety, don't waste your time.


You called out the fact that you see repeating patterns in the industry, and go on to mention Node.js as a great environment for rapid prototyping. That made me chuckle because that's exactly what I heard about RoR just a few years ago.


There is no such thing as 'outgrowing Node.js'.

Often, people who think that they're "too good" for Node.js are people who had a single bad experience; they designed their Node.js app poorly (e.g. they didn't think through their architecture enough) and then, instead of blaming themselves for their ignorance, they put all the blame on Node.js.

It's always convenient to blame the tool. But I've used a lot of different tools and I can tell you that the tool is almost never the problem.

I've found that people who don't like Node.js are often people who were forced to learn it because their company used it. These people had resentment against Node.js from the very beginning - And they kept this resentment with them while they were 'learning' it - And as a result, they never really learned it properly - They never fully understood it.

Not everyone has to love Node.js but there is no reason to hate it either. It's a damn good tool.


    It's always convenient to blame the tool. But I've used a
    lot of different tools and I can tell you that the tool 
    is almost never the problem.
This is both true and also misses the point. Given a sufficiently smart software engineer, Javascript is fine. But, like the old adage about the "sufficiently smart compiler", the joke is that there is no sufficiently smart engineer. Even the best will occasionally make mistakes. Over time those mistakes will accumulate. When enough of them accumulate, you experience serious pain.

Any argument that rests on the idea that all you have to do is "just hire engineers that can be perfect in perpetuity" is doomed to be a poor one. No language is perfect; however, there do exist languages that make it possible to fix problems after the fact with higher levels of confidence and more guarantees. Javascript is not one of those languages.

Not to mention that single-threaded callbacks have an inherently lower ceiling on concurrency than multithreaded approaches. Sometimes you have to decide whether to stick with your current codebase and spend thousands to millions of dollars extra on infrastructure, or to rewrite and take a medium-term hit on velocity instead.

There most definitely is such a thing as outgrowing Node.js.


Anybody can criticize a language or platform, but it doesn't mean much if there aren't any better alternatives.

This article presents an extreme conclusion without much supporting evidence, so it's pretty pathetic that this made the front page. Nobody even uses callbacks anymore now that we have Promise and async/await.
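To illustrate the callback-to-async/await shift (a minimal sketch; `fetchUser` is a made-up stand-in for any async operation):

```javascript
// Made-up async operation standing in for a DB query or HTTP call.
const fetchUser = (id) =>
  new Promise((resolve) => setTimeout(() => resolve({ id, name: 'user' + id }), 10));

// Old style: nesting grows with every dependent step ("callback hell").
function oldStyle(id, cb) {
  fetchUser(id).then((user) => {
    fetchUser(user.id + 1).then((friend) => {
      cb(null, [user, friend]);
    });
  });
}

// Modern style: reads top-to-bottom, like the synchronous code
// the parent comments wish for, while still being non-blocking.
async function newStyle(id) {
  const user = await fetchUser(id);
  const friend = await fetchUser(user.id + 1);
  return [user, friend];
}

newStyle(1).then((pair) => console.log(pair.map((u) => u.name))); // [ 'user1', 'user2' ]
```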

Yes, Javascript isn't the best language (though ES6 improves tremendously on ES5). But right now it's the only language you can use in the browser (aside from languages like Clojurescript that compile to Javascript). The biggest advantage of Node.js is that you can reuse the same code on the client and server, and thus it's ideal for creating universal single-page web apps. Being able to reuse the same code on the client and server is a massive advantage that can't be overstated.

Also, Node.js with Nginx is more scalable out of the box than Ruby on Rails, Python/Django, PHP, etc. Hell it's comparable to Java, which is incredible for a dynamic language. The difference is, you can write a Node.js web application 10x faster than the equivalent application in Java, and with a drastically smaller codebase (less code = less code to maintain). These days developer time is the biggest cost.

These rants come off as coming from either (1) back-end developers who never touch UI code or anything on the client-side (not where Node.js thrives) (2) armchair commentators who don't actually have to get shit done in terms of building and deploying web apps on deadlines, and thus have the luxury of criticizing everything without presenting realistic alternatives.

> "There are only two kinds of languages: the ones people complain about and the ones nobody uses." -Bjarne Stroustrup


> The biggest advantage of Node.js is that you can reuse the same code on the client and server, and thus it's ideal for creating universal single-page web apps. Being able to reuse the same code on the client and server is a massive advantage that can't be overstated.

This advantage is totally overblown, and in fact I am not sure it even is an advantage. It definitely makes things easier in the short run, but it always comes around to bite you in the ass. The fact is, objects on the server and objects on the client are different things, and while you write less code up-front because the differences aren't always immediately obvious, you end up writing a lot more code later because you didn't think about the very important differences. Representing them as the same thing enables shoddy programmers to not think about the context of where their code will be run.

> "There are only two kinds of languages: the ones people complain about and the ones nobody uses." -Bjarne Stroustrup

"[A] quotation is a handy thing to have about, saving one the trouble of thinking for oneself, always a laborious business." -- A. A. Milne


I'm guessing you've never written a universal/isomorphic single-page application?

If you try to write one without Node.js, you're going to be writing a lot of the same code twice - once in Javascript to run on the client side, and again to run the same exact logic on the server side (e.g. fetching data on the server to pre-render a page and parsing that data, then making an AJAX call for the same data on the client and parsing it in JS).

One codebase is easier to maintain than two codebases in two different languages.


> I'm guessing you've never written a universal/isomorphic single-page application?

You guessed wrong.

> If you try to write one without Node.js, you're going to be writing a lot of the same code twice - once in Javascript to run on the client side, and again to run the same exact logic on the server side (e.g. fetching data on the server to pre-render a page and parsing that data, then making an AJAX call for the same data on the client and parsing it in JS).

> One codebase is easier to maintain than two codebases in two different languages.

Sure, until you realize that they aren't the same objects, because context matters. But then it's too late because you've so fundamentally coupled the front-end to the back-end that they're inseparable. So you throw in ifs and switches to handle the difference, and eventually your code becomes an unmaintainable mess, and you can't get time from your boss to clean it up because deadlines.

So you quit and get a new job and starting that new project with Node is so easy, and you don't see the problem because you never stick around to actually maintain your code. Your previous business will go out of business when they try to rewrite the app because development has ground to a halt, but that's not your problem, now is it?


In my experience, having to write the same code twice in two different languages is a hell of a lot more work both from a creation and maintenance perspective than just writing it once. I haven't really had to deal with many of those edge cases you're talking about.

So what do you prefer to Node.js for universal single-page apps then?


> In my experience, having to write the same code twice in two different languages is a hell of a lot more work both from a creation and maintenance perspective than just writing it once. I haven't really had to deal with many of those edge cases you're talking about.

Cool, I hope for your sake that your luck continues.

> So what do you prefer to Node.js for universal single-page apps then?

To be honest, I prefer not to write single-page apps, as they break some fundamental ways the internet works, with no real benefit. I have started using React for in-page responsive components, which are composable and reusable independent of the app.

I prefer Flask for the backend, although I've used Django w/ Django Rest Framework and that has also been a good experience.


I think this is right on. They are completely different environments, and I can't imagine many instances where you'd be able to write a generic function you really want on both the front and back end. I certainly haven't run into any in real life.


I think a good example of a function you'd want in the front and back end is one that takes data and outputs a template.
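For instance (a hypothetical `commentTemplate`; the point is that a pure data-to-markup function has no dependency on the DOM or on Node APIs, so the same module can render on either side):

```javascript
// A pure data -> markup function: no DOM, no Node APIs, so it runs
// unchanged on server and client. (A real app would escape the inputs.)
function commentTemplate({ author, text }) {
  return `<div class="comment"><b>${author}</b>: ${text}</div>`;
}

// Server side: the string becomes part of the pre-rendered page.
// Client side: the same string is assigned to element.innerHTML
// after an AJAX update.
const html = commentTemplate({ author: 'ada', text: 'hello' });
console.log(html); // <div class="comment"><b>ada</b>: hello</div>

module.exports = { commentTemplate };
```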


*sigh* I'm afraid to hear your answer, but I'm going to ask anyway: what would compel you to output the same template on both the client and server sides?


Possible, possibly dubious: search engine optimization.


I'd argue that this points to a poor design. If something is static enough to be crawled by a search engine, then it doesn't make sense to be generating it dynamically every time on the client side. It's static, generate it statically, and then all you have to do is serve it up (caches are a variant of this approach, but not the only one).


I'm currently working on an unofficial interface for Readability because they aren't working on it anymore and their Android app is really broken.

The webpage needs to work offline (whole point of the project) but should be able to work without js if need be.

I create templates server-side and serve them to the client. If js is available, it takes over and renders any further client-side interactions.

Does this sound like poor design or a valid reason to be able to render on the server and client?


It sounds like a poor design.

Webpages don't work offline. All the user has to do to break everything is hit "refresh". What your business users wanted was an app, and you gave them a webpage instead.

I also don't understand why you would do rendering both server and client side. That's an overcomplicated architecture that doesn't buy you anything. If you chose one, you'd have a simpler architecture.

And before you claim "optimization"--did you profile?


You're pretty opinionated for someone who doesn't seem to understand a lot of things ;)

1. Hitting refresh doesn't break everything. It resets your current state to the default. I default to the url combined with locally stored state, so you will lose text in an input field, but not much else.

2. I want a website not an app, thank you very much. I'm really just displaying text. I don't need to download an app for that when I have a perfectly good browser on every device I'd like to read from.

3. Maybe it's over complicated to you, but the same function that renders on the server is what I use to render on the client. Zero difference. It's not a choice I have to make so I can use either when I feel it's appropriate.


> 1. Hitting refresh doesn't break everything. It resets your current state to the default. I default to the url combined with locally stored state, so you will lose text in an input field, but not much else.

Really?

1. Go to your website.

2. Disconnect from the internet.

3. Hit Ctrl+Shift+R.

How's that app you wrote as a website working for you?


Works well, actually.

It's called appcache. Works in most browsers. You should look it up ;)


> These rants come off as coming from either (1) back-end developers who never touch UI code or anything on the client-side (not where Node.js thrives) (2) armchair commentators who don't actually have to get shit done

Half this post is from the creator of Node.js.


I agree with most of what you're saying, but the second rant is by Ryan Dahl, creator of Node.js. Often these rants come from people who are very competent and know what they're talking about; anybody can get burned out by working too much, and the annoyance of flaws can become overwhelming. It's worth remembering that we can always iterate and improve on what we have, even if it's good enough - if for nothing else than the sanity of the people who have to clean it up later.


> Nobody even uses callbacks anymore now that we have Promise and async/await.

Sadly this is not true (yet)


> Also, Node.js with Nginx is more scalable out of the box than Ruby on Rails, Python/Django, PHP, etc. Hell it's comparable to Java, which is incredible for a dynamic language.

Citation or link to any benchmarks?

> The difference is, you can write a Node.js web application 10x faster than the equivalent application in Java, and with a drastically smaller codebase

Building a prototype, maybe. But it's hard to believe that Node.js gives a 10x boost in programmer productivity in the long run. The recent success of TypeScript is an indication of the limits of dynamic languages for big projects.


It wasn't so long ago that critics of XML etc were shouted down, mocked, dismissed.

Then every one just kinda woke up. It's not even a debate.


The number one issue I've found with Node.js is developers making things overly complicated for no apparent reason. The bar to entry may be too low, so you possibly get a higher proportion of poor design decisions.

The second would be the overuse of build scripts: the build often seems more complicated than the app itself, both in the time it takes to get the thing up and in its convoluted chains of steps. I've not had much fun debugging grunt, gulp or webpack in some of these Fortune 1000 projects, and I have a hard time wanting to give a shit about knowing them in great detail when the app should be the focus.

The parts of node that I most like are the core libraries that come with the install. When I try to stick with those as much as possible rather than using some half-baked npm module for every whim, I have a very pleasurable experience.

The async/await and promises, along with piping streams, are quite elegant in how modules can be snapped together like lego pieces, but I find that people fuck it up terribly when they half know it (as I initially did) and it becomes akin to the messiness of es5 callbacks.

It does take some time to really utilize async well, so I would recommend reading up on those concepts in great detail prior to jumping in.

Please npm responsibly.


Everything in node is just slapped together with no rhyme or reason. npm module quality is piss poor - even the flagship express web server, where nothing works out of the box. God help you if you find a bug in a node core library - the typical reaction is that developers now depend on the buggy behavior and as a result it won't be fixed. I've returned to sane programming languages after using node for years, and I don't miss node's async nonsense, where you can't perform a computation without halting all users of your application, short of major programming gymnastics.


That comment reminds me of those infomercials where people can't open a carton of milk without pouring it all over themselves, so they need some newfangled gadget for $19.95 to help them do it. Node has many issues, but for me it's just really easy to use - maybe because I only use it in a minimal sense, similar to lambda scale, as you can see in my sebringj github repos.


It would be nice if npm gave you warnings at publish time so you had the chance to clean things up (build tests, etc.), along with a recommended standard way to build modules. Then if npm developers were lazy and let the warnings pass regardless, installing the module would give you a warning: "this asshole put in a shitty module", or something more PC.


You say you are over node, but I don't think you are. You need to let go and move on.


Grunt, Gulp and Webpack are for building front-end assets. They are all built using Node.js, but their complexity has to do with managing a complex front-end code base and then optimizing it for the browser.

If they weren't built with Node, then it would be Ruby, or Java, or Python, or even Make.

It is a shitty, shitty problem, and people are still trying to figure out what the best way is.


I still don't understand why people couldn't have just learned Make, instead of reimplementing it badly two or three times in every different special-snowflake language.


The same reason a dog licks his balls: because he can.


At least it's not maven.


Maven is consistent, fast, and a de-facto standard. I can't say any of that about any Node.js-based "build" tool.

Maven has faults too. But it would be a mistake to ignore its strengths just because it is ugly.


I fight with maven every day. I was just being snarky, the fact is maven can be abused in the same way anything can. Gulp gets insane when people start writing opaque custom tasks. Maven gets insane when people write opaque plugins. Nothing is perfect. Although people tell me Gradle is...


Maven is as close to perfect as it gets, IME. Certainly a lot more so than Gradle. Insane plugins are a problem, but since plugins are first-class code you can solve that the same way you'd solve insanity in your actual codebase.


Oh man, Gradle isn't perfect? Crap. I was holding out hope.


It's telling that none of the posts defending Node.js are talking about its technical merits. They're all saying:

1. Attacks on people--you're being too negative, you're saying this to feel superior.

2. Choosing Node.js is a tradeoff! We can't really say what you get in exchange for using this crappy ecosystem, but "tradeoff" sounds good even if you're trading a reasonable ecosystem for nothing.

If you really think Node.js isn't a flaming pile of crap, I challenge you to come up with something it does that isn't done far better in another ecosystem.


I've been working on a project that uses node for the better part of the year. I'm in charge of most of its architecture and design.

After all these months, my conclusion is exactly the same as yours: there's nothing that can be done in Node that I can't do better in another platform/ecosystem.

Sure, it's unique (event loop) and has a lot of good things (simplicity), but for anything serious, it's lacking real advantages.

Most of my issues with it come from javascript itself and the absence of proper tooling. I have no idea what the advantage of choosing this platform is, given its level of immaturity.

Oh, for those wondering: Node was chosen by a person caught up in the hype (my boss, a friend and mentor who can make mistakes like anyone), who hadn't done any real-world programming in it (only hello-world stuff).


> Most of my issues with it comes with javascript itself, and the absence of proper tooling.

Does anything in ES2017, FlowType, or TypeScript resolve your issues with Javascript itself? If not, what are the three biggest?

> absence of proper tooling

What do you mean by proper tooling? What are you comparing it to? I think you might find that many of the other modern options also lack tooling in terms of debugging, IDE support, etc.


Right; but C# does everything Node.JS does easier and faster (even the event loop, if you like, although it also has first-class support for threading), and has excellent IDEs, debuggers, documentation generators, etc.


Ah yes, C# has awesome tooling. I thought you were comparing to something like Go.


Yours is not a technical argument either -- you're just pre-emptively claiming that no counterargument can be valid.


No, a technical counterargument could be valid if Node had any technical merit.

I'm claiming that no valid technical counterargument exists. If you disagree, it should be very easy for you to present a valid technical counterargument instead of trying to offload the burden of proof onto me.


geofft's reply to your grandparent comment is a valid counterargument IMO.

I noticed your reply, which is basically "nobody should be using an event loop". Whether you like it or not, that's how 99.99% of the GUI software in existence works (Win32, Cocoa, whatever). Node allows people to leverage their existing mental model into server apps.


But UIs have to be async if they want to be responsive, while the HTTP request-response cycle (which is what Node is predominantly used for) is completely synchronous.

Sure, things like WebSockets are changing this, but you don't really need an event loop if you have proper threads or a multi-process server (you know, like Node clustering, which you tend to need anyway once you outgrow the limits and pains of a single event loop).


Global event loops are fundamentally a bad abstraction for a parallel (not just concurrent) system.

It makes sense to leverage existing mental models in a lot of cases, but when you're moving to a new platform, you have to learn some new mental models because your existing mental model fundamentally doesn't work in the new system. Global event loops don't work, at all, in a system that actually needs to be parallel (i.e. most server systems).

Giving up on writing software that scales even to a basic level just so you can reuse your existing mental model isn't a good tradeoff.


The burden of proof is on you. What exactly is wrong with Node.js? It is a network server. It is programmed in a popular, expressive, dynamic language that a lot of coders already know. It has a great, well thought out base library. It does concurrency and asynchronous interactions really well. Since it is programmed in the same language as the client side, it allows code reuse between the server and the client. It has a vibrant ecosystem around it with a library for every possible need. It has a large community of developers.

So, if you want to claim that Node is bad, you need to come up with reasons why it is bad.


It lacks a coherent type system, its threading model is awful, and its security is fundamentally broken.

Your move.


To add to my other comment: the technical criticisms of Node have already been stated and were widely known. So if you're going to argue that using Node in a server environment is a technically sound choice, it's up to you now to respond to the criticisms.

And let's be clear: these are technical criticisms and if you could respond to them it would be with a technical defense of Node. Continuing to attack critics of Node or claim that we haven't already met our technical burden of proof only goes to show that your defense of Node is bankrupt from a technical perspective.


> I challenge you to come up with something it does that isn't done far better in another ecosystem.

A single, standard event loop that literally every library in the ecosystem uses so you don't have to worry about it.

The other options are compiled languages (Windows message pump, Qt or GTK+ event loop, etc.), which aren't what people seem to want for Node's use case, or things like Twisted. For all of those options other than maybe Windows, you don't get to assume that the entire ecosystem speaks the same event loop protocol. And I don't think any of these options do it far better than Node does. Twisted is pretty good, probably a little better, but Node is just fine too.

Note that I am not a Node.js programmer. I can kind of work with existing apps in Node because I can kind of write JS sometimes (I'm really more of a C person). I just know what other technologies are good at.


> A single, standard event loop that literally every library in the ecosystem uses so you don't have to worry about it.

Tcl? Erlang/OTP?


> A single, standard event loop that literally every library in the ecosystem uses so you don't have to worry about it.

Using an event loop with all the other threading models available out there is like bringing a knife to a gunfight. I could maybe argue that JavaScript's event loop isn't as good as other event loops, but while actor models exist, I'm not sure why we'd even talk about event loops.


Actor models don't solve the problem of getting a good-sized ecosystem going. If you have production-ready actor-model libraries for all the random things I might want to connect to, great, but I'm not currently aware of such things existing. (Maybe that's a failure of listening on my part!) If you want to develop that ecosystem, awesome, and I totally support that.

But I would much rather bring a knife to a gunfight than some totally awesome, straight-from-the-laboratory gun that's never been tested in combat.


> If you have production-ready actor-model libraries for all the random things I might want to connect to, great, but I'm not currently aware of such things existing.

If you have production-ready event loop libraries for all the random things I might want to connect to, great, but I'm not currently aware of such things existing.

Part of the problem with the Node ecosystem is that the bar for "production ready" is very, very low. For example, literally nothing in NPM is production ready where security matters, because the package system is fundamentally insecure. left-pad broke builds everywhere--what if instead of failing to download, it introduced a backdoor? And even if you assume all the randos who maintain NPM packages are trustworthy, there's no signature verification on code, so any of them could have their accounts compromised and there would be no way to tell. This is the code you want me to run on my server? Really?

Node has a large ecosystem, but the subset of Node's ecosystem which is actually quality software is quite small.

> But I would much rather bring a knife to a gunfight than some totally awesome, straight-from-the-laboratory gun that's never been tested in combat.

Me too, but I'm exceedingly surprised to hear you quote this as a positive of Node.


I am not really familiar with how to deploy Node in production, having never done so, but aren't there relatively easy ways to mirror NPM locally? Or can't you just check node_modules into version control (possibly a separate branch or something)?

Honestly if I had to deploy Node in production I'd be inclined to see if I can just use those Node libraries that are packaged for Debian. I do the same thing for deploying Python, Perl, C, etc. in production.


> I am not really familiar with how to deploy Node in production, having never done so,

Maybe don't form uneducated opinions on things you don't have any experience with, then.


I phrased it that way to be polite, but if you're not interested: I know that these tools exist. I was giving you an opportunity to save face for your uneducated and factually incorrect claims. The left-pad incident broke CI tools that intentionally referenced public NPM, not competent internal deployments. It broke incompetent internal deployments, yes, but every other language ecosystem can be deployed incompetently.

If you're relying on the availability and correctness of a public service for deployments, whether or not it's signed or allows packages to be removed, you're doing it wrong. This is as true for NPM as it is for Debian.


> I phrased it that way to be polite,

Unless you were just outright lying, not having ever deployed Node in production isn't just being polite, it's literally admitting you have no practical knowledge of the technology you're defending.

I can see some value in the feedback of someone who has only worked with Node 3 months or so--newcomer reactions give some impression of the learning curve of the technology, which matters. But you've done what, read a blog post? Your opinion on this subject is worthless.

> I know that these tools exist.

As do I, and what's more, I've used those tools.

> I was giving you an opportunity to save face for your uneducated and factually incorrect claims.

Which ones were those exactly?

> The left-pad incident broke CI tools that intentionally referenced public NPM, not competent internal deployments. It broke incompetent internal deployments, yes, but every other language ecosystem can be deployed incompetently.

Yes, you can use NPM shrinkwrap (and I do) but let's be clear: that means you get to devote perhaps a week out of every year updating everything manually. "Competent" deployments with NPM involve enough of a pain in the ass that, after a few years of doing this, I'd actually prefer to manually import dependencies. But that's not really a viable option.

If you're going to claim this is just incompetence, I am inclined to agree with you. But that means that an inordinate portion of the Node community is incompetent, so I'm not sure this can be represented as a defense of Node.

> If you're relying on the availability and correctness of a public service for deployments, whether or not it's signed or allows packages to be removed, you're doing it wrong.

If that's the case, then I don't know of ANYONE who is using NPM correctly, because without signature verification, not relying on the registry's correctness means auditing literally everything pulled down by NPM, which even for simple projects can be half a million lines of code. Shrinkwrap means you only have to do it once, but that's still more time than anyone I know of has.


None of this is different in any other language, including C. If you run a C app that uses outside libraries, or a Python app that uses outside libraries, or anything else, you either devote well more than a week out of every year to updating libraries, or everything is super old and frozen and impossible to change, or everything risks breaking when you upgrade your OS distribution (or other source of packages). That's how production software works.

If you want to claim that the state-of-the-art in every single environment isn't production-ready, you're using the term "production-ready" in a very different way from its common meaning.

Can you present an ecosystem that does this better? All I've heard you claim is that Node is bad, in ways that are not specific to Node.


> None of this is different in any other language, including C.

Really? Because NPM is the only dependency system I know of that doesn't verify packages in any way.

> If you run a C app that uses outside libraries, or a Python app that uses outside libraries, or anything else, you either devote well more than a week out of every year to updating libraries, or everything is super old and frozen and impossible to change, or everything risks breaking when you upgrade your OS distribution (or other source of packages).

This hasn't been my experience. Unlike the Node community, the C community tends to feel strongly about backward compatibility. The Python community has become a bit more diluted in recent years, but as long as you stick with mature libraries, upgrades are usually no more than updating some version numbers in the `requirements.txt` and running `pip install -r requirements.txt`. In cases where Python packages break dependencies, they're usually pretty decent about giving deprecation warnings.

In NPM, mature libraries break backwards compatibility without warning all the time, sometimes without even a version number change (because a dependency's own unpinned dependencies can change underneath it).
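To make that drift concrete: npm's default `^` ranges accept any later minor or patch version, so a dependency's dependency can be re-resolved to new code with nobody's declared version changing. Here's a simplified sketch of the caret rule (my own toy code, not npm's actual semver implementation; it ignores the special-cased `^0.x` and prerelease forms):

```javascript
// ^1.2.3 admits >=1.2.3 and <2.0.0 -- simplified; real semver also
// special-cases ^0.x ranges and prerelease tags.
function satisfiesCaret(range, version) {
  const [maj, min, pat] = range.slice(1).split('.').map(Number);
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  if (vMaj !== maj) return false;
  if (vMin !== min) return vMin > min;
  return vPat >= pat;
}

// Your lockfile-less app depends on foo@^1.0.0; npm is free to
// install 1.4.2 tomorrow without anyone bumping a version number.
console.log(satisfiesCaret('^1.0.0', '1.4.2')); // true
console.log(satisfiesCaret('^1.0.0', '2.0.0')); // false
```

Multiply that by every transitive dependency and "same package.json, different code" is the default state of affairs.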

Can you point me to an example of a mature library in Python breaking like Babel did when left-pad happened?

> Can you present an ecosystem that does this better? All I've heard you claim is that Node is bad, in ways that are not specific to Node.

The ways I mentioned are absolutely specific to Node.


> Really? Because NPM is the only dependency system I know of that doesn't verify packages in any way.

npm verifies the SSL cert on its connection to the registry. pip does the same thing. Neither verifies signatures on the packages, or anything else.

Both of which are better than C build scripts that involve downloading tarballs from SourceForge and running `make install`.

> This hasn't been my experience.

Then you're not running C in production, sorry. Or you're ignoring the friendly build engineers and distro packagers (like myself) who spend their days solving these issues so you don't have to notice. Which is great, we solve these problems so you don't have to, but you really need to defer to the people with expertise.

> Can you point me to an example of a mature library in Python breaking like Babel did when left-pad happened?

Sure. Babel didn't break when left-pad happened. A certain way of installing Babel did. I have no idea how you keep conflating a method of installing packages (that isn't even what serious production users use) with code. So, "all of them."

I can give you countless examples of mature libraries and software packages in C, Python, or many other languages where a `./build.sh` or equivalent that wgets some crap from SourceForge stopped working in the same way. But if you're doing `./build.sh` as part of your regular production workflow, you know that it's going to break and that you have better options.


> npm verifies the SSL cert on its connection to the registry. pip does the same thing. Neither verifies signatures on the packages, or anything else.

The top result from Google: https://pypi.python.org/security

> Both of which are better than C build scripts that involve downloading tarballs from SourceForge and running `make install`.

True, but I thought you were the one who wanted to limit our conversation to competent installations?

> Sure. Babel didn't break when left-pad happened. A certain way of installing Babel did. I have no idea how you keep conflating a method of installing packages (that isn't even what serious production users use) with code. So, "all of them."

Really? So how do you install Babel the first time in order to shrinkwrap it?

> I can give you countless examples of mature libraries and software packages in C, Python, or many other languages where `./build.sh` or equivalent, that wgets some crap from SourceForge, stopped working in the same way.

Okay, give me countless examples.


> The top result from Google: https://pypi.python.org/security

I'm not sure what this has to do with anything. Yes, you may sign your uploads. Nobody verifies them on download.

Since you seem to be a fan of Google, try Googling 'pip signature verification' and reading the results. Here's one place to start: https://github.com/pypa/twine/issues/157

> Really? So how do you install Babel the first time in order to shrinkwrap it?

The same way you install anything else from any other ecosystem? The packages have to be up and online when you initially retrieve them, yes. I have no idea how you think that's NPM-specific. If you would like to download some code, you have to have somewhere to download it from.


There are some impressive libraries in the Node ecosystem. As an example, passportjs gives you support for a lot of weirdly custom OAuth implementations, Kerberos, etc.


Passport JS is horrible. Sure, it's easy to use, but it's not secure, which is kind of the entire point.

There are other libraries that are interesting, but that's more a function of necessity than of it being a good thing--people write JavaScript because it's the only thing that runs on the browser, and then people port it onto Node. And it's worth noting that a lot of these have been held back significantly by being on Node. Take Babel for example; it's useful and interesting, but the code is a mess because of callbacks, which makes it hard to modify, and it was taken down by the left-pad fiasco a few months back (which broke thousands and thousands of builds worldwide). Some interesting things have been built on Node, but that's despite Node, not because of Node, and they would be more reliable if they weren't built on Node.


Yeah, that's fair. I can't say I've used it beyond prototyping, for which it is very easy.

I came across it vs. doing something similar in C#, which required one library per vendor (QuickBooks OAuth, Google OAuth, etc.), and when I last used them I ran into issues where they conflicted, so I like the combined-interface approach.


It's not really new though, see for example Omniauth for Ruby.


Could you tell me more about Passport not being secure? I've been meaning to use it in an upcoming Node project, so I'd really love to know why I shouldn't use it, and what I should be using instead!


JavaScript is missing some fundamental cryptographic primitives that make it very difficult to write secure software in JS; most notably, the language itself lacks any cryptographically secure random number generator (`Math.random` certainly isn't one).

Additionally, package installs are insecure, so unless you have time to audit the entire `node_modules/`, you can't actually guarantee that the code you want to run is the code you have installed. And because JS doesn't isolate modules from one another in any way, a vulnerability in any package allows access to the whole codebase.

For a survey of problems with JS (none of which have been fixed since this was written), take a look here: https://www.grc.com/sn/files/jgc-javascript-security.pdf


Thanks for the info!

Do popular Ruby and Python solutions suffer from the same problem?


  ispositivenumber.js
Wait, there's also

  is-positive-number.js
So confused


The JavaScript community really needs to get together and fix it so only one version of any possible library is published.


> isn't a flaming pile of crap

The hyperbole in every criticism of Node must be there for a reason. I think it is more a reflection on the state of mind of the critic rather than the language.

I would posit that many have dedicated huge amounts of time to a new or less-popular language that may not be around in 5 years.

JS is a safe, long-term bet. Its tentacles have made it into every platform and use case out there. Backend, frontend, embedded, mobile, desktop, etc.

I'm curious, what does your vision of the future of web and mobile development look like?

Which language do you wish was in browsers instead of JS?

How would you objectively rank what makes a "good" language and a "crap" language, and how would you test these hypotheses?


> The hyperbole in every criticism of Node must be there for a reason. I think it is more a reflection on the state of mind of the critic rather than the language.

"It's telling that none of the posts defending Node.js are talking about it's technical merits. They're all saying:

1. Attacks on people--you're being too negative, you're saying this to feel superior."

> JS is a safe, long-term bet. It's tentacles have made it into every platform and use case out there. Backend, frontend, embedded, mobile, desktop, etc.

By that argument, Java is a better language than JavaScript.

There are a lot of established languages that aren't going anywhere. And certainly with WebAssembly in the works, I'm not as confident that JS will be popular in two decades as you are.


Agree that WebAssembly is the big one to watch.

On the prior point, from my experience there is a big confirmation bias at work, and I think these discussions would go better if there were some disclosure of how invested each person is in their own language of choice. E.g., if you realised you could ship features faster in language X, how much of the time you invested in language Y would turn out to be wasted?

When I was working all Scala, I was always looking to confirm my decision by reading positive reviews, reading about the switchers from Ruby, because I was spending huge amounts of time learning it. This was right up until the point that I switched to Node and could ship features 10x faster.

I think people are always happy to read articles like this and see people switching away from Node if they are working in something trendy like Go. It feels good. But at some level there is a fear that the thousands of hours you spend on learning the intricacies of a language may be wasted.


Again, why not just stick to the technical merits here? Why accuse Node critics of trend-chasing?

I'm not a trend-chaser. Python has been around since 1991 and is my language of choice for most things. I like Elixir, but I wouldn't even consider using it at work, specifically because it could just be a fad. I'll keep an eye on it and maybe in 5-10 years consider looking for Elixir jobs if it's still growing.

There is a ton of technical experience which goes into me saying that Node is a flaming piece of crap. It's not just shipping features faster, it's how stable those features are, how many bugs you get, how secure your system is, how performant/scalable, etc. I have worked with Python and Node for years, every workday. Before that I worked in C#, Java, Ruby, Clojure--and it's exactly the experience of chasing a few trends versus using some old solid languages that makes me want to stick with tried-and-true tools.

Your post has a sense of "everything is relative, there's no such thing as one language being better" to it, but I don't think that's true at all. I think we can look at the technical merits of languages, compare those to the problems we have, and see which ones are better. And that's important because when we have the choice, we can choose to use the better tools.


JS is not a safe long term bet if you want to build reliable systems. There are better languages out there for this purpose. Just because JS is widespread does not mean it is good.
