“Node.js is one of the worst things to happen to the software industry” (2012) (cat-v.org)
624 points by behnamoh 393 days ago | 552 comments



Disclaimer: I worked with Node.js full-time for about 14 months, ending about 9 months ago. I'm willing to admit that the following complaints may or may not be out of date, especially with ES6 and newer Node versions. Proceed with grains of salt at the ready.

There are a lot of things to like about Node.js, but the primary thing that bothers me about it is the obsession with async. It's a language/framework designed to make pretty much everything async.

In the real world, almost everything is synchronous, and only occasionally do you really want async behavior. By that I mean, you almost always want A, B, C, D, E, F, G in that order, and only occasionally would you say that you want async(H,I). But with Node, it's the other way around. You assume things are async, and then specify synchronous behavior.

Instead of assuming A; B; C; D; E; F; G;, you wind up with a ton of code like A.then(B).then(C).then(D).then(E).then(F).then(G);

...I know that's a contrived example, and I know you don't really need to do it that way, but so many people do, and it really illustrates the point. In Node.js, you are explicitly synchronous / implicitly async. Most other coding paradigms (including Go) better match what I consider reality, which is that everything is implicitly synchronous, and you specify async behavior when you need it.
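To make the contrast concrete, here's a minimal sketch of that pattern (stepA/stepB/stepC are invented placeholders) showing the explicit promise chain next to the async/await form that reads sequentially again:

```js
// Invented placeholder steps, each returning a promise.
function stepA() { return Promise.resolve(1); }
function stepB(x) { return Promise.resolve(x + 1); }
function stepC(x) { return Promise.resolve(x * 2); }

// "Explicitly synchronous / implicitly async": sequencing via .then
stepA().then(stepB).then(stepC).then(function (result) {
  console.log(result); // 4
});

// With async/await the same flow reads like plain sequential code:
async function run() {
  const a = await stepA();
  const b = await stepB(a);
  return stepC(b);
}
```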

Basically, I think it's backward. But perhaps like the OP, I just can't wrap my head around it.

The NPM stuff... well, I think all ecosystems have their pros and cons. I'm not a huge fan of NPM, but it does the job for the most part, and I'm curious as to how people would actually improve it, rather than just complain about it all the time. I don't really have any good ideas (knowing nothing about how package management actually works under the hood).


Well.... no, not really. Processors really are fundamentally asynchronous. They are state machines which react when external signals impinge upon them. Synchronous behavior is a useful illusion, a convenient reasoning tool for some kinds of tasks, and our traditional notations for describing computational processes are all based on this model of how computing might be done - but it hasn't actually been done that way since the batch-processing era.

I learned this as I dove into low-level hardware work - drivers and embedded systems. When you're dealing with hardware, life is fundamentally driven by interrupts. Processors spend most of their time asleep. When something happens, the signal level of an input pin changes, and the processor reacts: it wakes up, it starts executing the interrupt handler for that signal, it does things, it sends messages, and then - its job done - it goes back to sleep.

So what, we're talking about normal computers here, you say, surely they're different, right? But no, they really aren't; it's the same process of signal, interrupt, handler, return, all the time. Every context switch, every packet receipt or transmit completion, every timer tick, all of it, it's all driven asynchronously; the system is a hierarchy of event handlers waiting for their opportunity to respond to some external signal.

If you're trying to build an IO system, then, the synchronous model is an expensive illusion. You can force it, but it isn't reality. You spend time waiting, you spend time blocking, you build complex APIs that pretend to poll arrays of connections in parallel; and all you're doing is forcing that illusion of synchrony onto a fundamentally asynchronous world.

It's backwards, all right, but it's backwards because our traditional way of describing computation is backwards, especially the languages which follow the imperative tradition. Functional languages have gained popularity in part because the mental model works better for asynchrony; instead of describing a series of steps to follow, and treating the interrupt as the exception, you describe the processes that should be undertaken in certain circumstances; you effectively construct a state-machine instead of describing a process, and then you leave that state-machine to be poked and prodded by whatever external signals come along to drive it.


Mainstream processors are synchronous. For this reason a global clock is needed to synchronise all (or most of) the chip's parts. I remember there was some research on asynchronous CPUs, but I don't think it ever yielded anything commercial.

EDIT: Wikipedia link: https://en.m.wikipedia.org/wiki/Asynchronous_circuit

What you are saying is that they react to interrupts asynchronously, but this has nothing to do with the synchronous nature of a CPU. Moreover, I think that if you have a CPU that doesn't respect the barriers and reorders the instructions to execute them in parallel, then it's a buggy CPU.


That's beside the point.

The things that are synchronous within the processor are also synchronous in JavaScript.

What is async in JS and Node (unless you use the sync alternatives) is any kind of IO (disk, network). That is also async for the CPU.

And even the things that you would expect to be synchronous inside the CPU are really NOT for modern, out of order CPUs. They do a lot of complicated things to make code run faster, use branch prediction to predict the code path taken and execute (apparently synchronous) instructions in parallel.


But that only happens when the result is the same as if the instructions were run synchronously, and synchronization primitives are still needed at the end of these sections.

Even if they weren't, the abstraction layer that you work at with Node is far, far above the point where you care about instruction reordering. If synchronicity is an inherent property of the process that you're implementing (e.g. you want to insert something in the database and then log an event) then no amount of shuffling will alleviate the fact that the logging has to happen after the database insertion.

Edit: the fact that deep down they reorder instructions is a bit scholastic, but if we are to go into these details, it's worth remembering that even the instruction reordering happens synchronously, and that even asynchronous events, like input from an off-chip peripheral, are treated synchronously. The turtles are synchronous all the way :-).


Haha yeah, sure. Reading the input from an IO device once it's been delivered is synchronous.

But IO is not handled in a synchronous fashion. If CPU cores just waited around idling until a hard drive is done fetching the data (for example), our computers would be very slow indeed.

That was my main point.


Ah, certainly. In a similar manner, what I meant is that oftentimes it's the process itself that's synchronous, and if the semantics of a language or a toolkit make it difficult to manipulate this sort of flow, people complain.

I've done too little work with Node to have a constructive opinion on it, but I've occasionally had similar complaints about Qt, which I've used a lot more. It uses a signal-slot paradigm where you basically say "when this event is triggered, this function gets called" (in a callback-like, but somewhat more powerful and flexible fashion). It's great for a lot of things, but makes it hard to read and reason about synchronous operations that have to be fit into this form.

Ultimately, though, people cope with it and write working software with it. I imagine it's the same with Node :-).


> What is async in JS and Node (unless you use the sync alternatives) is any kind of IO (disk, network). That is also async for the CPU.

The same statement is true of Erlang and Elixir, yet they manage to handle non-blocking async code without either of the daft callback/promise code styles of Javascript.

Even within the confines of async-capable languages, Node is one of the worse options.


I agree that CSP as in Erlang/Elixir or Go is way better for handling async behaviour.


Or Clojure, whose ClojureScript flavor compiles down to JS.


As I said in my previous post, a CPU that doesn't respect barriers and reorders the instructions anyway is a buggy CPU, not a modern one.

If I need the database result before doing anything else then my code is inherently synchronous and it has a hard dependency on the database result. In that context making the DB call asynchronous and waiting for it is more complex than making a synchronous call and proceeding with the normal program flow. The most natural way to write an algorithm is to have everything synchronous and have a "simil-synchronisation" of the asynchronous program flow using an "async monad". And sadly I normally use C# 4.0 that still doesn't have the async construct borrowed from F#, so I'm stuck with tasks and continuations.


> And sadly I normally use C# 4.0 that still doesn't have the async construct borrowed from F#, so I'm stuck with tasks and continuations.

Do you know that the async/await constructs are just sugar over the Task class you already have access to in C# 4.0?


The C# implementation is a wrap/unwrap of the Task class, in F# it is using computation expressions to lift the synchronous code to an Async monad. The async construct is much easier to reason about, especially with nested tasks.


We are talking about different kinds of asynchrony. Yes, the CPU marches to the beat of a single drum, but even that "advance" signal can be seen as an action taken in response to an external stimulus: the voltage change incoming from the clock crystal.


Nope. When the clock signal is received, it means that the components in the critical path have delivered their output to the destination, so the state of the chip is coherent and it starts another step, synchronous across the whole chip like the previous one. Using your definition, I could say that a block of procedural code that starts on a single thread as soon as an HTTP request is received, processing it and blocking the thread until the end, is asynchronous because it is "an action taken in response to an external stimulus", to use your own words. But it is simply not true. The mechanism that dispatches the call to start the subroutine is asynchronous; the code itself is synchronous.


I'll take your word for it; we are moving into architecture that sits below my experience now. But we are also below the level of abstraction relevant to the original question, which concerned the design of an API used for communicating with hardware outside the CPU.


Any system that shares a clock is synchronous. Two common communication methods for hardware are SPI, which has a shared clock, and is hence synchronous (https://upload.wikimedia.org/wikipedia/commons/thumb/f/fc/SP...) and UART, which has no shared clock, indeed it can be implemented with a single wire, and is hence asynchronous. Internally the receiver's clock needs to be fast enough to correctly sample data being transmitted to it. (Ed: or I guess you might be able to do something clever with an async circuit fed directly by the signaling line.)

Communicating with hardware outside the CPU doesn't have to be synchronous or asynchronous, it's a design decision depending on what you're trying to accomplish. Languages that don't support both methods are missing half of the design space.


I believe it's relevant to point out that most CPUs are synchronous in response to the claim that "Processors really are fundamentally asynchronous."


It does seem like I could have phrased that differently, since a number of people have taken it to mean something rather different than I intended.


Isn't this essentially saying that events need a catalyst, there needs to be some external stimulus for something to occur? That's fine. Synchronous programs have that stimulus. It's called "user action". The program, like the CPU, waits until the entity running the show tells it it would like something to happen. Then it does that something in the order specified.

So yes, the CPU waits for the program to ask for some work, just like the program waits for the user to ask for some work. When that work is requested, the program is executed synchronously. I really don't understand how this argument supports the mess that is Node.js.


Well, CPUs do use out-of-order execution to optimise some things: https://en.wikipedia.org/wiki/Out-of-order_execution

But, crucially, they can only do it when it still behaves as if they're operating in-order.

Either way, I agree that CPUs are explicitly synchronous in terms of how they work internally and how they interact with the outside world.


What really happens is in fact very similar to what happens in the javascript event loop.

The processor processes instructions one after another, synchronously, but whenever it has to use something external it makes the call and carries on with its normal life until the interrupt fires.

The JavaScript event loop consists of placing function calls on a stack and dispatching Web API calls asynchronously.


The whole point of abstractions is for us to not have to worry about the underlying complexity. If I want to program in a way that is close to how the hardware works, I'll pick a language like C; however, when I choose a high-level language, I do not want to care whether everything my processor performs is async.

What difference does it make if from the processor's perspective I have:

```js
someoperation().then(function () {
  // do stuff
});
```

or

```js
someoperation();
// do stuff
```

In both cases my program will just wait for 'someoperation' to complete before doing anything useful.


In the second case, IO is not allowed to interrupt between 'someoperation' and 'stuff' and change your program's state.

In node, the first form is only needed when 'someoperation' has to perform IO to do its job so concurrent operations are unavoidable.

If you're trying to reason about your program, this is huge. It very much matters where these points are. A concurrent thread being allowed to jump in and modify memory at any time at all, even halfway through a primitive math operation (the deranged C / Java / Python[1] memory model) means a correct program is a morass of locks and defensive programming and an obvious program is horribly broken.

Communicating sequential processes (Go, PHP+MySQL) gives IO a modestly simpler synchronous syntax at the cost of making communication between I/O operations much more complex (sending a message to a port or performing some sort of transaction instead of just assigning to a value). It's a tradeoff.

node.js does assume you're writing a network program that needs to overlap multiple communicating I/O operations; if this isn't the case there's no tradeoff to make and non-communicating sequential code is probably easier (think PHP without a database or sessioning)

[1] Python is modestly less braindead by having a somewhat larger set of operations guaranteed to be protected by the GIL. However, it's not composable at all and it doesn't really make reasoning about nontrivial programs easier in practice.


> A concurrent thread being allowed to jump in and modify memory at any time at all, even halfway through a primitive math operation (the deranged C / Java / Python[1] memory model) means a correct program is a morass of locks and defensive programming and an obvious program is horribly broken.

This ignores that most threads, most of the time, operate on their private memory. When your request thread is querying the database and rendering some HTML based on the data from the database, then the only thing that needs to be guarded is the database access. The HTML rendering happens on thread-private data.


> If you're trying to reason about your program, this is huge. It very much matters where these points are. A concurrent thread being allowed to jump in and modify memory at any time at all, even halfway through a primitive math operation (the deranged C / Java / Python[1] memory model) means a correct program is a morass of locks and defensive programming and an obvious program is horribly broken.

This can be solved by functional programming. Since the original article was written, JS has `const` and `let` [1]. Also, there are now numerous statically typed functional languages that compile/transpile into JS.

Also, if a developer understands exactly how JS is executed, the problem is mitigated:

    var mutableState = {}

    asyncFunc().then(nextAsyncFunc)

    syncFunc()
If you ignore web workers and node.js threading libraries people have written, Javascript always (by design) finishes executing synchronous code before executing any async handlers (your code is in a queue of handlers). This means that syncFunc will never be interrupted by nextAsyncFunc and have mutableState mutated during its execution. Of course, this means that if you do any intense computation synchronously inside a web server, it will block all other requests until it completes, and it will only use one core.
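A runnable sketch of that run-to-completion guarantee (the names here are invented):

```js
const log = [];

// Schedule an async handler; it cannot run until the current
// synchronous code has finished executing.
Promise.resolve().then(function () { log.push('async handler'); });

log.push('sync 1');
log.push('sync 2'); // nothing can interleave between these pushes

setTimeout(function () {
  console.log(log); // ['sync 1', 'sync 2', 'async handler']
}, 0);
```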


Const does not mean immutability, only an immutable binding to the outermost reference. It is equivalent to final in Java. While that solves the issue of numbers changing state, it does not help with objects, for example. For that you need something like Immutable.js from Facebook.


"I do not want to care if everything my processor performs is async."

The point is not the processor. The point is that all IO operations take time. Processing the IO operations concurrently with the serial program logic is where the beef is. Writing asynchronous code in C or C++ is tedious to say the least in my experience, with weird concurrency bugs looming everywhere.


That's not accurate. If you have any code after the line in the first example, it will be executed before the function in the "then" call. You could run several asynchronous operations (nearly) simultaneously. I/O will execute as fast as possible rather than wasting cycles running things sequentially.
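A small sketch of that ordering ('someoperation' is just a stand-in for any async call):

```js
const order = [];
function someoperation() { return Promise.resolve('io done'); }

someoperation().then(function (result) {
  order.push('callback: ' + result);
});
order.push('code after the call');
// At this point order is ['code after the call'];
// the callback only runs on a later turn of the event loop.
```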


I'm fairly sure his point was that if he wanted code to execute between the calls he would rather write that in his code directly than deal with the cognitive overhead of reasoning about constant async calls.

Async makes a lot of sense in cases where you are waiting for an event outside of your server's process, but async calls in the same code base seem a bit over the top.


The difference is that an async call takes a callback and the side effects of the callback, if any, are not immediately available.

But anyway the second call may as well be synchronous and blocking.


You're right, and you're also being chronically misunderstood. It's not about "asynchronous" in the sense of "unclocked"; it's more about how incredibly fast processors are if and only if they aren't waiting for something else.

The programmer would like to think of the series of operations involved on a per-connection basis. Whereas from the point of view of the processor a "connection" is not a thing; there is only a stream of incoming message fragments at occasional intervals from the network and I/O subsystem. It can request data from persistent storage by what is effectively snail mail: send a message and wait a few million cycles for it to come back.

So the software must consist of a set of state machines. We can push those up into the operating system and call them threads, or down into the user's code and call them callbacks and continuations. Each approach has advantages and disadvantages, and it's important to understand what they are.

(Possibly the language which does this best is Erlang, although it's not as easy to get started with as node.js. Theorists really overlook the vital metric of "time to hello world" in language learning.)


> the mental model works better for asynchrony; instead of describing a series of steps to follow, and treating the interrupt as the exception, you describe the processes that should be undertaken in certain circumstances

I could never understand how this works better as a mental model. Say you ask somebody to buy you a gadget in a store they don't know. What do you tell them:

a) "drive your car down this street, turn left on Prune street, turn right on Elm street, the store will be after the second light. Go there, find the "Gadgets" aisle, on the second shelf in the middle there will be a green gadget saying "Magnificent Gadget", buy it and bring it home"

or:

b) when you find yourself at home, go to car. When you find yourself in the car, if you have a gadget, drive home, otherwise if you're on Elm street, drive in direction of Prune Street. If you're in the crossing of Elm street and Prune street, turn to Prune street if you have a gadget but to Elm street if you don't. When you are on Prune street, count the lights. When the light count reaches two, if you're on Prune street, then stop and exit the vehicle. If you're outside the vehicle and on Prune street and have no gadget, locate store and enter it, otherwise enter the vehicle. If you're in the store and have no gadget then start counting shelves, otherwise proceed to checkout. Etc. etc. - I can't even finish it!

I don't see how "steps to follow" is not the most natural mental model for humans to achieve things - we're using it every day! We sometimes do go event-driven - like, if you're driving and somebody calls, you may perform event-driven routine "answer the phone and talk to your wife" or "ignore the call and remember to call back when you arrive", etc. But again, most of these routines will be series of steps, only triggered by an event.


A list of steps to follow acts way more like the event model than procedural code.

The state of the world can change between steps and the next set of instructions don't apply til you meet some certain criteria or .. event, if you will.

> drive in your car on this street, turn left on Prune street, turn right on Elm street, the store will be after the second light.

Turn left on Prune street doesn't mean nothing else happened between you driving your car and reaching that street.

You could have stopped on the way to pick up groceries, changed a tire, bought a snack ... you get the idea.

It doesn't mean you must drive straight to Prune, or that you could only get to the destination by driving straight there.

Getting to Prune street is an async call. When you reach it, regardless of how, a callback is fired and a new async task is given to you.


It's not "async call", it's a sequence of actions that you perform. Whenever it happens, it's still a sequence of actions. It is a natural way for people to describe a way to do something. The fact that one scenario can be interrupted for another does not change its nature.


> b) when you find yourself at home, go to car. When you find yourself in the car, if you have a gadget, drive home ...

This is basically exactly how it happens. I say "Can you buy me a thing". The rest is automatically performed by your brain's programming. There is no need to actually mouth the instruction "if you are at home without the gadget, go to your car" because "Can you buy me a thing" automatically links against the code which includes that state processing function.


I think a lot of the issues with JavaScript are around the movement of individuals from synchronous programming languages to the asynchronous JavaScript way.

For some time it resulted in callback hell (http://callbackhell.com/). This was addressed through Promises and more recently Observables, Generators, and async/await.
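The nesting shape that page describes looks roughly like this (all the functions here are invented stubs):

```js
// Node-style callbacks: each async step adds a level of nesting.
function readUser(id, cb) { cb(null, { id: id, name: 'ada' }); }
function readOrders(user, cb) { cb(null, ['order-1']); }

readUser(1, function (err, user) {
  if (err) throw err;
  readOrders(user, function (err, orders) {
    if (err) throw err;
    console.log(user.name + ' has ' + orders.length + ' order(s)');
  });
});
```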

With transpiling, adoption of these asynchronous concepts has happened more quickly than it otherwise would have.

I would suggest revisiting some of the modern JS constructs to support better use of the underlying CPU architecture.

I will however admit, getting my head around Promises took about a month before it really clicked how powerful this approach can be to support non-blocking IO.


> With transpiling, adoption of these asynchronous concepts has happened more quickly than it otherwise would have.

You're basically admitting that Javascript is not good enough to write asynchronous code and you need to use a third party language that transpiles to Javascript ...


> This was addressed through Promises and more recently Observables, Generators, and async/await.

Parent is talking about ES2015 constructs (or ES2016, in the case of async/await). I don't think anyone will disagree that doing async stuff in ES5 is a massive PITA, but it's a bit disingenuous to claim that the newer JavaScript specifications aren't doing enough to fix those problems.

That said, four years ago I would have agreed with you: CoffeeScript seemed to be the only game in town making an effort to improve the state of JavaScript. But I think the embrace of transpilers in front-end engineering has only helped JavaScript thrive, by allowing the community (and core developers) to more easily determine what features will actually make the language better.

(On a completely unrelated note, nice username. DCSS reference?)


I've started using async/await and it is something of a game-changer in terms of code readability.

Promises were nice but also subject to nesting and readability problems which the new addition solves nicely.


>You're basically admitting that Javascript is not good enough

Most languages go through iterations and improvements. Transpilers are enabling Javascript to 'fix' issues with the language quickly while being backwards compatible. This is a good thing.

Note that by necessity all languages transpile down to machine code eventually. ;)


I wouldn't say that transpiling and compiling are the same.


Try doing some research before you post nonsense. Google isn't more than a click away...


Says the guy who has nothing relevant to say.


That layout made my eyes bleed: http://i.imgur.com/hwKqaUc.jpg


It's not about how the processor works. A developer using a language at JavaScript's abstraction level does not think about exactly how a processor works. It's about how a human can more easily think up a program and put it into code. And that is mostly synchronous indeed.


I find that many misunderstandings of this sort arise from inadequate mental models of the underlying system, and I was trying to present the mental model I use, in which asynchronous IO patterns seem totally natural and normal. Clearly that introduced some unnecessary confusion, so I'll frame things differently next time.


How many JS-targeted VMs / container things run just one thread of this async-ness when more are available to that container / VM / context? I'd rather run mod_php under the prefork MPM and take advantage of all the cores. Is there something as easy and well-supported for running Node apps in things like Docker?


> In the real world, almost everything is synchronous, and only occasionally do you really want async behavior.

From an Ops perspective, I disagree. At a systems level, everything already is async, whether you like it or not. So then the options are either to embrace the admittedly hectic nature of async, or pretend that things are synchronous and impose those constraints to maintain a logical purity from a programmer's standpoint.

For example, in a synchronous setup, it's perfectly logical to service an incoming http request with a few database queries followed by some more logic and some more DB queries or API calls; wrap all the results up and send them back. The same in node.js forces the code author to think about those initial DB queries (which may run simultaneously), about how to roll up the results when they are all done and pass them to callbacks.

But while thinking about those callback interactions, there's a motivation to reduce the number of steps, or to factor them out into their own module or microservice. This promotes thinking at a systems level instead of just at the level of "1 request serviced by 1 handler". Perhaps that big request can be broken up into several API calls and the web page can load those asynchronously. Perhaps the smaller microservices end up building up less memory usage per request (or spread it out in a way that makes clustering more efficient).

One may see async as a hurdle in the way of logical abstraction, or as a way to align thinking with the realities of network programming.


The advantage you're describing is concurrency, which doesn't necessarily require an asynchronous/event-driven programming model. Synchronous concurrency is built around threads (i.e. instead of waiting on events for several requests to finish in parallel, you fork off a thread for each and process each request synchronously).

Node came of age about a decade after epoll was introduced, when not having access to nonblocking IO was considered a big liability for a couple of dominant web programming languages, and they built their concurrency model around the semantics of epoll.

However, there are languages like Haskell, Erlang, and Go that IMO did the right thing by building a synchronous programming model for concurrency and offering preemptable lightweight processes to avoid the overhead associated with OS thread per connection concurrency models. These languages offer concurrent semantics to programmers, yet still are able to use nonblocking IO underneath the covers by parking processes waiting on I/O events. It's not the right tradeoff for every language, particularly I think lower level languages like Rust are better off not inheriting all the extra baggage of a runtime like this, but for higher level languages I think its probably the most convenient model to programmers.


>>So then the options are either to embrace the admittedly hectic nature of async, or pretend that things are synchronous and impose those constraints to maintain a logical purity from a programmer's standpoint.

Logical purity is important. It helps people reason about their code better, which in turn makes them more productive.

The reason we have frameworks and high-level languages is so that we can take advantage of abstractions and not worry about the underlying complexities of the system. If a framework is making you "embrace the hectic nature" of those complexities, why use that framework in the first place? Might as well use a language like C, which actually is a low-level language designed for developing hyper-optimized software. Node.js, on the other hand, is supposed to be a Web (read: high-level) framework. That's why implicit-async doesn't fit its nature, and in large codebases you see nested callbacks all the way down.


Ever heard of the Law of Leaky Abstractions?

Logical purity is completely useless if the chosen abstraction leaks like the Titanic after the iceberg.

And no, the alternative is not "just use C", but choosing an abstraction that better fits the nature of the underlying system.


I think what this really gets at is that callbacks are a shitty concurrency primitive.

They certainly make sense from an implementation perspective (I've got this interpreter and I want to do concurrency... I know, I'll just use it to trampoline callbacks).

But callbacks are too low-level to reason about easily; to some extent it's like writing everything as a series of goto statements. Sure you can do it, but good luck following it.

The advantages of asynchronous execution are lost. This isn't limited to nodejs, I've experienced similar problems with Python twisted...

Channels (such as in clojure/core.async) or lightweight processes and messaging (such as in erlang/beam), are much more enjoyable to work with.

This shouldn't really be a shock, however; they're higher-level abstractions based on real computer science research by people who really thought about the problem.


Callbacks are ugly when you have to deal with them directly. But a little bit of syntactic sugar to capture the current continuation as a callback - such as C# await, or the now-repurposed yield in Python - makes it very easy and straightforward to write callback-based async code.
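A stripped-down sketch of how that sugar can work: a tiny runner drives a generator, treating each yielded promise's resolution as the continuation. This is roughly the pattern libraries like co popularized, not their actual code:

```js
// Minimal generator runner; error handling omitted for brevity.
function run(genFn) {
  const gen = genFn();
  function step(value) {
    const result = gen.next(value); // resume the captured "continuation"
    if (result.done) return Promise.resolve(result.value);
    return Promise.resolve(result.value).then(step);
  }
  return step();
}

run(function* () {
  const a = yield Promise.resolve(2); // reads like blocking code
  const b = yield Promise.resolve(3);
  console.log(a + b); // 5
});
```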


Babel and regenerator let you use await in es/js too.


Python does have await from asyncio


Apparently, I wasn't paying as much attention to the state of async in Python 3.5 as I should have. Looks like they actually have the most featureful implementation of that right now, complete with async for all constructs where it can be sensibly implemented and provide some benefits (like for-loops and with-blocks).


Fortunately JavaScript is a flexible enough language that you don't only have to use the primitives it provides.

Promises (which were pure JS before they were ever added to the language) are a more-composable structure for reasoning about concurrency. I don't think I've put more than one line of callback-style code into production per month.
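A small illustration of that composability, with made-up `fetchUser`/`fetchPosts` stand-ins for real async operations:

```javascript
// Hypothetical async steps; each returns a promise.
const fetchUser = id => Promise.resolve({ id, name: 'u' + id });
const fetchPosts = user => Promise.resolve([user.id * 10]);

// Sequential composition: each .then receives the previous result.
function userPosts(id) {
  return fetchUser(id).then(fetchPosts);
}

// Parallel composition: Promise.all waits for several operations at once.
function manyUsers(ids) {
  return Promise.all(ids.map(fetchUser));
}
```

Both shapes compose into larger pipelines without ever nesting a callback.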


And async/await is such a game changer, it's amazing.

I finally bit the bullet and tried it even though it's still not final syntax; it simplified most of my code while keeping the execution exactly the same.

It really is a night and day difference.


On the subject of nice higher-level concurrency abstractions, I'm very taken with the current fad for Promises/Futures (a la ES6 or Scala), which are basically a translation of the Haskell IO monad to plain English.


Isn't it closer to the Async monad?


I was a node developer back when it hadn't been packaged yet (yes, I'm a hipster). I left a few years ago for many reasons, including the fact that I think the community sucks.

I agree with everything you say, and I want to add to it. People compare node to things like Go, PHP, Python, etc, which is a mistake. As someone who had to write a fairly complex native module for node that a lot of people still use (5000 downloads this month, according to npm), nodeJS is a framework not a language.

Node is not in control of the underlying code that implements the ES spec. Node is a bunch of C/C++ middleware wrapping V8 and implementing various server and process utilities. That is all it is.

Why is this important? Because it biases the community towards piling on feature after feature and moving at an insane pace, sometimes in ways that are outside of the community's control. Node is biased against stability, because they feel the need to keep up with V8. Counter this with Go, which has been frozen as a language for 8 releases now, with each release fine tuning the stability of the compiler/language, runtime, and libs.

That native module that I mentioned earlier, I have a bunch of open requests to make it work with nodeJS 6 (no idea what this is), which is the fourth breaking V8 interface change I've had to deal with. The native module clocks in at >2000 lines of C++. I implemented the same library in Go and it clocks in at 500 lines. I've only had to modify the Go version of this library once, and it was to take advantage of a new feature (i.e. I didn't have to update it).

Node's continual quest for features is going to keep it in the niche web development space it has come to dominate. There is simply no way I would architect a system with such an unstable platform.


Maybe it's because I learned to write code in javascript (browser and node) -- does that make me an async native? -- that I feel the reverse of you. Synchronous just doesn't feel right, and I'm happy with the way node assumes async. I'm one of those who finds the new await syntax sort of unsettling at a gut level.


Async isn't about ordered operations, it's about waiting. When you make a database call, your thread can sit there for 50 ms or do something useful. You can outsource rendering work to a child process. You don't have to let infinite loops block events.


That 50ms isn't necessarily wasted. When I get the results of that database call I want the results returned to the user ASAP, with async operations the thread may be too busy to do that.

We do this daily in real life. It's more efficient for me to have 10 minutes down time at work while I'm waiting for someone else than it is for me to start something else then either get interrupted or finish another task before I respond to the person I'm waiting on.


It depends on how fast you can switch context. If you can switch context in 1ms, then waiting 50ms is a waste of 48ms. But if context switch takes 100ms then sure, it's better to wait idling.


This seems absurd to me. What exactly are we going to sit there and do while waiting on the data we need to fulfill a query?

I work on services where any given endpoint may handle many thousands of requests per second. We don't care about a slight penalty to query the database, because these are very short, and our services return responses on the order of 100 ms.

Maybe these are things you care about in a language like Javascript, but in something statically typed, they just don't seem like a problem.


> What exactly are we going to sit there and do while waiting on the data we need to fulfill a query?

Fulfil more/other queries.

> ...in something statically typed, they just don't seem like a problem.

What has static typing got to do with asynchronous IO?


You could use the same thread and process to do something else instead of having to spin up another thread or process.

If you're running a web server, you can handle another request, then come back to the first one when the data arrives.


So shouldn't the only question ever be:

* Would we rather burn computing resources (i.e. on new threads) or calendar time+human resources (i.e. on code refactoring)?

Followed closely by:

* And does it even matter for our situation? (i.e. since for many situations getting every last drop of performance out of a system isn't the real obstacle; it's making it do something useful for the end-user or business)


and let's not kid ourselves that anyone trying to get maximum performance out of anything would be running JavaScript. If you need max performance, you write C, and sometimes raw ASM.


That's like saying if you aren't driving a Veyron you obviously don't care about speed so use a moped.


Having a successful async paradigm helps closer to the bare metal by setting a standard for how things should work. Now you can likely find more asynchronous libraries that attempt to copy the functionality of something familiar, rather than creating something completely fresh.


In a client world, you definitely care about using all of your resources and never halting the rendering/main thread. There are often many things going on at once, and you usually need to keep main-thread halts under 16ms.


> What exactly are we going to sit there and do while waiting on the data we need to fulfill a query?

On the data you need to fulfil one client's query. The answer is that you do stuff you need to fulfil other clients queries, until you're ready to come back to the first one.

I don't see what type systems have to do with your comment either. Scala uses asynchronous concurrency primitives (Futures) and it's statically typed.


It's common in high performance code to use spin locks (also called busy waiting) to reduce latency. In some circumstances it's better to do nothing and be available almost immediately to do some work than to naively context switch away and pay the penalty of context switching back later. Check out how futexes are implemented in Linux for an example - hybrid mutex and spin lock.


If you're running a web server, you're processing possibly hundreds of requests at any given time. Especially since JS is single-threaded, you want to minimize blocking by awaiting, aka yielding your thread to another thread to let it continue processing its own request. Async minimizes time spent doing nothing.
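A minimal sketch of that interleaving (timers stand in for I/O here): while one handler is suspended at `await`, the event loop is free to advance the other.

```javascript
// Two "requests" that each await a simulated I/O pause. While one is
// suspended at its await, the event loop runs the other handler.
const sleep = ms => new Promise(res => setTimeout(res, ms));
const log = [];

async function handle(name) {
  log.push(name + ':start');
  await sleep(10); // yields control back to the event loop here
  log.push(name + ':end');
}

// Both handlers start before either finishes: they interleave.
const done = Promise.all([handle('a'), handle('b')]);
```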


Doing synchronous is much clearer and easier; most of the async patterns (like yield/await and promises) really just try to simulate a sync paradigm.

But there is one great reason why node's async-first mentality is superior: when you want to do async in node, you don't have to worry that some library you are using is going to lock up your thread.

In any other language, you have to painstakingly make sure everything you use isn't doing sync, or try to monkeypatch all IO operations (Python has something like that).

Frankly, for me, when I'm dealing with webservers I find myself needing to use async quite a lot.


> In any other language

No; for example, in Go, every time the code does IO, the scheduler will re-assign another goroutine, and they asynchronously get back to the other when the IO finishes. It doesn't need any special support by the library.


That sounds like an implicit async/await, which sounds rather awesome.

I haven't really gotten around to trying Go yet, but there are definitely great things going on there.


It's just green threading, which has been around for a long time.

You can run millions of threads on a single machine with modern Linux kernels if you either have lots of memory or simply minimize your stack sizes and usage. The kernel doesn't care much. You don't really need any special language features. The main reason fibers/green threads/goroutines/etc have become popular lately is that common language runtimes and standard libraries don't resize thread stacks on the fly and tend to like very deep call stacks, so minimising your memory usage can be rather hard.


> The main reason fibers/green threads/goroutines/etc have become popular lately is that common language runtimes and standard libraries don't resize thread stacks on the fly and tend to like very deep call stacks, so minimising your memory usage can be rather hard.

Green threads and coroutines have been around for a long time; the recent popularity is probably due to the increased prevalence of people trying to solve the C10K problem.

Green threads are more efficient both in terms of memory consumption and context switching than kernel threads (even more so with compiler support).

The Linux kernel might be able to handle millions of threads, but the truth is one can do it much more efficiently in userland.


> Doing synchronous is much clearer and easier; most of the async patterns (like yield/await and promises) really just try to simulate a sync paradigm.

I'm currently working on some lighting (DMX-based theatre lighting) software and the ability to have my 'effects' written in a sequential manner by using 'yield' is actually incredible. It's simplified my code a huge amount and made it a lot easier to reason about.

There is a function called every frame, which then calls each effect. Since each effect is a Python generator function, it can be written sequentially and just has to yield whenever the frame is computed.
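The same pattern ports directly to JavaScript generators. This is a hypothetical sketch of that per-frame driver idea, not the actual lighting code: an effect is written as a sequential loop that yields once per computed frame, and a `tick` function resumes every live effect each frame.

```javascript
// A hypothetical "effect" written sequentially: it yields once per frame,
// and the driver resumes it on the next frame.
function* fadeUp(light, frames) {
  for (let i = 1; i <= frames; i++) {
    light.level = i / frames;
    yield; // this frame is computed; wait for the next one
  }
}

// Called once per frame; steps every live effect and drops finished ones.
function tick(effects) {
  return effects.filter(e => !e.next().done);
}

const light = { level: 0 };
let effects = [fadeUp(light, 4)];
```

Each call to `tick(effects)` advances every effect by one frame, so the effect code stays linear even though it runs incrementally.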


> when you want to do async in node, you don't have to worry that some library you are using is going to lock up your thread.

> In any other language, you have to painstakingly make sure everything you use isn't doing sync

I haven't found that to be the case, TBH. In most other languages I've used, I don't have to worry about foo() being synchronous or not. I can assume that it is. In other words, after control returns from the foo() call, I can assume that what foo() is supposed to do has been completed. In the case that foo() needs to do some IO operation, I can usually rely on it doing a syscall that will block the thread and put it to sleep, instead of doing something nefarious like busy waiting.

And that's it. The current thread will sleep while the IO operation in foo() completes, and the OS (or VM, if we're talking green threads) will switch the CPU to do something else in the meantime. And if it's a web server we're talking about, that something else may be another web server thread, serving a different request. Yay, preemptive multitasking!

In the case of node, you do very much have to worry about a function call being synchronous or asynchronous, as it defines the way it has to be called. In the former case, it can be called "normally":

  let a = foo()
  // use `a`
In the async case, some code gymnastics need to be involved:

  // Either callbacks:
  foo(a => { 
    // use `a`
  })
  // Or promises:
  foo().then(a => {
    // use `a`
  })
  // Or promises + await syntax:
  let a = await foo()
It may not seem a big problem at first, especially if using the `await` syntax is an option. But to me the biggest problem of this technique is that the "async-ness" of a function cannot be abstracted away. If a function `foo` that has always been synchronous, and that is used in several places, needs to be changed, and this change implies `foo` calling an async function `bar`, then `foo` will need to become async too, and all the places where `foo` is called will need to be changed. Even though this internal `bar` async call was an implementation detail of `foo` that shouldn't have concerned `foo` callers. And this propagates all the way up: if any of the places where `foo` was called was, in turn, a synchronous function too, then that function will also need to be refactored into an async one, and so on and so forth.
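A tiny illustration of that propagation (all names here are hypothetical): once `foo` needs to await anything internally, its call shape changes, and so does every caller above it.

```javascript
// Before: a synchronous helper, callable anywhere with a plain call.
function fooSync() {
  return 21 * 2;
}
const before = fooSync() + 1; // ordinary expression

// After: foo now needs one async step internally...
const bar = () => Promise.resolve(21); // the new async dependency

async function fooAsync() {
  return (await bar()) * 2;
}

// ...so every caller has to change shape too, all the way up the stack:
async function caller() {
  return (await fooAsync()) + 1; // caller must itself become async
}
```

The implementation detail (`bar`) leaks into every signature above it, which is exactly the refactoring cascade described above.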

And that's basically why I find node's distinction between sync and async functions so frustrating :(

PS: Cooperative multitasking does not imply this weird syntax complexity of having two different kinds of function calls that cannot be freely intermixed. For instance, Erlang "processes" use cooperative multitasking, but the "yields" are managed implicitly by the VM, so you don't need to explicitly yield control of the current thread to let other stuff run concurrently.


> make pretty much everything async.

This isn't intrinsically bad. It just makes Node.js biased towards use cases that benefit from asynchronous I/O, which is a perfectly valid design choice. Sensible programmers already know that computationally intensive tasks that benefit from actual parallelism are best served by a different tool.

What's absolutely horrible about Node.js is how they make everything asynchronous: explicit continuation passing everywhere. In other words, the user is responsible for manually multiplexing multiple logical threads through a single-OS-threaded event loop. If it sounds low-level, it's because it is low-level. A high-level programming language ought to do better than this.

Other languages get this right: Erlang, Go, Clojure, Scala, Haskell, Rust, etc.


1) All data has a type.

2) All I/O (if you're doing it right) is asynchronous.

Node does things the right way around; it's just a shame that Linux (unlike Windows) doesn't have async I/O baked into the heart of the OS and well supported with useful kernel primitives.

Synchronous I/O by default means you're wasting your CPU cores. When running at Web scale, you ALWAYS want to leverage async I/O in order to not have the CPU idle blocking on some I/O operation, and you don't necessarily want the synchronization and memory overhead of multiple threads. That's why Node is designed the way it is.

By using the 'co' library and generators you can write async code almost as if it were sync. It works like the inline-callbacks feature of Twisted.
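The core trick of `co` can be sketched in a few lines. This is a simplified illustration of the idea, not co's actual implementation (the real library also handles thunks, arrays, error forwarding into the generator, and more): a runner pumps a generator, treating each yielded promise as a suspension point.

```javascript
// A minimal sketch of what 'co' does: drive a generator, resuming it
// with the resolved value of each yielded promise.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(value) {
      let r;
      try {
        r = gen.next(value); // resume the generator with the last result
      } catch (err) {
        return reject(err);
      }
      if (r.done) return resolve(r.value);
      Promise.resolve(r.value).then(step, reject);
    }
    step(undefined);
  });
}

// Async code written "almost as if it were sync":
const result = run(function* () {
  const a = yield Promise.resolve(2);
  const b = yield Promise.resolve(3);
  return a * b;
});
```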


> Synchronous I/O by default means you're wasting your CPU cores.

But that's the thing: synchronous code doesn't mean synchronous execution. The language/platform should be able to make it async for you, without forcing you to re-shape your code. And it's not a rare system: even plain old threads work like that - in fact, they're often a good solution: https://www.mailinator.com/tymaPaulMultithreaded.pdf


The code itself is not the problem. Data, specifically mutable state and dependencies are. Relying on the platform to figure those out for efficient async execution hasn't worked out so well in the past...


> it's just a shame that Linux (unlike Windows) doesn't have async I/O baked into the heart of the OS and well supported with useful kernel primitives.

What about epoll()?

> Synchronous I/O by default means you're wasting your CPU cores.

You can implement synchronous IO on top of an async backend using the above-mentioned epoll. I believe that is how read() and similar syscalls are implemented. They block the program's thread, but the core is free to run other threads while it waits for IO.


epoll doesn't change the fact that all of the well-supported I/O primitives under Linux are synchronous. Not necessarily blocking, but synchronous. There are aio_* system calls in the kernel but they suck. There are only a few, and the kernel devs seem to think that doing sync I/O in a different process/thread is good enough so they may not even last forever in the kernel.

Linux does not support I/O completion ports, or even a completion-based model for I/O at all. (Instead it uses the readiness model, requiring you to waste cpu cycles in a polling loop checking if an fd is "ready", that is if you don't want to spin off a new process.) In Linux you can't check an I/O buffer for data and schedule the call that would fill the buffer if it's empty in one system call, the way you can in Windows.

Windows was modelled after VMS, which was designed for reliability and throughput. Linux was modelled after Unix, which was designed so that Ken Thompson could play games on a scrapped PDP-7.


This sounds like something straight out of the Unix Hater's Handbook. :)

But epoll is not the fault of Unix. Linux gets it wrong, but others do not. There's a good (and wildly entertaining) discussion/rant about it in this BSD Now episode: https://www.youtube.com/watch?v=l6XQUciI-Sc


epoll (and kqueue) are products of Unix's approach to I/O, which is synchronous by design. You can build synchronous I/O routines out of asynchronous primitives, but not the reverse (without resorting to separate processes/tasks/threads).

epoll is more wrong and broken than kqueue, but neither are up to the level of capability the NT kernel offers.


> requiring you to waste cpu cycles in a polling loop checking if an fd is "ready",

epoll in edge-triggered mode does not require this loop to exist in userland. In edge-triggered epoll (or the similar kqueue construct), your work thread(s) will be asleep until the kernel chooses to wake one up.


> Synchronous I/O by default means you're wasting your CPU cores

When a process becomes blocked (like due to synchronously waiting for IO to complete), the kernel will context switch away to another runnable process. The process that becomes blocked essentially goes to sleep, and then "wakes up" and becomes runnable again once the operation completes. The CPU is free during this time to run other processes.

If you need to do several of these things concurrently, then you can run multiple processes. Writing a single process with async code won't make that process faster. Context switching between different processes is what the kernel scheduler is designed to do, and it does so very efficiently. There isn't much overhead per thread: if I recall correctly, Linux kernel stacks are 8 kilobytes per thread (with efforts under way to reduce that further [4], also discussed in [1]), and the user stack space is something the application can control and tune. The memory use per thread needn't be much.

Using all available cores to perform useful work is the most important thing to achieve in high-throughput code, and both async and sync can achieve it. Async doesn't become necessary for high performance unless you're considering very high performance which is beyond the reach of NodeJS anyway [2]. Asynchronous techniques win on top performance benchmarks, but typical multithreaded blocking synchronous Java can still handily beat async NodeJS, since Java will use all available CPU cores while Node's performance is blocked on one (unless you use special limited techniques). There's some good discussion about this in the article and thread about "Zero-cost futures in Rust" [1]. The article includes a benchmark which compares Java, Go, and NodeJS performance. These benchmarks suggest that the other tested platforms provide 10-20x better throughput than Node (they're also asynchronous, so this benchmark isn't about sync/async).

Folks might also be interested in the TechEmpower web framework benchmark [2]. The top Java entry ("rapidoid") is #2 on the benchmark and achieves 99.9% of the performance of the #1 entry (ulib in C++). These frameworks both achieve about 6.9 million requests per second. The top Java Netty server (widely deployed async/NIO server) is about 50% of that, while the Java Jetty server, which is regular synchronous socket code, clocks in at 10% of the best or 710,000 R/s. NodeJS manages 320,000 R/s which is 4.6% of the best. In other words, performance-focused async Java achieves 20x, regular asynchronous Java achieves 10x, and boring old synchronous Jetty is still 2x better than NodeJS throughput. NodeJS does a pretty good job given that it's interpreted while Java is compiled, though Lua with an Nginx frontend can manage about 4x more.

I agree that asynchronous execution can provide an advantage, but it's not the only factor to consider while evaluating performance. If throughput is someone's goal, then NodeJS is not the best platform due to its concurrency and interpreter handicap. If you value performance then you'll choose another platform that offers an order of magnitude better requests-per-second throughput, such as C++, Java, Rust, or Go according to [1] and [2]. Asynchronous execution also does not necessarily require asynchronous programming. Other languages have good or better async support -- for example, see C#'s `await` keyword. [3] explores async in JavaScript, await in C#, as well as Go, and makes the case that Go handles async most elegantly of those options. Java has Quasar, which allows you to write regular code that runs as fibers [5]. The code is completely normal blocking code, but the Quasar runtime handles running it asynchronously with high concurrency and M:N threading. Plus these fibers can interoperate with regular threads. Pretty gnarly stuff (but requires bytecode tampering). If async is your preference over Quasar's sync, then Akka might be up your alley instead [6].

> By using the 'co' library and generators you can write async code almost as if it were sync.

For an interesting and humorous take on the difficulties of NodeJS's approach to async, and where that breaks down in the author's opinion, see "What color is your function?" [3].

[1] https://news.ycombinator.com/item?id=12268988

[2] https://www.techempower.com/benchmarks/#section=data-r11&hw=...

[3] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...

[4] https://lwn.net/Articles/692208/

[5] http://docs.paralleluniverse.co/quasar/

[6] http://akka.io/


Node isn't interpreted, it runs on V8 which is a fairly advanced JIT compiling VM.

The main reason Node is so slow is that, as you point out, it doesn't do threading at all, and JavaScript is just inherently difficult to optimise into machine code, even though V8 makes a good effort.


The benchmark code uses multiple processes to utilize all CPU cores (standard way in Node.js land).

I would blame the highly dynamic nature of the language for (relative) slowness – the price you pay for flexibility / quickly-writable code. That being said, it's still fast for dynamically typed language.


What do you think a VM is?

It does not target your hardware architecture. Just because your code is in binary instead of text doesn't make it any less of an interpreter.


https://developers.google.com/v8/design

C-f for "Dynamic Machine Code Generation"

V8 compiles it to native machine code for execution. It's not an interpreter or a VM with its own bytecode.


Cool. There is no VM, just a runtime not much different from the ones from other compiled languages. It compiles your code piece by piece, and optimizes during that compilation.

This is a very interesting solution for JavaScript, but I think it's constrained by the language and could become better if it were fully compiled.


Perhaps for the use-case of node, being fully compiled could be a bit better. But I'm not convinced. Java has demonstrated (over two decades now) that initial compilation to machine code isn't essential for performant software (ok, less than two decades for hotspot JITs in Java). The cost for node is in startup time, but this is, like with Java, amortized over the life of the program. For a single-run executable, it's perhaps too costly (but we use truly interpreted languages for this as well, so depends on the task), but for a server, it's potentially pretty good.


Yes, that was for the specific case of Node. For web use, full compilation is a clear loss.

Java can make great use of JIT exactly because it's interpreted. When you make partial compilations, you lose a lot of JIT opportunities, and a lot of static optimization opportunities too. Yet, it looks like V8 attenuated this somehow, and got good performance anyway. Must have been a feat.


> 2) All I/O (if you're doing it right) is asynchronous.

Not true. Synchronous I/O is higher-throughput. If you need to do a large amount of high-latency I/O (e.g. web requests) then asynchronous can end up being lower-overhead overall, but making everything async is just as bad as making everything sync.


NPM would be hugely improved if:

1) `npm install` on two different machines at the same point in time was guaranteed to install the same set of dependency versions.

1b) shrinkwrap was standard.

2) it was faster.

3) you could run a private repo off a static file server.


Check in your modules.


In the past, I was involved with building a Scheme interpreter which was automatically asynchronous, with no need to handle it manually with callbacks.

Basically, it was lightweight threads which would yield execution to other threads whenever waiting for I/O, or explicitly yielding. It allowed for a very straightforward linear programming style, no callback hell.

Coupled with a functional programming style, there was rarely a need for mutexes or other synchronization between threads.

When a thread yielded, the current continuation was captured and resumed when the thread was ready to continue. At the bottom of it all was a simple event loop where the Scheme interpreter dispatched I/O and timeouts. Scheme happens to have support for continuations built in to the language, so the implementation was actually quite simple.
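In JavaScript terms, a toy version of that idea can be built from generators, which give a restricted form of the continuation capture described above (all names here are illustrative): each "thread" is a generator, `yield` is the explicit yield point, and a round-robin loop plays the role of the event loop.

```javascript
// A toy cooperative scheduler: round-robin over generator "threads"
// until all of them finish. Each yield hands control back to the loop.
function schedule(threads) {
  const trace = [];
  const queue = threads.map(t => ({ gen: t(trace) }));
  while (queue.length) {
    const t = queue.shift();
    if (!t.gen.next().done) queue.push(t); // re-queue if not finished
  }
  return trace;
}

// A "thread" that does two units of work, yielding between them.
const worker = name => function* (trace) {
  for (let i = 0; i < 2; i++) {
    trace.push(name + i);
    yield; // let the other threads run
  }
};

const trace = schedule([worker('a'), worker('b')]);
```

The Scheme version went further: yields happened implicitly whenever a thread waited on I/O, so code kept its linear shape with no explicit suspension points at all.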


I've built something similar, but as a completely new programming language with backtracking as the major feature. The intended use is dynamic OS configuration, especially network interface configuration.

The interpreter is single-threaded, but there is simple concurrency between the "processes"; that is, a process yields when it has nothing more to immediately do.

Old online demo build through Emscripten (it misses many new features such as functions): https://rawgit.com/ambrop72/badvpn-googlecode-export/wiki/em...

Docs currently still dying on Google Code archive: https://code.google.com/archive/p/badvpn/wikis/NCD.wiki

Source: https://github.com/ambrop72/badvpn


> Basically, it was lightweight threads which would yield execution to other threads whenever waiting for I/O, or explicitly yielding. It allowed for a very straightforward linear programming style, no callback hell.

How is this different from a normal multithreaded C program on a POSIX kernel?

When I call read(), my thread is suspended until the data comes back, and another thread can run.


If you think promise-based code with a lot of "then" is annoying, perhaps ES7 async/await will make you happier.


This is exactly what I was about to write. Also, the number of "callbacks, that's how it's supposed to be", "hardware is like that", and "programming is hard, deal with it" kinds of replies to a readily solved problem is staggering.


I don't know about others but the imposition of try catch for error handling in async/await seems to cut against the syntax gains over promises.


If by promises you mean the `someFn().catch(function () { /* handle stuff here */ })` syntax compared to async/await, that is actually a very fair point, and it does make the two syntaxes similarly verbose.

However, it is still easier to follow a ladder of conditional statements using async/await compared to Escher-style stairs of callbacks inside callbacks.


Interesting. In which language is this "readily solved"?


Erlang, Elixir, & Go have nicer models.



It's a solution for a made-up problem. Sure, async/await makes it easier. But not as good as the normal sync code would be. And for what reason? The concurrency problems are already a solved issue in many many frameworks. Sure, if you do fully manual threads in something like C++ and code up everything yourself - you're in for the trouble down the road unless you are a good professional. But there are heaps of concurrency frameworks in all major languages that make this very easy. And what about performance, how does JavaScript justify putting virtually every single return value of every function into some Promise object? Talk about wasted resources.


Await lets you preserve the sequential syntax that normal programmers are used to. It's syntactic sugar, which is fine if you prefer to code in a way that most people are familiar with, versus callbacks that can be messier (callback hell).


JavaScript uses one thread, so not switching between threads, with all their context, saves a lot of overhead.


JavaScript being single-threaded is one of its biggest drawbacks. This is universally agreed on by both its proponents and critics. Yes, sure, some resources are wasted on each thread, but in the case of JavaScript, it wastes a lot more by not using all the cores, of which there are usually several in almost any modern computer that will run JS.


> The NPM stuff... well, I think all ecosystems have their pros and cons. I'm not a huge fan of NPM, but it does the job for the most part, and I'm curious as to how people would actually improve it, rather than just complain about it all the time. I don't really have any good ideas (knowing nothing about how package management actually works under the hood).

  - Remove the CouchDB dependency
  - Make login authentication a plugin or API-based
  - Integration with GitHub/GitLab for automated publishes
to name a few.


Yes, and on top of that, it is asynchronicity without real concurrency. Imagine one part of the code doing a long CPU-bound computation; the rest has to wait!

It's like the worst of two worlds.
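A quick sketch of that failure mode: a timer due in 1 ms can't fire until a CPU-bound loop lets go of the single thread, so everything else in the process stalls behind it.

```javascript
// Schedule a 1ms timer, then immediately hog the thread with a
// CPU-bound loop. The callback can't run until the loop finishes,
// so it fires far later than requested: concurrency, no parallelism.
const scheduled = Date.now();
let firedAfter = null;
setTimeout(() => { firedAfter = Date.now() - scheduled; }, 1);

const stop = Date.now() + 50;
while (Date.now() < stop) { /* simulate a long computation */ }

// Resolve after the event loop has had a chance to run the timer.
const done = new Promise(res => setTimeout(() => res(firedAfter), 0));
```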


With due caution about @marssaxman's point that software is fundamentally interrupt driven, I will agree with this.

I work on a real-time system that is controlled by RPC from a desktop. We claim we want to keep the real-time code small and simple.

Yet I find people pushing complexity down onto the RT system even when the real-time guarantees aren't needed, because the PC code responds to it using a callback-driven async model. It's easier to put a non-real-time for loop into the RT thread than to emulate a for loop on the PC.


I think the idea of having a powerful scripting language based on JS is a godsend in many ways. But like you, I also feel that async is a bad default for a language that has such widespread use.

I love being able to use one syntax to write 95% of my code (instead of having to switch between JS / python / php etc.) but some of NodeJS's programming and ecosystem realities are annoying to say the least.


Everything is async in node, because doing sync operations in the event loop is terribly slow.


No. That's nonsense. Everything is async in node because node is based on the V8 Javascript engine which is not thread safe. V8 is not thread safe because it was designed for web browsers, and browser rendering engines are also not thread safe.

People have tried to retcon node's choices as being based on some sort of deep technical justification, but that's not the case: its limits are the limits of the material it's made of, not a specific design choice.

If you look at more advanced VMs like the CLR or JVM then they support all modes of operation: synchronous/blocking (which is often what you want), synchronous/blocking with many threads, async with callbacks, async with continuations ... node only supports one. How is that better?


JavaScript was explicitly designed to use one thread and prevent all kinds of concurrency problems. It requires getting used to, but it's very powerful.

Look at this page and tell me the JVM is better: https://docs.oracle.com/javase/7/docs/api/java/util/concurre...


Javascript was never explicitly designed to use one thread: it used one thread because Netscape used one thread. This is back in the 1990s, when multicore on the desktop didn't exist and JS execution speed was a non-issue.

The page you linked to shows a full and complete library of concurrency tools, which you don't have to use if you don't want to. How is this evidence of being worse?


Sure, powerful. As long as you're mainly I/O-bound and shared-nothing.

Oh, and your link actually DOES show that the JVM is better. It can give you everything node.js has and then some.

If you want to see something like node.js for grown-ups, take a look at Vert.x or Akka (& Frameworks based on it), for example.

(Sorry for the snark, it's been a long day...)


That library is probably one of the most powerful concurrency libraries in existence, if not the most powerful.


The universe is inherently concurrent. It only makes sense for programming languages (which are used to model problems in this universe) to follow along.


Your comments about the annoyingness of async are real.

BUT

Just because you want to do ABCDE does not mean it's feasible.

You can access a file, read it, compute something, and depending on that, access another file, read it, and write something via HTTP POST somewhere else.

BUT the nature of i/o is such that the problem is 'intrinsic' - it's not a 'node.js' problem so much.

Of course you know that you can do everything synchronously if you want, but then you get hangups and whatnot waiting for underlying resources, networking calls, etc.

The nature of a distributed system is inherently async, and so we have to live with it.

That said - maybe you have some examples of where async definitely should not be used?

The other huge problem with Node.js is the vast, crazy expansion of tiny libraries, supported by ghosts, that change all the time. A simple lib can depend on a dozen others, with varying versions, everything changes almost daily. A small change somewhere can really jam things up.

This is by far my #1 concern, although it can kind of be avoided if you're careful.


> Just because you want to do ABCDE does not mean it's feasible.

Every single language is able to do ABCDE, except for JavaScript. Yes, I guess that means it's impossible.

> The nature of a distributed system is inherently async, and so we have to live with it.

The nature of a distributed system is the one you impose onto it by design. Computer systems are not something that appear in nature that we just harvest.

There are plenty of synchronous architectures for distributed systems, from distributed logical clock ones to clock domain ones. People tend to avoid async ones anyway.
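As a hedged illustration of the "logical clock" idea mentioned above: a Lamport clock is the classic formulation, imposing a consistent ordering on events across machines. The class and method names here are illustrative, not from any particular library.

```javascript
// Minimal Lamport logical clock sketch. Every local event ticks the
// counter; receiving a message merges clocks with max(local, remote) + 1,
// so any event that causally depends on another gets a larger timestamp.
class LamportClock {
  constructor() { this.time = 0; }
  tick() {              // local event
    this.time += 1;
    return this.time;
  }
  send() {              // timestamp attached to an outgoing message
    return this.tick();
  }
  receive(msgTime) {    // merge rule on message arrival
    this.time = Math.max(this.time, msgTime) + 1;
    return this.time;
  }
}

const a = new LamportClock();
const b = new LamportClock();
const t = a.send();     // a's clock becomes 1
b.receive(t);           // b's clock becomes max(0, 1) + 1 = 2
```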


I usually ask people to explain why they picked or like (or dislike) a particular technology and that surprisingly tells quite a bit about their proficiency level.

At least in interviews, I found it tells a lot more about their proficiency than, say, knowing how to invert a binary tree in under 20 minutes or solve a digital circuit diagram with object-oriented principles.

Node.js is a technology that raises red flags when someone advocates it. I've heard stuff like "it's async so faster", "it makes things non-blocking so you get more performance not like with threads", "you just have to learn one language and you're done", "...isomorphic something..." When digging in to discover whether they knew how event dispatching works, or how these callbacks end up being called when data comes in on a TCP socket, there is usually nothing.

The other red flag is the community. Somehow the Node.js community managed to accumulate the most immature and childish people. I don't know what it is / was about it. But there it was.

Also maybe I am not the only one, but I've seen vocal advocates of Node.js steam-roll and sell their technology, often convincing managers to adopt it, with later disastrous consequences. As the article mentions -- callback hell, immature libraries, somehow the promised performance guarantees vanish when faced with a larger number of concurrent connections, and so on. I've seen that hype happen with Go recently as well. Not as bad, but there is some element of it.

Now you'd think I am a 100% hater and irrational. But one can still convince me that picking Node.js was a good choice. One good thing about Node.js is that it is JavaScript. If there is a team of developers that just knows JavaScript and nothing else, then perhaps it makes sense to have a Node.js project. Keep it small and internal. Also, npm does have a lot of packages and they are easy to install. A lot of them are unmaintained and crap, but many are fine. Python packaging, for example, used to be worse, so convincing someone with an "npm install <blah>" wasn't hard.


I've been developing public and internal APIs with node.js full-time for the past 3 years. I can see where you are coming from, but nothing you've said explains why the platform itself is not useful. Most of what you complained about is the caliber of developer and the ecosystem. That reminds me a lot of the complaints about PHP.

The truth is that there are good developers using node.js, there is good code in the ecosystem, and someone that's worked with it for a while has learned lessons.

I agree with your performance complaints. On my last project we had to spend considerable time reworking components of our application due to those components blocking the event loop with CPU intensive tasks.

I would say that node.js is probably selected more than anything else for speed of getting a project up and running. It's easy to find JavaScript devs. JavaScript doesn't require a compilation step, so iterating on and debugging small changes is much faster. There's a ton of pre-built frameworks for serving up APIs even with very little code for CRUD apps.

It's not that there's something you can do with node.js you can't do with other languages. There's just less of a barrier to entry.


This is an age old story.

There's _way_ more bad Perl in the world than good Perl (Matt's Script Archives anyone?), but these days it's easy to find well written Perl and appropriate and useful Best Practices for Perl projects of any scale.

There's a _lot_ of bad PHP out there - but Facebook and HipHop clearly show that there are sensible, scalable, and well understood ways to write good PHP code.

Nodejs seems to me to be like the Perl world was in '95 or the PHP world in 2000 or the Python world in 2002 or the Java world pretty much forever ;-) There's not enough examples of "Good Nodejs" yet, and all the Google searches show you "typical" Nodejs code - which, as Sturgeon's law dictates, "90% of everything is crap" - so most of the Nodejs code that gets seen or discussed is crap. That will _probably_ change as we "forget" all the worst written Node, and upvote, link to, copy, and deploy well written Node.

There's more similarity to PHP that other languages in my opinion too, in that Node _is_ Javascript, and like PHP, it's a fairly easy route into "development" for an html savvy web designer, which means there's a _much_ larger pool of novice Javascript/Node devs with little or no formal training. You don't need 3yrs of CS degree to dive in and start "scratching your own itches" in Javascript - and in "that part of the industry" it's much easier to leverage "a great looking portfolio but no CS degree" into a job offer than in, say, an enterprise Java or DotNet shop (or a GooFaceAmaUber "don't even respond it they don't have a PhD in another field as well as their CS one" reject-a-thon...)


You're touching the reason yourself. When 90% of the code you find is crap, it simply means that the language has a low barrier to entry. When languages like ML/Scheme/LISP variants, Haskell, etc. don't have that much crap, it's because the barrier to entry is higher.

And this goes not just for languages. Frameworks like Akka are the same. The idea of using actors to form the system is simple and elegant, but it's far from simple to get started with.


For Perl and PHP - as well as the low barrier to entry, there was also the almost ubiquitous availability - at least across web hosting platforms in the late '90s and early 2000s. I was a Perl guy through and through back then, but even I wrote some PHP, because getting web hosting companies to allow/enable Perl CGI (or even worse, FastCGI or mod_perl) was expensive or impossible, where PHP was "just there" and "just worked".

(But yeah, you don't get frustrated pixel-pushers from the coloring in department getting over ambitious and writing up a site backend in Rust or Go... Your typical weekend Haskell hacker is about 100% likely to be able to spell "algorithm" and at least handwave their way through a Big O discussion... No big criticism of talented designer types, but in general they've got as many holes in their understanding of the "S" bit of "CS" as most developers have in their aesthetic abilities...)


Python hasn't got a high barrier to entry (unless you are talking about deploying it to a web server). The code is generally better than PHP or JS code (though I have seen my share of crappy Python code).


IMO python has a much higher barrier to entry.

Hell, just getting pip installed can be a bear on Windows, then there's remembering the damn --upgrade flag, and the whole concept of a virtualenv means that often the casual Python hacker doesn't have access to packages until they are much further along.

Contrast this with Node, where npm is always right there, and the fact that installs are local by default means the kid that just sits down with JS has access to every package out there and can even publish things in his/her first few days.

I'm obviously more comfortable with the NPM ecosystem, and I might just be biased, but it does seem much easier to work with and use.


My understanding is that Facebook has migrated most (all?) of its PHP codebase to Hack, which goes to show you: even the company that built a global empire in PHP can recognize it for the shitty, inadequate language that it is.


<devil's advocate> On the other hand, that shitty inadequate language got them up to their first billion or so users and way past Unicorn stage. If you're optimising your language choices and talent pool for your second or third billion users when you currently don't even have enough users to fund the ramen invoices, you're 100% certainly playing around with "the root of all evil"... ;-)


FB's primary reason for creating Hack was to add type-safety to PHP code. They spoke about how PHP was great for the fast development cycle, but would benefit from more structure if the created codebase was to become long-lived. So the new Hack code is the same PHP code, only with types (and the implied benefits such as static analysis, optimisations, etc).

Incidentally, this is also why FB created Flowtype - the lack of types is a massive boost during the code's youth, and a massive burden from early adulthood onwards. Hack/Flow are a clever way to bridge the gap between typed and untyped languages.


I was going to say the same thing about PHP. I was at a PHP shop who thought all their problems would be solved with NodeJS. They basically created the same problems in Node. The next place I went to was heavily biased to Erlang and OCaml, but the front-end is done in Node. But the code and architectural quality are like comparing night and day.


JS doesn't require a compilation step, but every project I look at has one anyway.


I've done quite a lot of C/C++ work. In my experience, getting JS up and running is more exhausting. You might have webpack running, which runs babel, and sass, and a few hundred node processes. It's just bloody insane today. And none of it plays nice at all.


>I would say that node.js is probably selected more than anything else for speed of getting a project up and running. It's easy to find JavaScript devs.

Writing JavaScript for the browser and writing it for Node.js are different beasts. It's "easy" to find JavaScript devs because most people have tinkered with jQuery and think that qualifies them. Furthermore, since the Node.js explosion in 2012, lots of posers have been trying to get into this scene.

>JavaScript doesn't require a compilation step so iterating and debugging small changes is much faster.

As another commenter pointed out, this is true, but most projects use a Grunt pipeline or something similar at this point, because they'd feel left out if they didn't. Just more groupthink from the Node.js camp.

This isn't a property unique to Node.js. Pretty much everything that isn't Java, C#, or mega-crusty pre-Perl CGI has it. However, compilation is actually pretty useful; it's not something you necessarily want to throw away. Java and C# devs seem plenty capable of training their Alt+Tab / F5 fingers to get tweaked code running fast.

>There's a ton of pre-build frameworks for serving up APIs even with very little code for CRUD apps.

There may be, but the JavaScript ecosystem changes so quickly and integrates so many esoteric, not-supported-anymore-after-next-Tuesday things that it's offputting. My experience with the code quality of a lot of common libraries has not been great either.

I personally detest the Node.js fad and can't wait for it to die out. I have never been able to find someone who can actually give me a good reason for it to exist. At least when RoR was the fad there was a sensible reason behind it: PHP sucked. I really don't know why Node.js even exists except to push buzzwords and execute a really terrifyingly bad idea of making everything JavaScript.

Even Eich admits that JavaScript was a one-week, publish-or-die project that did many things wrong. It's frankly embarrassing that we still use it as the primary language in our browsers 20 years later. I don't know who looks at that and says "Let's take this to the backend, baby!"

Personal ambition is basically the only reason I can think of for Node.js to exist, both in general and in any specific organization. People see it as a tool to colonize parts of their company. That's the only explanation, because they assuredly don't see any technical benefits in it.


> If there is a team of developers that just know Javascript and nothing else. Then perhaps it makes sense to have a Node.js project. Keep it small and internal.

If you have a team of developers that know just JavaScript and nothing else, you are probably dealing with 100% front-end developers that have little to no understanding of systems programming and what you need to do to be performant and secure outside of a browser.

This leads to a lot of issues with security in the Node.js ecosystem, and is one of the reasons I have a job.


Any developer that only knows one language is carrying around a horrible cognitive bias about how software should be written, and they don't even realize it. They are dangerous; keep an eye on them, and for the love of god teach them something new.


Depressing thought of the day; most of the developers I work with only know 1 language and/or framework and have no ambitions to expand. I guess this is just the life of an "enterprise" employee. :(

Honestly, when I mention something like Golang or Rust or Cordova they look at me like I have 2 heads or something. Yes, I don't expect the layperson to know what those are, but these are long-term application developers (mostly C#) and they have never-ever heard of these things. At first I just thought it was the people I work with, but I recently changed jobs and it's the same thing there. I really am beginning to think that I'm an anomaly.


ehh, this is usually just shortsightedness for all involved. A front-end JavaScript developer is going to know HTML/HTML5/CSS/JavaScript and the quirks of all browsers, and quite possibly complex frameworks on top of all of this. The amount of knowledge needed to be good at all of this far surpasses the "oh, they only know javascript" naivety that I hear from other developers.

I know Java developers and .net developers who don't know or want to know any other language because they're not just "java" devs, they know the frameworks and have experience to build things.

If all you do is "Scripting" then sure, know a few scripting languages


> The amount of knowledge to be good at all of this far surpasses the "oh, they only know javascript" naivety that I hear from other developers.

No, that's not it at all. I am friends with front end guys and I'm aware at how much work it takes to just keep up with and be good at the framework du jour, Backbone or Angular or React (then Flux, then Redux) and whatever's next from there.

Being good at markup and layout, and being good at a front-end framework, are not skills that are direct analogues to systems-level issues that plague Node.

For example, Node LTS's Buffer() is backed by Uint8Array in the browser, but by a malloc() on the server side that may return insecure memory with leaked information from other parts of the process. libuv's event system is written in C. Socket programming on *nix is nothing like using a WebSocket library in Chrome, and the front-end has no concept of filesystems outside of localStorage or limited WebSQL use. When was the last time a browser had you deal with malloc()?
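The Buffer() hazard mentioned above is straightforward to show with the modern (Node 4.5+) API, where the two allocation modes are explicit:

```javascript
// Sketch of the Buffer allocation hazard: allocUnsafe() hands back
// uninitialized memory (like the old deprecated `new Buffer(n)`),
// while alloc() zero-fills it.
const unsafe = Buffer.allocUnsafe(16); // may contain stale process memory
const safe = Buffer.alloc(16);         // guaranteed all zeroes

console.log(safe.every(byte => byte === 0)); // always true for alloc()
// `unsafe` has no such guarantee -- if it reaches a client before being
// fully overwritten, it can leak whatever happened to be in that memory.
```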

How many front-end developers understand the performance of writing files to disk with different filesystems, master-slave or master-master replication of databases, building solid server-side deployment workflows, securing their code and their servers from all the things that can go wrong? This skillset generally doesn't exist; it's abstracted away by the Chromium sandbox, or the interpreter. Front-end generally does not require knowledge of the principles of the underlying operating system and architecture to get the job done.

On the backend, all of these problems become apparent. It takes a different way of thinking, and a different understanding of a computer system. My original comment to rdtsc is just that -- a "JavaScript" developer that works on front-end tech only doesn't have an adequate idea of how these things work to build good server-side architectures, and Node.js is not the tool to use for those types of things.

Frankly, if they only know JS, they haven't touched these things at all. Outside of Node and Electron, virtually nothing on a system outside of a browser is written in JS, so you wouldn't have picked up these systems skills in JavaScript.


This seems dishonest. Everything you said happens regardless of language. Being a Go developer doesn't mean you know more about how goroutines are handled by your operating system. The very same techniques you use to expose performance metrics and analytics in Go, Java, Ruby, Python et al. apply in node, and I'd wager that node/javascript has an abundance of profiling and performance tooling for people to embrace.

Sure, some developers may not understand the implications of everything they do, but if the app has implications that impact its performance, they will discover them.

Not everything needs to be over optimized, over engineered or overly complex.

As an Ops dude I find it infinitely easier to scale/manage nodejs apps than Java. For instance, I can solve some issues by scaling node out wider across more machines rather than having one big optimized machine, and an implication of going wider is that if some things crash, the others handle it; because I go far/wide, I've chosen to optimize for availability/performance in different ways than you.

I may know that a node app crashes after x days from leaky memory, and I deal with it because I have mesos/marathon watching alive pages and killing/restarting the container and redeploying the app, or autoscaling it up and down as need be.

Replace node with anything else and you have the same problems. We see hundreds of JVM releases a year, many versions of Go, many point releases of Ruby, Python... it's a never-ending battle that has to be fought a lot more deeply than any one person or one developer ever can.

And let's be honest: even the best node app is building something to serve the best front end, so letting front-end developers focus on being front-end engineers, and helping them solve any issues thereof to fill in the blanks, is not a bad way to do business.

Expecting everyone to over optimize and over engineer and over theorize about how things should be is a huge waste of time.


I've never really understood the "one language" argument. I'm a good (not great) front-end JavaScript programmer, but I don't think that helps writing node.js. Sure, the syntax is technically the same, but the bulk of a node.js program looks and is written quite differently from front-end JS (even JS library writing). Surely having similar syntax (which is basically C-like anyway) isn't that big a deal?


It's supposed to mean you can share object declarations. But it's javascript, so that doesn't mean much. And it's not like the data format in the database and the GUI is ever the same by the end of a project, so two formats end up being needed anyway; it's a bit of a pipe dream.

The one case I've found where node really makes a lot of sense is if you're trying to build a webapp with a native version using the same code. Then node-webkit is the only game in town that doesn't involve a transpiler. Then again, the component parts are flaky enough that maybe the transpiler isn't so bad afterall.


The one-language argument, as far as I can tell, began when people tried to share code across the server and the front end. It sort of worked for one person's use case, but nobody else really had much success with the endeavor (at least that I'm aware of).

That argument is a pretty bad one and is the one most people like to assume people are talking about so they can make fun of it.

The real argument, at least in my mind is that you can specialise and pretty much run your entire stack completely in js with only a bit of json and some html. Build tools, database access, deploy tools, dependency management, server side code, client side code, its all JS.

For me, I don't agree that it matters, I don't even think having a diverse stack takes much longer to learn. But I do understand the fanfare for it.
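For what it's worth, the shared-code appeal above can be sketched in a few lines: a plain module with no environment-specific dependencies that a Node server can `require` directly and a bundler can ship to the browser unchanged. The validation rule here is hypothetical, purely for illustration.

```javascript
// Sketch of the "one language" appeal: validation logic written once,
// usable from Node via require() and from the browser via a bundler.
// The rule itself (3-16 chars, alphanumeric or underscore) is made up.
function isValidUsername(name) {
  return typeof name === 'string' && /^[A-Za-z0-9_]{3,16}$/.test(name);
}

module.exports = { isValidUsername };
```

The server can reject a bad username with this exact function, and the browser can show the same error before the request is ever sent, with no risk of the two rules drifting apart.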


"I've never really understood the "one langauge" argument. "

Being able to do front-end and back-end in js is really nice, especially passing data back and forth in js-object like format.

If you have not tried it, do so, I feel it does streamline things nicely.

Node.js is a little lighter than most things and if you use it correctly I think it has advantages.

I wouldn't build a bank on it however :)


You can pass JSON using Python, or pretty much any server side language.


Of course you can. But in some languages it's a pain.

And you can't do python on both ends.

Using JS on both sides is nice if the architecture is suited for it.


It seems like an appeal to hiring manager to me. "You'll have all the developers you need! Look, it's all javascript and everyone knows javascript!"

The problem is assuming language proficiency is the defining characteristic of a developer. We really need to disabuse management of that idea.

I'd rather teach a good 'microservice developer' javascript than teach a good 'javascript programmer' what makes a good microservice.


I'm still surprised no one has created a JavaScript front-end for Erlang/BEAM, like Elixir or LFE. JavaScript is a decent language with a huge community and Erlang/BEAM could solve Node's deficiencies like async-everything APIs and single-threaded execution.


Probably because anybody smart enough to build that is also wise enough to leave the tire fire that is the Javascript community and language.


Maybe because JS is not really a decent language, just the pile of awfulness that we've had to grin and bear it with to do web front-ends for too long, and Erlang or Elixir is nicer to work with to start with.


That's because Erlang developers have good taste ;-)

But Elm is actually moving in that direction, the last release added a process abstraction inspired by Elixir, and I've read they're even considering BEAM as a compile target, in addition to the current JS target and plans for a WebAssembly target.


> JavaScript is a decent language with a huge community and Erlang/BEAM could solve Node's deficiencies like async-everything APIs and single-threaded execution.

While BEAM of course has facilities at the VM level that support that, you'd have to extend javascript to take advantage of them on top of building a JS->BEAM compiler.

At which point, the benefit of using JS rather than a language designed to deal better with concurrency and parallelism is dubious.


I've been a software dev for 10 years. When I started I saw the transition from Perl to PHP and a lot of snobbishness from the former towards the latter. Seeing the changing of the guard in web languages was pretty instructional and it's something I see again and again.

I think basic CompSci courses should really have a course or two on managing software projects and handling the problems of what framework do I use to build my new software app? Because fundamental language or framework decisions have both a technical and a business component and even as a front line programmer it helps to be aware of both.

Node.js is a great environment for getting a server side app going fast and it has very good tooling thanks to the rest of the JS community with additions like npm, gulp, bower, express etc. There's obvious benefit in having shareable libraries between client and server side and most importantly (to software companies) hiring coders who can work with it is far, far easier than say finding that rarest of unicorns - an experienced Haskell developer.

If (and it's a damn big if) you outgrow Node.js you're doing well. Then (and only then) look at the alternatives like Play Framework, Spring Boot, Vert.x or whatever else floats your boat.

Rants can be useful in giving a kick up the asses of the relevant community to go address certain bugbears. This rant though is so damn generic it reminds me of those Perl developers at college pouring cold water over the idea of using PHP because they felt threatened by it.


    I think basic CompSci courses should really have a
    course or two on managing software projects and handling
    the problems of what framework do I use to build my new
    software app
The problem (if it's a problem, depends on who's asking) is that undergrad CS courses mainly train you to be a CS graduate student (which in turn train you to be a CS academic), but most students choose to major in CS because they want to become professional programmers (aka Software Engineers).

I've done both bachelor- and master-level CS studies, and job-preparation-wise I would probably have gotten as much (or more) from 1-1.5 years (2-3 semesters) of vocational training as I did from 8 years of university.

Probably the most apparent perk my studies have gotten me career-wise was being invited to interviews at major tech companies like Google and Amazon.


I think this actually varies from school to school. My undergraduate university, where the CS program was part of the engineering school, focused on software engineering. My graduate university where the CS program was a part of the college of arts and sciences focused a lot more on theory. But my view here may be skewed, because I only remember one professor who taught most of our theory courses -- I took most of his classes.

From roughly the same number of years in university (8ish), I can say there were some clear professional benefits from having done a masters' program. From my advanced software engineering course I was able to talk to my teammates about Gang of Four and better ways to go about problems than if-else chains. From my theory courses I was able to identify problems they were seeing, tell them which algorithms would probably be worth looking into, etc...

That is all stuff one can learn outside of university, but I learned it while I was there. I get giddy whenever I can try something new, and all the writing I did (I wrote a few academic papers) made me really comfortable with writing technical design specs and documentation. When I was in academia I had to document everything anyway, so it's become second nature to me.

I think 2-3 more years of working would have certainly looked better on a resume, but I think my skill set has added to my team as a whole and I'm glad I did it.


> most students choose to major in CS because they want to become professional programmers (aka Software Engineers)

Then why not major in Software Engineering? Is that not an option in American universities?


One problem is that it does not have the standing of a pure subject like math, engineering or CS.

It's more vocational, isn't offered by top-tier universities (in the UK), and therefore has lower status - it's like taking Media Studies at an ex-poly vs. reading English at Oxford.


It was very much seen that way back in the day at Birmingham University. We wrote an x11 window manager with audio and graphics processor apps for our first term in 2nd year. I saw academia start to go down the pan at college with the introduction of "ICT". To me, this was nothing but subsidised Microsoft training.


this was nothing but subsidised Microsoft training.

"Subsidized Microsoft training" is what most people actually want and need to get the sort of job they aspire to. People with this "Subsidized Microsoft training" are also what many companies are looking to hire. So why is it such a bad thing to offer people this option?


It's not a problem, unless you go to higher education looking for CS and end up with that instead.


Because we need people who can program computers, not use Word.


There is a problem here, but to be clear, the solution is not to convert CS into vocational training. The problem with the way software engineering was introduced into US universities is that it was usually dumbed-down to broaden its appeal. I do not know if it is still like that.


Don't know about the US, but in Israel anything to do with programming is in such high demand and so overbooked (therefore requiring competitive grades to get in) that you get what you can.

I got into my bachelors in 2002, at the nadir of the dot-com crash (when demand for CS education was at its lowest), and my grades in high school were just good enough to get into the combined Math & CS program (both CS and SE had higher requirements).

The only other option would have been to go to a private college rather than a state university, which would have had (at least) double the tuition and (at most) half the prestige/employability-potential of a university degree.


Interesting. I'm from Germany and studied "Computer Science and Media" and it was rather overbooked, like Media stuff often is here.

But the regular computer science or software engineering degrees (like most technical degrees) aren't overbooked ever.

They let in everyone and throw 60% - 70% out after the first year.


Unlike Germany, in Israel how desirable a study is is directly correlated with how much money one can make working in it...So media/art stuff is very low on the totem pole.


Hehe.

I have the feeling most German (Millennials?) just study what they do in their free time.

Yes, there are the academic families, that tell their kids to become lawyer or medical doctor or something like that.

But most tell me I was just lucky that I liked doing computer stuff, so I could study computer science and make "mad bucks" while they liked to draw or read and had to study something like languages or fine arts.


I have actually been living in Germany for the past 3 years, and spent 8 years in Austria before that :)

I find the non-materialistic (or at least less materialistic) mentality here preferable. But to be honest I think the only reason is that life in the German speaking countries is much easier - you can earn minimum wage or close to it and still live a decent life.

In Israel I had to earn twice as much for half the quality of life (slight exaggeration but only slight).


I like this mentality too.

But BAföG (the German student aid) can accumulate to more than 10k€ of debt (you have to pay back half of it), and some of my friends even took out loans for their (non-consecutive) master's degrees, or because they didn't finish their bachelor's in the standard time. Now they are stuck with 10k-40k of debt and no way to pay it back with the money they make.


Or my favorite: Business Administration with a concentration in Management Information Systems. This was my route, and it laid a great foundation in how business works (you know, that part of a company that PAYS for IT), as well as in business-focused IT (which does lag behind new uses of tech, but does cover being ready for fundamental shifts/disruptions, which have been around in the business world since the beginning).


Yes, it is an option (at least in America). Many colleges' business departments do offer an "information technology" degree, which typically combines some high-level programming skills with some business school classes (e.g. accounting, economics, management, marketing, etc.).

This is probably the best undergrad option over here for anyone who wants to be a "professional programmer", if you are more interested in the writing-web-apps side than in the optimize-that-algorithm, nuts-and-bolts side.


Not sure about America, but in Australia it's more a lack of understanding.

Also, at the university I am studying at, the course for a Software Engineer takes 5 years, since that is the minimum study time needed to be accredited as an engineer, while the CS degree is 3 years.

The more messed-up thing is that everything taught in the CS major is also taught in the Software Engineering major, but Software Engineering has more electives and a few general engineering units.


It's not common. Usually you can choose between Computer Science or Electrical Engineering if you want to do things with computers. The problem is, Comp Sci tends toward an inordinate amount of proofs on blackboards and theoretical horse-hockey that you don't actually get to do in code, while EE tends toward playing with soldering irons and wiring breadboards, with a little embedded low-level programming.


Well, I would argue that universities are there to create academics, not job candidates, and that vocational work is exactly what you need to be a software engineer. I think the issue is more that people don't understand what university-level CS is, or that it's NOT a "get a job" pass.


Well companies demand it. You can get a job on work experience alone, but if you're just starting out good luck applying to Google/MS/Facebook/Amazon/etc with only vocational training.


I don't disagree with most of what you say, but I can't shake the feeling that for the most common types of web project, nothing touches the productivity possible with frameworks such as Rails or Django.

(I don't know enough about mature PHP MVC frameworks to comment on whether Laravel et al should also be in that list)

Is there anything for node that offers that wide range of functionality and a mature ecosystem for content-driven sites?


> If (and it's a damn big if) you outgrow Node.js you're doing well. Then (and only then) look at the alternatives like Play Framework, Spring Boot, Vert.x or whatever else floats your boat.

I couldn't disagree more. If you like the concept of type safety, don't waste your time.


You called out the fact that you see repeating patterns in the industry, and go on to mention Node.js as a great environment for rapid prototyping. That made me chuckle because that's exactly what I heard about RoR just a few years ago.


There is no such thing as 'outgrowing Node.js'.

Often, people who think that they're "too good" for Node.js are people who had a single bad experience; they designed their Node.js app poorly (e.g. they didn't think through their architecture enough), and then, instead of blaming themselves for their ignorance, they put all the blame on Node.js.

It's always convenient to blame the tool. But I've used a lot of different tools and I can tell you that the tool is almost never the problem.

I've found that people who don't like Node.js are often people who were forced to learn it because their company used it. These people had resentment against Node.js from the very beginning, and they kept this resentment with them while they were 'learning' it. As a result, they never really learned it properly; they never fully understood it.

Not everyone has to love Node.js but there is no reason to hate it either. It's a damn good tool.


    It's always convenient to blame the tool. But I've used a
    lot of different tools and I can tell you that the tool 
    is almost never the problem.
This is both true and also misses the point. Given a sufficiently smart software engineer, Javascript is fine. But as with the old adage about the "sufficiently smart compiler", the joke is that there is no sufficiently smart engineer. Even the best will occasionally make mistakes. Over time those mistakes will accumulate. When enough of them accumulate, you experience serious pain.

Any argument that rests on the idea that all you have to do is "just hire engineers that can be perfect in perpetuity" is doomed to be a poor one. No language is perfect, but there do exist languages that make it possible to fix problems after the fact with higher levels of confidence and more guarantees. Javascript is not one of those languages.

Not to mention that single-threaded callbacks have an inherently lower ceiling on concurrency than multithreaded approaches. Sometimes you have to decide whether to stick with your current codebase and pay thousands to millions of dollars more for infrastructure, or to rewrite and take a medium-term hit on velocity instead.

There most definitely is such a thing as outgrowing Node.js.


Anybody can criticize a language or platform, but it doesn't mean much if there aren't any better alternatives.

This article presents an extreme conclusion without much supporting evidence, so it's pretty pathetic that this made the front page. Nobody even uses callbacks anymore now that we have Promise and async/await.
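To make the claim concrete, here is a minimal sketch of the callback-to-async/await shift (the `fetchUser` helpers are hypothetical, not a real API):

```javascript
// Old idiom: a callback-taking function; error handling and control
// flow live inside the nested callback.
function fetchUserCb(id, cb) {
  setImmediate(() => cb(null, { id, name: 'user' + id }));
}

// New idiom: the same operation wrapped in a Promise, consumed with
// async/await so the code reads top to bottom like synchronous code.
function fetchUser(id) {
  return new Promise((resolve) =>
    setImmediate(() => resolve({ id, name: 'user' + id }))
  );
}

async function main() {
  const user = await fetchUser(1);
  console.log(user.name); // prints "user1"
}

main();
```

Same event loop underneath in both cases; only the surface syntax changes.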

Yes, Javascript isn't the best language (though ES6 improves tremendously on ES5). But right now it's the only language you can use in the browser (aside from languages like Clojurescript that compile to Javascript). The biggest advantage of Node.js is that you can reuse the same code on the client and server, and thus it's ideal for creating universal single-page web apps. Being able to reuse the same code on the client and server is a massive advantage that can't be overstated.

Also, Node.js with Nginx is more scalable out of the box than Ruby on Rails, Python/Django, PHP, etc. Hell it's comparable to Java, which is incredible for a dynamic language. The difference is, you can write a Node.js web application 10x faster than the equivalent application in Java, and with a drastically smaller codebase (less code = less code to maintain). These days developer time is the biggest cost.

These rants come off as coming from either (1) back-end developers who never touch UI code or anything on the client-side (not where Node.js thrives) (2) armchair commentators who don't actually have to get shit done in terms of building and deploying web apps on deadlines, and thus have the luxury of criticizing everything without presenting realistic alternatives.

> "There are only two kinds of languages: the ones people complain about and the ones nobody uses." -Bjarne Stroustrup


> The biggest advantage of Node.js is that you can reuse the same code on the client and server, and thus it's ideal for creating universal single-page web apps. Being able to reuse the same code on the client and server is a massive advantage that can't be overstated.

This advantage is totally overblown, and in fact I am not sure it even is an advantage. It definitely makes things easier in the short run, but it always comes around to bite you in the ass. The fact is, objects on the server and objects on the client are different things, and while you write less code up-front because the differences aren't always immediately obvious, you end up writing a lot more code later because you didn't think about the very important differences. Representing them as the same thing enables shoddy programmers to not think about the context of where their code will be run.

> "There are only two kinds of languages: the ones people complain about and the ones nobody uses." -Bjarne Stroustrup

"[A] quotation is a handy thing to have about, saving one the trouble of thinking for oneself, always a laborious business." -- A. A. Milne


I'm guessing you've never written a universal/isomorphic single-page application?

If you try to write one without Node.js, you're going to be writing a lot of the same code twice: once in Javascript to run on the client side, and again to run the same exact logic on the server side (e.g. fetching data on the server to pre-render a page and parsing it, then making an AJAX call for the same data on the client and parsing it again in JS).

One codebase is easier to maintain than two codebases in two different languages.


> I'm guessing you've never written a universal/isomorphic single-page application?

You guessed wrong.

> If you try to write one without Node.js, you're going to be writing a lot of the same code twice - once in Javascript to run on the client-side, and twice to run the same exact logic on the server-side (eg. fetching data on the server to pre-render a page and parsing that data, making an AJAX call for the same data on the client and parsing that data in JS).

> One codebase is easier to maintain than two codebases in two different languages.

Sure, until you realize that they aren't the same objects, because context matters. But then it's too late because you've so fundamentally coupled the front-end to the back-end that they're inseparable. So you throw in ifs and switches to handle the difference, and eventually your code becomes an unmaintainable mess, and you can't get time from your boss to clean it up because deadlines.

So you quit and get a new job, and starting that new project with Node is so easy, and you don't see the problem because you never stick around to actually maintain your code. Your previous employer will go out of business when they try to rewrite the app because development has ground to a halt, but that's not your problem, now is it?


In my experience, having to write the same code twice in two different languages is a hell of a lot more work both from a creation and maintenance perspective than just writing it once. I haven't really had to deal with many of those edge cases you're talking about.

So what do you prefer to Node.js for universal single-page apps then?


> In my experience, having to write the same code twice in two different languages is a hell of a lot more work both from a creation and maintenance perspective than just writing it once. I haven't really had to deal with many of those edge cases you're talking about.

Cool, I hope for your sake that your luck continues.

> So what do you prefer to Node.js for universal single-page apps then?

To be honest, I prefer not to write single-page apps, as they break some fundamental ways the internet works, with no real benefit. I have started using React for in-page responsive components, which are composable and reusable independent of the app.

I prefer Flask for the backend, although I've used Django w/ Django Rest Framework and that has also been a good experience.


I think this is right on. They are completely different environments, and I can't imagine very many instances where you'd be able to write a generic function you really want on both the front and back end. I certainly haven't run into any in real life.


I think a good example of a function you'd want in the front and back end is one that takes data and outputs a template.
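A minimal sketch of such a shared function (the name and markup are made up for illustration): a pure data-to-string template with no DOM or server dependencies, so the same module can be required by Node for server-side rendering and bundled for the browser.

```javascript
// Pure template function: data in, HTML string out.
// It touches no DOM APIs and no server APIs, so it runs
// unchanged on either side of the wire.
function renderUserCard(user) {
  return (
    '<div class="user-card">' +
      '<h2>' + user.name + '</h2>' +
      '<p>' + user.bio + '</p>' +
    '</div>'
  );
}

// Server: res.send(renderUserCard(user));
// Client: container.innerHTML = renderUserCard(user);
```

(Real code would also need to escape `user.name` and `user.bio` to avoid HTML injection; that's omitted here for brevity.)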


sigh I'm afraid to hear your answer, but I'm going to ask anyway: what would compel you to output the same template on both the client and server sides?


Possible, possibly dubious: search engine optimization.


I'd argue that this points to a poor design. If something is static enough to be crawled by a search engine, then it doesn't make sense to be generating it dynamically every time on the client side. It's static, generate it statically, and then all you have to do is serve it up (caches are a variant of this approach, but not the only one).


I'm currently working on an unofficial interface for Readability because they aren't working on it anymore and their Android app is really broken.

The webpage needs to work offline (whole point of the project) but should be able to work without js if need be.

I create templates server-side and serve them to the client.

If js is available, it takes over and renders any further client-side interactions.

Does this sound like poor design or a valid reason to be able to render on the server and client?


It sounds like a poor design.

Webpages don't work offline. All the user has to do to break everything is hit "refresh". What your business users wanted was an app, and you gave them a webpage instead.

I also don't understand why you would do rendering both server and client side. That's an overcomplicated architecture that doesn't buy you anything. If you chose one, you'd have a simpler architecture.

And before you claim "optimization"--did you profile?


You're pretty opinionated for someone who doesn't seem to understand a lot of things ;)

1. Hitting refresh doesn't break everything. It resets your current state to the default. I default to the url combined with locally stored state, so you will lose text in an input field, but not much else.

2. I want a website not an app, thank you very much. I'm really just displaying text. I don't need to download an app for that when I have a perfectly good browser on every device I'd like to read from.

3. Maybe it's over complicated to you, but the same function that renders on the server is what I use to render on the client. Zero difference. It's not a choice I have to make so I can use either when I feel it's appropriate.


> 1. Hitting refresh doesn't break everything. It resets your current state to the default. I default to the url combined with locally stored state, so you will lose text in an input field, but not much else.

Really?

1. Go to your website.

2. Disconnect from the internet.

3. Hit Ctrl+Shift+R.

How's that app you wrote as a website working for you?


Works well, actually.

It's called appcache. Works in most browsers. You should look it up ;)
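(For readers unfamiliar with it: appcache is driven by a cache manifest along the lines of the sketch below. The file paths are made up, and the API has since been deprecated in favor of service workers.)

```
CACHE MANIFEST
# v1: bump this comment to force clients to refetch

CACHE:
/index.html
/app.js
/style.css

NETWORK:
*
```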


> These rants come off as coming from either (1) back-end developers who never touch UI code or anything on the client-side (not where Node.js thrives) (2) armchair commentators who don't actually have to get shit done

Half this post is from the creator of Node.js


I agree with most of what you're saying, but the second rant is by Ryan Dahl, creator of Node.js. Often these rants come from people who are very competent and know what they're talking about, but anybody can get burned out by working too much, and the annoyance of flaws can become overwhelming. It's worth remembering that we can always iterate and improve on what we have, even if it's good enough, if for nothing else than the sanity of the people who have to clean it up later.


> Nobody even uses callbacks anymore now that we have Promise and async/await.

Sadly this is not true (yet)


> Also, Node.js with Nginx is more scalable out of the box than Ruby on Rails, Python/Django, PHP, etc. Hell it's comparable to Java, which is incredible for a dynamic language.

Citation or link to any benchmarks?

> The difference is, you can write a Node.js web application 10x faster than the equivalent application in Java, and with a drastically smaller codebase

Building a prototype, maybe. But it's hard to believe that Node.js gives a 10x boost in programmer productivity in the long run. The recent success of TypeScript is an indication of the limits of dynamic languages for big projects.


It wasn't so long ago that critics of XML etc were shouted down, mocked, dismissed.

Then every one just kinda woke up. It's not even a debate.


The number one issue I've found with Node.js is developers making things overly complicated for no apparent reason. The bar to entry may be too low, so you possibly get a higher rate of poor design decisions.

The second would be the overuse of build scripts, in that the build seems more complicated than the app, both in the time it takes to get the thing up and in its convoluted chains of steps. I've not had much fun debugging grunt, gulp or webpack in some of these Fortune 1000 projects, and I have a hard time wanting to give a shit about knowing them in great detail when the app should be the focus.

The parts of node that I most like are the core libraries that come with the install. When I try to stick with those as much as possible rather than using some half-baked npm module for every whim, I have a very pleasurable experience.

Async/await and promises, along with piping streams, are quite elegant in how modules can be snapped together like Lego pieces, but I find that people fuck it up terribly when they half know it (as I initially did), and it becomes akin to the messiness of ES5 callbacks.

It does take some time to really utilize async well, so I would recommend reading up on those concepts in great detail before jumping in.

Please npm responsibly.


Everything in node is just slapped together with no rhyme or reason. npm module quality is piss-poor, even in the flagship Express web server, where nothing works out of the box. God help you if you find a bug in a node core library; the typical reaction is that developers now depend on the buggy behavior, and as a result it won't be fixed. I've returned to sane programming languages after using node for years, and I don't miss node's async nonsense, where you can't perform a computation without halting all users of your application, unless you perform major programming gymnastics.


That comment reminds me of those infomercials where people can't open a carton of milk without pouring it all over themselves, so they need some newfangled gadget for $19.95 to help them do it. Node has many issues, but for me it's just really easy to use, maybe because I only use it in a minimal sense, at a similar scale to Lambda, as you can see in my sebringj github repos.


It would be nice if npm gave you warnings at acceptance time, so you had the chance to clean things up (build tests, etc.), along with a recommended standard way to build packages. Then, if developers were lazy and let the warnings pass regardless, installing the package would give you a warning: "this asshole put in a shitty module", or something more PC.


You say you are over node, but I don't think you are. You need to let go and move on.


Grunt, Gulp and Webpack are for building front-end assets. They are all built using Node.js, but their complexity has to do with managing a complex front-end code base and then optimizing it for the browser.

If they weren't built with Node, then it would be Ruby, or Java, or Python, or even Make.

It is a shitty shitty problem that people are still trying to figure out what the best way is.


I still don't understand why people couldn't have just learned Make, instead of reimplementing it badly two or three times in every different special-snowflake language.


The same reason a dog licks his balls: because he can.


At least it's not maven.


Maven is consistent, fast, and a de-facto standard. I can't say any of that about any Node.js-based "build" tool.

Maven has faults too. But it would be a mistake to ignore its strengths just because it is ugly.


I fight with maven every day. I was just being snarky, the fact is maven can be abused in the same way anything can. Gulp gets insane when people start writing opaque custom tasks. Maven gets insane when people write opaque plugins. Nothing is perfect. Although people tell me Gradle is...


Maven is as close to perfect as it gets, IME. Certainly a lot more so than Gradle. Insane plugins are a problem, but since plugins are first-class code, you can solve that the same way you'd solve insanity in your actual codebase.


Oh man, Gradle isn't perfect? Crap. I was holding out hope.


It's telling that none of the posts defending Node.js are talking about its technical merits. They're all saying:

1. Attacks on people--you're being too negative, you're saying this to feel superior.

2. Choosing Node.js is a tradeoff! We can't really say what you get in exchange for using this crappy ecosystem, but "tradeoff" sounds good even if you're trading using a reasonable ecosystem for nothing.

If you really think Node.js isn't a flaming pile of crap, I challenge you to come up with something it does that isn't done far better in another ecosystem.


I've been working on a project that uses node for the better part of the year. I'm in charge of most of its architecture and design.

After all these months, my conclusion is exactly the same as yours: there's nothing that can be done in Node that I can't do better on another platform/ecosystem.

Sure, it's unique (event loop) and has a lot of good things (simplicity), but for anything serious, it's lacking real advantages.

Most of my issues with it come from javascript itself and the absence of proper tooling. I see no advantage in choosing this platform given its level of immaturity.

Oh, for those wondering: Node was chosen by a person caught up in the hype (my boss, a friend and mentor who can make mistakes like anyone), who hadn't done any real-world programming in it (only hello-world stuff).


> Most of my issues with it come from javascript itself, and the absence of proper tooling.

Does anything in ES2017, FlowType, or TypeScript resolve your issues with Javascript itself? If not, what are the three biggest?

> absence of proper tooling

What do you mean by proper tooling? What are you comparing it to? I think you might find that many of the other modern options also lack tooling in terms of debugging, IDE support, etc.


Right; but C# does everything Node.JS does easier and faster (even the event loop, if you like, although it also has first-class support for threading), and has excellent IDEs, debuggers, documentation generators, etc.


Ah yes, C# has awesome tooling. I thought you were comparing to something like Go.


Yours is not a technical argument either -- you're just pre-emptively claiming that no counterargument can be valid.


No, a technical counterargument could be valid if Node had any technical merit.

I'm claiming that no valid technical counterargument exists. If you disagree, it should be very easy for you to present a valid technical counterargument instead of trying to offload the burden of proof onto me.


geofft's reply to your grandparent comment is a valid counterargument IMO.

I noticed your reply, which is basically "nobody should be using an event loop". Whether you like it or not, that's how 99.99% of the GUI software in existence works (Win32, Cocoa, whatever). Node allows people to leverage their existing mental model into server apps.


But UIs have to be async if they want to be responsive, while the HTTP request-response cycle (which is what Node is predominantly used for) is completely synchronous.

Sure, things like WebSockets are changing this, but you don't really need an event loop if you have proper threads or a multi-process server (you know, like Node clustering, which you tend to need anyway once you outgrow the limits and pains of a single event loop).


Global event loops are fundamentally a bad abstraction for a parallel (not just concurrent) system.

It makes sense to leverage existing mental models in a lot of cases, but when you're moving to a new platform, you have to learn some new mental models because your existing mental model fundamentally doesn't work in the new system. Global event loops don't work, at all, in a system that actually needs to be parallel (i.e. most server systems).

Giving up on writing software that scales even to a basic level just so you can reuse your existing mental model isn't a good tradeoff.


The burden of proof is on you. What exactly is wrong with Node.js? It is a network server. It is programmed in a popular, expressive, dynamic language that a lot of coders already know. It has a great, well thought out base library. It does concurrency and asynchronous interactions really well. Since it is programmed in the same language as the client side, it allows code reuse between the server and the client. It has a vibrant ecosystem around it with a library for every possible need. It has a large community of developers.

So, if you want to claim that Node is bad, you need to come up with reasons why it is bad.


It lacks a coherent type system, its threading model is awful, and its security is fundamentally broken.

Your move.


To add to my other comment: the technical criticisms of Node have already been stated and were widely known. So if you're going to argue that using Node in a server environment is a technically sound choice, it's up to you now to respond to the criticisms.

And let's be clear: these are technical criticisms and if you could respond to them it would be with a technical defense of Node. Continuing to attack critics of Node or claim that we haven't already met our technical burden of proof only goes to show that your defense of Node is bankrupt from a technical perspective.


> I challenge you to come up with something it does that isn't done far better in another ecosystem.

A single, standard event loop that literally every library in the ecosystem uses so you don't have to worry about it.

The other options are compiled languages (Windows message pump, Qt or GTK+ event loop, etc.), which aren't what people seem to want for Node's use case, or things like Twisted. For all of those options other than maybe Windows, you don't get to assume that the entire ecosystem speaks the same event loop protocol. And I don't think any of these options do it far better than Node does. Twisted is pretty good, probably a little better, but Node is just fine too.

Note that I am not a Node.js programmer. I can kind of work with existing apps in Node because I can kind of write JS sometimes (I'm really more of a C person). I just know what other technologies are good at.


> A single, standard event loop that literally every library in the ecosystem uses so you don't have to worry about it.

Tcl? Erlang/OTP?


> A single, standard event loop that literally every library in the ecosystem uses so you don't have to worry about it.

Using an event loop with all the other threading models available out there is like bringing a knife to a gunfight. I could maybe argue that JavaScript's event loop isn't as good as other event loops, but while actor models exist, I'm not sure why we'd even talk about event loops.


Actor models don't solve the problem of getting a good-sized ecosystem going. If you have production-ready actor-model libraries for all the random things I might want to connect to, great, but I'm not currently aware of such things existing. (Maybe that's a failure of listening on my part!) If you want to develop that ecosystem, awesome, and I totally support that.

But I would much rather bring a knife to a gunfight than some totally awesome, straight-from-the-laboratory gun that's never been tested in combat.


> If you have production-ready actor-model libraries for all the random things I might want to connect to, great, but I'm not currently aware of such things existing.

If you have production-ready event loop libraries for all the random things I might want to connect to, great, but I'm not currently aware of such things existing.

Part of the problem with the Node ecosystem is that the bar for "production ready" is very, very low. For example, literally nothing in NPM is production ready where security matters, because the package system is fundamentally insecure. left-pad broke builds everywhere--what if instead of failing to download, it introduced a backdoor? And even if you assume all the randos who maintain NPM packages are trustworthy, there's no signature verification on code, so any of them could have their accounts compromised and there would be no way to tell. This is the code you want me to run on my server? Really?

Node has a large ecosystem, but the subset of Node's ecosystem which is actually quality software is quite small.

> But I would much rather bring a knife to a gunfight than some totally awesome, straight-from-the-laboratory gun that's never been tested in combat.

Me too, but I'm exceedingly surprised to hear you quote this as a positive of Node.


I am not really familiar with how to deploy Node in production, having never done so, but aren't there relatively easy ways to mirror NPM locally? Or can't you just check node_modules into version control (possibly a separate branch or something)?

Honestly if I had to deploy Node in production I'd be inclined to see if I can just use those Node libraries that are packaged for Debian. I do the same thing for deploying Python, Perl, C, etc. in production.


> I am not really familiar with how to deploy Node in production, having never done so,

Maybe don't form uneducated opinions on things you don't have any experience with, then.


I phrased it that way to be polite, but if you're not interested: I know that these tools exist. I was giving you an opportunity to save face for your uneducated and factually incorrect claims. The left-pad incident broke CI tools that intentionally referenced public NPM, not competent internal deployments. It broke incompetent internal deployments, yes, but every other language ecosystem can be deployed incompetently.

If you're relying on the availability and correctness of a public service for deployments, whether or not it's signed or allows packages to be removed, you're doing it wrong. This is as true for NPM as it is for Debian.


> I phrased it that way to be polite,

Unless you were just outright lying, not having ever deployed Node in production isn't just being polite, it's literally admitting you have no practical knowledge of the technology you're defending.

I can see some value in the feedback of someone who has only worked with Node 3 months or so--newcomer reactions give some impression of the learning curve of the technology, which matters. But you've done what, read a blog post? Your opinion on this subject is worthless.

> I know that these tools exist.

As do I, and what's more, I've used those tools.

> I was giving you an opportunity to save face for your uneducated and factually incorrect claims.

Which ones were those exactly?

> The left-pad incident broke CI tools that intentionally referenced public NPM, not competent internal deployments. It broke incompetent internal deployments, yes, but every other language ecosystem can be deployed incompetently.

Yes, you can use npm shrinkwrap (and I do), but let's be clear: that means you get to devote perhaps a week out of every year to updating everything manually. "Competent" deployments with NPM involve enough of a pain in the ass that, after a few years of doing this, I'd actually prefer to manually import dependencies. But that's not really a viable option.

If you're going to claim this is just incompetence, I am inclined to agree with you. But that means that an inordinate portion of the Node community is incompetent, so I'm not sure this can be represented as a defense of Node.

> If you're relying on the availability and correctness of a public service for deployments, whether or not it's signed or allows packages to be removed, you're doing it wrong.

If that's the case, then I don't know of ANYONE who is using NPM correctly, because to avoid relying on whether or not things are signed, you'd have to audit literally everything pulled down by NPM, which even for simple projects can be half a million lines of code. Shrinkwrap means you only have to do it once, but that's still more time than anyone I know of has.


None of this is different in any other language, including C. If you run a C app that uses outside libraries, or a Python app that uses outside libraries, or anything else, you either devote well more than a week out of every year to updating libraries, or everything is super old and frozen and impossible to change, or everything risks breaking when you upgrade your OS distribution (or other source of packages). That's how production software works.

If you want to claim that the state-of-the-art in every single environment isn't production-ready, you're using the term "production-ready" in a very different way from its common meaning.

Can you present an ecosystem that does this better? All I've heard you claim is that Node is bad, in ways that are not specific to Node.


> None of this is different in any other language, including C.

Really? Because NPM is the only dependency system I know of that doesn't verify packages in any way.

> If you run a C app that uses outside libraries, or a Python app that uses outside libraries, or anything else, you either devote well more than a week out of every year to updating libraries, or everything is super old and frozen and impossible to change, or everything risks breaking when you upgrade your OS distribution (or other source of packages).

This hasn't been my experience. Unlike the Node community, the C community tends to feel strongly about reverse compatibility. The Python community has become a bit more diluted in recent years, but as long as you stick with mature libraries, upgrades are usually no more than updating some version numbers in the `requirements.txt` and running `pip install -r`. In cases where Python packages break dependencies, they're usually pretty decent about giving deprecation warnings.

In NPM, mature libraries break backwards compatibility without warning all the time, sometimes without even a version number change (by virtue of the dependency's dependencies).

Can you point me to an example of a mature library in Python breaking like Babel did when left-pad happened?

> Can you present an ecosystem that does this better? All I've heard you claim is that Node is bad, in ways that are not specific to Node.

The ways I mentioned are absolutely specific to Node.


> Really? Because NPM is the only dependency system I know of that doesn't verify packages in any way.

npm verifies the SSL cert on its connection to the registry. pip does the same thing. Neither verifies signatures on the packages, or anything else.

Both of which are better than C build scripts that involve downloading tarballs from SourceForge and running `make install`.

> This hasn't been my experience.

Then you're not running C in production, sorry. Or you're ignoring the friendly build engineers and distro packagers (like myself) who spend their days solving these issues so you don't have to notice. Which is great, we solve these problems so you don't have to, but you really need to defer to the people with expertise.

> Can you point me to an example of a mature library in Python breaking like Babel did when left-pad happened?

Sure. Babel didn't break when left-pad happened. A certain way of installing Babel did. I have no idea how you keep conflating a method of installing packages (that isn't even what serious production users use) with code. So, "all of them."

I can give you countless examples of mature libraries and software packages in C, Python, or many other languages where `./build.sh` or its equivalent, which wgets some crap from SourceForge, stopped working in the same way. But if you're doing `./build.sh` as part of your regular production workflow, you know that it's going to break and that you have better options.


> npm verifies the SSL cert on its connection to the registry. pip does the same thing. Neither verifies signatures on the packages, or anything else.

The top result from Google: https://pypi.python.org/security

> Both of which are better than C build scripts that involve downloading tarballs from SourceForge and running `make install`.

True, but I thought you were the one who wanted to limit our conversation to competent installations?

> Sure. Babel didn't break when left-pad happened. A certain way of installing Babel did. I have no idea how you keep conflating a method of installing packages (that isn't even what serious production users use) with code. So, "all of them."

Really? So how do you install Babel the first time in order to shrinkwrap it?

> I can give you countless examples of mature libraries and software packages in C, Python, or many other languages where `./build.sh` or equivalent, that wgets some crap from SourceForge, stopped working in the same way.

Okay, give me countless examples.


> The top result from Google: https://pypi.python.org/security

I'm not sure what this has to do with anything. Yes, you may sign your uploads. Nobody verifies them on download.

Since you seem to be a fan of Google, try Googling 'pip signature verification' and reading the results. Here's one place to start: https://github.com/pypa/twine/issues/157

> Really? So how do you install Babel the first time in order to shrinkwrap it?

The same way you install anything else from any other ecosystem? The packages have to be up and online when you initially retrieve them, yes. I have no idea how you think that's NPM-specific. If you would like to download some code, you have to have somewhere to download it from.


There are some impressive libraries in the Node ecosystem. As an example, passportjs gives you support for a lot of weirdly custom OAuth implementations, Kerberos, etc.


Passport JS is horrible. Sure it's easy to use, but it's not secure, which is kind of the entire point.

There are other libraries that are interesting, but that's more a function of necessity than of it being a good thing--people write JavaScript because it's the only thing that runs on the browser, and then people port it onto Node. And it's worth noting that a lot of these have been held back significantly by being on Node. Take Babel for example; it's useful and interesting, but the code is a mess because of callbacks, which makes it hard to modify, and it was taken down by the left-pad fiasco a few months back (which broke thousands and thousands of builds worldwide). Some interesting things have been built on Node, but that's despite Node, not because of Node, and they would be more reliable if they weren't built on Node.


Yeah, that's fair. I can't say I've used it beyond prototyping, for which it is very easy.

I came across it vs. doing something similar in C#, which required one library per vendor (Quickbooks OAuth, Google OAuth etc), and when I last used them I ran into issues where they conflicted so I like the combined interface approach.


It's not really new though, see for example Omniauth for Ruby.


Could you tell me more about Passport not being secure? I've been meaning to use it in an upcoming Node project, so I'd really love to know why I shouldn't use it, and what I should be using instead!


JavaScript is missing some fundamental cryptographic primitives that make it impossible to write secure software with JS, most notably it lacks any cryptographically secure random number generator.

Additionally, package installs are insecure, so unless you have time to audit the entire `node_modules/` you can't actually guarantee that the code you want to run is the code you have installed. Additionally, because JS doesn't have any namespace restrictions, a vulnerability in any package allows access to the whole codebase.

For a survey of problems with JS (none of which have been fixed since this was written), take a look here: https://www.grc.com/sn/files/jgc-javascript-security.pdf
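The "no namespace restrictions" point above can be sketched in a few lines. Everything loaded into a Node process shares one realm, so a compromised dependency can silently rewire behavior the rest of the codebase relies on. The "malicious module" below is a hypothetical stand-in for a deep transitive dependency, not a real package:

```javascript
// Hypothetical malicious dependency: patches a shared built-in prototype.
function maliciousModule() {
  const originalIncludes = Array.prototype.includes;
  Array.prototype.includes = function (x) {
    if (x === 'admin') return true; // lie about one specific value
    return originalIncludes.call(this, x); // pass everything else through
  };
}

maliciousModule(); // imagine this running at require() time

// Unrelated application code elsewhere in the codebase is now affected:
const allowedRoles = ['viewer'];
console.log(allowedRoles.includes('admin')); // true: the check is subverted
```

Nothing in the language fences `maliciousModule` off from `allowedRoles`; every package can reach every shared object.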


Thanks for the info!

Do popular Ruby and Python solutions suffer from the same problem?


  ispositivenumber.js
Wait, there's also

  is-positive-number.js
So confused


The JavaScript community really needs to get together and fix it so only one version of any possible library is published.


> isn't a flaming pile of crap

The hyperbole in every criticism of Node must be there for a reason. I think it is more a reflection of the state of mind of the critic than of the language itself.

I would posit that many have dedicated huge amounts of time to a new or less-popular language that may not be around in 5 years.

JS is a safe, long-term bet. Its tentacles have made it into every platform and use case out there. Backend, frontend, embedded, mobile, desktop, etc.

I'm curious, what does your vision of the future of web and mobile development look like?

Which language do you wish was in browsers instead of JS?

How would you objectively rank what makes a "good" language and a "crap" language, and how would you test these hypotheses?


> The hyperbole in every criticism of Node must be there for a reason. I think it is more a reflection on the state of mind of the critic rather than the language.

"It's telling that none of the posts defending Node.js are talking about it's technical merits. They're all saying:

1. Attacks on people--you're being too negative, you're saying this to feel superior."

> JS is a safe, long-term bet. It's tentacles have made it into every platform and use case out there. Backend, frontend, embedded, mobile, desktop, etc.

By that argument, Java is a better language than JavaScript.

There are a lot of established languages that aren't going anywhere. And certainly with WebAssembly in the works, I'm not as confident that JS will be popular in two decades as you are.


Agree that WebAssembly is the big one to watch.

On the prior point, from my experience there is a big confirmation bias at work and I think these discussions would go better if there was some disclosure of how invested each person was in their own language of choice. E.g. if you realised you could ship features faster in language X, how much wasted time would you have invested in language Y.

When I was working all Scala, I was always looking to confirm my decision by reading positive reviews, reading about the switchers from Ruby, because I was spending huge amounts of time learning it. This was right up until the point that I switched to Node and could ship features 10x faster.

I think people are always happy to read articles like this and see people switching away from Node if they are working in something trendy like Go. It feels good. But at some level there is a fear that the thousands of hours you spend on learning the intricacies of a language may be wasted.


Again, why not just stick to the technical merits here? Why accuse Node critics of trend-chasing?

I'm not a trend-chaser. Python has been around since 1991 and is my language of choice for most things. I like Elixir, but I wouldn't even consider using it at work, specifically because it could just be a fad. I'll keep an eye on it and maybe in 5-10 years consider looking for Elixir jobs if it's still growing.

There is a ton of technical experience which goes into me saying that Node is a flaming piece of crap. It's not just shipping features faster, it's how stable those features are, how many bugs you get, how secure your system is, how performant/scalable, etc. I have worked with Python and Node for years, every workday. Before that I worked in C#, Java, Ruby, Clojure--and it's exactly the experience of chasing a few trends versus using some old solid languages that makes me want to stick with tried-and-true tools.

Your post has a sense of "everything is relative, there's no such thing as one language being better" to it, but I don't think that's true at all. I think we can look at the technical merits of languages, compare those to the problems we have, and see which ones are better. And that's important because when we have the choice, we can choose to use the better tools.


JS is not a safe long term bet if you want to build reliable systems. There are better languages out there for this purpose. Just because JS is widespread does not mean it is good.


The pot is calling the kettle black.

It's always funny to read something like this coming from proponents of Go. Now that's a language that threatens to single-handedly set us back 30 years. A typed language invented in 2009 lacking generics and algebraic data types, and being proud of it (or at least its users being proud): that's pretty much at the exact same level as JS "reinventing" callback-based async programming in 2009. And its users were similarly proud of this in 2010-2011.

At least Brendan Eich is modest enough to apologise for JavaScript's flaws and push as much as possible for fixes without breaking backward compatibility on the web.


How can one language not even in the top 10 bring us back 30 years? Has it been made compulsory for developers to use?


Not OP, and I don't have an opinion on Go, but I think that's why he used the words "threatens to".


People are already asking for the (err, res) crap in other languages too on Twitter.


There is an easy sense of superiority that comes with derision of "X" and the authoritative sounding romanticization of idealized "Y" seemingly adds weight to the argument. Clearly these "Y" people are smarter and better than all those "X" people.

Look at all this horrible code! So sad that all these people are not as smart as me. Look at this horrible language! So sad the people that created it are not as intelligent as me!

Really? There is no more "problem" with Node.js than there is a "problem" with any other platform. There is no more problem with JavaScript/ES-(name your flavor) than there is with any programming language. Different languages are different. Different platforms are different. Of course every system has its own problems. Sometimes people who appreciate them call these "tradeoffs" or the superior types call them idiotic.

The cliché of Hacker News haters is really, really, really getting old. So here are some things that are actually good.

As much of a hot steaming pile of code as it is, Babel as an idea (AKA transpiling one language to another) is pretty cool. Of course you can do this elsewhere, but it features prominently in the JS community, leading to an interesting result: the language and its features become configurable, easy to adapt, and free to change and evolve over time to suit your liking. This is interesting!

Finally, as opposed to what others may have said about the community being childish, I have found the opposite. I find it to be very welcoming, supportive, friendly, and honestly creative. Of course there are lots of negatives, lots of horrible code, lots of mistakes happening. But what is missed in all of this? There's A LOT of stuff happening that is good, even great! It's beautiful chaos! So go on hating, but I see lots of great stuff out there. As one great systems and iOS developer told me the other day, "Have you tried Express? It's awesome!" HA yeah. But he just tried it, and loves it!

Oh but look at that callback, YUCK! C'mon


No, I think criticism is a good thing. Rants are good. They highlight issues with our current tools and over time that's what leads to progress. Using a plow isn't just different from digging in the dirt with a stick. It's better.

For instance, nulls are clearly a problem. After countless rants we're seeing languages that have better ways to deal with missing or optional values enter the mainstream (Swift, Scala, Rust).

J2EE's original idea to configure a system through separate XML files was heavily criticised for being too bureaucratic. After countless rants we got configuration by convention, better defaults and annotations as part of the source language.

Of course progress is not a straight line and quite often it's not clear what is and isn't progress because there are many trade-offs. But where would we be without a constant process of criticising our tools?


Totally not against solid critique. But it becomes a culture, a signal of intellectual superiority. Which is too easy. You can be, and I have been highly productive in plain ole JavaScript. Same with Java, Same with Ruby, same with Python. Are the language debates irrelevant, absolutely not. But “Node.js is one of the worst things to happen to the software industry” Hey if it is then I fully submit that I'm just some stupid fool that should rage quit the internet and give up because.... Wait no that would be stupid.


Instead of responding to the fact that JavaScript:

1. Has no adequate threading model.

2. Has no reasonable type system.

3. Has better alternatives on the server side.

...you're making it personal, and about the people. Look at your post; you're just accusing the critics of Node of feeling "intellectually superior". That's just an ad hominem attack. And you think that critics want you to rage quit the internet and give up? No, that's not what critics of Node want. Or at least, that's not what I want.

What I want is for people to either acknowledge the problems with Node and fix them (unlikely), or start using better alternatives on the server side, and start investing more in WebAssembly and languages targeting WebAssembly, so we don't have to use JavaScript any more. I want this because as long as Node/JavaScript are around, I have to either deal with those horrible tools, or not take those jobs.

This isn't about feeling superior, it's about improving my life by using tools that aren't godawful.


1. For an async-first environment, which Node mostly is, the lack of threading is hardly an issue. The only threading Node should ever have is the web worker w/ shared memory & atomic ops that's already going into the Web Platform[1]. I say this is an advantage.

2. Typescript is great, it even has strict null checking via union types. Way better than the half-hearted attempt at optional typing you’ll find in Python 3. So if you think Python is a good ecosystem, then node is miles better. Sure, it’s not Scala, but on the other hand interop w/ Javascript and Web Platform is seamless. It’s a trade-off, but one that I think is very much worth taking. Also it compiles in seconds, not hours.

3. You can share code w/ client which is actually useful considering web app logic is moving client-side. See: React.

4. WebAssembly is nowhere near ready. No idea if using it/tooling/debugging will be any good for anything other than what Emscripten enables right now. It's a very risky investment to be considering that early.

Even if you think WebAssembly will pan out, the languages that will target it already target Javascript. It’s just alternative bytecode, really.

[1] http://tc39.github.io/ecmascript_sharedmem/shmem.html#Struct...


1. There are two built-in solutions to this (as well as a slew of other solutions in modules), the cluster module and the child process module: https://nodejs.org/api/cluster.html and https://nodejs.org/api/child_process.html

2. JS's duck typing is thought to be a "feature" to some and a "bug" to others, so if you think it's a bug, use TypeScript or Flow. They'll give you "reasonable type system[s]"

3. Nice opinion


> 1. There are two built-in solutions to this (as well as a slew of other solutions in modules), the cluster module and the child process module: https://nodejs.org/api/cluster.html and https://nodejs.org/api/child_process.html

Process spawning and threading are two different but related mechanisms. The former is much more expensive and hard to use with optimization techniques like pooling. It also forces message passing for IPC rather than shared memory.

> 2. JS's duck typing is thought to be a "feature" to some and a "bug" to others, so if you think it's a bug, use TypeScript or Flow. They'll give you "reasonable type system[s]"

Nothing that transcompiles into JavaScript can fix JavaScript's lack of a native integer.


Not that this should be necessary, but it'd actually be pretty simple for a compiler to fix #2: transform `const i: int = 3;` into `var i = new Int32Array([3]);` and change future references from `i` to `i[0]`.

No clue what the performance implications would be for really heavy uses of this, but it'd at least be a workable solution if you absolutely required a true integer type at run-time.
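Written out by hand, the suggested lowering does behave like a C int, including 32-bit wrap-around, because stores into an Int32Array coerce through ToInt32 (this is just a sketch of the idea above, not output of any real compiler):

```javascript
// Hand-applied version of the proposed transform:
// `const i: int = 3;`  =>  a one-element Int32Array, referenced as i[0].
var i = new Int32Array([3]);

i[0] = i[0] + 1;   // 4: ordinary integer arithmetic
i[0] = 2147483647; // INT32_MAX
i[0] = i[0] + 1;   // store coerces via ToInt32: wraps to -2147483648

console.log(i[0]); // -2147483648, like a C int
```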


asm.js has valid, correctly-behaving integers, and asm.js works correctly on non-asm.js-aware implementations, so the lack of a native integer doesn't seem like a problem.

The emitted code might have lots of "|0"s in it, but I don't see many people complaining about the beauty of their C compiler's generated object code and the lack of native anything-other-than-integers.
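For readers who haven't seen the idiom, `|0` (bitwise OR with zero) truncates any number to a signed 32-bit integer, which is how asm.js-style code annotates integer values. A small illustration (function name is mine):

```javascript
// asm.js-style integer addition: every value is forced through ToInt32.
function int32add(a, b) {
  a = a | 0;
  b = b | 0;
  return (a + b) | 0;
}

console.log(int32add(2.7, 3));        // 5: 2.7 is truncated to 2
console.log(int32add(2147483647, 1)); // -2147483648: 32-bit wrap-around
```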


> The emitted code might have lots of "|0"s in it, but I don't see many people complaining about the beauty of their C compiler's generated object code

What are the performance implications of that? It's basically adding an extra operation. Also, having been one to dive into intermediate assembly from time to time, there certainly are people who complain about the obtuseness of object code. Particularly since the generated object code is (out-of-order execution notwithstanding) how the CPU is going to actually execute the instructions.

> the lack of native anything-other-than-integers

Um, what? C has native floating-point types on every platform with an IEEE-754-compliant FPU, and probably on some that don't as well. Pointers are also not integers, though bad programmers frequently coerce them into such because most compilers will let them.


Regarding "|0", there are absolutely no performance implications. An asm.js optimizing compiler will recognize these code patterns and interpret them completely differently (it will not execute a bitwise OR operation; it treats |0 purely as a type hint).


If you assign only 31-bit integer values to a JS variable, it will be treated by V8 as an integer (a small int, or SMI).


> Nothing that transcompiles into JavaScript can fix JavaScript's lack of a native integer.

Really, who cares? Types are important for the programmer, not for the compiler. If I know what I'm doing, then the lack of ints doesn't concern me.


There are certain classes of programming and mathematics that really need integers such as cryptography and fields that would need big number libraries.


That's totally valid, but they shouldn't use Node.js if that's a mission critical part of the project. Nobody is saying that Node.js is the holy grail, but I see that expectations are that high.


>Has no reasonable type system

I'll bite on this. JavaScript's type system is without question my favorite part of it! JS's object model is incredibly powerful, which is why ES6 classes could be implemented basically as syntax sugar, and why so many languages can be transpiled to JavaScript.

I also don't think that the lack of threading is as much of a problem as you might think it is. For one, it means you can sidestep a lot of the data ordering problems that you get in a threaded environment.

There's no question that Javascript has shortcomings, but so many of the problems that happen in Javascript come from people treating it like Java, when it's a lot more like Lisp. In many ways, the architecture of Javascript revolves around a very simple, powerful object model in a way that's tough to parallel in most other languages.
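The "classes are sugar" claim above can be made concrete: an ES6 class and a plain constructor function with a prototype method produce the same object model (names below are illustrative):

```javascript
// ES6 class syntax...
class Greeter { greet() { return 'hi'; } }

// ...versus the pre-ES6 constructor/prototype pattern it desugars to.
function OldGreeter() {}
OldGreeter.prototype.greet = function () { return 'hi'; };

console.log(typeof Greeter); // 'function': a class is just a function
console.log(new Greeter().greet() === new OldGreeter().greet()); // true
console.log(Object.getPrototypeOf(new Greeter()) === Greeter.prototype); // true
```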


I'm guessing he meant a static type system. This is the formal meaning of "type" in programming language theory. What you describe is a system of runtime tags.


Completely agree.

I'd actually like to list the worst things (IMO) to happen to the software industry.

(1) Marketing of computers at boys resulting in a generation of girls being excluded. (2) Software patents suffocating innovators. (3) DMCA & international equivalents (4) Daylight savings time. (5) A generation of programmers being taught that anything other than "OOP best practice" was heresy

I'm sure there are plenty I'm missing.

Does one particular programming environment deserve to be listed with that stuff? Personally I think it's a pretty ludicrous suggestion.


> (4) Daylight savings time.

It always annoys me when programmers complain about time zones and daylight savings time because of the difficulty and inconvenience of handling them in software. It's getting things backwards. Software should model the (human) world* ; the human world shouldn't be changed to model what's most convenient for software.

* That doesn't mean you can't have a simpler model that you translate to/from.


> Software should model the (human) world* ; the human world shouldn't be changed to model what's most convenient for software.

Agreed, but my point is that daylight savings time is a huge human-world complexity that brings a marginal boost to some sectors. There are a ton of negative effects and I question whether there's a net benefit, even before all the software complexity needed to support it.

https://en.wikipedia.org/wiki/Daylight_saving_time#Dispute_o...


Oh man, "OOP as gospel"... If there's one thing that I like about Node.js, it's that it made me actually think about when I should take an object oriented approach vs a functional approach.


Daylight savings time? What's the connection to the software industry?


Dealing with time zones and daylight savings time (which countries have it and which don't) can be a very challenging and messy problem that is really easy to get wrong.

Here's a pretty good video from Computerphile on the subject: https://www.youtube.com/watch?v=-5wpm-gesOY


Won't most mainstream languages have a built in library for that sort of thing these days?


Most do, but the problem is less about the library and more about understanding how to use it.

For example, if you want an event to happen in 12 hours, and it happens to be 9pm the day before daylight savings time, do you schedule it using the timezone-aware API (so it happens at 9am, which is actually 13 hours away) or 8am (which is 12 hours, but non intuitive)? What about the event that's supposed to happen every 12 hours in perpetuity?

What happens when the user/device is mobile, and crossing timezones? Which times do you use?

What happens when you're scheduling something far in advance, and then the timezone definition itself changes (as happens a few times a year) between the time you scheduled the event, and the time something actually is supposed to happen? Does the event adjust for the new definition or follow original time?

Luckily for many problem domains, the details around this don't matter too much, but this is just the tip of the iceberg with timezone challenges.
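The 9pm example above can be checked with plain Date arithmetic. The dates below are the real US fall-back transition (America/New_York, Nov 6 2016), with UTC offsets written explicitly so the result doesn't depend on the host timezone:

```javascript
// 9pm the evening before fall-back, and wall-clock 9am the next morning.
const ninePm = new Date('2016-11-05T21:00:00-04:00'); // EDT
const nineAm = new Date('2016-11-06T09:00:00-05:00'); // EST, after the change

const elapsedHours = (nineAm - ninePm) / (60 * 60 * 1000);
console.log(elapsedHours); // 13: "12 hours later" on the wall clock is 13 real hours
```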


The libraries don't really help with some issues.

E.g. a rather trivial example of displaying a hourly graph/table of some measurement, including comparison with yesterday (because there are daily patterns of fluctuation).

DST means that some days have 23 hours and some days have 25 hours. The libraries will help you make the relevant calculations, but now you have to make a decision whether the comparison that you make with the "yesterday equivalent" of today's 11:00 is yesterday's 11:00 (23 hours ago) or yesterday's 10:00 (24 hours ago).

For another example, accounting of hours worked: you may have a person who has worked 25 hours in a single day, and such events break some systems.
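The "yesterday equivalent" ambiguity is easy to demonstrate on a 25-hour (fall-back) day. The America/New_York offsets for Nov 5-6 2016 are written explicitly below so the arithmetic is host-independent:

```javascript
const today11 = new Date('2016-11-06T11:00:00-05:00');     // 11:00 EST
const yesterday11 = new Date('2016-11-05T11:00:00-04:00'); // 11:00 EDT

// Wall-clock "yesterday at 11:00" is 25 real hours ago, not 24:
console.log((today11 - yesterday11) / 3600000); // 25

// While exactly 24 hours ago was 12:00 on yesterday's wall clock:
const minus24h = new Date(today11.getTime() - 24 * 3600000);
console.log(minus24h.toISOString()); // 2016-11-05T16:00:00.000Z, i.e. 12:00 EDT
```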


They do. The problem is that the rules of DST change over time (sometimes at the very last minute https://github.com/eggert/tz/commit/868f3528a9fd60491439ce45...) and can lead to all sorts of date math bugs when comparing timestamps across timezones.


A lot of them are... not great. Moment.js makes it somewhat bearable in JS, but the native stuff is hot garbage.


Spring forward, fall back!


But not every country or even every part of every country...


DST adds complexity to the already ugly mess timezones are. Timezones are literally driven by real-world politics, and they can be as messy as only humans can invent.

See also: http://naggum.no/lugm-time.html


As well as adding to the already-confusing mess of timezones and date calculations, daylight savings can change frequently, and sometimes at very short notice.

This year, Azerbaijan decided to cancel DST, and agreed the cancellation just 10 days before the scheduled clock change [1]. Egypt cancelled DST with even less notice - just 3 days! [2]

[1] http://www.timeanddate.com/news/time/no-dst-azerbaijan.html

[2] https://www.washingtonpost.com/news/worldviews/wp/2016/07/06...


Hah! You've had a very lucky programming career indeed if you can ask that question in all seriousness.


It has to be accounted for in time and date calculations and that poses a new set of challenges.

It is also just one of many geographical, cultural, and temporal challenges that need to be addressed by any business system relying on accurate international dates and timing.


Time and scheduling are a bit tricky, to say the least. It would help if there were some reasonable assumptions that you can make - for example, a day containing 24 hours. DST makes such assumptions not true.


Wow! You haven't run into DST issues in software yet?! How I envy you! If you work in the industry, you have a lot of fun unintuitive but sort of neat bugs to look forward to!


+1 daylight savings time -- it's behind countless bugs. You can add weird and split time zones to that, too


null?


Yes. Null belongs on any list of things the software industry got wrong.


Should be the first one.


> Rants are good.

Rants get me down. It really is possible to be constructively critical. This author failed.


A good rant should be like political satire, a biting critique using absurdism to highlight the truth of the matter.

A bad rant is usually just impotent rage without wit.


Yeah, but think about how infrequently political satire is actually good and you'll see where some of us are coming from on this.


Have you read the whole page? The subdomain alone is "harmful". The page is a compilation of good and BS critiques of every language.


Have to agree (although I'm one to rant)


Rants that don't propose any alternative solutions have no redeeming value. Their only reason for being is to make their authors wrongly feel smart or skilled.


He proposed a list of languages. And really, there are a lot of good languages to use on the server. JS was never a good language; we use it on the client side just because it's the only language browsers know.


He doesn't explain why, other than ranting about the callback model (which has been in our desktops forever; most event systems rely on callbacks). The quote in the post also contains misleading statements: Node.js does scale in load and performance, which is one reason people use it instead of Python (which was one of the quote's suggestions).


That was not one of his suggestions, unless you count Stackless Python. It was an example (i.e. Twisted in Python) in support of his point that callbacks are a bad way to structure concurrent programming. His suggestions were Erlang and Go, which are arguably better approaches than Node.js in a purely technical dimension.

What his rant misses is that most technical decisions aren't made on purely technical merit for a host of different reasons.


>His suggestions were Erlang and Go which are arguably better approaches in a purely technical dimension to NodeJS.

I would say that it is based on a subset of the technical dimension. Erlang and Go might have nicer ways to handle concurrency flows, but if they don't have library X to communicate with backend component Y, then there is a technical reason not to use them.


There is value added in identifying and clarifying a problem. It's partly how we, as a group, find solutions to bigger problems.


Identifying a problem is separate from coming up with a solution for all but the most trivial of problems.


No, J2EE's original idea was to separate wheat from chaff, i.e. good programmers from mediocre ones. Good programmers would then work on containers and mediocre ones would write standardized apps to deploy into those containers, with containers being easily interchangeable. In this worldview it's irrelevant whether those standardized apps are developed with lots of XML, or convention over configuration or whatever, because mediocre programmers won't complain.

Now as much as this worldview is flawed, this is just one of the manifestations of an attempt to make software development easier, which is, of course, a noble goal that even Node.js aspires to.


> For instance, nulls are clearly a problem. After countless rants we're seeing languages that have better ways to deal with missing or optional values enter the mainstream (Swift, Scala, Rust).

No, they are not. They are a state of an object. The real problem is a lack of documentation.

One thing I've seen coming up, at least in some code I've looked at, is the @Nullable annotation, and I've not seen any complaints about null from those who use it.

The idea is to always document when a variable or type can be null. This way you know, with 100% certainty (given a strict type system adherence) that you're not going to erroneously see a null object where it isn't expected.
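A minimal Java sketch of that convention (the `@Nullable` here is a local stand-in, not the real JSR-305 or Checker Framework annotation, and `findUser` is a made-up example):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NullableDemo {
    // Minimal stand-in for a real @Nullable annotation; in practice you'd
    // use one from JSR-305 or the Checker Framework and run a static checker.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD})
    @interface Nullable {}

    // The annotation documents that null is an expected return value,
    // so callers (and tooling) know a check is required at the call site.
    static @Nullable String findUser(int id) {
        return id == 42 ? "alice" : null;
    }

    public static void main(String[] args) {
        String user = findUser(7);
        System.out.println(user == null ? "not found" : user); // prints "not found"
    }
}
```

With a checker enforcing the annotation, any un-annotated type is implicitly non-null, which is exactly the "100% certainty" property described above.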

From this, you have a huge benefit: a performant error state or a better method of representing an issue without throwing an exception.

Riddle me this, how would a Map be implemented if Null was stripped from Java? Should it throw an exception if a key is not found? If that is the case, then you would always need to check for this exception, otherwise you're prone to bugs. You've also increased the execution time of your code, in some cases by a huge performance hit [0], and made it more cumbersome.

This is why I think rants are a big problem. They don't describe the opposition to a statement accurately. They are one person's opinions, and after reading them, if it all sounds nice, you're usually willing to take it all as fact. "Hey, they're writing an article! They must be smarter than me. I'll take their word for it." As a result, our communities don't really evolve; we just spew memes back and forth dictating what someone else thought.

What really matters, in every sense, is reading a discussion had between two experts in opposing camps. That way you can see both sides of the arguments, their weak points, and make an educated choice on what side to agree with.

That being said, what do you think of my points? Can you address them? My opinion is that NPEs can be easily avoided by using very simple measures AND null more closely resembles most problems while being more performant and less cumbersome than throwing exceptions willy-nilly. I'd love to hear how you feel about this.

Also...

> J2EE's original idea to configure a system through separate XML files was heavily criticised for being too bureaucratic. After countless rants we got configuration by convention, better defaults and annotations as part of the source language.

I'd say that is a good example of discussion, brainstorming, and coming up with a much more accurate representation of the ideas we are trying to convey (the use of annotations to configure methods).

[0] - http://stackoverflow.com/a/299315


The problem with null as implemented is that it type-checks as a value of any type. This means that anywhere you have a value of some type, it may be a valid value of that type, or it may be null. If you think about it, this is very odd. What if the number 42 were considered a valid value of every type, such that it were usually necessary to check whether you actually have 42 before using a value? That seems ridiculous, but it's almost exactly how null works.
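A small Java illustration of that point (the method names are made up):

```java
public class NullDemo {
    // Trusts the caller: throws NullPointerException if s is null.
    static int unsafeLength(String s) {
        return s.length();
    }

    // Defensively re-checks a condition the type system cannot express.
    static int safeLength(String s) {
        return s == null ? 0 : s.length();
    }

    public static void main(String[] args) {
        String s = null;   // null type-checks as a String...
        Integer n = null;  // ...and as an Integer, and as any other reference type
        System.out.println(safeLength(s)); // prints 0
    }
}
```

Every method taking a reference faces the same choice between `unsafeLength` and `safeLength`, because the signature alone can't rule null out.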

I'm someone who uses @Nullable and complains about null, so now you've met one of us! The problem with @Nullable is that it's mostly useful when all your code uses it, but the compiler will not force you to do so. It is a Sisyphean task. (But at least Java has @Nullable - most languages with null don't have anything like that.)

In a parallel universe version of Java with no nulls, a Map implementation would return Optional.<Foo>empty(). This seems similar to null, but with uglier syntax, but critically, it is not of type Foo. In order to get a valid Foo, for instance to pass to a method that takes one, you must unwrap the Optional you have and if it is empty you simply can't call that method with a Foo because you don't have one. The advantage here is that this method taking a Foo now knows for sure it has one and does not need to bother checking that assumption. How pleasant!
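A sketch of roughly that shape using today's `java.util.Optional` (the `find` helper is hypothetical):

```java
import java.util.Map;
import java.util.Optional;

public class OptionalLookup {
    // Hypothetical null-free lookup: absence lives in the return type,
    // not as a null smuggled in where a String is expected.
    static Optional<String> find(Map<String, String> map, String key) {
        return Optional.ofNullable(map.get(key));
    }

    public static void main(String[] args) {
        Map<String, String> map = Map.of("a", "1");
        // The caller must unwrap before using the value, so any method
        // that takes a plain String never needs to check for null.
        System.out.println(find(map, "a").orElse("missing")); // prints "1"
        System.out.println(find(map, "b").orElse("missing")); // prints "missing"
    }
}
```

This is only an approximation of the parallel universe, of course: in real Java, null can still sneak in elsewhere, which is the point of the complaint.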


> Riddle me this, how would a Map be implemented if Null was stripped from Java?

Obviously you would use an Optional<T> (for a Map<K,T>)

See https://docs.oracle.com/javase/8/docs/api/java/util/Optional...


Which would have been a great idea if it had been in Java 1.0 :-(

Alas, the horse has left the barn.

Much like immutability: if it had been the default, with a "DANGER: mutation ahead" keyword required otherwise, that would arguably have been good. But it's too late, now. (Java) Beans, beans, the magical fruit...


Yeah, it's a bummer dealing with backwards compatibility. Don't get me started on guava Optional vs JDK8 Optional.

But at least you can stick to just using Optional and no nulls in new code you write, and only using nullables to communicate with existing libraries.


I mean when people say Nulls are bad I don't think they necessarily mean they are categorically bad, only that allowing every single object in a system to have a null value without any supporting language features requires a lot of discipline. I don't know anyone that's used something like Maybe[String] and felt it was categorically no better than 'null' values, but on some level they are both features that express the same thing. Just one of them is more coherent and targeted.

When people say that null was a billion dollar mistake I don't think they were referring to any possible implementation of a value that indicates nonexistence.


> Riddle me this, how would a Map be implemented if Null was stripped from Java?

Optionals.


> Riddle me this, how would a Map be implemented if Null was stripped from Java?

  map.getOrThrow(key)
  map.getOrDefault(key, defaultValue)


So I can either throw an exception and take a speed hit in my performance critical code or I can remove a possible value from my map?


It's not "removing a possible value from your map", it's having your type system explicitly check that you're handling the edge case at compile time. This works in practice in a wide variety of languages.


Maps should do whatever ArrayList does when your index is out of bounds. And so long as null is a value that could be in the collection, lookup should definitely not return null.

If a lookup fails, it is often a bug or a violation of a constraint that you shouldn't be handling. Throw an exception. If a lookup is expected to fail in normal execution, you can use an Optional to mash together a check and lookup in a typesafe way. Lookup failures should not be allowed to propagate as nulls only to be caught as NullPointerExceptions in disparate parts of the code.


Tony Hoare, the computer scientist who introduced null, called it a billion dollar mistake. He's pretty accomplished in his field, and there are better options than null for signifying that a Map doesn't have a particular value, like Option types.


No, I think the real issue here is people taking languages personally, when there really shouldn't be any reason to. It's very interesting that people form such a personal bond with programming languages, to the point where they will refuse to see how they could be perceived as bad, or cumbersome.

Pretty much everyone does it, myself included, and it's one of the more problematic things when it comes to actually discussing languages. People just won't accept that their "baby" (or pet, as was mentioned below) might just be bad.

While elitism isn't great, not all criticism and ranting is done from a "We're/I'm better than you" perspective and most rants aren't directed at people. The people using language X might feel they are, but that's simply their own insecurity showing. I think a good portion of defensive posts show this fairly clearly.

In the end, is it really that surprising that some of the newer ideas are better than old ideas? Why is it only "fair" to say that a later version of X is better than an earlier version of X, but not that Y can be better than X?


Great post; completely agree. I feel a lot of us associate "things" like programming languages with our identity. So if our identity is attacked, we take it personally.

Generally you'd think new ideas would win out over old ideas. But you hope the new at least learns from the old. No need to repeat mistakes. This is the criticism I usually see from the BSDs towards Linux or from programmers towards Golang. Not that I disagree/agree, just an observation.


I absolutely agree that lots of new stuff is not necessarily better than the old. Particularly when they end up more complicated than what preceded them. I think maybe I overstated the issue of time (what came before, etc.). I should've written it more generally: "Is it inconceivable that one idea/concept/product/language might be better than another?"


> It's very interesting that people form such a personal bond with programming languages, to the point where they will refuse to see how they could be perceived as bad, or cumbersome.

From what I've seen and personally experienced, this happens most often when someone is only strong in that one language. If you've only ever used that language to build applications and someone comes along telling you it sucks, it feels like a personal attack because it would mean you've been doing things wrong or your code/applications suck. It's part of an identity. As people learn new languages and actually gain professional experience with them, that original language is no longer what defines them as a programmer so they no longer need to defend its legitimacy.


> Really? There is no more "problem" with Node.js than there is a "problem" with any other platform.

While I agree in principle, I've been bitten by some incredibly bad node packaging architectural decisions. Just yesterday I was attempting to install tfx-cli in order to CICD our build scripts to our build server - it went as you would expect with npm: MAX_PATH issues. The git clean then failed on all subsequent builds. A problem became unsolvable by virtue of the platform that was used to write it - had tfx-cli been written in almost anything else, this would not have been a problem (I say "almost" because I'm egregiously assuming that at least one other package manager is also broken in this way).

So there's an example of node-specific problem and there doesn't seem to be any impetus surrounding solving it.

> Oh but look at that callback YUCK! Cmon

In defense of Node, it now supports ES6: callback hell is self-imposed for all new code.


> So there's an example of node-specific problem and there doesn't seem to be any impetus surrounding solving it.

??? There's so little impetus around solving it that both sides (npm and Microsoft) have put in significant effort to do so - npm 3 reduced the chance of having a very long path, and preview builds of the .NET file API are removing the MAX_PATH limit.

https://github.com/Microsoft/nodejstools/issues/69

https://github.com/Microsoft/nodejs-guidelines/blob/master/w...


Fair enough, I'd strike-through that quip in my comment if I could. Still, this problem needs a bold/breaking fix. I most likely didn't run into this with my dev-testing because I am on a filesystem with reparse support.


Isn't the max path issue an operating system bug exposed because the tool you were using was designed to run on an OS without that bug?


You can't store an arbitrarily long path regardless of OS. EXT4 only pushes the value to 4096 - meaning that encountering this bug on Linux is only a matter of time. The unreasonably small value on Windows merely resulted in this boundary condition failing sooner.

Code merely pushes electrons around, meaning that it has constraints firmly rooted in physics. So no: NPM runs in a universe (regardless of OS) and fails to account for the constraints of that universe by falling into unbounded recursion far too easily. This could have all been avoided by correctly applying graph theory and/or prior wisdom in regard to dynamically loading modules - which has existed for decades. Now we're forced to kludge it with reparse/symlink and that only works on file systems with these features.


are you implying that design decision is good?


There is nothing wrong with using callbacks. Or promises. Or async/await. Or any other abstraction that makes sense for you.


This is a "writing code"-centric point of view. From the maintenance perspective "you" aren't the only person for whom the tradeoffs of callbacks/promises/async-await matter. It also matters to the people who have to come along years later and maintain the software.

So really it's about what abstraction makes sense for the maintainers as well. Not just you.


> people who have to come along years later and maintain

The problem is that there is no way to know years in advance what makes sense for those future people. All you have is the current standard at your org and/or common sense. Do what they say.

When you choose what the standard is, select whichever makes more sense at the moment. That is kind of my original point.


You may not know their own personal preference when writing code. But you can know what will make their job easier. Strong typing for instance will make refactoring easier. Good perf tooling will make analyzing performance easier and so on.


> There is an easy sense of superiority that comes with derision of "X"

There's also an easy sense of superiority which comes from being actually superior.

> There is no more "problem" with Node.js than there is a "problem" with any other platform.

Well, yes, there are problems with Node.js, to include the concurrency model and the language — as indicated in the article.

> Different languages are different. Different platforms are different. Of course every system has its own problems.

Yes, every system has its own problems, but some systems have fewer or better problems than others, while others have more or worse.

There are continua of languages, platforms & OSes. JavaScript-the-language is a bad language. It's not as bad as INTERCAL, but it's worse than C or Lisp. Node.js-the-platform is a bad platform. It's probably not as bad as the Java platform, but it's worse than Go. POSIX is a bad OS. Not nearly as bad as Windows or Mac OS Classic, but worse than Plan 9.

One has a very limited number of hours in one's life. Why waste those hours in pursuit of anything other than excellence? Node.js isn't excellent; why spend any time on it?


I agree with the general sentiment, maybe not with all the choices of what's better or worse. And this is where the trouble starts: What is opinion and what are facts?

I also agree with your conclusion that wasting precious lifetime on shi^Wnon-excellence is undesirable. However, given the first problem I mentioned, one can easily waste just as much time trying to figure out which things are the excellent ones to choose.


There are certainly languages that are better than others. Some of them excel in a particular domain, others do well in other domains. But in general you can tell if a programming language pushes you to write good code or bad code.


Not every tool is the right tool for the job, and finding the right tool for the job is part of doing the job correctly.

The thing that always bugged me about node is the argument that using the same tools on the server as in the browser is somehow an advantage. That idea is not a given.


> The thing that always bugged me about node is the argument that using the same tools on the server as in the browser is somehow an advantage. That idea is not a given.

Not a given, sure. But there's an argument to be made there, beyond the scope of mere programming, as a web-focused company can benefit from having a dev team that is versed in both frontend and backend.


> Not every tool is the right tool for the job

There are not only two jobs, frontend and backend. Saying language X is good/bad for "backend"...


You are oversimplifying the situation. I used to be a C++ backend developer, and that was our stack because we needed to return the response as fast as possible.

Nowadays I work as a backend developer using Node.js and C#. I wouldn't use either of them for the former job, and there are good reasons why I wouldn't use C++ for my current job. So there is such a thing as the right tool for the job, even within one part of a web platform.


There's more to computing than just websites.


Sure, but name any specific computing related thing that doesn't categorically reside within either frontend or backend. And don't say Javascript.


What is a "computing related thing"? Do transistors count? If yes, aren't they both front- and backend? What about threads? Don't they belong to both, the frontend and the backend of "computing related things"?

I'm not trying to be obtuse here, I genuinely don't understand your request.


What would you call data science/computational data stuff? Not backend, because you often sit and view the results in your R Studio console or whatever while working interactively. Not frontend because you are often directly mutating data structures and storage.


I'm working on an OCR engine right now. We intend to use it to capture data using a mobile phone, but also to verify captured data on our backend server. So is this backend or frontend code?


What about games? Lots of them are frontend and backend at the same time.


Interesting things aren't necessarily good. JavaScript is very difficult to read and maintain, there are dozens of dialects and versions, npm is a complete mess, it's hopelessly single threaded, and we've arguably hit a performance wall optimizing JavaScript in the general case. It is a necessary evil.

If you're going to address the "haters", actually address real complaints rather than make very vague comments about beauty and forum culture.


No. Some languages and platforms are just plain bad. Not everything is equal. Not everyone gets a medal.

Liking any particular language doesn't make anyone a bad or stupid person though. Nor does preferring another make you better or smarter.


The cliché of Hacker News haters is really really really getting old.

I don't think your rebuttal of a criticism as exhibiting a hater culture is any less clichéd.

Until the disgustingly contrived artificial hype, and the regular poorly-founded misguided hype, are all derided too, the haters will persist to balance the equation.

Or, to rephrase my point in a business-friendly way: gartner_hype_cycle.jpg


Node is very similar to continuation passing style which is used primarily as an intermediate representation in compilers. It's typically considered poor for production code because it's very error prone.


So for example, the Javascript rules are no more problematic than the casting rules in other languages, they're just different?


No, like my cat likes to snuggle but sheds and my dog likes to play but poops too much. Either way they both deserve a bowl of water and some food at the end of the day. You can get all pedantic and say cats are better or dogs are better but guess what. I don't care, I just like my pets.


> I don't care, I just like my pets.

Their main purpose is entertainment.

Useful tamed animals like cows or horses are compared on much more practical indicators, and critique of a racing horse that "it's too slow and sometimes throws the racer off on curves" is reasonable.

That's why your pet metaphor wasn't applicable, I think. Also, it involves emotions in a misleading way. Programming languages won't feel bad when you don't use them.


It's not all gonna be gold.


If you're trying to pull a sled through the arctic, then you probably want huskies, not chihuahuas.


You don't build projects with your pets.


Hence why it was a metaphor.


A very bad metaphor. You shouldn't build a project for a company using a certain programming language just because you like that language. It's your responsibility to look for and provide the best possible solution to the client, and that means picking the best programming language for the job. Of course there can be exceptions when a tool/library/framework is already in use.


I think the point of the metaphor was that 'the best programming language for the job' is often rather subjective.


Oh? Could I subjectively suggest that Javascript would be appropriate for a system with hard real time requirements or formal verification requirements?

Sometimes using a band saw to conduct brain surgery is just completely wrong.


"often subjective" != "always subjective"

It's pretty clear that COBOL would be a bad choice to write a multiplayer online game, and that JavaScript would be a bad choice for an operating system.

As for the far far more common scenario (the one I had in mind) of choosing whether one of Ruby, Python or Node.js might be better for a certain project, the choice between the three is often going to be fairly subjective. You could solve a lot of problems in all three quite well. Trying to decide which one is objectively the "best" is often a waste of time that could be spent actually solving the problem at hand.


A fairly useless one at that. It plays on the idea that people don't consider technical merit when choosing programming languages and the metaphor did nothing but trivialize the idea of objectively comparing languages and their strong/weak points.


I agree that the metaphor was partially useless (perhaps poorly worded) - but it's not much better to think that the issue of choosing programming languages is purely objective. You could spend a very long time arguing whether Ruby or Python was better for a particular task, when the important thing is that either of them will do a relatively good job for the task at hand and be a better choice than COBOL.


The fact that two or more programming languages are particularly suited for a certain task has absolutely no bearing on the general usefulness of having conversations about what languages are good for. It has absolutely no relevance to state that; the discussion didn't become more or less valid by that assertion.

Discussions that are prolonged like that become long because people are not actually being _objective enough_.

If you had an objective discussion about Python and Ruby you would conclude that yes, both of these languages have very broad ecosystems and are particularly suited for scripting (and glue work). Python is better suited for scientific computing, etc.

Nothing about the scenario you speculate about invalidates good discussion about programming languages and I don't know what about the scenario makes you think it does.


> Nothing about the scenario you speculate about invalidates good discussion about programming languages and I don't know what about the scenario makes you think it does.

I agree - nothing invalidates good discussion on this topic. My point is that trying to decide the best programming language for a particular task (or trying to decide why one programming language is ill-suited) isn't something that always needs a perfectly objective answer.


> My point is that trying to decide the best programming language for a particular task (or trying to decide why one programming language is ill-suited) isn't something that always needs a perfectly objective answer.

Ok, I hear that. I agree that you have to find a local maximum to settle at in these things. It's about being reasonable and pragmatic, after all.


> I don't care, I just like my pets.

Sure, cats and dogs are pretty good pets, say cats are Go and dogs are Lisp. That'd make C a ferret and JavaScript an incontinent chimpanzee with rage issues which periodically lays waste to the house, destroying anything breakable and slaughtering everything living.


> Babel as an idea (AKA transpiling one language to another) is pretty cool

As a sidenote, Babel didn't invent compilers (yes, that is what it is called) or DSLs.


>Really? There is no more "problem" with Node.js than there is a "problem" with any other platform.

Javascript's type system really doesn't stand up well to scrutiny. That's more crucial than you'd think - the type system is the foundation the rest of the language is built upon.

I suspect this is partly why javascript cycles through technologies so often. The foundations are shaky.


It's dynamic typing with a prototype-based object model. This type system tends to be very good at iterating quickly and testing out many alternative approaches to solving a problem without committing to a data representation up-front. It tends to be pretty bad at communicating invariants to other programmers or nailing down APIs for other subsystems to build off.

Small wonder, then, that the Node.js ecosystem tends to be very good at iterating quickly and testing out many alternative approaches to solving a problem, and pretty bad at communicating invariants to other programmers or providing a stable API for other programs to build off. You tend to get what you ask for.


The problems with the JS type system have nothing to do with its dynamic typing or its prototype model.

JS has unfortunate implicit type coercions, bad scoping rules (not type system related but still crappy), and annoying tri-value silliness (NaNning all over).

I'm a big fan of prototype OO. JS not so much.


There exist strongly typed dynamic languages that are good at both.


Examples?

If you're about to say one of Smalltalk/Lisp/Python/Ruby, then no, they are good at prototyping but have all the pitfalls of Javascript when used on large teams. If you're about to say one of Java/Go, then no, you give up a lot of prototyping flexibility for their explicitness.


This is exactly the intention of optional/gradual type systems. You can prototype away just as freely as with a purely dynamic language. Then, when you have learned enough to be more definitive, you can start putting in types. Dart is a good exponent of this (I think). It's a fully dynamic language, but the compiler will catch the majority of static type errors (for types you define), and you can run it in checked mode during testing to catch other type errors. TypeScript also has optional types (without runtime checking), but when I tried it, it seemed to be harder to mix and match dynamic/static.


>Python/Ruby, then no, they are good at prototyping but have all the pitfalls of Javascript when used on large teams.

No, they absolutely don't.

>>> "1" + 1

Javascript: 11

Perl: 2

This kind of approach works OK for small programs, where the code that does these implicit type casts isn't too deeply buried.

When it's buried 3 layers down in a dependency used by a dependency used by a dependency, you want this to happen instead:

Python/Ruby: Type error


... And? That's simply "not having unpredictable type-juggling."

It does nothing to communicate invariants or guide API usage at design-time.

Consider PHP, which does have some of those bizarre and frustrating conversions, yet at the same time it also has stuff like type-hints and interfaces that can be used for design-time checking.


>... And? That's simply "not having unpredictable type-juggling."

Literally what this entire thread is all about.

Lots of languages have this issue and it manifests in the instability and indeterminacy of large scale applications in Perl and PHP much as it does in javascript.

Type hinting is, honestly, a pretty poor way of dealing with the problem of turning "1" + 1 into 2, but that's a separate issue.


While I think that JS's loose typing is a little annoying and certainly isn't what I'd build if I were designing the language from scratch, it just isn't much of a problem for me in practice. Type errors in general (either of the implicit coercion variety or the dynamic typing variety) aren't much of a problem. They rarely happen, and when they do, they usually take all of 5 minutes to track down and fix.

I find that the big benefit of types is in documentation. They are executable, explicit, compiler-checked documentation of the programmer's intended usage. Without this, it becomes much harder to read through and comprehend a large codebase that was mostly written by other people. It becomes even harder to change it.

I like Python, I like Javascript - but they have their uses, and those uses don't include large-scale mature software engineering. I actually argued otherwise for my first couple years at Google (where both are used extensively, though the JS is largely statically typed), but eventually saw the wisdom of more experienced engineers in this, and relegated my Python to exploratory coding.


> [type juggling is] Literally what this entire thread is all about.

No it isn't, that's just what you've been fixated on. Please re-read nostrademons' post, because you aren't actually addressing the issues they brought up, namely:

> It's dynamic typing with a prototype-based object model. This type system [...] tends to be pretty bad at communicating invariants to other programmers or nailing down APIs for other subsystems to build off.

While it's true that Python "fails nicer" than JS, it addresses neither of those two issues about preventing failures.


Java: 11

I guess companies should stop using Java for big enterprise applications with tons of dependencies. Or this isn't a problem because programmers usually know the basics of the language they are using. And if they don't, then that is the problem, not the language.

btw, Python: '1' * 3 -> '111'


   Main.java:17: error: incompatible types: String cannot be converted to int
    int z = x + y;

> btw, Python: '1' * 3 -> '111'

In theory I agree with this, but in practice I see it so rarely used that it barely matters.


Sorry, I meant "11" (String), same result as with JS.

see here: https://repl.it/Cqeq/0


The python example is not an issue; you're going from string to a longer string, not to an integer.


That's not true; you're going from a string _and_ an integer, because '*' is a binary operator. (And it isn't commutative, as shown here, which is another 'problem'.)

But that's not really the point, crdoconnor seems to think that languages should throw an error in cases like this, not just keep the type of the first argument.


Dart, TypeScript, Typed Racket, Julia?


>There exist strongly typed dynamic languages that are good at both.

What concrete examples were you thinking of? (You didn't mention any strongly-typed-dynamic languages in your replies to the others.)


I really don't think that people have been coming up with new frameworks because you can add [1] to 5 and get nonsense.

Javascript cycles through technologies quickly because web developers cycle through technologies quickly, and they do that because they often get to work on greenfield projects with minimal expectation that the code will still be around in five years. Check out all the different server-side frameworks and infrastructure components that have been in vogue with web developers over the last decade.

In comparison, people who aren't writing consumer-facing websites and webapps tend to have much more clearly-defined goals, longer lifespans of their apps, and don't often get the choice to use anything but industry standard technology. Because of the slower cycle, there's less opportunity to redesign frameworks and core technologies even when it's allowed.


>I really don't think that people have been coming up with new frameworks because you can add [1] to 5 and get nonsense.

There's plenty more nonsense where that came from. That's just an easy to understand, concrete and obviously stupid example.
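A few more entries from that collection, for anyone who hasn't seen them (all standard JavaScript coercion semantics, verifiable in any console):

```javascript
// Classic implicit-coercion surprises in JavaScript:
console.log([1] + 5);   // "15"  (array -> "1", then string concatenation)
console.log([] + {});   // "[object Object]"
console.log("5" - 1);   // 4    ("-" coerces the string to a number)
console.log("5" + 1);   // "51" ("+" coerces the number to a string)
console.log(null + 1);  // 1    (null coerces to 0)
```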

>Check out all the different server side frameworks and infrastructure components that have been in vogue with web developers over the last decade.

In Python it's mainly been Django and Flask for a decade. In Ruby it's mainly been Rails.

Why do they have sticking power? It's partly cultural but it's largely because they were built on far more stable foundations.


In web frameworks, over the past decade, we've had various PHP frameworks, Zope, Ruby on Rails, Django, a variety of microframeworks in various scripting languages, Play, node.js+express in a handful of configurations (though generally MVC-ish), and Phoenix. There's very likely more that I'm forgetting.

In deployment, we've gone from SFTP scripts to Fabric-like software to PaaS to configuration management software to Docker, and then it seems every 6 months there's a different preferred way to make Docker do something useful. And of course there's the eternal argument over whether to deploy to VPSes, cloud platforms, or bare-metal hardware.

The "hip" web companies at any given moment - the ones who need the latest and greatest components to impress investors and attract "rockstar" "10x" developers - aren't tied to any specific language. They'll pick whatever was the most popular at the time they needed to pick something. Hence, right now, you have a whole lot of companies building on Golang and node.js, where they'd've been building on RoR five years ago.


Great comment illustrating the big picture of the past few years. Now we just have to figure out which of those is the best for arbitrary use cases with easy maintenance, unlimited scaling, and high availability. Whoever figures that out will finally be "Right." ;)


Is it a tradeoff or idiocy (bad idea)? Clearly you are in the "bad idea" camp. No worries, you can use TypeScript. Enjoy! (edited for clarity)


    > with Babel you can use TypeScript
Be careful with throwing "idiocy" at others when it seems you yourself don't even know what you are talking about. To write TypeScript you use... TypeScript (its own compiler).


Thanks, the Babel thing was incorrect; TypeScript is separate.

To clarify, I'm certainly not implying this person is an idiot. I'm saying, they lean toward thinking that the typing system is more of a bad idea (idiocy) than a valid tradeoff.


Yes, that's accurate.

I don't think typescript is necessarily the answer though. You don't fix the problems caused by weak implicit type casts with static typing, you do it by turning off weak implicit type casting and raising a lot more type errors.


Try to add some counter-arguments instead of just emotions.


> Look at all this horrible code! So sad that all these people are not as smart as me. Look at this horrible language! So sad the people that created it are not as intelligent as me!

It's not about being smarter; smart people are a dime a dozen. It's about the fact that what happens in the industry happens to all of us. When a large chunk of the industry adopts callbacks as their threading model, that means I have to either work with their horrible threading models, or not take those jobs.

Don't psychoanalyze people who you disagree with; that's just an ad hominem attack.

> Really? There is no more "problem" with Node.js than there is a "problem" with any other platform. There is no more problem with JavaScript/ES-(name your flavor) than there is with any programming language. Different languages are different. Different platforms are different. Of course every system has its own problems. Sometimes people who appreciate them call these "tradeoffs" or the superior types call them idiotic.

This is the favored defense of people whose favored languages/tools are under attack. But this is absolutely not a tradeoff. A tradeoff is when you make a choice that has some downsides, but you get something for it.

With JavaScript in the browser, the tradeoff is clear--you use this shitty awful language and you get to run your code on the browser, because that's pretty much the only reasonable way to run your code in the browser right now. There are alternatives (TypeScript, CoffeeScript) but then you're limited to a smaller community with fewer resources.

But with JavaScript on the server, there's no tradeoff. You use this shitty awful language and you get... a shitty awful language. You don't get a reasonable threading model, you don't get a reasonable type system. You don't get anything you couldn't get from another language.

It's not a tradeoff, it's just a bad choice.

> As much of a pile of hot steaming code as it is, Babel as an idea (AKA transpiling one language to another) is pretty cool. Of course you can do this in other places but its featuring prominently in the JS community leading to an interesting result. The language and its features become configurable, easy to adapt and change and evolve over time and suit to your liking. This is interesting!

Interesting, yes, and useful. But there's really no reason this had to be written in JavaScript, and the code would likely be a lot less of a "pile of hot steaming code" if it were written in a more reasonable ecosystem.

> Of course there are lots of negatives, lots of horrible code, lots of mistakes happening.

The problem isn't that there is bad code, it's that there isn't any good code. If you needed to write a server-side program and you chose JavaScript, your code is bad and you should feel bad. It's literally impossible to write good server code in JavaScript because it doesn't provide adequate types or threading primitives. It would be different if there weren't alternatives (like in the browser), but there are alternatives which are better.


Creativity is awesome. But when someone asks you to deliver something, spending all day wading through "beautiful chaos" might not be the best approach. In that case boring is better than interesting, IMHO. The more attention can be devoted to the problem at hand, the higher the chance to get the solution right.


Your argument proves too much, i.e. that all languages / runtimes / frameworks are of equal quality and usefulness and all differences come down to individual taste – and that's obviously absurd.


I don't see anything wrong with Node's approach to concurrency. It uses IPC which is much more scalable than threads and mutexes. Also, you don't have to use callbacks anymore, now we have Promises and there are tons of libraries that allow you to do reactive programming so you can just wire-up streams of data together in complex sequences.

This article is just inflammatory and illogical.

I think Node.js is one of the best backend engines ever created - and I've programmed in everything, including AVR assembly, Python, PHP, C#, C/C++, Java, and many others. I like Node.js the most.

And yes, you're right, the Node.js community isn't a 'proud' community - We're more interested in constantly improving than sitting there being satisfied with ourselves whilst bashing other tools.

There is no perfect tool/stack; they all have pros and cons. It's all about personal preferences.


> I don't see anything wrong with Node's approach to concurrency.

to me, erlang's model seems to be the best one that is around. without a runtime scheduler, message boxes + selective receive etc., i don't really think it is possible to do justice to the whole csp thingy.

you can have approximations to it, but that doesn't go very far.

for example, in erlang, i can start processes etc. and have them run essentially

    while(1) {
        
    }
and nothing really bad happens to other concurrently running threads.


Well said.

To everyone responding positively to this article, enjoy your confirmation bias. Have a side of emotional reasoning while you're at it.


All the points in the first (Uriel) rant are attacking JavaScript itself, and most if not all those points are remedied by using ES2015 and FRP.

All the points in the second (Ryan) rant are attacking bloated/poorly-scoped software and abstractions, and none of the points (except mentioning $NODE_PATH) have any relevance at all to Node.js.

So really none of this is relevant to Node.js (or JavaScript) being inherently bad, it's just pointing out that people can use it for bad things, which is no different to any other language or runtime.


You are absolutely right; this is a click-bait post that should be downvoted.


"The async-style of programming is almost entirely inaccessible to my brain"

Proceeds to write article shitting on language which relies on this.


More like “the world's worst implementation of asynchronous programming is shitty and didn't learn from its predecessors”.


I am certainly willing to entertain the notion that async models do not map naturally to most people's brains, and that as a consequence the callback style of concurrency might just be really difficult to handle. On the other hand, the writer admits it may just be them not being able to understand it, which I think is also reasonable, given that many people have managed to do more complicated stuff in much less than 2 months.

This post is a land of contrasts, in short.


You can interpret the question of whether an abstraction “map naturally to most people's brains” in two different ways:

(0) Is it easy to understand how the abstraction works in principle?

(1) Is it easy to actually use the abstraction to build large robust systems?

Clearly, (0) is a necessary precondition for (1). But it's not sufficient. Pointer arithmetic is an example of an abstraction that's easy to understand “in principle”, yet hard to use right (for “most people”, anyway). I'm afraid the same is true of explicit continuation (“callback”) passing. Programming with tools that only work “in principle” is hell.


Not sure why this is getting downvoted. It's very sensible, and the poster provides a valid example.


"processes orchestrated with C. It’s a beautiful idea."

That's all well and good until programmers who don't know how to manage memory manually or use pointers properly create massive security holes and bugs in general. Which is, unfortunately, very, very often.

Apparently this guy disagrees and that's fine, but in my experience he's wrong to bash complexity and praise C. I've read many large C projects' code, and it's about as complicated / messy as it gets.

Some good points in the article other than that.


Here's the post by Ry that was supposedly deleted: http://tinyclouds.org/rant.html

The point I took away from the post is that all abstractions carry a cost, no matter how elegant. Not "Ryan Dahl hates Node.js". But sure.

Seems the author is intent on complaining and that's cool with me, but if you're doing Node then yeah don't let people like this get you down. Node's fast, Node's flexible and has so many users that virtually any abstraction you like is available through npm. Callbacks are part of core because it's the lowest common abstraction for Async. Node is not Python.

Tired devs can complain all they like, but that doesn't make them right.


>Tired devs can complain all they like, but that doesn't make them right.

But don't you think this is the very reason we need to worry about these issues? We developers are tired, and it's not right; this is something we all love doing. I want more people to love this industry, and we can't afford to throw simplicity in the garbage; we need to spend some effort making things easier for the people who will come after us.


If you're gonna fall for the cat-v bait and try to "make things right", then you pretty much can't use any language except minimal C or Go. Their page on Java (which people in this thread propose) is way longer.


JavaScript in Web browsers has a lot of sandboxing (i.e. babyproofing). Node, on the other hand, doesn't. If you are an experienced developer, chances are you won't do something lousy like coding by trial and error, playing with concepts you don't understand, and copying and pasting from StackOverflow.

If you follow the golden rule of not touching anything you don't understand, you should be in a safe spot. But that's not the culture in the node community. The node community is all about sharing lousy snippets of code without error handling, without input validation, and without any regard for non-functional requirements, in poorly written blogs and on npm. A community of excessive optimism and irrational risk taking.

Floating point numbers, for instance. Every number in JavaScript is a floating point number, and 99% of node developers don't understand a basic thing like how to compare 2 floating point numbers. And that's just the basics... let's not even discuss concurrency, parallelism, how to deal with files, memory, I/O...


Is this advocacy for safer development environments, or just mud slinging at the Node community? You called a critical security layer that protects literally billions of users every day "babyproofing". Not only JavaScript but most features of web browsers are sandboxed as well; is that babyproofing or smart (prudent?) design?

Infantilizing programming communities. I'm ready to move on from this drivel. People make choices based on their contexts and while some large amount of those contexts differ only slightly from my own, I find it impossibly difficult to pass judgement on those who have made different decisions than me.


I said very clearly that node.js is a serious technology, and so are web browsers. The problem is that the community at large and companies today have been influenced by a mindset that does work in sandboxed environments but not in actual server-side software.


This could be said about everything.

BUT, let's be real. Operations people love Node because I can run 100 containers of a node app behind a load balancer. If a single node app dies, the orchestration system restarts it. If developed well, you have a reporting/analytics/metrics platform and tracing tool that allows you to capture failures, fix them, and deploy new code.

I'd wager that for the majority of the world, no matter the language used, most people are implementing patterns/behaviors because that is what they need to do something. Very rarely do people create or invent something novel, so the level of developer ability you seem to expect doesn't need to be the standard, and that is ok.


So what do people use as a server these days?

All I want is to annotate some function calls with paths and have something handle http requests and mangle the input and output into json for me. Preferably in a boring safe language. Preferably in Java, the boringest safest language.

If I have to call a method on a provided object instead of returning in the name of async, great, that's fine too.


I use Scala with spray.io. Plain-old-code rather than annotations or config files (they've structured their API so that you can write a route definition that looks almost like a config file - see e.g. https://github.com/spray/spray/blob/release/1.3/examples/spr... - but it's all plain old code that you can refactor according to the normal rules of refactoring code). You can use callbacks if you really want but generally you just return Futures and compose them with for/yield, which is the sweet spot IMO: your code can look like:

    for {
      user <- loadUser(userId)
      bestFriendId = user.friendships.maxBy(_.strength).friendId
      bestFriend <- loadUser(bestFriendId)
      response = buildResponse(user, bestFriend)
    } yield response
so there's very little overhead to doing async calls (just the difference between "<-" and "="), but you do have visibility over which calls are async and which aren't when you're reading the code. (async/await are arguably even better in this specific case, but they're tied to async, whereas for/yield is generic functionality that you can also use to e.g. manage database transactions).

JSON-wise if you use spray-json-shapeless you get automatically derived json formats at compile time, so better performance than reflection-based approaches, O(1) code to support all your different data types, but if you accidentally try to include something that doesn't make sense to serialize (e.g. a file handle) then you'll get an error at compile time. Similarly, the output "adapter" is resolved implicitly at compile time, so once you've imported the adapter for spray-json and the spray-json-shapeless stuff you can do "complete(foo)" where foo is a Future[MyCaseClass] and it will work correctly, but if you try to complete with something that can't be converted to a valid HTTP response then you get the error at compile time.

(And if you're already using Java, it runs on the JVM, with seamless interop with Java libraries; you can run it as a servlet too, so you can migrate a web API one endpoint at a time (using servlet routing) if you like).


This looks surprisingly close to the hypothetical dream framework in my head. I'll definitely take a look into it.

I love the for/yield construct for handling boring concurrency.


To echo the sibling comments, use what's in Go's stdlib and you get trivial build and deployment. Spring Boot, Dropwizard have a steeper on-ramp if you're not up on modern Java dev, and the deployment is fractionally more involved, but they're both good options.


Take a look at spring boot - http://projects.spring.io/spring-boot/

It's incredibly easy these days to do what you're suggesting in Java.


And if you want to add icing to the cake, I suggest JHipster. It has made my life considerably easier with its AngularJS and Spring Boot / JSON API support.

I could create an e-commerce website, http://www.haggell.com, in the space of a month using JHipster & Angular.


I second this. Spring Boot is the greatest thing to happen to Java in a long time... Well, that and lambdas...


Sounds like a job for Dropwizard. http://www.dropwizard.io/


I think .Net Core is the new hotness, but I've enjoyed using the self-hosted Katana OWIN webserver components for the last couple of years. Slap something like WebAPI or Nancy on top of that, and it's really easy. Of course, Windows-only...


It's not Windows only any more though


True, but the OWIN stuff I've been using was (https://www.nuget.org/packages/Microsoft.Owin.Host.HttpListe...)


if it's not performance critical, I'd go with Python. If you need max performance, I guess Java or Go or something along those lines, though caching and other techniques can soften the blow of using a slower language quite a bit.


If you're going to take the bait of this article, you can't use Java either [1]

[1] http://harmful.cat-v.org/software/java


Spring Boot, Dropwizard, ASP, RoR


What those repeating "async is good" don't seem to get is that we have had better ways to do async than what Node.js offers for decades.

Better as in, all of: faster, easier to reason about, better implemented, more robust.

Erlang is but one example.


I don't understand why people think node doesn't scale or that you need to use something else when it's time to "get serious". I can't get into specifics, but I recently worked on an app for a rather large company that had 10s of millions of daily users. The app written in node needed 3x fewer cpus than a java app running in the same system and they were performing similar tasks with almost identical usage. The team maintaining the node app was also 4x smaller than the team maintaining the java app.

The main difference was that the node app was thoughtfully architected and written and the java app was not. It didn't have anything to do with language or tooling. It had to do with good software engineering.


I dislike shitty languages and successfully avoid using them myself, but I do have to be a bit happy with the NPM ecosystem teaching people to reuse more code.


I think NPM has finally shown that you can, in fact, have too much code reuse. The representative example for me is this actual isPositiveInteger module:

  var passAll = require('101/pass-all')
  var isPositive = require('is-positive')
  var isInteger = require('is-integer')
  module.exports = passAll(isPositive, isInteger)
You don't even want to know what the is-positive and is-integer modules look like.

Software pundits have been saying how modern programming is just wiring together existing components for years; the node ecosystem has taken that to its illogical extreme.


No discussion on Node.js and "overmodularisation" is complete without a mention of the left-pad debacle:

https://news.ycombinator.com/item?id=11348798


No matter how few dependencies you have, any of them can break you if your build system automatically moves you to an updated version without first testing it.


True.

And also, the more independent dependencies you have, the more that risk is compounded.


It seems to me that what other people call 'functions', npm calls 'modules'. There are so many 'modules' that are just a short function. Why not just make a 'standard library' with all these basic things in it?

One place I work at has a node app product with 500 (!!) modules like this, all of which have their own submodules. There are so many files that a server with a bog-standard 8GB ext4 / drive cannot handle both the installed modules and a replacement set during deployment, due to inode exhaustion (solution: a bigger ext4 filesystem, or use XFS).


There is a "true" module, which returns true.

https://www.npmjs.com/package/true

From the usage:

> Usage

> Simply require the true module. The export is a function which returns the Boolean value true:

    var t = require('./true'),
        myTrueValue = t();
    console.log(myTrueValue === true); // Logs 'true'


And it's not as easy as it seems: https://github.com/mde/true/issues/5




The thing is, nobody is forcing you to use excessively small modules.

If you use an excessive amount of small modules, that's an error of the developer, not an inherent flaw in the platform/ecosystem.


My app does not use small modules... but its dependencies do, and the dependencies' dependencies, so in the end my node_modules folder has hundreds of packages and everything compiles slow like molasses :(


Compiles?


Nobody is forcing me to use node. If I use it, is that also an error of the developer?


The problem is that the ecosystem encourages small modules, and so the vast majority of existing modules in the ecosystem are small modules, or else are bigger modules that use many small modules. Basically, the only way to avoid depending on something this silly in your real-world Node project is to not use anything from npm. At which point you pretty much lose all the advantages of Node.


Is this real or are you kidding?


Real. However, the same day the left-pad debacle stuff was on HN, this was changed to be self-contained: https://github.com/tjmehta/is-positive-integer/commits/maste...


I think the overall trend has been from no abstractions to bad abstractions. As a systems programmer, I like reading about the battle stories of e.g. console game developers from the 90s and earlier who practically wrote an OS, but I hope our current trajectory will eventually lead to good abstractions (in good languages).


The only way to avoid using shitty languages would be to stop programming altogether.

I have found in my experience a pattern that repeats itself.

1) I find a language (B) that I think is great because it solves X problem with the language (A) previously being used.

2) I use that new language long enough that eventually I learn its pitfalls and hate it.

3) Switch to language (C) because it fixes problems found in language (B).

Eventually this loops.


I'm sorry that's been your experience. I've been using Haskell for 4 years, Rust for 2, happy with both. Anything I don't like other people usually don't like and is fixable.


There are less shitty languages. I'd consider mine to be OCaml and Perl (and Rust is growing on me, but I might be in the first phase above).

I think, ultimately, this just boils down to “know a lot of tools and use the right tool for the job”.


I find that it doesn't loop for me, unless you count new versions of old languages (which fix old problems) same as old. The sequence is familiar, but in general, I find the tools that I use today better, in very specific ways, than tools that I've been using 5 years ago, which were better than those from 10 years ago etc.

So it's more of a spiral. A straight line would be better, but I'll take progress in whatever form.


For years I was stuck in a cycle: A->B->A->B ...

Where A was C++, and B was Python.


While there is no arguing the pitfalls of dealing with JavaScript, as a one-man team, building a product with the same language on the backend that I've already been using on the front end is awesome. The ability to fire up a Node.js project on a server is even easier than a PHP one, which is arguably why PHP took off in the first place. And like you mentioned, I think the NPM system is awesome: super easy to use, with a voracious community.

Sure, there are TONS of not-nice things about JS, but it's slowly evolving to be better (ES6 + Promises), and it's not going anywhere anytime soon.


"The worst thing" is a gross exaggeration. I've been guilty of these at times. It seems common. You have to be a very passionate person to care deeply enough to master programming. These are infuriating machines operating in the domain of discrete maths... one error and the entire building comes down.

That being said sometimes, when there's no one else you can kvetch with, a good rant is just what you need. Computers are stupid. Programming is horrible. Everything is terrible.

But Node.js is hardly the worst thing. Some perspective is required.


Well, I am by no means a great computer programmer or anything, so I can certainly accept other people saying node is crap, but as someone who has used node, my reasoning was simple: if you are going to use javascript in the browser, and pretty much everything about the "app" depends on what happens in the browser, then why use some other language on the server for models and controllers, with javascript in the views? Why not just make it javascript all the way down?

Sure, smarter people than me might know why I shouldn't use javascript for everything but for me I do not find it useful to try to figure out how to do so much work in Rails or Django AND THEN ALSO end up using javascript in the views every time anyway. I would rather skip using Rails or Django and just get to the fun part of figuring out what happens next when this button is clicked on the website. I don't think I am alone in that. Rails and Django were a means to an end but they set the bar too high in terms of "programming" and require a lot of programming knowledge that I don't actually need to know to do what I want to do now with node. Sure, my code might suck but unless you are working in some super professional environment with highly skilled managers, pretty much everyone's code sucks. It's about getting the functionality you want with the least amount of effort. Node makes that possible.


You do everything you can to use something other than JavaScript in the browser.

The list of good languages that target the browser is long; the list of bad languages better than JavaScript that target the browser is even longer.

I personally used WebSharper and it's excellent. All these sophisticated frameworks (thinking Angular foremost) have nothing on WebSharper.


> As soon as I wanted to write any non-trivial code to read stuff from a database and do something with it, I got stumped - I didn’t know how to proceed. I could write some code, but it would turn out very ugly. I couldn’t write code that was pleasing to read (and it certainly wasn’t pleasant to write).

When this was written he had a point, but with ES6 I don't think this is a problem anymore.

Accordingly, I don't think this stands

> You lose expressiveness

Arrow functions and destructuring cut down on boilerplate significantly.


As much as I love Node.js, I also love this post. It screams the ever-repeated viewpoints of the experienced and tired developer.

There isn't enough of this on HN.


Almost all pages of cat-v.org read like that. It's a good read. Most should be taken in a light hearted way - it's just software, after all. People unfortunately tend to get outraged by it. Also unfortunately, the original author, Uriel, passed away a few years ago. If you enjoy it, try taking a look at suckless.org

Most pages on cat-v (and suckless) tend to favor the Unix way: small connected components communicating through pipes. It seems sensible to me. It's comprehensible, and it's easy to break down and analyze stuff when it breaks. I never dabbled in Node.js - JS is not my favorite language - but it does look complex, with internals that are hard to understand. Maybe it's not.


What I find interesting is that everyone has rather hard feelings for any language but Python.

There seem to be only two things people complain about with Python: performance, and the two major versions out there.

Overall I have the feeling most devs think it's pretty okay.


Package management is another typical complaint. Admittedly it's a complaint in nearly every language that has packaging tools.


Great line: "The only thing that matters in software is the experience of the user."


Uh, it's an odd way to end a rant on software architecture. How about: "The only thing that matters in software is abstractions/interfaces". The end user only sees the top interface.


Some of the software I've had to maintain had the best UX but the shittiest code underneath.


I would consider that good news (if the problem domain is important enough) because that means that the only thing left to do is write a better implementation.


There is good meme potential in this..."The only thing that matters in software is ___________."

The only thing that matters in software is an MVP that doesn't suck so much that it can't be productized.


So I can accidentally undercharge users in an online shopping app, and that's ok?


The owner of the shop is arguably also a user, who would be having a not-so-great experience.


This is exactly how you get professionally marketed crap like Mongo, Node, Hadoop, you name it.

It is a fast-food way of building software - cheapest processed shit wrapped in a good SEO and naive user experience (quick and rewarding installation and effortless bootstrapping of a meaningless hello-world - I am so clever meme).

What matters in software in the long run, be it an application or even a programming language, is appropriate design decisions and a strong emphasis on design around protocols (via duck-typed interfaces), using a declarative (instead of imperative) mostly-functional paradigm, a bottom-up process with focus on modularity and code reuse, while sticking to the standard idioms and the principle of least astonishment. This is what is still being taught at MIT, and this is how the internet works as a whole.

99% of Node or PHP code is toxic amateurish crap, precisely due to the ignorant over-confidence of the coders and the lack of proper CS education about the right principles and programming paradigms, the importance of louse coupling and proper encapsulation, which leads to modular, share-nothing, mostly functional designs with an emphasis on immutability and persistence (that's why Clojure is so good - it has, like Erlang, been well-researched, with tons of good stuff borrowed from Common Lisp).

Erlang or Haskell or Scala or Swift or Golang are on the other side of the spectrum, which could be characterized by the discipline of sticking to the right principles and rigorous attention to detail (Haskell might be a bit too strict, but Erlang or Scala are just right).

BTW, these observations about the impact of a proper design, based on right principles, made almost 20 years ago, still hold - http://norvig.com/java-lisp.html Today we could state the same for Python 3.5+, which has finally evolved to be as carefully refined as the old-school Lisps, with the caveat that Python 3 is a prototyping language, while CL is both a prototyping and an implementation language.

No sane person should even touch PHP or JavaScript or any other crap with implicit coercions, error suppressions, etc., designed by amateurs (hello, MySQL, Mongo, Hadoop!), just as one avoids cheap processed junk food or drugs.


> ignorant over-confidence

Believe it or not, Node and PHP are the laughing stock among many Googlers and Amazonians I know. Not just the languages but the behavior of the communities. I think you just summarized the problem in a few words.


I'd really like to hear more about this louse coupling idea.


I try to understand why people would write JavaScript but I just can't. The only explanation to me is that they're not aware of what they're missing out on, or the deficiencies of their own platform, in a Dunning-Kruger sort of way.

I would be interested in hearing from some JavaScript developers about why they use JavaScript, but I have probably heard and refuted it before. Especially after Ryan Dahl disavowed Node, I think it's time to reconsider your viewpoint.


"The only thing that matters in software is the experience of the user." "if you add unnecessary hierarchies in your code directories, if you are doing anything beyond just solving the problem - you don’t understand how fucked the whole thing is."

Oh really? So making the code easily maintainable and easy to comprehend is just a waste of time and simply adds complexity? Comments like that make this hard to take seriously.


The article is not very relevant in 2016. Many of the arguments are no longer valid.


Quasi-related, one of my favorite programming language essays of all time: https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/

From what I've observed, I feel like Node is where hipsters who decided that PHP was "icky" ended up.


I would like to read your definition of "hipster". It's used for everything nowadays, mostly to negatively describe people someone doesn't like.


> There will come a point where the accumulated complexity of our existing systems is greater than the complexity of creating a new one. When that happens all of this shit will be trashed.

I think it applies to banking systems, politics, and more generally to anything that can be improved and where there is enough money to think about a re-design. At that point, yes, another life cycle will start.


Node.js solves the problem of "live webpages" with WebSockets and polling, chat apps, notifications, etc.

And that counts. A lot.


> Websockets and Polling

Those have nothing to do with Node, you can implement them with just about any server-side stack.


I feel like a large part of the negativity around Node.js comes from the callback hell and almost everything being async.

The solution to these two problems (an ingenious one, I think) is async/await (and there is an awesome polyfill on npm) - if you want, you can enclose your whole code in one async block and effectively make everything synchronous. Or you can use it to its full potential and remain synchronous in all the parts of the code that need it, while retaining the benefits of async (other parts of the code can run while the current one is waiting for a result).

The only caveat I've encountered (a manageable one, at least for me) is that you must keep in mind that the underlying code IS async, so you can only guarantee synchronous behaviour within a single asynced block of code - so sometimes, race conditions could happen between some parts of your program, but with proper scoping (and ideally, using pure functions), you can avoid it altogether.


NASA is using Node.js for its Mission Control Framework https://nasa.github.io/openmct/ https://github.com/nasa/openmct


It's a Non-technical argument in a technical discussion.

Is a framework necessarily better/good because it was picked by the SE team at a large company?


> better alternatives around with much more sound models and environments, Erlang and Go

Node.js makes it extremely fast to build robust, fully featured web apps. I'm currently using Go for the backend and it's a pain, since everything has to be written from scratch. I still enjoy it, since the language is simple.


I've done a fair amount of messing around with javascript.

1. Recompiled WebKit into JavaScript using emscripten/asm.js (webkit.js)

2. Created a desktop version of Node (tint2) using FFI calls for OSX/Windows

3. Reconstructed much of the Oracle o5login/tns/tti protocol for giggles (and if you want to see a very head-scratching way of packing 64-bit signed ints, look no further than 11g).

What I can say is it isn't perfect. But it has one gigantic advantage: you don't need to build or compile to run, yet you still get nearly the same performance as compiled languages.

It's a near-perfect prototyping language, and for most people's use (e.g., 99% of CRUD/general business humdrum logic) it's perfectly fine.

Just stay away from relying on too many npm modules.


I see a lot of desktop apps made with Electron/CEF using Node. Examples are Brackets and Steam.


Meh. It works. npm is okay if you exercise quality control over which packages you use and keep dependencies down. The language is decently productive for scripting, and everyone knows it, so it's easy to find maintainers. It performs well and can be multithreaded just fine by running many concurrent Node workers and using a good database and cache.

Go may be better, but it has a less rich ecosystem of turn-key solutions and easy integrations. Java is a decent language but has an outdated ecosystem. .NET going OSS is interesting.

Ultimately what you do with languages and tools is more important than which ones you use.


From Rob Pike quote at the end, after the Ryan Dahl quote: "... the UNIX/POSIX/Linux systems of today are messier, clumsier and more complex than the systems the original UNIX was designed to replace."

"Messier, clumsier and more complex" are adjectives that could describe almost all of today's software vis-à-vis software from the 1970s. This is not a criticism of today's software; it is just the evolution (or devolution) as it happened, an objective observation.

By and large, programmers do not attempt to make software cleaner, more efficient or less complex. Most do not spend all their time cleaning up messy code, fixing bugs, or trading usability or an abundance of "features" for efficiency. And almost none spend time removing code and reducing complexity.

They do the opposite: add features, pursue "ease of use" at the expense of common sense and incessantly generate and commit code believing that any decline in commits or "software updates" signals a project is "dead", not "modern" and probably in need of a "replacement". Again, I'm not critiquing this, I am just stating the facts. This is what they do.

Not sure about Pike, but the reason I think some older software is higher quality than most newer software is not because it was or is high quality in an objective sense. It's because today's software is so low in quality and in too many cases worse than yesterday's. Indeed, it's "messier, clumsier and more complex."

In an objective sense, 1970's UNIX is nothing to celebrate. But compared to the then-alternatives, what came afterwards, and what we use today, it can be held in high regard. It's only good in a relative sense. Everything else was and still is so bad. (Why is anyone's guess.)

Avoiding the hassles Dahl alludes to[1] brings me a certain feeling of satisfaction. My language of choice is Bourne shell. And if I am just working with text, such as reading and writing, I do not use a graphics layer - no X11.

The question is: Does Pike's comment apply to Plan 9?

1. Not to mention avoiding the needless complexity and obscurity of Microsoft Windows.


Facts are nouns; adjectives are subjective. A subjective analysis is hard to trust. I hate Java because I choose to do so. Java being a bad language cannot be proven by my personal opinion. People do amazing things with it: Minecraft was initially written in Java, Android apps run on the JVM, etc.

This article has more than 10 adjectives (many have adverbs attached to them or written like "... much more sane and reasonable language like Python ...") in its first five sentences. Thanks for the effort but no, I cannot trust any of it.


That's a weird metric. "Java is an abomination" uses nouns but is entirely subjective, "Java is class-based and garbage-collected" uses adjectives and is objectively true.


I'd try to save my argument here but I know I'll fail hard. Let's try :)

"Abomination" implies disgust, so your example can be paraphrased as "Java is disgusting.".

To fail harder, let's paraphrase further.

"Java programs are built with classes and collect their garbage."

At least I've tried. My own comment actually has adjectives too (like "subjective") but this is getting too meta.


One more thing: the f-word in critical posts about computer-science-related stuff is a bad abstraction.


I used "callbacks" when needed in Objective-C in my app days and got sh#t done just fine. I think the best thing to come from Node is the ability for a younger crowd to jump into Proton/Electron desktop apps without needing to get into SDL, Qt, or some platform-specific language like Objective-C or the .NET/C# family. Node might not be the final language to take us to that area, but it's a start, and a ton of awesome open-source apps came out of the scene quickly.


I think that bloated piece of software is one of the worst things to happen.



I only use JavaScript in the browser; there are much better alternatives outside of it.


This whole thread won't exist once async/await gets traction.


There will be new threads about how the streams and other standard classes dealing with IO need to be rewritten/expanded to work well with async. I've been using async/await for about a year now (in Node.js, much longer in C#) and took to wrapping them when needed; you find various levels of difficulty depending on how much eventing is involved. All the information hidden away in non-public properties/APIs doesn't help. MS had to go through and expand out all their stuff with Async methods. I wonder how long the Node.js maintainers will take.

Async/await, let, and TypeScript have made Node.js bearable for me but, TBH, I'd rather be working with Go or C#. Oh well, right tool for the job and all, which has a temporal component as well.


You don't need to wrap anything promise-based, and currently all major libs deliver promises.


He talks about events.


Async I/O -

CON: I hate having to set up callbacks (even with promise syntactic sugar), whether I need them or not. But of course I do need them when I am coding a UI.

PRO: the Promise.all trick is pretty useful if you have several queries that all need to be completed before you can continue with their results. I have some Java code that might benefit from this, since it does several queries to generate a PDF. (Then again, caching might make this moot - YMMV.)


I embrace change in life in general, and that view translates over to the complex ecosystems in technology. I embrace and accept the complexity of every single technology; honestly, it is quite amazing. It's good that we try to abstract many things away and make them simpler and less complex, but sometimes that inevitably leads to more complexity and more shit, and I like it; it's fun to be part of such a journey!


Node.js works great as a simple web or network container; if you're trying to create a complex enterprise app in Node, you're basically doing it wrong. It's simply difficult to design complex nested logic or algorithms in a clean manner. It's possible, but it's harder than in other languages.


J2EE is up there.. We have a generation of corporate Java programmers who love boilerplate code and obfuscation.

High-school teachers inflicting Java on teens is cringe-worthy. Many public schools only offered Java if you elected programming/CS (in high school). Only recently has Python made inroads.


Yeah. I hate all these great text editors and other cross-platform software that Node brought about.


4 years old?


"The only thing that matters in software is the experience of the user." "if you add unnecessary hierarchies in your code directories, if you are doing anything beyond just solving the problem - you don’t understand how fucked the whole thing is."

Oh really? So making the code easily maintainable and easy to understand is just a waste of time? Comments like that make it hard to take the rest of what he says seriously.


I rather like JavaScript. ES2015 anyway.


Uriel didn't quite explain why PHP is any better than Node.js. I'd even be interested to hear!


You're too late. He suicided.


Cannot access the site.


Can you make it good?


Unbelievable, so many upvotes for this completely idiotic statement.


Cannot agree more, but we still use it.


TL;DR:

>I hate almost all software.


So, this is totally an ad hominem but the OP webpage has a post category titled "Political Correctness" [1] with one post titled "Claims that sexisim drives girls away from Computer Science are feminist bullshit." [2] and one link to an external site titled "Sex Differences in Mathematical Aptitude - By La Griffe du Lion." (a.k.a. "Lion's Claw") [3] starting off with this bit, styled like a research paper abstract (you know, to make it look seintifikal):

Mathematics is a man's game. A gender gap appears early in life, blossoms with the onset of puberty and reaches full bloom by mid-adolescence. It indelibly shapes women's prospects for doing significant mathematics. In this account of cognitive sex differences, Prodigy shows how sex-differentiated ability in 15 year-olds accounts for the exiguous female representation at the highest levels of mathematical research. A female Fields Medalist is predicted to surface once every 103 years.

I stress again that this is essentially an ad hominem: I'm absolutely flagging up this dude as a sexist baboon with brains the size of a small, frozen pea. And because I have much better things to do with my poor feminine brain (say, finish off my MSc dissertation, on a novel grammar induction algorithm) I'm not reading a single word of the OP.

________________________________

[1] http://harmful.cat-v.org/political-correctness/

[2] http://harmful.cat-v.org/political-correctness/girls-in-CS

[3] http://www.lagriffedulion.f2s.com/math.htm


While I don't agree with this person's opinions at large, I do recognize that he has some points. Perhaps he may be a bit sexist, but so are most people, and feminists at large are extremely sexist in my opinion.

A quote from the second link:

"The whole issue is being overcompensated now. On some technical forum I’ve seen somebody ask for advice on whatever. Another user replied with a link to women.debian.org. What the fuck? Do we have “men.debian.org”? If you’re no different wrt. technical matters, then you need no different website. If you need different treatment, then don’t be surprised if you’re treated differently."

Couldn't agree more on that point. I do think we should treat each other with respect and integrity, but why treat people differently because of their gender? Can't we just treat everyone the same?

Honestly, that is not what society is doing, and especially not so in the IT sector. I think this is one of the points the OP is making with his post. Just because he has these views does not necessarily render his views on Node.js or whatever else nil.


> but why treat people differently because of their gender? Can't we just treat everyone the same?

This is one of those well-intentioned questions that makes a lot of hidden assumptions. Since straight white men founded this country, wrote the laws, and hold a disproportionate number of positions as presidents, congressmen, CEOs, etc, this means that the directive to "treat everyone the same" in practice becomes "treat everyone like a straight white man"

Men and women and non-binary folks have different needs and are oppressed to varying degrees. Trying to treat them all the same ignores these important differences. This post discusses how conventional city planning that treats everyone the same is less effective than bringing those differences to the forefront: http://www.citylab.com/commute/2013/09/how-design-city-women...


> This is one of those well-intentioned questions that makes a lot of hidden assumptions. Since straight white men founded this country, wrote the laws, and hold a disproportionate number of positions as presidents, congressmen, CEOs, etc, this means that the directive to "treat everyone the same" in practice becomes "treat everyone like a straight white man"

I am sorry, but the last part just doesn't make any sense. First off, I am probably not from the same country as you, and treating everyone according to the golden rule doesn't mean treating everyone like a straight white man. You are accusing me of making assumptions when the irony is that you are the one making assumptions.

I don't really believe that cities are generally designed for any specific gender, and I do not believe any gender is being oppressed in my country. But lastly, I don't see how this is relevant at all to this discussion; I was simply talking about treating everyone the same, and sure, if you can make data-supported general statements about the specific needs of men or women, go ahead and design a better city.


Umm, I read the article from the Lion's Claw, and it seems internally consistent.

It cites an actual study on mathematical prowess, takes its results, and uses mathematics to demonstrate a banal fact - that a small mean deviation in ability combined with significant competition yields unintuitively disproportional results. shrug

You may consider that the study was flawed (perhaps - I haven't particularly dug deep into it), or that he cherry-picked a particular study (something readily admitted in the text, as it describes how difficult it is to execute a study on mathematical ability). Simply declaring the OP "a sexist baboon", however, seems like an emotional outburst rather than a rational takedown.

Edit: I read the OP's article (his site was down when I wrote the comment), and it seems like a rant. Comparatively, the Lion's Claw article is cleanly and concisely presented, so I wouldn't really put the two together as an example of sexism.


I was going to write my usual long speech about how all humans are basically the same. But you know what? Nah. There will always be narrow-minded people. And to battle narrow-mindedness one should read books (good books[0], that is).

If dismissing someone's opinion not because of the opinion itself, but because its author linked, in an unrelated opinion piece, to a completely different person's opinion (edit: one that, on reading it, may even be satire) floats your boat, then we really have nothing to talk about.

Have fun in your academic life. (sincerely!)

[0]http://lkml.iu.edu/hypermail/linux/kernel/0707.2/4230.html

edit2: I'm sorry if i offended you. I don't know anything about you really. I was just going a bit into the extreme to make my point.


I completely agree with you. While I think it would be pandering to universalism to say that "anyone who bemoans 'political correctness' is a dolt", I could instead perhaps say: "I have yet to encounter anyone who bemoaned 'political correctness' who was not in fact simply concerned about losing the privilege they have to treat other people poorly without feeling the guilt they should be feeling for treating other people poorly."

As to the research itself, any self-respecting scientific study of gender's influence on anything needs to start with "Step One: in order to obtain a reasonable comparison, we need to dismantle the Patriarchy." It is pointless (and exhausting) to "engage and refute" any research on a point-by-point basis, when that research is so transparently driven by a need to prop up that very construct which deforms the research landscape. I think we can pretty easily categorically dismiss such research as grounded in an incorrect paradigm. (others may disagree, of course, such is the nature of paradigms. I merely hope we outlast them).

So yes: perhaps an ad hominem, but it is a worthy one. This person's opinions are indeed tainted by the (objective) fact that xie is literally a sexist baboon with brains the size of a small, frozen pea.


> "I have yet to encounter anyone who bemoaned 'political correctness' who were not in fact simply concerned about losing the privilege they have to treat other people poorly without feeling the guilt they should be feeling for treating other people poorly."

> As to the research itself, any self-respecting scientific study of gender's influence on anything needs to start with "Step One: in order to obtain a reasonable comparison, we need to dismantle the Patriarchy."

> This person's opinions are indeed tainted by the (objective) fact that xie is literally a sexist baboon with brains the size of a small, frozen pea.

Poe's Law applies here, because I honestly can't tell if you're serious or trolling.


That's really just the tip of the iceberg for cat-v, to be honest.

It is usually best to just avoid it in general.


Just to clarify for HN-ers who didn't click the link: the linked page was apparently written by some reddit user in /r/programming, then deleted there, and "is preserved here for posterity."

I don't know if that changes anything for you, just thought I'd clarify that.


> And because I have much better things to do with my poor feminine brain (say, finish off my MSc dissertation, on a novel grammar induction algorithm) I'm not reading a single word of the OP.

By refusing to engage with the substantive arguments made by both uriel & La Griffe du Lion and instead committing the logical fallacies of argumentum ad hominem and appeal to emotion, aren't you actually just making their point? Why not engage & refute instead?


Cause that is tiring. Not every battle is worth fighting.


This post is baity but the thread turned out to be rather good, so let's try turning off the flags on it. If it ends up a flamewar we'll have to put them back on, so if you comment here, please keep the discussion thoughtful!

Btw, the top bit of the article is an HN comment by uriel from 2012: https://news.ycombinator.com/item?id=4495305, and the other part had a major thread in 2011: https://news.ycombinator.com/item?id=3055154.


You can't flamewar with Uriel any more. He suicided.

Sodium pentothal bought online from China. Every time my Skype starts, it has "Hey Uriel, don't drink the wrong drink" in my recent messages. That crazy bastard.


Well... all the things like DBus and /usr/lib and Boost and ioctls and SMF and signals and volatile variables and prototypal inheritance and C99FEATURES_ and dpkg and autoconf that he's complaining about were originally conceived as a way to simplify something that their respective authors considered too complex in the first place. I'm not holding my breath that the next attempt to simplify all of those doesn't just add yet more complexity.

On the plus side, job security.


bit of an overstatement if you ask me


I'd just like to point out the linguistic paradox of the "one of the worst" syntagm: only one thing can be the worst, and "one of" implies a multitude of such things; if 20 things are the worst, then none of them is. In short, this is a lazy way for the writer to express his opinion while labeling it as fact.


I am interested: what are Node.js people's opinions about Django in 2016? I am a newbie and I chose Django for a somewhat serious personal project. Is it really bad to use Django for a new project in 2016?


Nothing wrong with Django, it's a perfectly decent web framework.

Not every system requires the async paradigm of Node in order to be useful.

For those that do, the ecosystem is more diverse than just Node anyway. On the Python side, for example, you can use Crossbar or Django-Channels. Personally, I prefer the former but the latter is also usable.


No. You got it right. Go ahead!



