I finally escaped Node (acco.io)
273 points by out_of_protocol on March 23, 2021 | 176 comments



We moved away from node as a backend technology for a bunch of reasons. Here's a list of the biggest pain points:

- Lack of a good standard API; compared to environments like Java, C# or Go, node's standard library is significantly sparser.

- The tendency toward small libraries/frameworks leads to a very large number of third-party dependencies, with all the problems attached: a bigger attack surface, licensing challenges, and the economic impossibility of vetting and reviewing every dependency.

- There's a tendency in the ecosystem to abandon projects rather soon (~1-2 years) and to keep changing things. Further, we have had several situations where maintainers did not respect semver; combined with npm's approach of pulling in new patch versions upon installation, this gave us too many builds that broke from one day to the next without any code changes. The state of documentation of a lot of projects is non-existent.

- Lack of multithreading. We have used all the options, including RPC implementations, but none of them come close to approaches like Java threads or goroutines, neither in performance nor in maintainability.

- Lack of typing. That's probably the biggest one. Yes, we use TypeScript, quite extensively even. But TypeScript brings its own problems. First, it's only declarative: if you have a `something: number`, there's no guarantee that it's actually a number at runtime, so a bug in a layer interacting with another system might only fail a couple of levels deeper. You hence end up with type checks in some places and still cannot really trust the types anyway (a minimal sketch of this is below). Second, TypeScript's tooling is slow and has some annoying quirks (e.g. aliases not being resolved upon compilation). Having aliases to shorten import paths is a big, big win, though. Third, the typing, given the complexity of JavaScript, can be confusing, sometimes even seemingly impossible to get right.
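
To make that first point concrete, here's a minimal TypeScript sketch of the gap between declared types and runtime values (the `Config` shape and `parseConfig` helper are made up for illustration):

  interface Config {
    port: number;
  }

  // TypeScript trusts this cast at compile time; nothing verifies it at runtime.
  const config = JSON.parse('{"port": "8080"}') as Config;
  console.log(typeof config.port); // "string", despite the declared `number`

  // So defensive runtime checks creep back in at the boundaries between systems:
  function parseConfig(raw: unknown): Config {
    const port = (raw as { port?: unknown }).port;
    if (typeof port !== "number") {
      throw new Error("config.port must be a number");
    }
    return { port };
  }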

Is node a bad technology? Not at all. I'd not choose it for enterprise, big or long-lived projects, though. It's a very good technology for a lot of things, especially smaller projects. We are building on Java + Spring Boot now.


To each their own. I was a Java engineer for most of my career since the early 00s, switched to Node a couple years ago (primarily running Apollo GraphQL server in Node), and I find myself so much more productive now. Addressing each of your points:

1. Yes, I think it's true that server-side Node isn't really usable without tight reliance on NPM. That said, I think NPM has improved by leaps and bounds over the past few years and it now makes it quite easy to have a stable and secure set of dependencies. Integrating our build with npm audit (and audit-resolver) and other tools like snyk, using package lock files, and keeping dependencies up-to-date on a regular schedule has worked very well for us.

2. Again, I think the NPM ecosystem has settled down in the past few years and I don't see as much churn. While there have been some issues with large projects (e.g. lodash and moment) going into hibernation, that's been fine for us.

3. I've found all of our uses for multithreading were better served by having an event system publish events that were then handled by serverless functions. The lack of multithreading, and the way Node manages concurrency, has been a godsend. Just check out the recent GitHub report where they were accidentally leaking information from other users into their sessions.

4. Typescript (combined with autogenerating typescript type files from GraphQL schema definitions) has been honestly heaven for us, and the benefits I've seen with the structural-based typing of TS made me realize the huge number of times I had to battle the nominal-based typing of Java and the immense pain that caused.


Thanks to you and the parent comment both for this rundown on the pros and cons, I found it helpful; and also, using a throwaway account to defend nodejs is, if intentional, a fantastically subtle joke.


Meh, check my post history. I created a throwaway account years ago to comment on some somewhat controversial topic back in the day. I just forgot to throw it away.


If you've come from java and you like node, maybe you should spend some time with the alternatives?

A big part of it depends on what your exact requirements are, but my experience was that node's problems didn't bite me until quite a while in.

1/2) My experience is that even the supported packages have had glaring holes where they don't in other languages. Just to give a quick example, I had a project that used node-cache-manager to implement a tiered cache. There was a bug (in the cache library with the most stars) just last year where the cached values in a memory cache were passed by reference as opposed to copied. That meant any mutation on them affected other fetches from the cache! That would never happen in Java. This particular bug took weeks to debug in production because values were being randomly mutated. After the fix, it also had different behaviour depending on whether the cache value was new or retrieved. So, two mutation bugs in the same cache codebase; see https://github.com/BryanDonovan/node-cache-manager/issues/13....

I'm not blaming the author, he's a really good guy. What I'm saying is that this is a wart in both the language and the library ecosystem - it's not unreasonable to expect a sensible caching library. (A minimal sketch of the failure mode is below.)
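
Roughly how that reference-sharing bug manifests, sketched with a hypothetical in-memory cache (this is not node-cache-manager's actual code):

  class NaiveMemoryCache<T> {
    private store = new Map<string, T>();

    set(key: string, value: T): void {
      this.store.set(key, value); // stores the reference, not a copy
    }

    get(key: string): T | undefined {
      return this.store.get(key); // hands the same reference to every caller
    }
  }

  const cache = new NaiveMemoryCache<{ name: string }>();
  cache.set("user:1", { name: "Alice" });

  const user = cache.get("user:1")!;
  user.name = "Bob"; // one caller mutates "their" copy...

  console.log(cache.get("user:1")!.name); // "Bob" - every later fetch sees the mutation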

3) I agree that threads aren't necessarily the way to go. But can we agree that a language that CAN efficiently take advantage of multiple cores would be better? It's not just for your application, it's also for anything you compile, e.g. TypeScript!

> Just check out the recent GitHub report where they were accidentally leaking information from other users into their sessions.

Concurrency is hard! Except in a language where it isn't. In Elixir each "thread" (Erlang process) gets its own copy of the data, so this type of bug doesn't happen.

4.

> Typescript (combined with autogenerating typescript type files from GraphQL schema definitions) has been honestly heaven for us, and the benefits I've seen with the structural-based typing of TS made me realize the huge number of times I had to battle the nominal-based typing of Java and the immense pain that caused.

That is an interesting assessment. I've never really noticed a difference in practice between structural/nominal type systems, to the extent that I didn't realise TypeScript was structural. Normally if you have multiple classes implementing the same structure, you want an interface anyway to make sure they don't diverge, i.e. there is a higher purpose for them being the same.

Would you have an example of how this would be a deal breaker?

I think besides this aspect, Kotlin might be up your alley.


I would upvote this comment a hundred times if I could. I inherited a Node project at my previous gig and I hated every minute of it. The team got immensely productive when we migrated it to Golang.

In the times of RESTful APIs and µServices, is there really a need for node? JVM, C#, Elixir, Golang etc provide excellent development experience and are battle tested the way single-threaded Node just hasn't been.


The main advantage of Node is that you're already using Javascript on the client side, and this lets you share code. If you've got business logic that you want to use on both client and server, keeping them in synch between two different languages is a nightmare. It's pretty much the epitome of Repeating Yourself.

That said, it's kind of a narrow domain, where your server-side logic is simple enough that you can successfully write it in Node, but complex enough that there's code you can't afford to keep synched.
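
As a rough (hypothetical) example of the kind of logic that's worth sharing, a small validation module imported by both the browser bundle and the Node server:

  // shared/validate.ts - made-up module; the point is it is written once
  // and imported by both the client form code and the server endpoint.
  export interface OrderInput {
    quantity: number;
    couponCode?: string;
  }

  export function validateOrder(input: OrderInput): string[] {
    const errors: string[] = [];
    if (!Number.isInteger(input.quantity) || input.quantity < 1) {
      errors.push("quantity must be a positive integer");
    }
    if (input.couponCode && !/^[A-Z0-9]{6}$/.test(input.couponCode)) {
      errors.push("coupon code must be 6 characters, A-Z or 0-9");
    }
    return errors;
  }

The client runs it before submitting the form, the server runs it again before persisting - one definition, no drift.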


How do you feel about the impact on developer productivity after migrating to Java + Spring Boot? I haven't used Java in a long while and every time that I try to come back to it, I get driven away by the difficulty and complexity to do simple things (thinking of annotations, dependency injection, complicated design patterns). It feels like an effort of one hour of Node or Python programming (or even Go) would take 10x more in Java.


Great question. I was very concerned about that, too. I strongly recommend learning Domain-Driven Design before diving into Spring; you will recognize a lot of patterns and it then makes sense how things work. Even things like having an interface + an implementation for basically everything start to make sense. I had a lot of bad experiences with Spring and Jakarta EE in the past, but I can assure you Spring Boot does a fantastic job here.

Point being, it is an enterprise setup, and in enterprise applications you favor maintainability and friends over speed of development. So implementing a feature takes more files and more time; in a DDD setup you can estimate something like 1 controller method, a use case + interface, >=2 DTOs, potentially a domain model if not present, plus a repository + interface if not present.

So yes, it is more overhead and you have a slowdown in productivity. I have no proof, but I am 95% sure that the time you save upfront in a more flexible environment like node is paid back, with significant interest, when it comes to maintenance.

Edit: you can of course use Spring Boot without enterprise/DDD and hence make it simpler. It's actually pretty minimal.


+1 for DDD in any strong-type-first language like Java or C#. Even if you land in a system that was designed horizontal-layer "at speed", you can retrofit a good domain language and aggregate boundaries and migrate to a world away from spaghetti. Speaking as someone who usually works on code bases 10+ years old.

Great comment, and the root comment too.


Thanks for the thoughtful answer. Do you have any recommendation of a good book to solidify my DDD understanding? I've read one in the past (can't remember the name) and that one felt like IT consulting bullshit :)


Read in order:

1. Anatomy of Domain-Driven Design, S. Millett

2. Domain Driven Design quickly, InnoQ

3. Implementing Domain Driven Design, V. Vernon


This is why choosing the right language for the job is so important.

I know PHP has a bad reputation, but I personally think it offers an excellent middle ground for web projects. It has some great frameworks which have been used at scale while not having all the complexity and rigidness of Java.

As a dev that's used Java, PHP and Node in the past I prefer working on PHP and Node projects because I feel far more productive and less constrained, but I can certainly appreciate why a larger corporate project might choose Java.

Arguing about the perfect language seems pointless without some idea of the requirements.


> How do you feel about the impact on developer productivity after migrating to Java + Spring Boot?

I can answer that as we just strangler-pattern'd a legacy Node.js app to Golang. Abstracting away from the Node and JS paradigm is a bit harder, but the blame lies squarely on the callback-ridden code we ended up with in Node. It also forced us to think about our data structures, abstractions and module structure. The effort took roughly the same time as it took the original team to write the Node app. Also, I wouldn't put Golang and Python in the same league as Node.

> effort of one hour of Node or Python programming (or even Go) would take 10x more in Java

Modern Java (and Kotlin) is surprisingly powerful and the days of AbstractFactoryBeanFacadeInterface are behind it.


I've got to say that I'm more tempted to start something with Java + Lombok than Kotlin. Somehow it feels like I'll have an easier time with the tooling, but I'm probably exaggerating.


I maintain a Java Spring Boot service full time, and when something stops working or doesn't do what you expect, it can be a total nightmare to debug. There is so much indirection, it can be incredibly difficult to figure out which code will be executed, in which order.

I try to make things as explicit as possible, which can help in testing and debugging.


Java and Spring is a nightmare and I can't believe someone can actually look at that stack and not see it as a complete joke.


Inversion of control means you have no control.


Spring makes simple things easy and difficult things impossible. Spring Boot was created to mitigate some of the mess Spring is, specifically that mere mortals cannot assemble a version-compatible set of the libraries that comprise Spring, so Spring Boot does it for them.


> I get driven away by the difficulty and complexity to do simple things (thinking of annotations, dependency injection, complicated design patterns). It feels like an effort of one hour of Node or Python programming (or even Go) would take 10x more in Java.

You are comparing stuff that is super helpful on large projects with how to be quick in one hour by one person. If I have a one-hour project in Java, I won't use complicated design patterns. If I don't need Spring Boot or dependency injection, I won't use them. I am not sure what you mean by complicated annotations, though.

In Java, if all you want is a method that processes some data, you won't use any of that and will just write a class or something like that. So if Python has better methods out of the box to achieve whatever you want, it will be somewhat faster. But that has zero to do with annotations.


I agree that the way I've made my statement can be ambiguous. What I meant was when using Node/Python/Go for projects of similar complexity. Maybe not millions of lines of code, but a web app with some order processing, database access and specific business logic. It feels harder in every way when using Java + Spring Boot, but my question was sincere: I'm pretty sure that this combo is a successful platform and I want to understand why.

Finally, by excessive use of annotations, I think I'm talking about ORMs. I remember checking a Hibernate-based project in the past and some methods had more lines of annotations than actual code. I'm not sure this is bad, but it kind of creates a new dimension of code for me to wonder about.


Imo, if you know what all of that means and what to do, it is fast and easy. If you are new to it all, it can be frustrating.


What would you use to write a quick RPC server or JSON endpoint in Java, if not Spring Boot?


Deno tries to solve a few of the problems you have listed.

1. https://deno.land/std is deno's standard library.

2. Yet to be seen. Deno has a few good modules like deno cliffy and drash which are structured nicely and easy to use. Otherwise, plenty of mature libraries built for the web and Node can be used too.

3. Yet to be seen. Module management in deno is very explicit and there are no magic updates.

4. Deno supports workers with an additional sandbox layer for multithreading. There are a few proposals in the pipeline to solve the cumbersome worker setup (requiring a separate file). They will get standardized soon.

5. Deno has first-class support for TypeScript. Type-checking speed is a bit faster due to some optimizations, and you can use --no-check in development to run your TypeScript code. That will give it a huge boost.

Having type checking at runtime would be great but it would add significant performance penalty. There were some attempts to add it to typescript via plugins but nothing panned out.

6. You can share much more code between Deno and the browser. It is more compatible with the web than Node is. Huge win for front-end developers switching context and using the same skill set.

Other than that, the deno CLI will feel exactly like go and cargo. It tries to provide a similar toolchain and UX, which the Node ecosystem lacks. It should also feel fast (most of it is written in Rust, except the type-checking part).

At the moment, they are trying different approaches to speed up type checking. One is to convert swc AST to something that tsc can understand.


Agreed. I've been on a few projects where other people were doing node.js. It's fine, but my takeaway from that experience is to not use it for something that matters and that it's rarely the best tool for the job. These projects have a tendency to get ugly in a hurry.

The good news is you can do real stuff with it relatively easily and all you need to know is javascript. Which is great if you have some frontend people getting their feet wet with backend stuff. But most of what it does you can do in other tech stacks as well, and they tend to be pretty good at the stuff node is used for.

Go is decent. Python is decent (forget about threading though, global interpreter lock is still a thing). I use a lot of Kotlin & Spring Boot myself. There are many other tech stacks. Each of those has rich ecosystems with great libraries, frameworks, tooling, etc. Whether you are doing batch jobs, data engineering, server code, etc. They each have many options for technology.


I got really into Node for a backend around 2015. I loved it, its execution was fast enough, I could speed through feature implementations, it was so much easier to build things compared to the 5 million line Java project from my previous employer.

Then a not-so-competent team member joined and it made working in JS complete hell. I had to be so much more careful on code reviews because there was no compiler to catch common mistakes, and still issues slipped through.

TypeScript is here now and that's great, it solves some of the issues I mentioned, but I moved on to Rust (and stopped doing web stuff for the most part) and I'll trade my implementation speed for easier code reviews and majorly reduced debugging time.


> Is node a bad technology?

You describe a lot of things that factually are signs of a bad technology.

Why do languages get this "free pass" where they cost BILLIONS in trouble, but "no, there are no bad languages"?


You see, node is not exactly a language.


How do you share code between client and server? That's key


Share code between client and server using WebAssembly[1]. The Twitch video player is written in C++ via WASM[2]. C# can be "full stack" with Blazor[3]. Rust can be "full stack" with Yew[4]. Similar support exists for other languages including Go[5] and even the TypeScript-syntax AssemblyScript[6].

[1]: https://medium.com/wasm/webassembly-on-the-server-side-c584f... [2]: https://news.ycombinator.com/item?id=16835769 [3]: https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor [4]: https://github.com/yewstack/yew [5]: https://github.com/hexops/vecty [6]: https://www.assemblyscript.org/


- Lack of a good standard api: I don't understand this one. I've never had a problem with its documentation and the API always seemed fine to me, but I don't use C# or Go so maybe I'm missing out?

- There's a tendency in the ecosystem to abandon projects rather soon: This can certainly happen, but it's important to choose libraries and frameworks carefully. My node servers (Express.js) are extremely light weight, so I rarely have this problem.

- Lack of multi threading: I think you are wrong on this point, but I don't know your expectations since I'm not a Java developer. I have written some multi-threaded code and saw massive perf improvements. You can't expect a scripted language to perform like Java, but quality of code is also very important. I wrote a generator that replaced a Java utility and was 10x faster. I rarely have to worry about this because it's easier to write async code that's extremely fast.

- Lack of typing: this seems more like a criticism of javascript.

I get that this isn't the platform for you, but I think you might be wanting it to be too much like what you are familiar with.


About lack of standard library:

I don't personally work with node for the backend but with typescript/javascript in the browser. And I think what is meant is something like LINQ in C# (all kinds of functions you can apply on enumerables/arrays), some basic DateTime library (how do you add two dates in JS? How do you get the diff of two dates in days or hours in JS? Both are one-liners in C#) and maybe some more stuff for string manipulation and so on.
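
For comparison, the "diff in days" case with nothing but the built-in Date ends up looking roughly like this (the helper name is made up):

  const MS_PER_DAY = 24 * 60 * 60 * 1000;

  function diffInDays(a: Date, b: Date): number {
    return Math.round((b.getTime() - a.getTime()) / MS_PER_DAY);
  }

  console.log(diffInDays(new Date("2021-03-01"), new Date("2021-03-23"))); // 22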


There's work on a new JS standard called Temporal that's a stage 3 proposal. People should be using its polyfill instead of other date libraries where possible.

https://github.com/tc39/proposal-temporal

JS standard libraries are forever. It pays to take a bit more time and get it right.
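
A rough sketch of a date diff with Temporal, assuming the @js-temporal/polyfill package (it's still a proposal, so details may shift):

  import { Temporal } from "@js-temporal/polyfill";

  const start = Temporal.PlainDate.from("2021-03-01");
  const end = Temporal.PlainDate.from("2021-03-23");

  console.log(start.until(end).days); // 22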


It's a one-liner if you use Day.js. It makes no sense to include this functionality in Node.


Strongly disagree.

Who maintains day.js? It's (I did not look, I guess) not the same people that develop the JavaScript standard library. At some point, day.js is abandoned or breaks something and nobody is going to fix it because it is abandoned, or even worse, someone creates a fork and fixes it there and now we have two day.js.

If it is in the standard library, you have a guarantee that it works today, works tomorrow and with a quite high certainty still works the day after. There are not that many languages that successfully break backwards compatibility often.

And don't get me started on the legal aspect. There are many projects where one cannot simply pull in another dependency just to calculate a date diff.


What makes the Date constructor a good candidate for the standard library but not the aforementioned methods? Not everyone knows about Day.js, you had to learn that at some point. Why Day.js instead of Moment or Luxon or Date-Fns? There's a lot of cognitive value to having a fully featured standard library.


That's why we read and learn. You can't expect node to work like Java. I feel like I'm talking to a wall with you guys. I've been working with JS for 15 years. I would never presume to start Java dev and expect to do things my way. I saw the same thing when Java devs were trying to learn Ruby... you have to approach it with a different mentality.

Regarding Moment, their site plasters exactly why you should not use Moment and explains exactly why other libs (Day.js included) are better. Node is designed to be modular, Java isn't.


That's not the point. Nobody expects Node/JS to do exactly the same as Java or C#. If so, one could simply use Java or C#.

A language should provide a set of functionality that makes the developer's life easier and lets them achieve basic (daily) things without needing third-party dependencies. And if that usually starts with "npm i", I'm going to question that...


We are going to have to agree to disagree then. Both Node and frontend js share the same package management system (npm). Everything is modular because it has to be. When you work fullstack JS and use one package manager for everything, it makes all the sense in the world why things are done the way they are. The same code has to work in Node, Blink, Webkit, Electron, etc.


Node's lack of a robust standard library results in significant dependency bloat, even for small projects. I often choose other languages (Racket, Python) for most projects because of this. I tend to consider a large number of dependencies as a risk for the stability and maintainability of projects. Through this lens, adding date handling to the standard library can make sense, I think.


I think others think differently https://tc39.es/proposal-temporal/docs/


Was the Java utility one that started and didn't run for very long, like a command line application? The JVM (at least the Oracle one) takes a long time to start up, so any application that runs on the JVM takes a long time to start up. The JVM is much better for long running processes.


A vendor gave us a .jar cli tool for managing an enterprise product last year, and I swear it starts up in under half a second (to the familiar spring boot splash too), so it is possible, just uncommon :)


Ah, my bad, my experience is old. Thanks for the correction. :)


I'm still astonished that someone was writing Javascript on the browser and found the experience so great that he thought "I also want to do it on the backend!".

More seriously, I'll admit I'm not a competent Node dev but the worst part for me is how convoluted you have to write large parts of code that don't need to be asynchronous or non-blocking in the first place. I also never figured why sometimes Node just silently ends without pointing my syntax error (and more importantly the line number).

Like the author, in my mind Elixir is a better Node (Go never clicked for me).

Some weeks ago I had to write a feature that was so perfect for Phoenix Channels, Presence etc that it was almost magical and quite fun. Nothing groundbreaking but I wrote about it here https://conradfr.github.io/ProgRadio/


"More seriously, I'll admit I'm not a competent Node dev but the worst part for me is how convoluted you have to write large parts of code that don't need to be asynchronous or non-blocking in the first place. I also never figured why sometimes Node just silently ends without pointing my syntax error (and more importantly the line number)."

This, so much. Even for moderately complex data workflows, 95% of the time what I needed to do was "execute a query or other IO operation, then do something with the result of that operation." But since js is async by default, I have to do extra work every time. Even if it's just a simple async/await declaration, there can be complications. Oh, your query failed with an exception so you want to do a rollback? Sorry, you lost scope to that db query when your promise was rejected.
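
A minimal sketch of the scope problem being described, with a hypothetical transaction client (all names and SQL are made up): the flat promise chain loses the handle in the error path, while async/await keeps it in scope at the cost of the ceremony around every call.

  // Hypothetical minimal client, defined inline so the sketch is self-contained.
  interface TxClient {
    query(sql: string): Promise<void>;
  }

  // The complaint: with a flat promise chain, the connection that began the
  // transaction is out of scope in the error handler.
  function withChain(connect: () => Promise<TxClient>) {
    connect()
      .then((tx) => tx.query("BEGIN").then(() => tx.query("UPDATE ... (placeholder SQL)")))
      .catch((err) => {
        // `tx` is not visible here, so there is nothing to call ROLLBACK on
        // without restructuring the whole chain (nesting, or hoisting `tx` out).
        console.error("query failed, cannot roll back from here", err);
      });
  }

  // async/await keeps the handle in scope, but every call still needs the ceremony.
  async function withAwait(connect: () => Promise<TxClient>) {
    const tx = await connect();
    await tx.query("BEGIN");
    try {
      await tx.query("UPDATE ... (placeholder SQL)");
      await tx.query("COMMIT");
    } catch (err) {
      await tx.query("ROLLBACK"); // tx is still in scope here
      throw err;
    }
  }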

And don't even get me started on the stack traces, or lack thereof. If an exception was thrown inside a promise there's a 75% chance you're not getting anything useful out of that.


Try writing evented code in Python or Ruby. It's very hard because most libraries in those languages block all over the place. There just isn't much room in the middle between these paradigms.

Any time you spin off a new thread or process, it’s going to eliminate that nice continuous call stack (though there has been some work on this). That’s not a JS issue but more an issue of async programming in general.

I’d love to see JS unify web workers with BEAM-style lightweight, managed actors since they aren’t incompatible. In the meantime, I’ll take the minor pain around async for the massive performance boost relative to other scripting languages.


It's not that they loved writing Javascript on the frontend, it's that they can't write anything else on the front end. So why use two languages when one will do?


>I'm still astonished that someone was writing Javascript on the browser and found the experience so great that he thought "I also want to do it on the backend!".

I'm currently playing with and learning Microsoft Blazor server side. I like the appeal of writing a whole app that feels like a Single Page Application in one language. Whether that's good or bad is another matter.


Most Node devs are JavaScript monoglots, who just recently learned frontend, and figure "Let's use JavaScript for everything!"


The article focuses on server-side Node, rather than its role as a CLI tool. Compared to V8, Ruby, Python, and PHP are not performant. Node’s libuv network library was highly concurrent and Node built single-core concurrency into the runtime and libraries; multi-core concurrency, however, is not best-of-class.

IMO, Ryan Dahl made two fundamental mistakes in both Node and Deno that make them uncompetitive for simple server-side 12-Factor Apps: 1. No web Gateway Interface like WSGI and Rack, and 2. No DBMS API/SPI like JDBC. Express was supposed to be a Microframework like Sinatra and Flask but developers have to spend most of their time defining a robust web server. Since the http server and app code are tightly integrated, 12-Factor Apps or JSON API Microservices are more complicated than they need to be in Node. The lack of a JDBC-like API/SPI hurts many languages, including Node and Rust.

The GIL languages are not performant nor concurrent but they have good httpd/dbms interfaces. All the popular server-side language/runtimes are flawed along some fundamental dimension. Server-side REST shouldn’t be this complicated.


> No web Gateway Interface like WSGI and Rack

It uses http, which is much more standardized, used and accepted than Rack or WSGI. I can chain together http servers, use graphql http redirectors, etc. With WSGI and Rack you’d need a fronting web server to terminate and translate.

> No DBMS API/SPI like JDBC.

Covered this in another comment.

> Express was supposed to be a Microframework like Sinatra and Flask but developers have to spend most of their time defining a robust web server.

Doesn’t this mean it is a microframework? As opposed to something like Django that has everything included?

> Since the http server and app code are tightly integrated, 12-Factor Apps or JSON API Microservices are more complicated than they need to be in Node.

If you are serving up an express app directly with no fronting load balancer then perhaps they are tightly integrated. Otherwise they are not. Express does not and should not handle load balancing.


> No web Gateway Interface like WSGI and Rack

Node.js wasn't created in a vacuum. Its http lib, and express.js which extends it (and Sencha connect before that) implements part of the CommonJs API created by earlier SSJS frameworks such as Narwhal, Helma, etc. and was inspired by Ruby's Sinatra.

In fact, a common portable API and a standardized language that isn't going away anytime soon (JavaScript) is what drew me to Node.js, and I still find it's an excellent framework for lightweight web servers and b4f (backend-for-frontend) approaches. TypeScript and Deno, not so much.

Edit: also, while Node.js had its share of quirks in early versions (especially around the Streams 1/2/3 API), its core API is super-stable (and it better be with some 100s of thousands of packages making use of it out there).


It was my understanding that Node never implemented JSGI [1]. I've never compared the two APIs so you may be correct. Regardless of the path taken, the middleware should apply to a separate web server module that is functional by default. The tight coupling that emerged was too hard to master and the defaults are too attack-prone to inspire quick public deploys. It resulted in copy-and-paste server code between projects rather than reusable server modules.

This may be a property of all embedded network libraries but I think my observation still holds; server-side Node did not outpace the alternatives despite its superior performance/concurrency relative to the dominant GIL platforms.

EDIT: AWS Lambda (i.e. Function-as-a-Service) can be considered an extremely simplified service gateway interface though it is rarely thought of in those terms.

[1] https://en.wikipedia.org/wiki/JSGI


> middleware should apply to a separate web server module that is functional by default. The tight coupling that emerged was too hard to master and the defaults are too attack prone to inspire quick public deploys. It resulted in copy-and-paste server-code between projects rather than reusable server modules. ... did not outpace ... GIL ...

I honestly have no idea what you're talking about. The beauty in Node.js' http lib is that you can write middlewares against the core API and run the same code under express.js with additional middlewares for sessions, routing, etc. Python, whatever its strengths, certainly isn't used for web apps more than Node.js, let alone is dominant. Re modules and reuse, I guess if one side is complaining about too much modularity (leftpad) while the other doesn't see enough of it, Node.js got it just right ;)

Edit: CommonJs directly references JSGI [1], and if you compare it to Node.js' http core or express.js' API, you can see the correspondence quite obviously, can't you?

[1]: http://wiki.commonjs.org/wiki/JSGI


For clarification: PHP never had a GIL. And it's an order of magnitude faster than Ruby or Python.

It's a common misconception to group PHP with Ruby and Python in terms of performance.

Even more now that PHP8 is JIT'ed. Plus async/await is coming with Fibers RFC and already present as modules.

People have been using async PHP in production for years. See Swoole.


Modern PHP frameworks (Workerman, Swoole, Comet) rank as top performers in the TechEmpower benchmarks. Much higher than Python / Ruby / Node.js, and head-to-head with Golang-based projects.

Disclaimer: I'm the author of Comet:

https://github.com/gotzmann/comet


> The lack of a JDBC-like API/SPI hurts many languages, including Node and Rust.

I always wondered what the use case is for a Rust service that talks to a (non-local) database. First people pay the borrow-checker tax, get a chance at an amazingly responsive system in return, and then blow it on round trips to a database. What am I missing?


Users who use Rust in this way report that these services tend to be extremely robust and have orders of magnitude less resource usage, which in a world of cloud computing translates directly into an improvement on the bottom line.


Why would a DBMS API need to be built into the runtime?


Because otherwise you end up with thousands of incomplete and incompatible implementations all fighting to be the best or running out of steam after a month or two. Node is a hellscape from a db perspective


Most people have moved on from ORMs and the like, since most DBs are not SQL anymore and supporting all of the different DB types is just too hard.


I'm not sure I could disagree more.

I'd phrase it more like "ORMs are as popular as they always were, most DBs today are SQL DBs, same as always, and the only recent change in the last 10 years is people have stopped speculating that NoSQL is going to replace SQL DBs, and accepted they will be, at most, specialised tools for niche uses.".

Obviously that's just my subjective opinion, but the Stack Overflow 2020 dev survey (https://insights.stackoverflow.com/survey/2020#technology-da...) has it MySQL, Postgres, MS SQL Server, SQLite, MongoDB. DB Engines (https://db-engines.com/en/ranking) has it as Oracle, MySQL, MS SQL Server, Postgres, MongoDB, and in both cases there's a STEEP falloff after the top couple entries.

I'm not sure either of those is a super reliable source, but I don't know of any better ones.


"Most DBs" by what metric? If you count them as products, you're probably correct. But I seriously doubt that is true when adjusted for usage - say, number of requests per second, globally, across all deployed databases worldwide.

(in fact, I suspect that SQLite would win that contest handily.)


SQLite is in every android phone, it wins by a mile. I'd even go as far to wager that MSSQL instances alone outnumber Mongodb or Dynamo by a good margin.


SQLite is also in every Win10 install. And in every Firefox install, IIRC?


SQLite is really good, though often overlooked as a serious data persistence option. Especially in web development.


I think that statement should be strongly disputed. Most businesses still have a SQL database at the core of their stack. NoSQL is still definitely the exception rather than the rule.


I agree with some points the author is making, but overall I wouldn't pin them on either the Node.js runtime or the Node.js ecosystem.

a) Erratic max latency

IME Node.js performs well if you are not pushing its limits. If you have a service receiving 25k+ requests per second, you should better benchmark it, regardless of the language. Horizontal scaling should be configured based on the results of the benchmark.

b) Cognitive overhead

Yes, there is a bit of additional overhead when mixing async and non-async functions, but the same goes for every other language that supports promises: C#, Python, Rust, Scala, etc.

Typescript should fail if you try to access a property/method of a promise, without resolving it, so I'd say it's not such a big deal.

Personally, I rarely work on Node.js, but when I do, I'm really grateful for await/async and typescript.

You know what's an even worse cognitive overhead? Languages without strong typing, like Elixir.

c) `pg` connect & query

I would say the issue lies with how the library is implemented. I'd definitely expect `query` to fail if it's not connected to the database.
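
For reference, the explicit connect-then-query flow with pg's Client looks roughly like this (the connection string is a placeholder):

  import { Client } from "pg";

  async function main() {
    const client = new Client({ connectionString: "postgres://localhost/mydb" });

    await client.connect(); // connect explicitly before issuing queries
    try {
      const res = await client.query("SELECT NOW()");
      console.log(res.rows[0]);
    } finally {
      await client.end();
    }
  }

  main().catch(console.error);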


(This may have changed since I used it in anger last).

NodeJS does not handle multicore servers well, and I imagine many people don't realise this and have a huge amount of unused CPU capacity.

You need to run multiple Node.js processes (one for each core, really) to get the most out of it. But this then requires load balancing even on a single machine, so a lot of people don't do it. There are other solutions, but they often require JSON strings to be passed between the workers instead of objects, which can end up stalling the whole thing (more time spent on JSON (de)serialization than actually doing work on complex models).
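
For reference, the usual one-process-per-core setup uses Node's built-in cluster module; a minimal sketch (not production config):

  import cluster from "cluster";
  import http from "http";
  import os from "os";

  if (cluster.isPrimary) { // `isMaster` on older Node versions
    // Fork one worker per CPU core; the primary distributes incoming connections.
    for (let i = 0; i < os.cpus().length; i++) {
      cluster.fork();
    }
    cluster.on("exit", (worker) => {
      console.log(`worker ${worker.process.pid} died, restarting`);
      cluster.fork();
    });
  } else {
    http
      .createServer((req, res) => {
        res.end(`handled by pid ${process.pid}\n`);
      })
      .listen(3000);
  }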

.NET (core) handles this extremely well. It's trivial to do a parallel async foreach, and it's easy to scope services (eg database access) to avoid threading problems. It also automatically uses as many cores as you have without any extra problems for handling multiple requests at once.

This is even more of a problem in mobile apps with React Native, which I've used a lot. It is virtually impossible to use multiple CPU cores efficiently in an RN application, which is pretty crazy considering how many cores mobile phones have. For 99% of projects it's not a huge issue, but on some it becomes a real killer, especially as the workarounds involve sending JSON strings between processes, which can end up being the blocker itself with complex state you want to pass between threads.


Node has worker threads and shared memory buffers to share data between them.
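
A minimal sketch of what that looks like with worker_threads plus a SharedArrayBuffer (the worker code is inlined via the eval option purely for illustration):

  import { Worker } from "worker_threads";

  const shared = new SharedArrayBuffer(4); // 4 bytes = one Int32
  const counter = new Int32Array(shared);

  const workerSource = `
    const { parentPort, workerData } = require("worker_threads");
    const counter = new Int32Array(workerData); // same memory, no copy
    Atomics.add(counter, 0, 42);
    parentPort.postMessage("done");
  `;

  const worker = new Worker(workerSource, { eval: true, workerData: shared });
  worker.on("message", () => {
    console.log(Atomics.load(counter, 0)); // 42, written by the worker thread
  });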


Do you have a link to the shared memory buffers? I can't find much about them. I've found some stuff about worker threads but can only pass strings.



This is sort of what I mean though. The SharedArrayBuffer seems very limited. It's fine if you are sharing binary content for processing, or some numbers. But I can't see how you could easily send a standard JS object through that?


> Languages without strong typing, like Elixir.

Have you ever actually worked with elixir? In practice the (strong) dynamic typing of elixir is not a problem. I have not written a type error that has made it to prod in tens of thousands of lines of code. The last type error I made only showed up extremely rarely (once a week or so) and it didn't matter because the supervisory tree restarted the process that threw it. Guess what? I didn't fix it. No one will die and $0 will be lost.

There's very little ambiguity about falsy values and you can't arbitrarily coerce from one type to the next. That in and of itself cuts down on the type confusion, as does the fact that there are only about ten types. Even struct field names are compile-time checked for you, and I get little squiggly lines warning me about statically checked type errors in elixir-ls. Sure, it's not perfect, but saying it's a cognitive burden is just... silly.


How is refactoring? I was considering moving from Node to Elixir, but now that I’m using TypeScript it does feel like I would be giving something up.

It’s a breeze to make broad changes knowing the compiler will prevent me from missing a code path, etc.


Write your tests, gradually make red go to green. Super rewarding.

Code coverage reports are a thing. You should be writing your tests async (including tests that leave the VM) in elixir; that will give you a chance to catch and raise asserts on race conditions, which will shake out timings in your VM, etc.

https://youtu.be/hvgdWQuriB4


Thanks for sharing this, looks like a really useful series of videos :)


Having spent a lot of time in both it's hard to argue it's as easy in Elixir. You can do it reasonably with a lot of tests but it's nowhere near the same experience as TS


I'm only just starting with Elixir, but my impression so far is that I will be refactoring a lot less, due to the way Elixir encourages simple and elegant approaches to many of the common problems a programmer faces.

In node for example, I'm refactoring constantly (or, at least, should be...) because I keep backing myself into cognitive and developmental corners by using a kind of brick-by-brick approach that is perhaps encouraged not only by the language but by the eco-system.

I'm not finding this with Elixir, possibly because many of the more important architectural decisions have already been made for me at the language level, and apparently been made very well.

The hurdle becomes more understanding how to best do something, rather than figuring out how to do something at all.

A big part of this also may be Elixir's REPL. It has help for everything built-in. Whereas in node for example I find I'm constantly searching online and piecing together things from many different parts, in Elixir I almost never am, it's all just right there.


Do you have any concrete examples you can link to show these differences? I'm unsure what you mean by architectural decisions made at the language level.


I am porting a cryptocurrency arbitrage trading platform from Node to Elixir and the difference is extremely refreshing. The actor model lends itself naturally to abstractions like producer, consumer, batcher, dispatcher, etc. which are like stations on an assembly line. “Pulling the chain” when problems occur at any point along the line is simple and clean since the actors are not tightly coupled in the first place.

I think the key “architectural decision” here is that code and execution (i.e. the process) are bundled together by design.


Maybe a good example could be the recursion pattern in Elixir. This is considered almost a base element of the language. Typically in Elixir, recursion and "guards" are used instead of things like for-loops and if-else statements in other languages.

Take a basic factorial function in Elixir:

  defmodule Math do
    def factorial(0), do: 1
    def factorial(n), do: n * factorial(n - 1)
  end
There is almost nothing there that isn't directly representative of the base math equation itself (which, by convention, treats the result of factorial(0) as equal to 1). The function order is important in this case: the first clause acts as a "guard" that prevents the second from being executed when the first's case is met. At this point the module exits out of the second function by multiplying the (silently) accumulated result by 1 and returning it.

Versus, in JS:

  function factorial(n) {
    if (n == 0) { 
      return 1;
    } else {
      return (n * factorial(n - 1));
    }
  }
It's not too bad, but the if-else, comparison, multiple return statements and nested brackets at the second return are all done away with in the Elixir version.

Further, recursion is not something many programmers reach for first when working in many other languages, perhaps out of habit, or maybe out of concern that extending such implementations later can become difficult. As such, most programmers might implement the above as something more like:

  function factorial(n) {
    if (n === 0 || n === 1) return 1;
    for (var i = (n - 1); i >= 1; i--) {
      n *= i;
    }
    return n;
  }
Compared to the Elixir code, many steps are required to read and understand this. When this type of laboured patterning is expanded out into a larger project, with many interlocking parts, it may quickly become difficult to work with, and can become necessary, and necessarily difficult, to refactor into something simpler, which then may require rethinking the entire process.


That's a good example, but I would not want to overstate the need for recursion. Most people look to the standard patterns in https://hexdocs.pm/elixir/Enum.html unless they are doing some custom control flow.

Pattern matching is really helpful, though. It turns complex nested logic into simple truth tables and acts like Eiffel's design by contract, just pattern match on valid inputs, then handle the errors.

The same functional patterns repeat at different levels in the stack. For example, in the Phoenix web framework, processing a HTTP request can be considered as pattern matching on the expected inputs (validating them), then a series of transformations (making a db request, taking the result and rendering it into HTML via a template), then returning the result.

From a concurrency perspective, the interesting part is that each HTTP request runs in its own separate blocking process (green thread). So when you are doing the programming, you don't need to think about concurrency and async stuff. You just do your thing. This makes it easy to think about and debug.


That Elixir part is pretty slick. But to be fair to JS, you could also write that JS function like this:

  let factorial = n => n === 0 
         ? 1
         : factorial(n - 1) * n;
But I do wish JavaScript had some more powerful pattern-matching syntax. For example, Rust and C# both have nice switch/match expressions that can really simplify code like this.


You can use typespecs (@spec/@type) and @behaviour to type parts of elixir.

https://elixir-lang.org/getting-started/typespecs-and-behavi...


> You know what's an even worse cognitive overhead? Languages without strong typing, like Elixir.

I see this mentioned many times, but when I do, my impression (which might be wrong) is that you haven't written Elixir or Erlang for anything more serious than tutorials or docs examples. Besides, Elixir is technically (?) strongly typed - just not statically typed. Regarding async/await, compared to what you get in the BEAM, that part especially is like comparing a stone-wheel cart moved by oxen with a space shuttle that has the same costs (or less) to launch and operate. It might sound snobbish, but sincerely I don't know what to say when I read that.

Although not compile time, pattern matching allows you to define conditions for functions that are as strict as, and in some cases stricter than, many typed languages, without any additional cognitive overhead or complex type definitions that you need a PhD to understand.

def do_it(<<a::binary-size(2), ":", b::binary-size(4)>>), do: IO.puts("the argument is a binary string, of the form xx:xxxx")

Even conditions where it depends on more than the types, like with complex types such as maps, structs, tuples, lists, where parts of those conform to a certain pattern, in any combination needed.


I've been using Elixir for five years and I think it's sorely missing a better type system. I've done fine without it and have gotten entirely used to it but that doesn't mean things wouldn't be better with one.

I only really started noticing as I started to build more complex Typescript systems how much it helps. My teammate shipped a bug in Elixir yesterday that would have been caught by a static type checker


I know there are plenty of people working with Elixir (and Erlang) that feel that way. I can also imagine that in very large codebases dialyzer might not be enough and it would be great to have it. Also, obviously, outside web and telecom there's definitely software which I wouldn't want written in them, but not in TS either, and it would probably have more tests than code: property-based and fuzz tests and all the rest.

I also won't say that it doesn't happen, but I do believe that if the same effort that goes into writing typed TypeScript and tests went into using dialyzer and writing tests, there shouldn't really be any difference. And when you include things like guards, explicit pattern matching, and casting your data at the entry points of your system, it provides a much more robust experience.

But that's all optional - what I've seen is that theoretically there's nothing you can't express with the base syntax (+ecto) that you could with most (as in the common ones) type systems, perhaps the difference is that since in a typed language it won't compile you are forced to do those things, define all enums, all their variants, exhaustively match, etc?


Green threads are great, right up until the moment you have to interoperate with something written in another language / using another VM. Then it's a mess (see also: Go).

The nice thing about async/await is that, because it's all just a bunch of syntactic sugar over callbacks, any language that supports some kind of callbacks with state, can be mapped to async/await - even C! On WinRT, for example, such interop works across C++, C#, and JavaScript.

This is largely why languages that push this model tend to form rather tightly closed ecosystems, IMO. Which, in turn, leads to their lower adoption - and thus, no particular implementation of green threads can become a de facto standard.


I think I might be missing what you mean? You can write an async/await implementation from scratch in a few dozen lines of elixir if wanted, but you also have Tasks that provide that already written for you (with support for being included into supervision trees, etc)?

How does JS interoperate with something outside V8?


A JS library that wants to interop with something outside of its runtime (which is not necessary V8 - it isn't in WinRT, for example) can do so through FFI facilities. So long as said FFI supports all the same things that C does, it supports callbacks via function pointers. And if you can pass a function pointer + data pointer to some API, you have a stateful callback - i.e. a promise/future/task. And you can map any such abstraction to the same in another language.

But the moment you introduce green threads, the caller and the callee both have to be aware of that particular implementation of green threads. So even if you can do C FFI, and whatever you're trying to call can also do C FFI, you can't interop async calls across that boundary without some kind of callback arrangement.


Here I'm a bit out of my depth.

I think with Erlang you don't need to know that particular implementation of green threads, but you do need to implement the required specifics of the VM FFI (which in practical terms might be what you meant) - but this has a reason: for the VM to provide its guarantees it needs to be able to count "cycles", refs, etc. in order to preempt the execution of any function/process at given points, so that no single process can block the scheduler. It also needs to transform things from and into data types usable inside the VM.

I think you can also use "dirty" nifs, but here you might crash the VM so you really need to know what you're doing (and you still have to translate things to and from the vm).

And there's also ports, and C Nodes. These have a higher sync cost but can be treated by the VM as if they were processes/erlang nodes.

Lastly, you can also just shell from the VM to an OS level process, or use sockets, etc.

But I doubt this is anything 99% of the people writing JS do in JS? One of the reasons people use Elixir or Erlang is exactly the guarantees and programming model of the VM.

Again, might be missing what you mention because it's not an area I have explored in any meaningful way.


That's the thing - it works for Erlang/Elixir, because it tends to live in its own ecosystem with its own libraries etc. If you are working on something where there's an existing large ecosystem in, say, C++, you'll be reinventing the wheel. Or jumping through hoops with a multiprocess implementation (pipes, sockets etc), and having fun synchronizing those.

Now, not all projects are like that - but reusing libraries from other languages is common enough in large projects. Thus, languages that can't accommodate that use case, don't become truly mainstream. Within their niche, they can be much more pleasant to work with, though. So I don't think there's anything wrong with Elixir per se, and for some tasks, it makes perfect sense - but it's not a very general-purpose tool.

Note, by the way, that I'm not talking just (or even mostly!) about JS, but rather about async/await in general - e.g. also in C#, where that syntax originated, or these days in C++20. On Windows, if you write "modern" (UWP) apps, regardless of the language used, they make a lot of async API calls for stuff like UI - and the implementation is all in native code, running in the same process as your app.


Yeah, it kinda limits the scope of adoption/interest in a way and I think everybody working with it would like that it wasn't the case but I don't think that gap can be worked out easily (or in a practical way) because without the code being written to accommodate the requirements of the VM it can't do what it's meant to do.

If you link a piece of code that can crash the whole VM or steal the processors schedulers then the value in writing supervision trees, restart strategies, compartmentalising your runtime concerns into processes, plus the tradeoffs made in the vm/language design themselves go down, because a single invocation can throw it all out of the window.

(and note, this is not to say the "outside" code is of less quality or anything, is just that when writing "inside" the vm, if you don't account for an unknown problem that happens only sometimes but place it under a proper chain of supervision (and this is much easier to do than writing 100% bug-free I've covered all cases including heinsenbugs code), it will be contained and not bring down the entire VM along with everything it was doing at the time, like all open client sockets or work it was doing, or impact other users when it happens)


Elixir/Erlang has some of the most mature ways of interoperating with external code of any language.

It's literally what Erlang was invented for 30 years ago, building telephone switches where the main logic was on the Erlang VM, interfacing with logic in C which interfaces with the switch hardware.

There are multiple ways of doing it, each with different tradeoffs of safety and performance. You can write FFI code using NIFs, but the Erlang culture of reliability frowns upon it unless you have real performance needs. A good example would be doing cryptography. If the FFI code needs to handle concurrency itself, it can, communicating with Erlang/Elixir via messages. You don't need to care about the threading model. But that's generally not a great architecture. You should rely on the VM's processes to handle concurrency and make the FFI code just be libraries.

It's very common to write performance critical code in a compiled language such as C++ and talk to it over a "port", which spawns an OS process and communicates with it over stdin/stdout. An example is high frequency trading, where Erlang is used to supervise low level code.

You can also write a "C node" which allows a standalone program written in C or Java to talk the native Erlang network distribution protocol. So you just send messages to it.

This kind of thing is very common in Elixir embedded systems, and a joy to work with. See https://www.cogini.com/blog/elixir-and-embedded-programming-...


> Green threads are great, right up until the moment you have to interoperate with something written in another language / using another VM

You mean like this? It's not a problem in elixir.

https://www.youtube.com/watch?v=l848TOmI6LI


I don't know why green-thread based languages do not export a polling interface on their FFI.

Polling is a much nicer interface to implement than callbacks, and quite an easy concept when plugging one interface into another.


I'm not sure I follow. Can you explain how two runtimes, each with its own green thread implementation, would interoperate in this manner to make a call across the boundary? Say, X (written in A) wants to asynchronously call Y (written in B) and wait on it, passing it Z (also written in A), which Y needs to call several times to perform its task.


X does not wait on Y. It starts the task, and adds the polling interface for the task into its internal polling list. When something is available, calls the continue interface of Y as many times as it needs, just as it does with the OS data polling.

The same happens for Y calling Z.


By "wait" I mean logical wait for the result of the async operation - e.g. to use it as an input for another operation - not a blocking thread wait. What you describe sounds like callbacks to me?


Polling loops are completely different from callbacks.

What I described uses functions to read partial or complete results, and functions that indicate if there is any result to read (and yeah, there must be some blocking thing to indicate whether there is any result at all available). It's something much closer to async/await, except that there is something to tell you if await will block.

It is much more complex than callbacks, but it enables much higher parallelism and somewhat higher single-thread throughput. Also, runtimes with green threads already have it implemented anyway, because it's the preferred OS (any high performance OS) interface for async.


Async/await is just syntactic sugar over callback-based continuations, though.

And in .NET at least, you can easily tell if await will block or not, by looking at Task.IsCompleted.
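
Spelled out in JavaScript, the two forms are roughly equivalent (fetchPrice is just a placeholder parameter):

    // These two are roughly equivalent: await ...
    async function totalA(fetchPrice) {
      const price = await fetchPrice('BTC');
      return price * 2;
    }

    // ...is sugar over chaining a continuation onto the promise.
    function totalB(fetchPrice) {
      return fetchPrice('BTC').then((price) => price * 2);
    }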


It’s really hard to read posts like this. The tool is only as good as the contributor; that’s all there is to it. Great things have been built in all languages and frameworks. For God’s sake, stop blaming the tool by doing naive comparisons with other languages/frameworks. Nothing’s perfect, that’s a given, but these kinds of posts smell so much like entitlement. You don’t need to trash your previous framework/language and discourage others. If you struggled using Node, don’t fool yourself into thinking it was because of the tool; do the hard work of introspection and understand why you failed.

Disclaimer: In case you think I’m biased toward Node, I’m more of a Python FastAPI guy, but I have tremendous respect for NodeJS’s performance and async elegance (and even more when mixed with TypeScript).


I get where you're coming from - the post meanders around some topics every programmer has to learn by hard experience, such as the value of being able to reason about code / systems and the importance of data structures.

That said, my feeling is Node gives you multiple pump-action shotguns you can easily shoot yourself in the foot with. For example https://nodejs.org/en/docs/guides/dont-block-the-event-loop/

> One common way to block the Event Loop disastrously is by using a "vulnerable" regular expression.

If you're leading an engineering team, just this should be giving you panic attacks every time you hire someone new into the team.
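
For anyone who hasn't run into it, a minimal sketch of the kind of regex being warned about; on near-matching input a single test() call can stall the event loop for seconds:

    // Catastrophic backtracking: the nested quantifier makes match time explode
    // as the run of a's gets longer, when the input almost matches but then fails.
    const vulnerable = /^(a+)+$/;
    const badInput = 'a'.repeat(30) + '!';

    console.time('regex');
    vulnerable.test(badInput); // the whole event loop stalls while the engine backtracks
    console.timeEnd('regex');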

What makes this worse is it's very hard to detect problems at scale, if you missed it in code reviews. If you have one "vulnerable regex" in your event loop that only exposes itself on certain request inputs, it means that only every now and again - when given "bad input" - it's going to block the event loop.

But then how do you detect the issue? It's likely going to manifest itself first in _other_ requests that got blocked taking an excessively long time to respond.

From an operational point of view, that's a nightmare scenario. Every now and again you're going to see slow requests in your logs but when you dig into it you find no problems. And that's going to make you very uncertain on how you're doing when it comes to scaling up; instead of a gradual slow down across all requests, you're going to see erratic spikes and jams.

...which, it occurs to me, is not unlike how traffic jams form - https://www.youtube.com/watch?v=azmcu1cn2vg - https://traffic-simulation.de/


I understand all the things you're talking about, and I agree, you can block the main thread (and Elixir doesn't rely on OS processes, so it's awesome...). I do disagree with the take on await/async, but that's a different discussion.

Is the article insightful? Mostly, yes.

Did Node and JS have to be painted this way? Probably not.


From the title, I was expecting the article to be about how, where, and why developers can move off Node.js. However, the arguments are not very solid and there is no migration path. We all know Elixir is great, but its adoption and maturity are low compared to JS.

Yes, Node is not perfect but (with TypeScript added) tell me a development platform that can run under 100 MB of memory, handle most requests under 50 ms, can spin up a http service in a day or so and still perform well under load (and scales horizontally with ease).

On top, any organization can hire JS devs with relative ease. And the same devs can also use the same skills to build a frontend.

I am not a JS/TS/Node fan but unfortunately it seems we do not have a better story to sell right now and we are stuck here for a while.


> tell me a development platform that can run under 100 MB of memory, handle most requests under 50 ms, can spin up a http service in a day or so and still perform well under load (and scales horizontally with ease).

What about Python?

Flask will happily return responses in hundreds of microseconds or low single digit milliseconds if you're just rendering HTML templates. Throw in some DB queries and it's really easy to keep responses under 50ms.

Scaling horizontally isn't too bad because it's just stateless web servers and scaling vertically is easy because popular WSGI servers like gunicorn and uwsgi support the notion of internally load balancing X number of processes and from your POV all you need to do is tell it how many to use (1 config setting) and it does the rest.

Asynchronous workloads are no problem because you can spin up Celery. While it's not the actor model of Elixir, it gives you a nice level of abstraction because you send a task to the worker and it feels like synchronous code from your end. The mental model is great.

A gunicorn web server (a WSGI server for Python) will use less than 100 MB of RAM too, even for a moderately sized app (a few dozen packages, thousands of lines of code, etc.), but this number will get higher based on how many processes you decide to run. However, it's very predictable.

But I'm not sold on using 100mb as a benchmark because for $15 / month on DigitalOcean you can get a 2 core / 2 GB machine. 2 GB is plenty of memory to run a decently popular Flask app with a few processes + nginx + celery + postgres + redis. You'd have no problem serving hundreds of thousands of daily page views on a machine like that as long as you're not doing anything too out of the norm. Basically a typical site that's mostly DB reads but has a good amount of DB writes and you have good database indexes.

And for $20 / month instead you can double your RAM to 4 GB and now suddenly you can do the same with a Rails app including using an ActionCable service to handle thousands of concurrent websocket connections.

With the way hardware is nowadays life is amazing for web development because in so many cases you can be using any popular web framework and comfortably serve a huge range of web apps for $10-20 bucks a month all-in.


Or if you wanted async request handling, you could use Starlette.


> Yes, Node is not perfect but (with TypeScript added) tell me a development platform that can run under 100 MB of memory, handle most requests under 50 ms, can spin up a http service in a day or so and still perform well under load (and scales horizontally with ease).

It's probably a rhetorical question, but I would suggest Go as another alternative.

Naturally, you wouldn't want to write the frontend in Go and it also has the "not perfect" quality.


> tell me a development platform that can run under 100 MB of memory, handle most requests under 50 ms, can spin up a http service in a day or so and still perform well under load (and scales horizontally with ease).

I'll plug Vert.x here, but just about anything works these days, depending on your workload.

> can spin up a http service in a day

Surely this is a typo?


I know Vert.x has a reputation for being lightweight, but 100 MB for a JVM webapp seems unrealistic to me.

For instance, a barely used Jenkins needs 200MB.


Why? Use NestJS, AdonisJS, or use an existing boilerplate. You may not have everything but a simple endpoint can be up and running pretty soon. Use Heroku and / or Lambdas and your prototype is deployed and running.


You can do this with any web framework.


You are right, pretty much any modern web framework. So my original point still stands :)


I'm not exactly sure what you are arguing. You asked us to give you examples of frameworks based on your criteria ... which just about anything will meet these days.


Your original point was that node.js has an edge because it has something nobody else has - low memory footprint, low latency, quick time-to-market, scalable. If the response is that "any modern web framework" can do it, then your point does not stand at all.


Sorry, communication gap. I was only referring to the part -

"> can spin up a http service in a day

Surely this is a typo?"

I thought we were contesting that point only.

I agree that a lot of frameworks can do that. On top of that, Node/JS also lets developers build web, native(-ish, with RN), and desktop apps with great tooling, libs, etc. This combo is hard to beat.


It seems to me that a lot of frameworks can also do the other things! Even Java (though not for a lot of entrenched codebases) has frameworks sporting millisecond startup times and latencies nowadays, with very low time-to-develop.

I would also quibble that node.js is not that easy to scale horizontally, at least not notably easier than any other single-threaded runtime.

IMO the only edges that node.js has are:

* some people know javascript

* for a dynamic language, the runtime is quite performant - you get more out of each core, and that's cool

* I agree with the ideal that you can run the same language on the front and backend, but I have found in practice that two things are true. First, that they are actually different languages, with different runtimes and libraries. Second, the holy grail of e.g. server-side rendering your React app and having the browser take over represents a minuscule minority of JS deployments.

Nothing here means node.js is a bad platform, obviously, but it also is a platform I would only choose because I knew JS and was working with others who knew JS, or I was REALLY committed to the dream of isomorphic Javascript. It doesn't have any characteristics that would cause me to suggest non-JS developers go learn node.js.


I think the confusion is around the phrase "spin up," which usually refers to the starting of a process or machine. I think what you meant is that developers can write an HTTP service in a day--referring to the speed of development for Node.js.


Vert.x is flat out amazing. For node developers, I recommend them taking a look at using it with Kotlin. A Typescript developer can feel awfully "at home" writing a Vert.x/Kotlin web service.


.NET Core can certainly run in well under 100MB, handle requests in under 50ms, get a service up quickly, and perform well under load. You also have async/await and type safety, and it scales horizontally. It's not JS/TS, so the skills are different and that's not ideal, but if you're in TS and squint, you're looking at C#.

Also far from perfect but I do think it's now a viable option as a replacement for node.


Thanks for sharing this. I plan / hope to look into C#/.Net Core if life permits.


PHP by a mile.

It's performant. It has a bad rep because of its history, but every release makes it better. And it has a nice architectural model where, in most scenarios, the whole app is set up and torn down with every request. Your whole app is a response object.

There are nice, mature frameworks like Symfony or Laravel that make it really pleasant to use.

Pair with FastCGI and nginx and you can have a real killer.


Uhm, CGI? Those are not at all impressive metrics.


ASP.NET Core does all that just fine. Uses ~83 MB of memory when idle but easily handles the rest


> tell me a development platform that can run under 100 MB of memory, handle most requests under 50 ms, can spin up a http service in a day or so

This is a strange set of requirements for a platform.


"And the same devs can also use the same skills to build a frontend".

This is such an overstatement. Knowing the syntax is just 10% of the story.

There is literally nothing in common between, for example, React + styling + SEO + browser knowledge + good standards for the web, and a backend Express.js API talking to some DB while following good practices for a scalable backend (e.g. the 12-factor methodology).


Pretty sure all major stacks, including PHP/Python/Ruby, can get good performance. Having one good devops person is way more important than which language you pick nowadays.


PHP with any modern framework. Comet / Workerman / Swoole are capable of handling tens or hundreds of thousands of requests with up to 1 ms latency.


Compile a Java/Kotlin app with GraalVM and you can have a <50 MB app with near-instant startup that can outperform anything Node can do.


I benchmarked Node.js on V8 vs Node.js on GraalVM (Graal Node) and GraalVM performed worse due to floating point issues and used more memory.


How about Go?


Yes, Go is a pretty good alternative. But you will still need to hire "frontend" devs who can code in JS/TS. Go will perhaps give you a more performant backend, so plus one and minus one. Also, I'm not sure how easy or hard it is to hire experienced Go developers.


> And if only life were so simple:

Life is pretty simple, just add --experimental-repl-await:

    > await Promise.resolve(42)
    42


Right now I’m trying to unit test an API on node that does a database query.

I’m using express, mocha, supertest and typeorm.

Mocking the database is fucking black magic. So I gave up and simply use an in memory SQLite db and add some records in the test body for my API to fetch and return so I can test.

I’d like the db to basically get nuked and recreated after every test.

So I use “afterEach” to close the connection and create a new one.

No matter what I do, when I call the api endpoint using supertest it tells me my DB connection is closed.

You’re thinking maybe the previous test didn't reopen it or something. But there's only one test.

It’s like it’s closing it _before_ the test runs. As if afterEach is not waiting on the result from supertest (yes I awaited).

I spent a _whole day_ today trying to understand just what the fuck is going on.

Gimme goroutines _any day_.
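
For reference, the shape I'm aiming for looks roughly like this (chai, app, and the Trade entity are stand-ins; no promises it cures the closed-connection error, and one thing worth checking is whether the app caches a connection/repository at module load, since reopening in beforeEach won't help if it does):

    // A minimal sketch of per-test DB setup with mocha + supertest + TypeORM's
    // in-memory SQLite. All paths and entity names below are hypothetical.
    const request = require('supertest');
    const { expect } = require('chai');
    const { createConnection, getConnection } = require('typeorm');
    const app = require('../src/app');             // hypothetical express app
    const { Trade } = require('../src/entities');  // hypothetical entity

    describe('GET /trades', function () {
      beforeEach(async function () {
        // Fresh in-memory DB per test; closing it later wipes it completely.
        await createConnection({
          type: 'sqlite',
          database: ':memory:',
          entities: [Trade],
          synchronize: true,
        });
      });

      afterEach(async function () {
        // mocha waits on the returned promise, so this shouldn't run mid-request
        // as long as the test itself awaits the supertest call.
        await getConnection().close();
      });

      it('returns seeded trades', async function () {
        await getConnection().getRepository(Trade).save({ symbol: 'BTC' });

        const res = await request(app).get('/trades').expect(200);
        expect(res.body).to.have.lengthOf(1);
      });
    });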


Testing with the database has to be one of the things I love the most about Ecto. No need to mock anything, no need to have special branching logic for test versus prod. Just use the database like you normally do. It gives you a degree of confidence that is on another level. Check out https://hexdocs.pm/ecto_sql/Ecto.Adapters.SQL.Sandbox.html for more info!


Testing with the database isn't particularly efficient. If you have enough tests, it becomes worth it to mock whatever would touch the database just so the tests pass in a reasonable amount of time.


> Testing with the database isn't particularly efficient.

Anecdata, but my experience with Ecto sandbox + thousands of tests says otherwise. Mocking would be faster? Probably. But IMO not worth the effort, given it's already fast enough.

To be fair, I have worked with Elixir codebases that generated humongous amounts of data per test, only to exercise a fraction of it. That kind of test was slow, but it would be much more efficient to generate only the data you actually need.


> But IMO not worth the effort, given it's already fast enough.

Not only that, but if you're testing with the database, you get to test your queries. That seems worth it being a bit slower.


Maybe not relevant to your issue, but sqlite in-memory databases can be tricky. IIRC closing the connection wipes it out. Creating a new connection means creating a new fresh database.

Tried it on-disk?


I'm just getting into Node, coming from only having used PHP and then Laravel. Is it a bad time? I'm using Sails.js for rapid prototyping.


No. Node is very widely used, it does work, and of course: A thing nobody criticizes is a thing nobody uses. I think you should be fine.


Except that the criticisms of Node are many and widespread.


I think you’ll probably be fine with node too. Different people gel with different languages for all sorts of different reasons.

I had done c++, java, bash, python, but it wasn’t until javascript, and specifically NodeJS that I really started enjoying programming.

I would start out with a lighter web framework than sails if you are just starting out, especially if you are new to web programming.

I initially started with sails too but found it was always getting in the way, making it difficult to figure out what was going on. Since it used express under the hood, I switched to Express and it was immediately 10x easier to build stuff. That was my experience.

These days I would probably be totally fine using any framework, but I’ve been mostly happy sticking with express.


No you’ll be fine. Huge amounts of the internet are built with node.

I’ve learned the hard way that you never want to pick a more obscure solution just because it's technically better. The size of the community is just as much a feature as language syntax is.


What does Sails.js offer you over Laravel?


The thing I dislike about message passing vs async/await is that it makes it easy to understand each individual process (which sends and receives messages), but harder to understand the whole system.

Assuming your CPU load fits on a single thread, a single-threaded event loop with async/await makes the whole system easier to understand because:

- Using functions to compose computations keeps the connection between functions visible both at write time in your IDE (jump to def) and at production time in stack traces.

- With message passing systems, you only get the stack trace from when you received the message.

- Having a single thread means you are not sharing memory between threads.

- You can combine results using Promise combinators, which form an await tree.
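
To illustrate the combinator point, a small sketch (fetchUser and fetchOrders are stand-ins for real async calls):

    // Independent work fans out, then joins; the awaits form a tree you can read
    // top-down, and async stack traces point back through loadDashboard.
    const fetchUser   = async (id) => ({ id, name: 'Ada' });           // stand-in for a DB call
    const fetchOrders = async (id) => [{ amount: 10 }, { amount: 5 }]; // stand-in for a DB call

    async function loadDashboard(userId) {
      const [user, orders] = await Promise.all([fetchUser(userId), fetchOrders(userId)]);
      return { user, totalSpent: orders.reduce((sum, o) => sum + o.amount, 0) };
    }

    loadDashboard(1).then(console.log); // { user: { id: 1, name: 'Ada' }, totalSpent: 15 }
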


The author took way too many words to say that message passing between processes is a more grokable form of concurrency than callbacks. He then goes on to say that Node is less grokable than Erlang because concurrency in Node relies heavily upon async/await, which is just pretty callbacks. There are some weak analogies to the importance of data structures mixed into the text which aren't helpful.

I think BOTH models of concurrency have their uses. The only problem I see is not being able to apply the appropriate one at the appropriate time because the language doesn't support it.


> What you end up with is a paradigm for concurrency that a human can easily map to and reason about. I always feel like I'm thinking about concurrency at the right layer of the stack.

Interesting. Is this a common view in the Elixir community? It's a hard problem to find a good abstraction for and I think it's only really possible to know if it holds up as a good abstraction after a fair bit of use. Can anyone corroborate this perspective? Is there something helpful about the module layer specifically?


Yes.

When I open a module I haven't seen before, and the first line that I read is:

    use GenServer
I immediately know how this module behaves in terms of (distributed) concurrency. If I can find where this module is started, I also know how it handles when things go wrong (via supervision trees).

GenServer supports three ways of communicating:

    * call -> handle_call (sync)
    * cast -> handle_cast (async)
    * send -> handle_info (async system messages, signals etc)
Here's an example:

    def send_greeting(pid, greeting, from) do
      GenServer.call(pid, {:receive_greeting, greeting, from})
    end

    def handle_call({:receive_greeting, greeting, name}, _from, current_state) do
      IO.puts greeting
      {:reply, "Hello #{name}!", current_state}
    end

    ------------------------

    {:ok, mikes_computer} = Greeter.start_link(:mike)

    response = Greeter.send_greeting(mikes_computer, "Hello Mike!", "Joe")

    IO.puts response
    > "Hello Joe!"
There are more ways to implement concurrency than GenServers, but it's the combination of GenServers and Supervisors that puts the BEAM and OTP head and shoulders above everything else (in this very specific domain of soft-realtime distributed servers).


I'm not a node.js programmer by trade, so maybe I have this all wrong, but I thought the main selling point of the language was that it is fast.

The thing is, everything in computer science comes with a tradeoff: the tradeoff in this case seems to be that, in gaining speed, now you are sacrificing a layer of abstraction that would normally be hidden by the operating system.

What I mean by that is that instead of just letting the operating system handle putting a thread to sleep when it blocks, and then waking it back up when it's ready (for example during a database query), now you have to start explicitly handling that blocking yourself in the form of callbacks.

It seems like the author is realizing that maybe that is not really ideal.


Yeah, I think that's not quite right. I think the main selling point of Node has always been that it's Javascript and thus you can use the same language, and people, to do both frontend and backend development. Node is not fast and it's not trying to compete with compiled languages in that regard.


FWIW I have _heard_ the pitch that node.js is fast, here on HN and elsewhere, but most times it's fast compared to like... Ruby. Which is fine, but that's not the only competition in town for node.js.


In some years: I escaped Elixir.

Use the right tool to get the sht done.


So true. People pick new stacks all the time and then complain when, after 10 years, it's boring and there are shinier or more suitable tools out there. Now, Node wouldn't be my first choice, but I'm sure it's good enough for most purposes, easy to find people to hire, etc. There's absolutely no reason to do a rewrite to Java, imo. I don't buy the "because types" argument.


Javascript these days is a very productive language, especially with Typescript. Node however, with its lack of a remotely passable standard library, is not doing it justice.

Async/await is amazing, and top-level await works just fine in a browser console. The fact that Node still has it behind a flag is... I mean, they surely have their reasons and I don't know enough to question the maintainers' decisions, but it feels detached from modern times when you're using it.

Node is not the web; I don't know why they hesitate so much before breaking anything even remotely. I'm not advocating moving fast and breaking things, mind you; I'm just saying that the io.js kick maybe needs to happen again.


I think the author's argument wasn't that JS was a 'productive' language but that it was an overall poorly designed and bad language.


So my argument is that a significant portion of the poorly designed part comes from node.

It's also not a bad language. It's okay for its main purpose: Creating web applications. If people start using it to write distributed systems, async/await won't save you from any troubles you might have.

Next.js is a perfect example for what JS is good for. However, the super lacking standard library frustrates me to no end.


It's not bad. Would I rather use anything else if possible? Yes.


Funny story— Along with a friend who has at least a decade more development experience, I have been building a cryptocurrency trade execution platform. He started it in Node, and I picked up where he left off not even knowing what a Promise was. Fast forward a year, and after taking a distributed systems course that required me to use Elixir for all the projects I demanded that we abandon Node for Elixir.


>I have been building a cryptocurrency trade execution platform. He started it in Node, and I picked up where he left off not even knowing what a Promise was.

Sounds secure


Security = don’t leak the API secret. It’s not that hard.


oh dear


Which course is that?


From the article: "JavaScript is evolution". Exactly. At any moment there may be some alternative that is better in some ways, but unless it also evolves, it is not a long-run contender. I don't love JavaScript - it can be a huge and gigantic pain (this, ==, null, undefined, etc.) - but it continues to get better and is still, for me, the most productive language I have used.


What alternatives are you thinking of that aren't evolving or improving?


The issues described here are overblown and mostly non-issues/nitpicking. But I'm not gonna write counter-points here; plenty of those are already online. Some of the commenters here will write them too.


TL;DR: async programming does not work well from the command line.


As of Node 14.8, you can use await at the top level. In previous versions (not sure for how long), it was hidden behind a flag. https://nodejs.org/en/blog/release/v14.8.0/
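
For example, in an ES module (a .mjs file, or "type": "module" in package.json) something like this should run without any flag:

    // top.mjs - top-level await, no wrapper function needed
    const value = await Promise.resolve(42);
    console.log(value); // 42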


Yeah, that's also my takeaway. I've also run into this when debugging stuff inside the Dev Tools console, but it's not really an issue when writing code.


And node.js's approach to concurrency is a mountain of accumulated error.


> data structures

Spending too much time upfront thinking about the perfect data structure is a bad idea.

Long walks on the beach and fasting are not going to deliver your perfect data structure.

People (other than yourself) using your product and observing evolving requirements over time will reveal an optimal data structure.

Rapid iteration and real-world usage is the key. I can’t count the number of times I thought I had the perfect design only to implement it and not find it useful to myself - after having thought so long about it.

> node

Node is great. And it’s great at iterating quickly which should be the goal of any project.

The post comes off very naive and a classic grass is always greener vibe.

> And it sounds like the exact kind of algorithmic kludge a programmer would introduce if her underlying foundation had a few flaws.

Why use a gendered pronoun here? It’s a bit unfair that the only non-specific gendered pronoun relates to a bad programming practice.


> Why use a gendered pronoun here?

Alternating pronouns is a convention in English used to emphasize the "randomness" of the gender. [1]

[1] https://english.stackexchange.com/questions/279494/do-any-st...


> Why use a gendered pronoun here? It’s a bit unfair that the only non-specific gendered pronoun relates to a bad programming practice.

Spoiler alert, but making mistakes isn't just a male issue. Real representation means taking the bad with the good.


I agree. Women also code, which means we also make mistakes.

Equality means no mercy



