Vert.x (JVM async) vs Node.js Http benchmarks results (vertxproject.wordpress.com)
151 points by purplefox 1779 days ago | hide | past | web | 118 comments | favorite

Interesting results.

While it's not surprising to me that the pure Java implementation was far and away better, since: 1) Java is much faster than JavaScript, and 2) Netty (which Vert.x is based on) had been doing high-performance async IO in extremely demanding environments for years before node even existed.

What is surprising, however, is that JavaScript on the JVM w/ vert.x is faster than V8 with node. In both cases I would assume not much JS is executed, since in Vert.x most of the executing code is still Java, with a tiny piece of JS scripting on top, and in the Node.js case, I was under the impression that most of the executing code would be the C code that powers its HTTP server.

Does anyone with knowledge of the node internals know what's going on here? Is Node's HTTP parser just slower? Is its reactor not as efficient?

I would also be interested in the internals. I'd expect JS + a small C event loop to outperform compiled JVM code + lots of async Java I/O libs (for a micro benchmark, at least).

But there's no question Netty is fast even on a non-benchmark. I'm using it for a long-polling server with upwards of 100K connections, and my CPU doesn't go much over 5%. Besides memory usage, the main limitation has been proper kernel configuration.

I was surprised. We built 'hellod' servers in numerous languages and with various runtime stacks[1]. The results are here and might surprise you:


[1] C/libev, Clojure/Aleph, Clojure/Jetty, Erlang, Go, Java/Netty, Java/NIO, node.js, JRuby, MRI Ruby

Very good point.

V8 does heavy code optimization, and the Node.js http server code should be well optimized as well. And maybe the load balancer for the multicore Node test isn't optimized enough. Thus, these results feel a little shady. Anyway, Vert.x was the trigger: I'm finally downloading the JVM (JDK) to try Vert.x (and maybe later Clojure).

But there's still one major drawback: the non-existent ecosystem. I know the advice is to look for libs from the Java world, but I need a concrete and precise guide on how to do this. Let's say I want to plug in some Java lib for image manipulation. How? And who will guarantee that these libs will be concurrent and/or non-blocking as well? The lib developer or Vert.x? At the moment there is, except for a few out-of-the-box modules, nothing. No 3rd-party libs, no module manager a la npm, and no guide or documentation on how to glue Java libs to this Vert.x thing. Nothing. Correct me if I'm wrong.

It's funny, the path software technology takes. A Java clone of a fairly basic web engine (which is a fairly simple application type to begin with) convinces people to try the JVM, which is the most performant managed runtime around and definitely one of the most advanced pieces of software ever created. I'm sometimes amazed that people coming from easy-to-use web languages have never been exposed to it, and that if they have, they still have their doubts.

Dude, I've been developing real-time military applications and we've moved from C/C++ to Java and got performance gains (I'm not saying that C can't beat Java doing specific computations - of course it can - but when building a large application with millions of lines of code and a large team of developers, Java would be a safer bet than C++ if speed is your concern). In other words, no other environment can beat the JVM.

And, yeah, you should try Clojure. It's a cathartic experience.

EDIT: The only doubt I have about vert.x is whether it can consistently beat "old" JVM servlet containers under heavy, real-world loads. That remains to be seen.

I agree with you. But I think you misspelled Clojure - it starts with S - Scala.

(I'm not actually serious about the last bit).

Non-existent ecosystem is a harsh assessment, especially given the massive amount of Java code out there. Any jar file can be used, so the Java ImageIO package can easily be leveraged. It's early on for this tool chain, but all you have to do is get a 3rd-party jar file on the class path of the server, and you can directly call Java methods from JavaScript.

    importPackage(java.io);
    var file = new File('/blah/blah/blah.txt');

Other languages would be equally easy as well. It's buried, but here is some relevant information from the docs on integrating 3rd-party libs:

-cp <path> The path on which to search for the main and any other resources used by the verticle. This is ignored if you are running an installed module. This defaults to . (current directory). If your verticle references other scripts, classes or other resources (e.g. jar files) then make sure these are on this path. The path can contain multiple path entries separated by : (colon). Each path entry can be an absolute or relative path to a directory containing scripts, or absolute or relative filenames for jar or zip files. An example path might be -cp classes:lib/otherscripts:jars/myjar.jar:jars/otherjar.jar Always use the path to reference any resources that your verticle requires. Please, do not put them on the system classpath as this can cause isolation issues between deployed verticles.

Thanks for the clarification.

Assume I use this ImageIO package. Will it be non-blocking, concurrent, and spread across all my cores automagically with Vert.x?

Since they are not limited by a single threaded VM, you would run those using the worker pools and communicate with them via an actor type model similar to erlang. So, modulo a small amount of verticle code to deploy your image processor, yes.

Not automagically, but java has some really good tools for helping. See the http://docs.oracle.com/javase/7/docs/api/java/util/concurren... package. Check the Executors section and you can easily distribute work via threads to all your cores. In fact, vert.x is using netty, which makes heavy use of threads along with async IO. You can do both.

You don't need to necessarily have non-blocking libraries to take advantage of the async I/O. Netty uses worker threads to prevent blocking the server event loop and I assume Vert.X does as well. You can do blocking JDBC queries, Redis, etc.

As far as concurrency, most libraries state in the docs what is threadsafe and what is not, so this is more or less "caveat emptor" and RTFM.

Vert.x has its own module subsystem which appears to package up .jar files and some glue code for the Vert.x-specific interface, but I assume you can call any included Java classes directly: http://vertx.io/mods_manual.html

(Interestingly enough, Vert.x uses Gradle for its build/dependency manager, but I guess wanted something more Node-like for its module system)

We know the java libraries are there for lots of things, but Node is all about async and handling thousands of connections. It does this by forcing the entire ecosystem to be async too (including things like database drivers).

By using a Java JDBC database driver you're completely losing any async support. Same presumably goes for redis or Mongo drivers. You can do some of the work with threads and pooling, but it's still not the same, and makes this another useless micro benchmark.

The value of this is far overstated. You can think of your server as being composed of async queues and thread pools, even in node. About the only thing in a stack that truly is async is connection handling via epoll. Mysql itself is a big threadpool, bounded by the number of cores and table locking.

The jvm has amazing threading support, doubly so if you use it with a language like scala or clojure. You can and should handle the connections asynchronously and use a thread pool for things like db access. It works well, people have done this with the jvm for years.

Node's API is async. Under the hood everything is done via threadpools, same as in Java or any other stack. Your hardware knows how to run threads; that's all. Whether or not that's what's exposed to you as a programmer is a different story.

You don't actually get a performance boost from Node being "async". Node's async abilities simply give you transparent access to threads that are otherwise unavailable with javascript, and it's the threads giving you performance.

I don't think this is true. Nodejs uses epoll/kqueue/select etc to multiplex access to multiple file descriptors from a single main thread.

The async API is actually a price to pay (spaghettification).

For example, the Go language took a different approach: it created a cheap thread-like construct which doesn't incur the biggest overheads of classical threading (namely pre-allocated linear stacks, and context switching/sleeping requiring a system call; all this aided by the compiler), and a cheap means of communication (channels).

Then, the whole core IO library was written using a multiplexing async model (epoll, etc.), which communicates with the user-facing part of the library via channels. The result is a blocking-style API which under the hood behaves like an async implementation.

A similar goal is also met by http://www.neilmix.com/narrativejs/doc/ and other JavaScript 'weavers' which convert "sequential-looking" code into callbacks.

Yes, but at the end of the day, underneath it all, you gotta have threads because that's what the hardware understands. Even if you use hardware interrupts to detect IO, you still need a plain old thread to handle it. The only difference between various languages and runtimes is how you distribute tasks among the threads. Some environments provide green threads that have a partial stack, but even they are handed off to a thread pool (or a single thread) for execution.

It's been found that if you employ only a single thread (that can run any number of tasks) you get a performance boost over using a larger threadpool under some conditions, but a single thread wouldn't let you scale by taking advantage of all processors.

I feel that the cause of the misunderstanding is that "thread" in this context usually means "thread-based IO": when a thread issues an IO request it remains blocked until the request returns, leaving CPU time to other threads. All this regardless of how many processors you have; it works perfectly fine on a single processor.

Async IO is different; it's a different pattern of access to IO, and as such it's orthogonal to whatever threading or multiprocessing is going on in order to actually do stuff in response to that IO.

> It's been found that if you employ only a single thread (that can run any number of tasks) you get a performance boost over using a larger threadpool under some conditions, but a single thread wouldn't let you scale by taking advantage of all processors.

Indeed. Node.js's solution to this problem is to have a cluster of Node.js processes and a dispatcher process on top. So multiprocessing is done the "old way".

In that case, Java gives you both options: blocking IO and non-blocking, polling IO. Netty can use both, but most people use it with the non-blocking option. Experiments have shown that sometimes one is faster and sometimes the other.

Umm, hardware knows NOTHING about threads. Threads give you a very fake view of the hardware. Everything about threads is an emulation over the hardware layer, hence why they have a large memory overhead.

The CPU is aware of a thread's instruction pointer and stack pointer (that's how some CPUs are able to support hyperthreading). Perhaps it's possible that the OS could somehow manipulate that to implement threads that are not as heavyweight as "common" threads, but I'm not aware of any OS that does that. Threads are the only multiprocessing abstraction provided by the CPU and the OS (although now there are some new abstractions for GPUs).

Vert.x has a hybrid model.

It has both event loops and a background thread pool, so you can choose which to run your task on depending on what kind of thing it is.

E.g. it's stupid to run long running or blocking actions on an event loop.

> forcing the entire ecosystem to be async too

You say that like it's a good thing. I'd much rather have the choice between async and sync.

The problem comes when you mix the two. Try it. Try scaling it (to a hundred thousand connections). I've done it and it doesn't mix.

Sure programming-wise it's nice to have sync. No argument there.

> It does this by forcing the entire ecosystem to be async too (including things like database drivers).

Yes, that's exactly Node's main selling proposition everybody forgets when presenting their next Node.js

I would call it "node.js's largest implementation issue". It's not that JavaScript gives you another choice, yet you make it sound like it was a principled decision.

Other platforms/languages have real concurrency constructs and don't suffer node's limitations.

Well, no, you could have done all of these things synchronously, and in fact JS would have preferred it, because JS is intrinsically single-threaded. Ryan Dahl's stated inspiration for Node was that he struggled with a certain slowness in Ruby because it blocked for everything, so he tried to build an entire platform that simply wouldn't let you sleep(). You can go listen to his talks; they're on YouTube. It was a principled decision.

I don't know whether making JS single-threaded was a principled decision -- if anything it was presumably the KISS principle at work. However, it was actually a ridiculously nice choice to offer a single-threaded-asynchrony model. It sometimes gets in the way rather obtusely -- Firefox can still (if very rarely) fail to introspect and then crash when some ad script on your page goes into an infinite loop! -- but on the whole, it is very nice to always know that while I'm in this function, modifying this variable, nobody else can interfere.

With that said, I also think good concurrency support is indeed missing, and that it will probably enter the language at a future time.

Actually that's not a "selling proposition".

At best it's "making a virtue out of necessity".

>We know the java libraries are there for lots of things, but Node is all about async and handling thousands of connections. It does this by forcing the entire ecosystem to be async too (including things like database drivers). By using a Java JDBC database driver you're completely losing any async support. Same presumably goes for redis or Mongo drivers. You can do some of the work with threads and pooling, but it's still not the same, and makes this another useless micro benchmark.

Sounds like you're quoting from the "Node.js Is Bad Ass Rock Star Tech" satire video ( http://www.youtube.com/watch?v=bzkRVzciAZg ).

Nothing inherently special about "forcing the entire ecosystem to be async too", especially since Node is more or less FORCED to do that, because javascript is single threaded.

Add the bad callback spaghetti implementation of async, and the main benefit of Node is easy deployment, and accessibility to the millions of javascript programmers.

As an async environment it doesn't offer anything either new or too compelling.

> As an async environment it doesn't offer anything either new or too compelling

That isn't strictly true.

If you focus on the traditional scripting languages as your competition (Ruby, Python, PHP, Perl), then you start to realise that Node.js offers a similar language structure (dynamic, no compilation, etc.) with the benefit of thousands of concurrent connections (which those languages will do with certain modules), while forcing all the libraries to also be async (which those languages DO NOT do).

At my last job I had to build an SMTP server capable of scaling to 50k concurrent connections. Building this in Perl was fine, except for any library I wanted to use: all of the libraries were synchronous. So I wrote Haraka, which Craigslist is now using as its incoming SMTP server.

If you compare all that to Java you get slightly less performance but probably lower memory requirements. And that's OK. Different strokes for different folks.

No, not really. You can happily run blocking and non-blocking code on the JVM without problems. The inability of JavaScript to do that created the need to make everything asynchronous, not the other way around.

> Let's say I want to plugin some Java lib for image manipulation. How? And who will guarantee that these libs will be concurrent and/or non-blocking as well?

What? What do you mean by non blocking in this context?

>Let's say I want to plugin some Java lib for image manipulation. How? And who will guarantee that these libs will be concurrent and/or non-blocking as well?

Well, this is the JVM, not a single threaded javascript engine. As the other guy says below:

"You don't need to necessarily have non-blocking libraries to take advantage of the async I/O. Netty uses worker threads to prevent blocking the server event loop and I assume Vert.X does as well. You can do blocking JDBC queries, Redis, etc."

And threadsafe libs -- of which there are tons -- usually advertise it on the box.

The polling loop and I/O interfaces in Vert.x are integrated into the VM, whereas with Node.js, you're calling out to libuv/libev/etc. That gives the JVM an advantage. Really, how much actual JavaScript code is V8 JIT'ing at all?

Really what you are comparing here is the efficiency of the paths between the two JavaScript engines and their polling I/O libraries, more than any JIT work (and let's face it, even if we were comparing JIT work, the Sun JVM has had a lot more time to tune and tweak than V8 has). In Node's case, it's talking through libuv, which means it has to transition in and out of the V8 engine's runtime and into a C runtime. Even if the C code is super zippy, that's costly. For Vert.x though, all the I/O and polling is integrated into the runtime/VM/JIT. That's kind of nice.

An interesting way to test what I'm suggesting would be to do the same test with unix domain sockets/named pipes as the transport instead of TCP. The JVM doesn't have a native implementation, so it'd have to call out through JNI. I'd wager even odds it ends up slower than TCP on localhost.

Ah, but is Vert.x web scale?

I realize you're probably joking (at least, I hope so) but just to talk to the scalability point, I've had much better success scaling Netty (which Vert.x is built on) than Node. I've been able to scale Netty to more concurrent connections and Netty performs much better under load than Node. Just my experience.

> Ah, but is Vert.x web scale?

Better write a Mongo-backed fibonacci generator to check.

If you read the docs, you'll see we specifically mention the "Fibonacci" farce.

Vert.x (unlike node) does not force you to do everything on the event loop. It has a hybrid model.

For things like long running calculations (e.g. Fibonacci) or calling blocking APIs, we support running them on a background thread pool so you don't end up doing stupid things on an event loop which are not appropriate for it.

Using this benchmark: https://gist.github.com/2652991

node.js beats Vert.x, for example. People shouldn't be so quick to accept these half-thought-out microbenchmarks.

Be mindful, though: you always have to look very carefully at benchmarks.

I don't think these are speed tests.

Just the number of connections that can be handled.

I think you need to go back to the benchmarks and take a close look at the "Requests/Second" part.

These benchmarks are nothing but speed tests.

Ah, missed that.

Perhaps a dumb question: Are the headers the same? With such small files, the header layout is suddenly very important for your measurement.

I've seen my share of web servers which are very different in their compliance.

It's not a dumb question at all.

Netty has support for zero-copy file responses, I imagine that's at least a large part of the performance discrepancy.

A rigorous benchmark must seek to explain the difference in performance, not just hand-wave saying "such-and-such is faster". Otherwise, you have no way of validating (for yourself, let alone demonstrating to others) that it's not a misconfiguration or a flaw in your benchmark.

Just goes to show that when the node.js fad is over, the enterprise world will keep its high availability servers running on proven technologies.

Node.js is not a fad. It represents the first workable JavaScript-based server with mass appeal. The real story is that JavaScript is here to stay. There have been other server-side JavaScript frameworks in the past, but none of them have taken off like Node.js. If Vert.x wins over the JavaScript crowd, then that's great, because coders will be able to write JavaScript.

I think it's actually unfortunate that the first popular server-side JS framework is so tied to the async model. I'd be more likely to try it if it weren't.

>Node.js is not a fad. It represents the first workable JavaScript-based server with mass appeal. The real story is that JavaScript is here to stay.

Javascript yes. Node.js, not so much.

>There have been other server-side JavaScript frameworks in the past, but none of them have taken off like Node.js.

Javascript was not a fast language in the past, nor was it much used for anything more than the most basic dynamic html stuff (rollovers etc)...

> nor was it much used for anything more than the most basic dynamic html stuff

You seem to have missed Netscape Enterprise Server (1994), HaXe, Helma, AppJet, Aptana Jaxer, Narwhal/Jack, EJScript, RingoJS, Flusspferd, and ASP.

No, the industry seems to have missed them. None of those were any big success.

(With the exception of ASP. But in ASP, Javascript was just one of the languages you could use, and not the most popular one).

What do you mean when you say "fad"?

Put another way: What evidence could you potentially see that would convince you it's not a fad?

I wouldn't dismiss the node.js stack quite so easily. Those "proven" technologies you mention had to go through their own lifecycle of continued improvement.

I remember a time when my colleagues who were steeped in C++ had a good laugh at my expense because I was building server-side web applications with a new framework and a hot, new language. It woefully under-performed similar C++ applications in benchmark tests.

It was 1998, and the language was Java. I could write my applications much faster and in a more maintainable way than they could, but they didn't care. Their technology was proven, and Java was simply a fad.

Not really. It shows that this benchmark is crap (likely benchmarking disk io versus disk io + some caching). Read Isaac's comment for more detail. He sums it up pretty well. No profiling info, using a custom test, no analysis besides some pretty graphs.

I have a hard time believing the JVM is really 10x faster than v8 for such a simple server.

Why would it surprise you that a statically typed language like Java on the highly optimized JVM is faster than Javascript?

Any benchmark you care to look at clearly shows the JVM is much faster: http://shootout.alioth.debian.org/u64/benchmark.php?test=all...

It would surprise me because there just isn't that much time being spent in JavaScript on this test. Do the math.

If a program spends 2% of its time in V8, 10% of its time in the network stack, and 80-something% of its time reading a file from disk, then how can you even consider making it 10 times faster by optimizing away the 2%? Even if the JVM were 100 times faster than V8, you would expect a speedup of just slightly less than 2%.

I.e., if you were seeing 1000 requests per second before, and you're spending 2% of your time parsing and running actual JavaScript, and you make the VM go to literally zero latency (which is impossible, but the asymptotic goal), then you'd expect each request to take 2% less time. So, they'd go from an avg of 1ms to 0.98ms. Congratulations. You've increased your 1000 qps server to 1020.4 qps.

On the other hand, if you take the 80% of time spent reading the file over and over again, and optimize that down to zero (again, impossible, but the asymptote that we approach as it is reduced), then you would expect every request to take 80% less time. So, your 1ms response becomes a 0.2ms response, and your 1000 qps server is now a 5000 qps server.

So, no, if you respond to 10x as many requests, it's almost certainly either a bug in the benchmark, or some apples-to-oranges comparison of the work it's doing. I called out one obvious issue like this, that the author is using a deprecated API that's known to be slow. But even still, it's not THAT slow.

You can't summon speed out of the ether. All you can do is reduce latency, and you can only reduce the latency that exists. Even if your VM is faster, that only matters if your VM is actually a considerable portion of the work being done. The JVM and V8 are both fast enough to be almost negligible.

The flaw in your argument is that the server is spending 80% of its time reading a file from disk.

It's more than likely that it spends close to 0% of its time in disk access, since it's serving the same file, which will be cached in memory by the OS.

About the deprecated API. Earlier on I updated the results so they don't use that API, and I also added results for using streams. The results are slightly better but not by very much.

I use node but I kind of felt that this sort of scenario should be pretty obvious before you use it. I never use node to serve up static files, I use nginx instead. Small static files will be cached by the OS, as you said, which makes subsequent reads really quick. Since this is a small text file, it compresses really well over the wire too, so the time to serve up the request is lowered too. There's simply not much I/O to be a bottleneck in this benchmark scenario.

I wouldn't say that this is an unfair benchmark. But then I don't use node because it's "web scale". I use it because using javascript on the server, client, and on the wire (JSON) is pretty damn slick.

I'm interested in checking out vert.x. But, and this goes for everyone: let's not let this whole affair degenerate. Right tool for the right job. This particular benchmark scenario is explicitly the wrong way to use node. I'd suspect that if you were to change the readFile into an HTTP request, however, the numbers might change. I also wouldn't be butt-hurt if vert.x still came out on top. There are still a ton of things to love about node.

Let's get on with our actual work now, shall we?

The statement, "The JVM and V8 are both fast enough to be almost negligible" is flawed. OS file system caching pretty much makes disk IO negligible, so the time spent in the JVM and V8 makes up the majority of the time. The benchmark is consistent with the system behavior, showing the difference between the JVM and V8 is not negligible but substantial.

The discrepancy in this benchmark is almost entirely copying the results of read(2) into a userspace buffer. OS caching is important, but it's not the whole story here.

As some other commenters pointed out, if you pre-load the file contents, the discrepancy goes away. Also, if you actually limit vert.x to 1 CPU, it gets erratic and loses any performance benefits.

Is it possible that his JVM is setup to execute with different ulimit settings than Node? The Node numbers look too suspiciously close to multiples of 1024 (e.g., default file descriptor limit).

EDIT: whoops, saw 4096 for the stream but it's 4896.

> I have a hard time believing the JVM is really 10x faster than v8 for such a simple server.

Honest question, why?

I cannot comment for deelowe, but every time in the past I've seen such a wide difference for such a simple benchmark, there was some methodology problem. After all, for such a simple benchmark, most of your time is spent in the OS.

It's possible vert.x really is that much faster, but given history I'm reserving judgement. I.e. until profiles and root cause(s) become available.

This is consistent with a benchmark I did a while back comparing Netty vs. Node.js. Not surprising since Netty powers Vert.x. Netty is pretty amazing. 500K concurrent connections and not batting an eye.

According to the code, it actually does a file read on every request -- this is certainly suspicious because some implicit caching may significantly change the results.

It includes a test with file streaming on Node.js, which bypasses reading the file and writing it to the socket via the JavaScript loop entirely.

Also, file IO is heavily cached by the OS. You can bet that one file is read from memory most of the time. Disk IO is pretty much out of the equation.

With regards to Vert.x, it seems like really cool stuff.

On the blog post about version 1 being released it mentions being able to mix and match several programming languages. Does this mean you can use different libraries written in different languages in the same Vert.x server?

Yes indeed. You can mix and match Java, JavaScript, Ruby and Groovy in the same app. We hope to support more languages going forward (e.g. Scala, ...)

Once you get Scala, I'll be heading your way. :-)

You already have Scala + async IO + Netty; it's called Play Framework 2.0.


I've used it. Not a fan of many (most) of their design decisions, though it's better than the current alternatives.

+1 for Scala

Might be nice to also see a FAQ about the differences between vert.x and Play 2

Hopefully Scala won't be too long :)

Yet another meaningless micro benchmark. There is no point in measuring a hello world http request in one framework vs. another.

There should be a larger application that even remotely resembles some kind of real world usage. Maybe some day we'll have some kind of a "standard" for a web framework benchmark, an application that actually does something so it's worth benchmarking.

Hardly meaningless. The problem is that the larger application benchmarks fall prey to accusations of "I wouldn't write it that way!". Their results are just as hotly disputed.

This micro-benchmark makes sense. It's benchmarking an HTTP server essentially. Any benchmark further than this would really be benchmarking the JVM and V8. While that would be interesting, I think in this case, this micro benchmark is OK as long as you know its limitations.

The reality is that most people's app code and databases are going to bottleneck long before either Vert.x or Node.js do. The main thing this benchmark clears up is that both of them are really, really fast, and that if you serve lots of simple-to-process responses you may want to go with Vert.x.

> This micro-benchmark makes sense. It's benchmarking an HTTP server essentially.

Kind-of true. But to give any results that can be applied to the real world, it would have to use a much larger payload than a hello world.

I agree the "micro" benchmark isn't something people should look to as a definitive answer, but I don't think they should be outright dismissed either. If nothing else, they should be a jumping off point for real testing.

I agree that micro benchmarks are kind of useful in spotting anomalous, bad performance in the very simplest of cases.

I'd upvote this more than once if I could. The beauty and efficiency of a web framework must be measured as a trade-off against the number of instructions executed to check for corner cases, etc. Anyone could write a framework that executes hello world faster than a popular one if they just assume, for instance, that no one will ever make a POST request or use a query string. That wouldn't be a very useful framework, but it could definitely execute GET '/' fast.



These micro-benchmarks aren't even a good kicking off point for comparisons, as the things that yield a trivial benchmark win are often the things that yield significant performance troubles at scale.

Very exciting initial results. The JVM is simply the most optimized runtime available right now, and Java is the best-performing language on it. Can't fail to notice that Ruby is the slowest even on the JVM. If you continue on this path and refine your APIs to be more user-friendly, this could be the next big asynchronous server out there!

In the test, the node code is actually calling an asynchronous function fs.readFile on every single request:


Even with OS caching there is still quite a bit of overhead there. It would be interesting to see the benchmarks run on the corrected code:


The streaming node.js example he wrote uses a blocking call. This is not the node way, and would cause a definite slow-down.

If you're worried about your programs containing rogue & misbehaving code like this, I recommend you use https://github.com/isaacs/nosync

I had the impression that "Node.js (readfile)", and possibly "Node.js" meant the blocking call but that "Node.js (streams)" meant he was using something like fs.createReadStream(). But you're right, I don't see that anywhere in the posted source.

I've tested several combinations of blocking, non-blocking, readFile, streams (pipe), and chunked transfer encoding.

Results vary a little, but all are well below the Vert.x results.

See blog post for the stats.

Fair enough. After I wrote that I remembered that readFile wasn't blocking anyways, but it was too late to edit at that point.

I'm interested to know how vert.x compares to industrial-strength "traditional" servlet containers. My guess is that vert.x would outperform them under certain conditions, but all in all they would scale better.

I believe servlets are still the most scalable web stack out there.

I imagine you mean that servlets are the most performant web stack, not the most scalable web stack. Scalability != Performance.

Well, I'm sure it's easy to beat servlets with a load that requires only one thread with a simple enough computation model. Servlets (and any truly multithreaded solution) trade single-thread performance for the ability to scale with the number of cores. That's what I meant.

So I would think that given a heavy load on a many-core machine with interesting enough computations, nothing could beat servlets' performance.

My apologies - it's clear you were talking about scaling performance with additional cores, which is entirely the right usage of the term.

Don't serve static files with node. Use an nginx reverse proxy for your html/js/css assets.

Came here to say this. While microbenchmarks are fun for all, in the real world you would have a completely different reason for choosing node.js that has nothing to do with this kind of performance. So just use the best tool for what is being benchmarked here: nginx.

I'm going to ask a dumb question as someone who is just learning node. What's wrong with serving assets out of public/ in an Express app? Why would someone not want to do this?

For the vast majority of sites that ever hit the web, yeah, it will probably be fine. So go ahead and have fun learning node and do it however you like; you can always fix it later if the site gets overloaded.

At some volume, though, you want to do what the grandparent recommends: front your application server with nginx, Apache, or something similar. Otherwise serving up static assets takes away resources that should be spent on the application. Consider a page with 5 images, 5 JS files, and 5 CSS files: that's 15 static assets served for every one dynamic response you'd actually want to use Node for. While Node can still be "good enough", it is not spending its time doing the work you chose it for in the first place. A dedicated piece like nginx goes much further.
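A minimal sketch of that setup, assuming the app listens on port 3000 and assets live under /var/www/myapp (both made-up values):

```nginx
server {
    listen 80;

    # Serve static assets (html/js/css/images) straight from disk
    location /static/ {
        root /var/www/myapp;   # assumed asset directory
        expires 7d;            # let browsers cache aggressively
    }

    # Pass everything else through to the Node app
    location / {
        proxy_pass http://127.0.0.1:3000;   # assumed app port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```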

And also use a CDN :p

So, a quick verification. The I/O is the difference between these two. The JVM is doing some caching somewhere, whereas the V8 engine is not. Making a small change to both (in order to ensure that both are using the exact same logic):


and then I get the following results:


39890 Rate: count/sec: 3289.4736842105262 Average rate: 2958.1348708949613
42901 Rate: count/sec: 2656.924609764198 Average rate: 2936.994475653248
45952 Rate: count/sec: 3277.613897082924 Average rate: 2959.610027855153


38439 Rate: count/sec: 4603.748766853009 Average rate: 4474.62212856734
41469 Rate: count/sec: 4620.4620462046205 Average rate: 4485.278159589091
44469 Rate: count/sec: 4666.666666666667 Average rate: 4497.515122894601

Making that change so they both store the file in memory, node.js is 50% faster than Vert.x.

This is using an m1.small instance on EC2, and both vert.x and nodejs only using a single core.

The JVM is not doing any caching.

And artificially crippling Vert.x to a single core does not prove anything. Anybody who cares about performance will be using more than one core.

Yeah. There is no way Java/Rhino could beat C/V8.

In any case, even if it were faster, it wouldn't be an order of magnitude faster. That kind of difference should be an indication that something is wrong with the benchmark.

I'd like to see benchmarks that include luvit: Vert.x vs luvit vs Node.js. https://github.com/luvit/luvit

luvit, love it!

Why are they measuring requests/sec? Any server can accept connections at a high rate but what matters is responding in a timely manner.

I doubt the request numbers too. Writing a dummy socket server (evented, threaded, ...) that just returns "HTTP/1.1 200 OK" will not get you anywhere close to 120k requests/sec. The system call becomes the bottleneck.

Requests per second implies downloading the complete request for each request.

It's labelled badly; what is actually measured is request/response pairs per second.

I.e. from request to corresponding response, and how many of those it can do per second.

If you doubt the numbers, please feel free to run them yourself; all the code is on GitHub.

Could you clarify that? Are you saying that if the response is sent within the same second that the request came in that it contributes to the metric?

Or would a response that is sent 30 seconds after the request came in contribute to the metric too?

It doesn't matter whether a request straddles a second boundary in a throughput-measuring benchmark when you saturate the system. A client only counts a request once its request call has returned. Run it for N minutes, count up how many requests have completed, then divide the total by the elapsed time and you get req/sec.

Besides, the benchmark ran for a minute. I doubt each request lasts 30 seconds.
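The arithmetic being described is just completed requests divided by elapsed time, e.g. (illustrative numbers chosen to match the ~120k/sec figure discussed above):

```javascript
// Throughput as measured in this kind of benchmark: count completed
// request/response cycles over the run, then divide by elapsed seconds.
function throughput(completedRequests, elapsedSeconds) {
  return completedRequests / elapsedSeconds;
}

// e.g. a 60-second run that completed 7,200,000 responses:
console.log(throughput(7200000, 60)); // 120000 req/sec
```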


The system is in steady state, i.e. queues of requests/responses aren't growing. Therefore it doesn't actually matter if you count the requests or the responses.

Are there any easy cloud platforms or whatever that support easy deployment for Vert.x?

People have already got Vert.x running on OpenShift and Heroku, and CloudFoundry support shouldn't be too much longer.

Testing A against B. A is found to be faster.

People in the B community come out in outrage, saying that the testing is flawed, that microbenchmarking is useless, and so on. Rinse and repeat.

Quite interesting. Is the Vert.x API compatible with CommonJS and/or Node modules? If not, I think it will suffer from the same chicken/egg problem that WebOS had with apps.

Seeing these numbers (although as the author states, the data needs to be taken with a grain of salt since he didn't try to set up a more "proper" test environment) really makes me want to take a serious look at vert.x, but what you mention here is my main concern.

Node is a joy to use because of the NPM, and all the available libraries. I'd hate to lose that.

That said, as long as it has a solid WebSocket API and I can do routing the way I would in Express, I'd give this framework a shot.

> Node is a joy to use because of the NPM, and all the available libraries. I'd hate to lose that.

Maven's not bad at all (from a consumption perspective; from a dev perspective it's a bit of a pain in the ass), and it looks like pretty much any Java library that doesn't do anything too insane should work just fine. (Disclaimer: I've never used vert.x myself, but I did a quick scan of the code.)

Cool, thanks for the tip

Vert.x leverages the JVM so I don't think library support is really going to be much of a problem. Far from it.

Agreed, it would be good to have CommonJS support.

However, it's unlikely that node modules will work as is with Vert.x, since the API is different. (Unless someone writes a translation layer)

JVM beating the shit out of Node.js? I guess that's not even slightly surprising to anyone outside the Node.js bubble.

Java much faster than Javascript. News at 11...
