To boldly go where Node man has gone before (jgc.org)
131 points by jgrahamc on May 16, 2012 | 115 comments

If there is one thing I've learned, it's to ignore benchmark tests on relatively established technologies.

You'll drive yourself into the looney bin if you hop from one thing to the next based on these comparisons that inevitably pop up every month or so.

If you enjoy programming in JS with Node, use it; if you enjoy programming with Go, use it. Your enjoyment will far outweigh performance differences that will be minor in 99% of real-world cases.

I use Express, because I can make and host a fast web page in 3 minutes, and if I ever come across those fabled unsolvable "scaling problems" I can abandon whatever needs to be rewritten without a second thought. Admittedly this is usually for promotional media and assorted small projects, but it still works great.

Yes, exactly.

Also it is often external influences that will drive performance improvements e.g. Nginx, Redis, Memcached, Javascript caching etc.

Choosing Node.JS isn't just a performance / scaling decision for me -- one of the main draws to Node is the huge community and wealth of awesome modules. Socket.io, Now.js, cradle, redis, and dozens of other modules abstract away the tedious work and let me build functionality _really_ fast.

Go is really promising, but it won't win over the majority of casual developers until there's a community around it. I might as well write a webserver in C if I'm only concerned with performance.

> one of the main draws to Node is the huge community and wealth of awesome modules. Socket.io, Now.js, cradle, redis

Really? In my experience, all but the most mature modules are hardly beta-quality and in constant flux. Find a good module for your job? Too bad it only runs on 0.4. Find another to get around that; oops, too bad your version of gcc needs to be patched and re-built. The language and its entire package ecosystem is so immature, I'm blown away people trust it in production at all.

"So fix it and submit a Pull Request," you say? That doesn't help me when I'm trying to deliver on a deadline. I love contributing, but I often have more pressing things on my plate.

In my 6ish months of programming in Node.js full-time, I have yet to run into a library that runs on 0.4 only or requires my version of gcc to be patched and re-built. Which Node libraries were you trying to use?

I'm using cluster, crypto, express, memcached, mysql, and mysql-pool in production and they have worked flawlessly for a node deployment that serves a pretty busy JSON API (200-300 reqs a sec on average).

As a counterpoint, I built a relatively large system using Node (this was a couple months ago, to be clear) and had issues with several modules. mysql: cannot handle binary blob columns (this is just now being fixed in an alpha version of the library). mysql: very slow parsing of large responses. aws*: many half built / half broken libraries -- nothing that met our modest needs. request: (http request library) found several issues.

Node.JS has a great community that is writing many great modules, no doubt. But the community is very young and almost by definition many of the modules are immature.

This can be very enjoyable from an engineering perspective (you get to write and hack on things that you would not otherwise), but can also slow down the process of building things since you do end up having to reinvent the wheel at times.

In our case, it isn't a large system. Our node production environment serves up a smallish set of API/Ajax requests that are heavily used. With that, we've been able to eliminate more than a few Web servers that otherwise had to load the entire Apache/PHP stack just to serve a small 20-100 byte request.

With your response in mind - I don't see Node being mature enough (yet) to solely support an end-to-end large scale Web property. But, for serving up API content from Memcache/MySQL - it is blazing fast with minimal footprint. On our stack - we run Node right on our existing Apache Web servers (behind haproxy).

As we are commenting here, the community is assembling by the minute... Go allows you to import "github.com/user/package" and will download and compile the code, so creating a new project is even easier than with Node: no need to "npm install ..." first.

For a taste of what's out there, see: http://go.pkgdoc.org/index.

The GitHub thing was neat the first time I saw it, but then I thought about it some more, and I'm not sure it's such a good idea.

I couldn't find any way to specify a specific tag or revision, so it's basically always getting the bleeding edge developer build of the library code. That might be nice in some situations, but for getting work done I almost always want the most recent stable release.

Am I missing something?

If the repository owner tags a commit with the tag Go1, it is used by Go. If there is no tag to specify which commit, the tip of master is used. In practice, most maintainers just keep master clean and do all development in a dev branch.

I hit this a while ago. The first thing I tried to install failed, because it depended on something that depended on something that moved its repository location (renamed itself from foo.go to foo).

It's a convenient hack but nothing more.

If I wanted a specific tag or release, I would just download that revision in the GOPATH directories, and refer to the package directly, instead of through the github URL.

Kind of a clunky way to handle versioning though. In Lein (for example), adding

    [noir "1.2.0"]
ensures I'm always grabbing the same version. In Go, it appears the only way to guarantee this is to check in the full source of the library alongside your own in your repo.

I wish Leiningen would also handle bleeding-edge dependencies as easily as rebar, e.g.:

    {deps, [
      {quoted, "1.0.3",
        {git, "git://git.corp.smarkets.com/quoted.erl.git", {tag, "1.0.3"}}},
      {proper, ".*",
        {git, "git://git.corp.smarkets.com/proper.git", {branch, "master"}}}
    ]}.

This plugin[1] looks like what you're looking for.

[1] https://github.com/tobyhede/lein-git-deps

"You will also need to manually add the checked-out project's dependencies as your own (the plugin simply checks out the code, it doesn't recursively resolve dependencies)."

I've seen this plugin before, but it's not particularly useful without dependency checking. It's a start though. I'll have to look at Leiningen to see if it exposes hooks for that sort of thing.

I also consider it a huge plus that Go produces a single, statically linked native binary. No fiddling around with directories of libraries and search paths required.

npm is awesome, this direct import doesn't seem like a feature, more like a lack of a feature.

It works very well for me. If I want someone to install a Go program I've developed, in Ubuntu 12.04 they can just:

  $ sudo apt-get install golang
  $ go get github.com/pauek/garzon/grz
That's it, and it takes a few seconds.

That's great for quickly showing someone a program, but not so great if they try it a few weeks later and get a non-stable version of your program. In that case it definitely won't be it; they'll have to spend some time figuring out what went wrong.

If Go adds some method to specify a tagged version that could be pretty nice though.

I hope it goes without saying that there's no way this kind of hack (neat as it is) is a substitute for a proper package management system like npm (which is a very good package manager).

This is a nifty feature and npm can do something very similar:

    npm install git://github.com/substack/node-optimist.git

How's that any faster than

  npm install express
Not only does npm handle versioning for you, you also don't have to remember the host or username. Obviously Go is a younger community and could make a package manager some day, but I'm puzzled by how you're comparing this positively with npm.

It is faster because you don't have to install npm; the case I was explaining involves someone with no development skills.

And I'm not saying this feature of Go is better than npm; I also use npm and I like it very much. What I like is the fact that Go already comes with this feature and I don't need anything extra.

I believe that npm is now included as part of all node distributions.

Oh I get it, you're talking about from a package developer's standpoint. Not having to register your package with a central directory does save a bit of time for you I guess.

But when installing packages, I have to say, npm is fantastic.

You don't have to do this with npm either. You can provide a URL to a tarball or a git repository and npm will install from there.

And npm ships with node, so this is no different.

Furthermore npm can install git URLs directly too, including tags and branches if you need a particular one. Just specify the url as the "version" in your dependencies section of package.json. Totally trivial.

You might like Perl too.

Edit: haha I love how people downvote this like I'm being sarcastic. Down with Perl, that horrible language with a community and modules 100x larger than Node!

I don't understand the attitudes either. While I enjoy writing Javascript for Node, when other Node users tell me they hate Perl, I can do nothing but give them a blank stare.

So you want an environment that is easy to program in a functional style with closures and first-class functions; has a large ecosystem, including hundreds (thousands?) of packages to do asynchronous IO; is easily extensible and can fall back on code written in C; and, finally, has a large and active community (Madison NAPC is sold out)?

And Perl isn't even on your radar?

For the curious, I recently took some simple examples I'd seen in node.js


and translated them to Perl.


If you're interested in functional programming, I've learned a lot from the parts I've read of Higher-Order Perl.


It's not hip and exciting anymore and generations of poor Perl coders have everyone convinced Perl code must always look ugly.

I'm a fan of C and Python myself, but I always chuckle at all the Perl hate.

That should read: YAPC::NA

Go has a growing number of packages for things that you might want to interface with: http://go-lang.cat-v.org/pure-go-libs and http://golang.org/pkg/. There are already packages for most databases, data interchange formats, crypto, etc. that most people want to use.

Another draw of Go that you didn't mention is the language and toolchain design itself. I know Javascript better than Go, but would rather write servers in Go than JS.

> one of the main draws to Node is the huge community and wealth of awesome modules.

If you think Node has a huge community and wealth of awesome modules, wait till you discover the older and more mature languages/platforms, like Python, C, Java, etc.

"What I like about Justin Bieber is the massive history of his career, spanning decades, and the broad range of his musical style across several genres." :)

It is funny because I chose Node for one of my projects because I had not found an easy way to do that in Python. I must say I really love Python - it is my favorite programming language. I could do the same project in Python, but Node.js looked slightly better and faster in this case.

A programming language is just a tool - one of many tools you can choose to make your projects - and Node.js has done some basic things better than Python (e.g. packaging). I have learned to love JavaScript and it is a really good language now. Still, there are projects where I wouldn't choose Node.js as my tool.

Node is popular not only because it's very fast (yes, it's still pretty darn fast even if there might exist faster alternatives), but because it's JavaScript.

That's an enormous advantage for startups as (a) the JavaScript hiring pool is much larger than most other languages (particularly in comparison to nascent ones, like Go), and (b) the ability to write client- and server-side code in the same language means small teams can be far more integrated and collaborative than otherwise (and if the startup has a single coder, the point is even stronger).

Assuming these benchmarks are accurate and translate into similar results for more complex applications, I'd gladly pay the 10-20% performance penalty to get the above benefits. Node is fast enough for most startups.

I've heard the JavaScript-developers-are-easy-to-find argument before. However, good JavaScript developers are really hard to find, especially when you need them to take those skills and run them on the server.

There is a big difference between a jQuery hack and a good JavaScript developer.

Exactly, I just see the PHP problem repeating itself. Yes, you can get a PHP developer for £7/hour, but are they going to write good code? Unlikely.

This. What attracted me was JavaScript on the server: JavaScript on the server, JavaScript for the client, and JavaScript for the database (MongoDB) seemed much more appealing than 3 separate languages. (Normally a PHP/MySQL dev.)

I think any person who can write an application in Node could also do it in Go; plus, if you're having trouble attracting good developers, [edit] Go, not Node [/edit] should make the position more attractive.

I consider myself a pretty conservative developer when it comes to the tools that I use, and mainly stick to the core LAMP stack for my deployments - MySQL, Apache, PHP etc. I've ventured down the path with MongoDB and currently use it for one special use case where the data served is static.

But for Node - I've really come to the conclusion that it is a great tool for back-end API work, such as serving very short requests, Ajax calls, etc. Coupling the ability to develop in JavaScript with serving JSON makes Node a really nice place to develop in.

Plus, performance is off the charts for my API serving applications. On a dev instance I can serve close to 5500 reqs/sec for one of our APIs where PHP was at about 1000 req/sec. And the difference in memory usage is fantastic.

BTW, our production Node deployment is Node, with the memcache, mysql, mysql-pool, cluster, and express modules. I've been thrilled so far.

Out of curiosity, how were you running PHP?

My favourite thing about posts like these is the little comparison snippets between a framework I already know (Node) and a language I want to learn (Go). This is very educational, and the benchmarks are the cherry on top. Thank you.

I'm sure the reasons users pick Node are similar to the reasons one picks Ruby/Python -- it's not for the speed.

The language of the web is javascript. Until web browsers allow Go as the embeddable language, node will remain useful. I seriously doubt people would pick Go over node because of a micro-benchmark.

> The language of the web is javascript. Until web browsers allow Go as the embeddable language, node will remain useful.

Do people really get that much code re-use between client and server that using Javascript on the server provides a large benefit?

I'm currently working on the next major version of my JavaScript game engine[1]. The main new feature will be real time multiplayer. Being able to run the same game code server and client side is invaluable.

[1] http://impactjs.com/

For small apps, no.

For complex ones yes. The actual code shared may not be that much, but consider the advantages in sharing validation code between the client and server -- you never accidentally forget one requirement on the server side, leaving you vulnerable to an attack.

> consider the advantages in sharing validation code between the client and server

Is there a mature implementation of this yet?

One that is not tied to an entire immature framework (meteor) or programming pattern (nowjs)?

Code-reuse was the big promise of node. Yet in reality I've never seen it executed beyond brittle experiments.

Where is the form_for_model() function that emits code for both the client and the server?

Template compilation being able to happen on either the client or the server is a huge win for anyone writing webapps, and it's trivial to write view code that can be compiled in both Node and browsers even without a framework.

In other languages, even if you write completely logicless views in a templating language that can compile in the browser like Mustache, you'll end up duplicating your presenter code if you don't use a language that browsers can understand.

> and it's trivial to write view code that can be compiled in both Node and browsers even without a framework.

Nonsense. It's far from trivial to have the same code get binding and validation right on both sides. That's why it needs to be wrapped in a thin abstraction, which apparently nobody in the Node community has been capable of writing.

Instead we see dozens of rails-clones (oh, exciting..), and a handful of very half-baked full-stack universes that are nowhere near production ready.

Alternatively, there's an argument that one should not validate on the client to avoid this exact scenario (split-brain validation).

Why not validate on both? Validate on the client to give a nicer and quicker notice if something is wrong (good UX) and then validate on the server for actual security.

To avoid a split-brain validation scenario.

Update your model? Don't forget to update your validation in both the client and server. Otherwise, you'll validate something on the client but not the server (bad UX), invalidate something on the client but not the server (bad UX), or corrupt your data slowly when validation entirely fails (bad UX).

Since it's a pain in the ass to remember to update two places when you make a change, you're seeing people make ridiculous leaps of programming ingenuity by choosing a server-side language that allows them to not have to update two places at once, and Javascript absolutely sucks for server programming. Every time I have to do any client-side Javascript I, quite literally, hate my life. People that love Javascript and want to apply it to everything have arguments about the stupidest things, like using semicolons, which is telling about how awful of a programming environment it can be.

Easier: Just don't validate on the client, and make a round trip. If you're doing mobile or on a high-latency link, there's an argument for doing client-side validation but then you just need the discipline to update both sides at once, which hopefully integration tests should help with.

That's a straw man if I ever saw one. This isn't a debate as to whether JS should be a server-side language or not. The fact is that if you're using Node and you're validating data, you can reuse the code to validate both on the front-end and the back-end. Front-end validation will likely improve the way your users perceive the app.

Proper unit testing makes split-brain validation a non-issue. Client-side validation does feel more responsive, it also saves bandwidth, and prevents impatient users from spamming the server with invalid requests.

That is why the guy is using the same code to do both validations.

In a medium sized web application, 50-70% of server-side code can be re-used verbatim between server and client.

Well, whatever "medium sized" means, I work on a high-traffic Web application that reuses 0% of code from the client; in almost everything I've worked on, I've found reuse between client and server nearly impossible, enough that they're almost always segregated repositories. There's an argument for models existing on both sides (which is reiterated below), but I have a hard time seeing models as 50-70% of any application.

Just not my experience that this is the case. Though I might be old-fashioned.

For what it's worth, here's one counter-example:

I've recently been working with a team that is developing a content management system for which the display side of things (responsible for routing requests to models and views, fetching model content, rendering, etc.) is about 500 lines of non-comment CoffeeScript code.

Nearly 100% of the server-side code is re-used on the client, and perhaps 80% of the (non-library) client-side code is shared with the server.

In this case the exceptions are:

- Things like HTTP-level request handling and reading from local files rather than over HTTP are used on the server-side only.

- Things like DOM-manipulation and interacting with browser events are used on the client-side only.

I wonder if a significant factor may be how much functionality you're actually replicating on both the client and server. In our case, browser-permitting, the entire display engine runs equally well on the client or on the server. If we had a lot of JQuery kind of stuff happening in the client-side JavaScript the ratios might be a little different, but "also-run-the-app-on-the-client" is a good example of a use case that leads to a lot of reuse of the server side code.

The amount of traffic has nothing to do with it.

A CRUD app would probably share relatively little code between the server and the client. A multiplayer game would share much, much more.

And which do you think is more prevalent in the world, a CRUD app or a multiplayer game?

"medium" means I don't have personal experience of how the percentage scales to large applications.

As for sharing: data sources, domain models, utilities, and templates can be shared. It's the IO handling (HTTP and DOM) that cannot be shared.

Check out stuff like Meteor or Firebase. They barely even differentiate between client and server code.

Rolling your own framework based on the same ideas is not hard.

You can compile Go to js with gwt. You can reuse code that way. I personally even reuse Java code between Android, front end Web and server. Because of gwt.

I don't think there are many people who choose js because they like it. It's usually because they feel forced. Javascript is a terrible language, I always prefer one of the compilers.

Dart can't come fast enough.

Go is not related to Java in any way. It has its own runtime. Applications are compiled to native binaries on each platform.

Is there a separate branch/fork of GWT with Go support? That would be absolutely awesome in fact but it doesn't seem to be true.

> You can compile Go to js with gwt. You can reuse code that way.

This makes me wonder.. and this is totally off the cuff, non-researched pondering.. could there be merit to a JavaScript dialect that would compile to Go?

Where is this GWT compiler for Go? From what I understand, GWT is for Java translation only, and it only works well on code that doesn't reference any third party libraries. It doesn't work that well on code that makes heavy use of annotations either.

So does Go's net/http module really provide basically exactly the same programming model as Node? If so, why use Node?

One thing the author didn't mention is that Go will use multiple cores in this example, whereas Node is single-threaded. Right?

I don't program in Node very much or in Go at all, but I think there are reasons to use Node that aren't "Node is fast". I think the "Node is fast" mantra was originally really meant as a comparison to Ruby/Python/PHP, not as a comparison to lower-level languages like Go. (I personally wish Node would just abandon speed as a selling point altogether--I think it'd lead to many fewer misunderstandings.)

In any case:

Node lets you reuse the same code on both the client and server. This can be very valuable if you're writing a JS-heavy webapp. In my experience, the most useful bits of code are the in-code data representation (models in an MVC world). Being able to have identical functionality client- and server-side can save a lot of time.

Node also has Socket.IO, which is very useful if you want to use websockets. (Go has a library for it, but it hasn't been updated in a long time.) The vast majority of the Node ecosystem revolves mostly around building interaction- and communication-heavy webapps. I don't know Go or its community at all, but I don't believe they have the same kind of single-minded focus that Node has. If you're trying to do the things Node does well, you will likely benefit from the community support.

There are, of course, many downsides to Node as well, but much as I dislike Javascript, I still think it's one of the best choices for writing highly interactive webapps right now.

Yeah, I agree. I'm using Node right now to build a big web game, and there are huge benefits to being able to run the same code on the server and client.

I wouldn't use node if it wasn't for this because I don't particularly like javascript.

All my other games have used Python on the back end and you can really feel how clunky and painful javascript is when you have to switch between the two five times a day.

It's true that Go can use multiple cores, but I was doing these tests in a Ubuntu VM that was restricted to using a single core so that Go's inherent 'multicore' advantage was dialed down. I'll add a note to the post.

That's a(nother) pretty big real-world advantage for Go then, no? I know the solution for Node is to just run one instance per-core, but it's automatic with Go.

I'd love to see a second set of benchmarks of the same code on multiple cores.

Yes, Go will do that automatically. Plus, you can write your code in a completely linear fashion, instead of the typical Node.js code with lots of callbacks.

> So does Go's net/http module really provide basically exactly the same programming model as Node? If so, why use Node?

Well, for starters, Go is much more awkward for working with JSON than JS is (which is to be expected since JSON is literally a subset of JavaScript). More importantly, Go lacks the ridiculously vibrant ecosystem for Web development that Node has — no NPM, no Connect, no Jade, no Stylus, etc.

So what exactly do you think is awkward about Go's way of working with JSON? Here's a code example from one of my current projects:

    type UploadProgress struct {
        Progress int `json:"progress"`
    }

    // variable progress is of type UploadProgress
    if json_data, err := json.Marshal(progress); err == nil {
        w.Header().Set("Content-Type", "application/json")
        w.Write(json_data)
    }
json.Unmarshal works the same way. IMHO, not exactly more awkward than using JSON.parse and JSON.stringify.

Unless there's something I missed, json.Unmarshal works the same way iff you have a struct that matches the JSON data very closely. If the JSON data is a little more free-form, you're stuck with a map of interface{}. Having to muck about with types is just a little tedious compared to JavaScript where there aren't really any types and anything can concisely be converted to a string. (I did a little project involving JSON a while ago in Go, but I'm hardly an expert at the language, so if I missed something, I apologize for the misinformation.)
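To make that concrete, here's a minimal sketch (field names and payloads are invented for illustration) contrasting the two decoding styles: a struct when the shape is known, and a map of interface{} plus type assertions when it isn't:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Known shape: decode straight into a struct.
type UploadProgress struct {
	Progress int `json:"progress"`
}

// decodeKnown unmarshals JSON whose structure matches UploadProgress.
func decodeKnown(data []byte) (int, error) {
	var p UploadProgress
	if err := json.Unmarshal(data, &p); err != nil {
		return 0, err
	}
	return p.Progress, nil
}

// decodeFreeForm unmarshals arbitrary JSON into a map and digs a value
// out with a type assertion -- the tedium described above.
func decodeFreeForm(data []byte, key string) (string, error) {
	var m map[string]interface{}
	if err := json.Unmarshal(data, &m); err != nil {
		return "", err
	}
	s, ok := m[key].(string)
	if !ok {
		return "", fmt.Errorf("%q is not a string", key)
	}
	return s, nil
}

func main() {
	n, _ := decodeKnown([]byte(`{"progress": 42}`))
	fmt.Println(n) // 42

	s, _ := decodeFreeForm([]byte(`{"status": "uploading"}`), "status")
	fmt.Println(s) // uploading
}
```

The struct path is pleasant; it's the free-form path where every access needs an assertion and an `ok` check.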

In retrospect, I think I may have overstated it a little bit, and I doubt this problem would crop up too often for most apps.

If your json doesn't represent something that already follows some known structure, you have an opaque data structure that you have to hand write a parser for in any language.


>you need a number but an external datasource might hand you a number or a string containing a number.


The tag on the field lets you denote when that will happen, so it's not quite as difficult as it may seem; you don't have to write a single method for that case.

The type system is a feature, not an obstacle. Yes, you have to specify a type, but as a result, the compiler can perform a lot of checks that, in dynamically typed languages, would either be something you'd handle manually, something that would cause undefined behavior, or something that you would write a unit test to test for. Yes, you could just wing it, unmarshal some data, have an unsafe reference to some field and hope that it performs the way you want it to, but the reality is that things like that are easy in JavaScript because they're wrong. All data inherently has some kind of type, and different languages deal with that in different ways.

But yes, that particular type of thing does come up a lot, and the topic of handling bad JSON is one that has woefully little literature surrounding it (I've had it stump me before on my own projects). Things like "they're using the wrong format for the timestamp" or "that is sometimes a string, sometimes null" are stumbling blocks when coming from dynamically typed languages (I'm coming from Python/JavaScript), but in my experience writing Go, the way the language works makes writing bad code much more difficult, and my Go programs have been much more stable, easy to refactor, and easy to maintain forward progress on over a long period of time. After using it for some time, I've come to the conclusion that these types of tradeoffs have been worthwhile. The significant reduction in runtime errors that I've seen in my Go applications compared to my Python applications has more than outweighed the amount of time I would normally spend debugging and testing my Python code, which is tedious and boring.

'npm' is covered by the Go tools themselves. You can import packages using their github/bitbucket/code.google.com URL.

That is really interesting. Thanks for pointing that out. Can you specify specific version dependencies like you do with a full-blown package manager?

After 'go get'ing the package, you can use git/hg/bzr to checkout the specific version you want. From there, all of the Go build tools will work as normal.

I don't think so, but I could be mistaken. (btw, someone pointed that out as a drawback on another thread.)

What I've done sometimes is just clone the repo within a directory that appears in the GOPATH, maybe checkout a specific revision, and that solves the problem.

Writing http servers with Go really is pretty easy; here's one I wrote while at SC11: https://bitbucket.org/floren/goblog

It's for an older version of Go, and has only been tested on Plan 9, but it shows a static content + blog server, with JSON config files and templated HTML for the blog pages.

No, the model is completely different. In Node, you pass in a callback to establish what should be done following the return of a longrunning I/O operation. In Go, you use goroutines to manage the concurrency.

Basically if you say this:

    fn()
The Go runtime will execute the function in its entirety and wait for a return value.

If you say this:

    go fn()
The Go runtime will execute that function in a new goroutine (the return value is discarded) and then immediately proceed to the next line in the current goroutine.

So if you have some callDB() function that performs some long-running i/o, in node, you might do something like this to perform that operation without blocking the surrounding code:

    callDB(someparam, otherparam, function (result) {
        // do something with result here
    });
while in Go, you would do something more like this:

    go func() {
        result := callDB(someparam, otherparam)
        // do something with result here
    }()
which winds up being more like this

    func superGreat() {
        result := callDB(someparam, otherparam)
        // do something with result here
    }

    go superGreat()
With the benefit being that if you want superGreat to run concurrently, you use the go keyword, and if you want it to run in a synchronous style, you just call it normally.

The problem with Ruby/Python/etc is that it is awkward to take something that is blocking and make it nonblocking. The problem with node.js is that node.js says "blocking is bad; therefore, never block". Go takes a different approach, in that it makes it obvious to know how to write something so that it does or does not block, so that the developer is free to use whatever I/O paradigm fits the problem at hand.

The net/http package allows you to register a handler, which is a function that takes an http request and writes a response. There is a main request loop that, on receiving a request, creates a new goroutine for it and executes its handler in its own context. So within your handlers, you mostly just write blocking code, because you're already in your own isolated goroutine, and the only thing you'd be blocking is the processing of the current request. If you want to do something in the background, you run it in a new goroutine. goroutines are very cheap, so you can be quite cavalier about their usage.

If you want to use a callback-passing style, you can do that in Go (because it has function literals and closures and first-class functions and all that), but that's not idiomatic by a longshot.
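For illustration, here is what that non-idiomatic callback-passing style might look like; `callDB` and `withCallback` are hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
)

// callDB is a hypothetical blocking call, as in the pseudocode above.
func callDB(someparam string) string {
	return "result for " + someparam
}

// withCallback wraps callDB in node-style callback-passing form.
// Closures and first-class functions make this possible, but it is
// not idiomatic Go.
func withCallback(someparam string, cb func(string)) {
	go func() {
		cb(callDB(someparam))
	}()
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	withCallback("someparam", func(result string) {
		fmt.Println(result)
		wg.Done()
	})
	wg.Wait()
}
```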

The absolutist position that node.js takes by saying that "all i/o must be nonblocking" is no better than our previous options of "all i/o is blocking". Sometimes the simplest and most readable solution is simply to block. There is a yin to the yang of blocking.

An important point that seems to be missing from your goroutine description is its scheduling with respect to system calls -- all interaction with the underlying OS is asynchronous in Go, even though it looks like a blocking API. E.g. networking internally uses OS-specific polling. So if one goroutine is waiting for a system call to complete, it will not block the rest of the goroutines.

I doubt it does. net/http looks like a basic http listener? As such, the comparison looks biased to me. Mind you, I don't know much about node.js or Go.

Isn't Node's http.Server also a basic HTTP listener? What's the distinction you're trying to draw?

I think that the Node example was actually significantly simpler: there were no pointers, and no types needed to be specified. Also, I can use CoffeeScript with Node. The only way Go is easier to write is if you know Go but don't know Node. I'd say it's easier to learn and use Node/JavaScript.

I am surprised that Node performed so well. I would like to see that same Go benchmark with 4 CPUs vs. Node, and also vs. a clustered Node web server, along with the code, to see how much more code clustering with Node involves.

Also I would like to see some Go code and benchmark that reads a file and/or database in the request and works efficiently.

Actually the code for a Node cluster that would use all of the available CPUs for http in CoffeeScript would look like this:

    cluster = require 'cluster'
    http = require 'http'

    numCPUs = require('os').cpus().length
    if cluster.isMaster
      cluster.fork() for i in [1..numCPUs]
    else
      app = http.createServer()
      app.on 'request', (req, res) ->
        res.end "hello world\n"
      app.listen 8000
Maybe someone could take V8, change it so it compiles (immediately) to static code and uses CoffeeScript, and add a way to optionally specify types (in an unambiguous and readable way) and pointers only when you have to.

Just want to point out one obvious thing that I found very interesting: that the V8 JavaScript engine which makes Node exist in the first place comes from the same company which makes Go.

Google's Robert Griesemer, who worked on the V8 VM, is one of the main contributors to Go.

Are we now going to have a run of posts that tweak a stupid micro-benchmark?

This test shows Node handling 100 requests simultaneously just as easily as handling them separately.

However Go's memory usage increases by about 5X.

So what happens when you reach 1000 simultaneous requests? Which will perform better then?

I think this is called "selection bias."

Not only that, but Node actually got faster as simultaneous connections increased while keeping a completely stable memory footprint. The benchmark showed the opposite of what the author claimed -- Node scaled effortlessly. Its performance was completely stable and predictable under load, whereas Go's wasn't.

Not to knock Go -- it's a beautiful language, and absolutely faster than Javascript for most purposes (and will no doubt increase in stability and speed as time goes on). But this post had some pretty flawed analysis of the benchmark presented.

FYI, the original article has been updated with graphs showing reqs/sec, virtual, and real memory for both using 100, 500, and 1000 simultaneous requests. Aside from virtual memory (do we care about that?) it seems to show Go as continuing to perform better.

Ah, great. That's a good update. Go does end up responding well at 1000 concurrent requests for real memory. But a spike in virtual memory means it's paging all that extra data, right? Doesn't that normally slow things down? What's interesting is that Go still maintains a similar requests per second.

Of course, for a front-end guy, Node is still the most exciting thing on the block simply because it is server-side JavaScript. The fact that it's not much worse than the latest greatest compiled language is pretty exciting to me.

I don't think the spike in VM means it's paging all that. That just means it has that address space reserved, not that it's using it. I'm not aware of any real cost to having a huge pile of address space reserved, aside from that some monitoring utilities will penalize you for it (unreasonably). So, no, I don't think it'll slow anything down by thrashing, unless the locality of the memory it is accessing is really bad, but I'm no expert on Linux virtual memory or how Go uses it.

Static typing + no callbacks make Go vs Node a no-brainer, IMO. I'm doing mainly iOS work these days but I've been brainstorming ideas that require a server component just so I can use Go for something.

What do you mean by "no callbacks"?

You can do non-blocking I/O without the nested callbacks you have to use for everything in Node. You just fire off a block to a goroutine and then code sequentially there.
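A rough sketch of that pattern, using a channel to hand the result back; the `callDB` stand-in and its delay are made up for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// callDB stands in for some slow, blocking I/O call.
func callDB(someparam, otherparam string) string {
	time.Sleep(10 * time.Millisecond) // pretend this is a database round trip
	return "result for " + someparam
}

func main() {
	done := make(chan string)

	// Fire off the blocking call in its own goroutine. The code here
	// carries on sequentially; no nested callbacks.
	go func() {
		done <- callDB("someparam", "otherparam")
	}()

	// ... other work can happen here ...

	fmt.Println(<-done) // receive the result once it's ready
}
```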

No, and no, respectively. Hacked-together libraries that sort of support it do the rewrite at the wrong level of the stack (the compiler should be doing the continuation transform; anything else will lose performance by jumping through a lot more hoops), and merely having generators is not the same thing. And neither solution you propose will give you true preemptive multitasking. Do you know how you fire off a long computation in Go or Erlang? You just do it. No marshalling, no blocking, no dealing with OS processes, no fuss.

Neither of those two things are new ideas. Python has had generators forever. If it solved the underlying problems, Go would never have been written.

Both look useful, but aren't exactly well-supported by the library ecosystem. In Go, faux-blocking I/O is standard, and all the libraries support it; it's not something bolted on as a handy afterthought.

The point of Node is that it uses JavaScript, and browsers only speak JavaScript. You have three choices: (1) use Node, (2) duplicate code client- and server-side, or (3) use server-side code that compiles to JS. Right now 3 is the least popular (and 2 is the most) but I think it will end up being the accepted solution in a couple years.

Or use a plug-in in the browser that speaks your language of choice.

"Easy to build, fast, scalable."

So which part of the test demonstrated that it was hard to build, slow, and/or not scalable?

If you're not impressed that JavaScript reached 80% parity with a statically typed, compiled language – well, sorry dude.

Anyway, nobody is begging you to be impressed. 99.999% of successful projects are built with technology that is completely unimpressive.

I can personally vouch for the maturity of Node especially Socket.io. We're building a web-based email client that relies heavily on real-time communication with a Node-based email importer. See http://philterit.com. I've found Node to be faster and more lightweight than Rails 3, which we initially used for our prototype. With JavaScript being the lingua franca on the client-side, specializing in it as a team has permitted us to master it and offer a far better user experience.

Nobody is comparing Node with Assembly. sigh

I pick Node because it's easy to move code from backend to frontend and back. I find this very comfortable while creating and testing software.

If you consider something to be fast and scalable (e.g. Go), then something with ~80% of the throughput (e.g. Node in this test) is probably also pretty fucking fast and scalable. Scalability and performance don't walk fine lines like that; there's a reason we talk about orders of magnitude.

Any language faster than Javascript would be fine if it worked in the browser.

Good God man, your iPhone theme is awful! I tried to pan the screen to view all the results and it navs to the next blog post?!?

You know what, both platforms are great. I'll use them both, thank you.
