
Go as an alternative to Node.js for Very Fast Servers - meirish
http://techblog.safaribooksonline.com/2013/02/22/go-as-an-alternative-to-node-js-for-very-fast-servers/
======
jbert
I played around with a go server to do some simple scaling numbers - looking
at possibly using go to implement a large-number-of-idle-connections
notification server.

I found the (good) result that I could spawn a new goroutine for each incoming
connection with minimal (~4k) overhead. This is pretty much what you'd expect
since a goro just needs a page for its stack if it's doing no real work. I
had something like 4 VMs each making ~30k conns (from one process) to the
central go server with something like 120k conns.

I found one worrying oddity however. Resource usage would spike up on the
server when I shut down my client connections (e.g. ctrl-C of a client proc
with ~30k conns).

Reasoning about things a bit, I _think_ this is due to the go runtime
allocating an OS thread for each goro as it goes through the socket close()
blocking call. I think it has to do this to maintain concurrency. So I end up
with hundreds of OS threads (each only lives long enough to close(), but I'm
doing a _lot_ at the same time).

Can anyone comment:

\- is this guess as to the problem likely to be correct?

\- is this "thundering herd" a problem in practice?

\- are there ways to avoid this? (Other than not using a goro-per-connection,
which I think is the only idiomatic way to do it?)

My situation was artificial, but I could well imagine a case that losing, say
a reverse proxy, could cause a large number of connections to suddenly want to
close() and it would be a shame if that overwhelmed the server.

~~~
aaronblohowiak
> I think this is due to the go runtime allocating an OS thread for each goro
> as it goes through the socket close() blocking call. I think it has to do
> this to maintain concurrency

I _highly_ doubt that it is creating a thread per goro on client disconnect.
If you have a minimalish example of this, the golang mailing list would be
_very_ interested in working with you to identify what went wrong and create a
patch if it is an issue with the Go implementation.

~~~
tptacek
Blocking system calls spawn OS threads in Go, which can be cached and recycled
for new goroutines. You don't see this if you code to pkg/net because it
multiplexes i/o with a select/kqueue goroutine, but you'll see it right away
if you code directly to the syscalls.

Close isn't a blocking call, though.

~~~
_stephan
Close is annotated as a blocking syscall in
<http://golang.org/src/pkg/syscall/syscall_linux.go#L810>

Is there a guarantee on Linux that close can't block? (I suppose it depends on
the file type and the definition of blocking.)

~~~
tptacek
Only if SO_LINGER is set.

------
jgrahamc
_While JavaScript drags the scars of its hasty standardization around with it,
Go was designed very thoughtfully from the beginning, and as a result I find
that it’s a pleasure to write._

This is very true. Go is a pleasure to write. In fact, it's such a pleasure
that when you hit something that wasn't really well designed, it's horrid.

~~~
papsosouid
Can I ask what language you are used to? I hear how nice go is as a language a
lot, but coming from haskell, go is hideous in comparison.

~~~
chimeracoder
Go is going to look hideous to you because you're probably expecting
functional things like list comprehensions (which Go doesn't have) and a very
intricate type system allowing for things like generics (which Go doesn't have
either).

Go code will look a little less DRY to you as a result, which is a fair
criticism, but it makes up for that by being incredibly opinionated (that's a
good thing), being incredibly easy to prototype in, _and_ being incredibly
easy to refactor painlessly.

~~~
papsosouid
>Go is going to look hideous to you because you're probably expecting
functional things like list comprehensions

Nah, list comprehensions are just syntactic sugar and not used much in
haskell. Python seems to encourage their use a lot, but you hardly even see
them in haskell code.

>a very intricate type system allowing for things like generics (which Go
doesn't have either).

That is definitely one of the big problems, but I take issue with the
characterization of that as needing "a very intricate type system". Parametric
polymorphism is very simple, and has been a completely solved issue for a very
long time. There is simply no excuse for a brand new language to be decades
behind on something so easy to do right.

>being incredibly easy to prototype in, and being incredibly easy to refactor
painlessly.

Those are actually two of the other big issues going from go to haskell. Go is
harder to prototype in, and it is easy to add bugs when refactoring because
the type system is so poor.

~~~
chimeracoder
> Parametric polymorphism is very simple, and has been a completely solved
> issue for a very long time. There is simply no excuse for a brand new
> language to be decades behind on something so easy to do right.

Go _has_ addressed this; its approach is interfaces, which combine the best of
all worlds: duck typing with static type checks and type inference.

> Go is harder to prototype in, and it is easy to add bugs when refactoring
> because the type system is so poor.

This is where we'll have to agree to disagree. It's definitely _not_ harder to
prototype in - and I say this as a functional programmer - and if you find the
type system to be inadequate when refactoring, it sounds to me like you're
trying to write idiomatic Haskell in Go. Go's type system, by design, stays
out of the way - if you're writing Go idiomatically, you really shouldn't be
thinking very much about the types as you write them.

As for generics, this gets beaten to death on _every_ single Go post on
HackerNews. Yes, Go would ideally have generics. Yes, there are tradeoffs
involved. Yes, those tradeoffs have been explained by the Go developers at
length. Yes, they would be open to including them in the future, if somebody
addressed the existing concerns. No, nobody seems to mind that they're missing
from the language as-is, given those tradeoffs.

~~~
papsosouid
You are contradicting yourself about parametric polymorphism. First you
acknowledge it doesn't exist, then you claim a limited workaround solves it.

>It's definitely not harder to prototype in - and I say this as a functional
programmer

Have you used haskell to make the comparison?

>if you're writing Go idiomatically, you really shouldn't be thinking very
much about the types as you write them.

I don't understand where you are coming from here. I am not thinking about
types, that is why I need the compiler to point out when I mess them up. The
problem is go has such a limited type system, that you have to change much
more code when you refactor, and the type system is inadequate for catching
many errors, in particular dealing with error handling. The combination makes
go worse for refactoring than haskell. It is certainly much better than python
for example, but you seem to be convinced that go is the top of the spectrum
and nothing can exist above it.

~~~
dman
I have been wanting to learn Haskell for some time. Is Real World Haskell
still the best book for the language? Could you point me to some well written
libraries/projects that are considered idiomatic haskell? Would appreciate the
pointers.

~~~
gtani
There's lots of books: Hutton, Hudak, Thompson, Richard Bird. I have an ugly
draft/link dump for learning:

<http://isthishaskell.blogspot.com/2013/02/tips-on-learning.html>

------
btown
_> The biggest promise that Node makes is the ability to handle many many
concurrent requests. How it does so relies entirely on a community contract:
every action must be non-blocking. All code must use callbacks for any I/O
handling, and one stinker can make the whole thing fall apart. Try as I might,
I just can’t view a best practice as a feature._

Nonblocking I/O isn't just a "best practice" in the sense that consistent
indentation is a "best practice," it's a core tenet of the Node ecosystem.
Sure, you could write a Haskell library by putting _everything_ in mutable-
state monad blocks, and porting over your procedural code line-for-line. It's
allowed by the language, just like blocking is allowed by Node. But the whole
point of Haskell is to optimize the function-composition use case.

The Node community has the benefit of designing all its libraries from scratch
with this tenet in mind, so in practice you never/rarely need to look for
"stinkers" unless they're documented to be blocking. And unless they're using
badly-written blocking native code, you can just grep for `Sync` to see any
blocking calls.

------
jacobmarble
Node: Everyone knows JavaScript, there's a massive community, there are tons
of libraries, and you get very good performance

Go: No one knows this language, there's a small-but-growing community, there
are enough libraries to get a lot done, and you get even better performance

Java: They are paying me (money!) to write in this language

~~~
mseepgood
The Go community on Reddit is already bigger than the Node community:
<http://www.reddit.com/r/golang> (3730) <http://www.reddit.com/r/node> (3581)

~~~
Flow
And the Google+ Go community is growing fast and is already larger than the
sub-reddit. <https://plus.google.com/communities/114112804251407510571>

~~~
sigzero
You do see a correlation, right?

------
burke
I've been using Go a lot lately. It's difficult to overstate just how much
simpler it makes writing highly-concurrent server-type programs. Entire
classes of bugs, issues, and puzzles just vanish.

~~~
chimeracoder
I've been using Go a lot recently as well, and it's rapidly become my go-to
language (no pun intended) for a lot of problems, even when concurrency is not
involved.

The biggest thing Go gives me is that it's _really_ easy to manage code bases
that grow organically - refactoring a project that grows from 50 LOC to 5000
LOC is almost painless in Go - no other language that I've seen has dealt with
this aspect of code development so well.

~~~
alec
What about Go makes it easy to manage and refactor large codebases? I don't do
much Java, but whenever I've watched someone use Eclipse for refactoring, I
question why I'm still using vim, because it is just magic and does everything
for you.

~~~
chimeracoder
I should probably write a blog post about this, because it's a combination of
a number of things. Primarily, the compiler is incredibly strict and
opinionated, so it's impossible to make certain small errors like declaring
variables that are never used, importing packages that are never used, etc.

Secondarily, gofmt makes code very standardized and easy to skim. It takes
Python's 'only one (obvious) way to do it' one or two steps further, by
forcing _everyone_ to write their code the same way. This makes refactoring
a lot easier because you don't need to read as much code each time in order to
understand what's going on (or at least, you can mentally parse it much
faster).

Finally, the context-free grammar combined with a strong, static type system
means that migrating code from an old (incompatible) version of Go to the most
recent one can be done painlessly with the 'go fix' tool. This isn't your
py2to3 tool - this accepts valid (old) Go code as input and reliably produces
valid (up-to-date) Go code as the output.

That last bit isn't actually used in refactoring manually, but I make note of
it because very few languages give you anything close to this level of
reliability with code modification, which speaks volumes about the design of
the language's grammar.

~~~
crypto5
But the Go ecosystem still doesn't have anything even close to the refactoring
abilities of Java/C# IDEs. Say, moving a Java class that's used in hundreds of
places from one package to another is just a few mouse clicks in Eclipse, and
a lot of pain in Go.

~~~
4ad
Not really. Go comes with code rewriting tools, unsurprisingly since they were
used to update the standard library and 3rd party code when the language
changed before Go 1.

I don't want to use a language that doesn't come with a parser in the standard
library anymore.

~~~
crypto5
So, is it possible to move/rename class/method/field smoothly using those
tools? Extract code block to new method?

~~~
zemo
the gofmt tool has a replacement feature:

`gofmt -r 'InitialName -> FinalName'`

~~~
crypto5
The documentation looks quite incomplete (<http://golang.org/cmd/gofmt/>).
Will it change all imports and usages of a class/method/field? Will it resolve
all signature-polymorphic calls?

------
stcredzero
_> There’s no arguing about whether to use semicolons, or putting your commas
at the front of the line — the language knows what it wants. It’s a built-in
hipster suppression mechanism._

Major point for saving man hours right there.

------
SeanDav
The fact that Node.js is being used in this equation says a lot about how much
impact and penetration it has achieved in a rather short while.

Personally I hope that Go does just as well, if not a lot better. I am a bit
of a fan of both.

~~~
weego
I think it says more about the bias of the writer; he seems to assume that
Node would be the default choice and that something like
erlang/scala/clojure/go would be the alternatives. That may be true for
someone's sideline project.

~~~
jff
If he's a HN reader, he's probably assumed that Node is the default choice
these days, based on the articles that come up.

------
islon
Clojure is another nice alternative for fast servers, and using a concurrent,
immutable and functional language is a huge win. http-kit is a good example of
such server: <http://http-kit.org/>

------
jamwt
Here's a haskell comparison (hint: it does very well).

<https://gist.github.com/jamwt/5017172>

Haskell was ghc 7.6.1 with ghc --make -O2

Go is go1.0.2 with "go build".

~~~
e12e
As noted above - with ridiculously large values for ab this one crashes
(although I didn't compile with O-parameters). I think this (and the other
haskell solution) ran out of resources.

Both the go and nodejs versions completed without problems.

I was a little disappointed -- I was actually hoping I'd see comparable
performance -- even if it is a silly test.

I think it is interesting that simple, idiomatic code in go and nodejs didn't
crash -- not sure what assumptions might be "wrong" in the underlying haskell
code (I'm guessing if anything should be "fixed" it is in the web server
libraries used).

------
hrwl
I have never understood the focus on speed as a selling point for Node. It may
well be very fast, but it seems to me that the primary selling points would be
the ability to share code between client and server and that you can start
coding server side without learning a new language if all you know is
JavaScript.

~~~
pcwalton
Compared to Python and Ruby, node.js is quite fast by the simple virtue of
having a JIT (in the most common implementation anyway; of course there are
JITs for Python and Ruby but they aren't the mainline implementation).

~~~
codygman
Which is a shame, since PyPy is really an excellent project.

------
mjijackson
I was curious, so I actually ran both of the servers from the article on my
little MacBook Air. The results are below.

First, go:

    
    
        $ ab -c 100 -n 10000 http://localhost:8000/
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/
    
        Benchmarking localhost (be patient)
        Completed 1000 requests
        Completed 2000 requests
        Completed 3000 requests
        Completed 4000 requests
        Completed 5000 requests
        Completed 6000 requests
        Completed 7000 requests
        Completed 8000 requests
        Completed 9000 requests
        Completed 10000 requests
        Finished 10000 requests
    
    
        Server Software:        
        Server Hostname:        localhost
        Server Port:            8000
    
        Document Path:          /
        Document Length:        1048576 bytes
    
        Concurrency Level:      100
        Time taken for tests:   10.085 seconds
        Complete requests:      10000
        Failed requests:        0
        Write errors:           0
        Total transferred:      10489017384 bytes
        HTML transferred:       10487857152 bytes
        Requests per second:    991.62 [#/sec] (mean)
        Time per request:       100.846 [ms] (mean)
        Time per request:       1.008 [ms] (mean, across all concurrent requests)
        Transfer rate:          1015729.90 [Kbytes/sec] received
    
        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    2   0.8      2       6
        Processing:    21   99   5.6     98     137
        Waiting:        1    3   2.7      2      41
        Total:         25  101   5.6    101     139
    
        Percentage of the requests served within a certain time (ms)
          50%    101
          66%    102
          75%    103
          80%    103
          90%    105
          95%    106
          98%    108
          99%    112
         100%    139 (longest request)
         

Secondly, node.js:

    
    
        $ ab -c 100 -n 10000 http://localhost:8000/
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/
    
        Benchmarking localhost (be patient)
        Completed 1000 requests
        Completed 2000 requests
        Completed 3000 requests
        Completed 4000 requests
        Completed 5000 requests
        Completed 6000 requests
        Completed 7000 requests
        Completed 8000 requests
        Completed 9000 requests
        Completed 10000 requests
        Finished 10000 requests
    
    
        Server Software:        
        Server Hostname:        localhost
        Server Port:            8000
    
        Document Path:          /
        Document Length:        1048576 bytes
    
        Concurrency Level:      100
        Time taken for tests:   15.765 seconds
        Complete requests:      10000
        Failed requests:        0
        Write errors:           0
        Total transferred:      10487558651 bytes
        HTML transferred:       10486808576 bytes
        Requests per second:    634.31 [#/sec] (mean)
        Time per request:       157.653 [ms] (mean)
        Time per request:       1.577 [ms] (mean, across all concurrent requests)
        Transfer rate:          649639.92 [Kbytes/sec] received
    
        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    1   1.7      1      11
        Processing:     2  156  34.7    159     272
        Waiting:        1   47  29.7     42     136
        Total:          2  157  34.7    161     273
    
        Percentage of the requests served within a certain time (ms)
          50%    161
          66%    174
          75%    182
          80%    187
          90%    198
          95%    209
          98%    221
          99%    227
         100%    273 (longest request)
    

Not only does go serve the traffic more quickly, but it also has a much lower
standard deviation between the fastest and slowest requests. Impressive.

~~~
oinksoft
I'd be interested to see your numbers benchmarking with `siege' rather than
`ab'.

~~~
pooriaazimi
I really love 'siege'. I wonder if you can get it to show you response-time
percentiles (or whatever they're called). I mean this (from `ab`):

    
    
        Percentage of the requests served within a certain time (ms)
          50%    161
          66%    174
          75%    182

------
dpweb
I run node/express for most of my web servers and each takes up about 10-15mb
RAM. They're very basic, no fluff. Anyone know what a comparable memory
footprint in Go would be?

~~~
codygman
Here's a quick paste from a server of mine:

    
    
        ps aux | grep api
        ubuntu   15720  0.0  0.1 107024  6136 pts/0    Sl   15:13   0:00 bin/api_server

~~~
jff
IIRC ps displays the resident size in KiB, so you're looking at 6 MB for a Go
process. Not terrible for a high-level language.

~~~
codygman
Yep, just ran ps_mem.py (<http://www.pixelbeat.org/scripts/ps_mem.py>) and
here's my entire server. Not a busy one, but it lets you see what a running
api server and accompanying programs look like.

    
    
        Private  +   Shared   =  RAM used	Program
    
        184.0 KiB +  31.5 KiB = 215.5 KiB	atd
        240.0 KiB +  55.0 KiB = 295.0 KiB	cron
        240.0 KiB +  68.0 KiB = 308.0 KiB	upstart-socket-bridge
        304.0 KiB +  72.0 KiB = 376.0 KiB	upstart-udev-bridge
        392.0 KiB +  79.0 KiB = 471.0 KiB	sudo
        696.0 KiB +  26.0 KiB = 722.0 KiB	dhclient3
        604.0 KiB + 189.0 KiB = 793.0 KiB	getty (6)
        940.0 KiB +  49.0 KiB = 989.0 KiB	dbus-daemon
        660.0 KiB + 366.0 KiB =   1.0 MiB	udevd (3)
        1.0 MiB +  71.0 KiB   =   1.0 MiB	rsyslogd
        1.1 MiB +  35.5 KiB   =   1.1 MiB	redis-server
        1.0 MiB + 122.5 KiB   =   1.2 MiB	init
        964.0 KiB + 733.0 KiB =   1.7 MiB	polkitd
        1.4 MiB + 823.5 KiB   =   2.2 MiB	console-kit-daemon
        2.5 MiB +   1.1 MiB   =   3.6 MiB	nginx (5)
        1.3 MiB +   3.2 MiB   =   4.6 MiB	sshd (5)
        5.4 MiB +  75.5 KiB   =   5.5 MiB	api_server <==== Go Program
        13.8 MiB + 963.0 KiB  =  14.7 MiB	bash (2)
        22.5 MiB + 314.0 KiB  =  22.8 MiB	mysqld
        41.9 MiB +   5.1 MiB  =  47.0 MiB	python2.7 (2)
        79.4 MiB + 542.0 KiB  =  79.9 MiB	java
        ---------------------------------
                            190.3 MiB
        =================================

------
tferris
I find this post paired with this thread confusing. Yes, Go is tempting and
I'd like to try it since a lot of people get quickly into flow with Go, the
"package manager is so great" and "everything is just a breath of fresh air".

But what I don't like: the negativity against Node and the omission of some
facts. In the replies to the original post, a guy tested Node two (!) times;
once it was significantly faster (v0.6) and once it was the same speed (v0.8).
So why does mjijackson get such different results at the top of this thread??
And maybe we should test it on real servers and not on a MBA. Moreover, we
have here a micro benchmark which possibly doesn't reflect reality well. Don't
get me wrong, I appreciate any benchmarking between languages, but then please
do it right and don't make propaganda out of it. Further, Go's package manager
seems to be nice, but it does NOT have version control. How do you want to use
this in a serious production environment? Maybe version control will come (but
then tell me how, without losing its flexibility) or not, but this is
something serious and definitely not an alternative to any server environment
except for some mini services.

EDIT: downvoting is silly, propaganda and won't help the Go community in
getting more credibility, better do some further benchmarks; otherwise this
post/thread is full of distinct misinformation and should be closed

~~~
cmccabe
Go's package manager does have version control. It looks for specially named
branches (different ones depending on the version of go you have). The
upstream authors can provide a different version of the software for different
releases of Go.

If you want to lock down the versions of all the software you're deploying in
your organization, that's easy to do too. Just "git clone" all of the
libraries you use to some internal server (and/or github repos), and change
the URLs to point to that server. You control when everything gets updated.

Golang builds static binaries anyway. So if you test a binary and it works,
you just copy it to all the servers you care about and you're done. If you're
in a small and informal shop, maybe you don't need to mirror every repository.
Due to the nature of git, if the upstream repo ever gets deleted, you can fall
back on a local copy of it anyway.

This is all very much in contrast to languages like Java, where keeping around
the proper version of every jar and carefully deploying that version (and only
that version!) on each server is a big deal (and despite OSGi, still very much
an unsolved problem).

~~~
grey-area
The go import tool does version go std lib packages, but I think the worry is
more about external packages - later when the go ecosystem matures, and I'm
relying on lots of packages from different sources (say for a web app), with
interdependencies, and I _must_ update them regularly for security updates,
but don't necessarily want the latest master for all, I have no way to specify
versions without rolling my own personal package management, downloading the
packages and changing urls etc. If I'm in an org and sharing this with others
I'd basically end up reinventing a package proxy internally in order to
control what software versions are used to build.

This is a small flaw in the go packaging system (and it is a flaw) which I'm
sure they'll fix, either with a convention to always use branches for versions
(which I guess is doable, but only if it becomes widespread), or by changing
import to take an argument for the version. So something like:

    
    
        import "github.com/gorilla/mux/1.0.3"
    

or

    
    
        import "github.com/gorilla/mux", ~> "1.0.3" 
    

which would let you specify any minor updates say and possibly other
permutations, and thus might be more flexible. I believe a few things have
been suggested on the list but I haven't followed the conversation, not sure
what the outcome was - perhaps just that it needs further thought.

At present the first solution is possible, BUT it needs to become a convention
which everyone follows in order to be useful (involving named branches or tags
at the repo). The other advantage of this is that it states requirements fully
in the package concerned - otherwise all we know is that this code requires
github.com/gorilla/mux, not which version or when it was last tested/imported,
whether it will work with the latest release or not, etc. But then it would
let you run into conflicts by accidentally importing two versions of the same
package with different paths.

The current convention of no version certainly puts more onus on the package
maintainers to maintain backwards compatibility, or on users to maintain their
own library of packages which they periodically update, and I see the
arguments for it.

Of course none of this matters if you're just one person using packages to
build an app at one moment in time, but if you yourself supply packages/apps
to others, or want to setup members of an org over time with the same set of
packages in the same state, and rely on other packages to compile yours, it
can become more complex.

Other package managers tend to have a central point to adjust package
dependencies, and make sure packages are always available forever at the same
uri, so it'll be interesting to see how go manages this when go packages
become more complex and widely used, start to be removed by maintainers, and
start to have complex chains of dependencies on specific versions, or whether
the much simpler go system in the end leads to a saner situation and puts the
onus for this problem back on end users.

~~~
tferris
It's great to have someone bring a balanced view on Go and show that Go is
still not ready for production when used with external libs. A refreshing take
in this overhyped thread.

~~~
grey-area
I'm not sure I'd agree that it is not ready for production, but there are some
kinks (relatively minor I feel) which will become more apparent when it is
used more widely than it is currently in production. Just as Ruby (for
example) has gone through a trial of fire since becoming popular because of
Rails, and has been much improved because of it.

------
WhaleFood
One advantage of node that wasn't mentioned is the ability to share server
side and client side code. Avoiding discrepancies in the same form validation
written in two different languages can often be more important than
performance gains in server applications.

~~~
papsosouid
>One advantage of node that wasn't mentioned is the ability to share server
side and client side code

You can do that with lots of languages, using one of the langX->javascript
compilers.

>Avoiding discrepancies in the same form validation written in two different
languages

What form validation are you doing in javascript? I can't actually think of
any client side validation that isn't just a regex.

------
dgudkov
>There are also some officially maintained repositories outside of the stdlib
that deal with newer protocols like websockets and SPDY.

Does anybody here on HN use Go with websockets? What package do you use?

------
stesch
What do you use for persistence? I know what's available, but which database
and library could be recommended for a low to medium traffic web site?

------
tferris
How long does compiling take with a medium-sized, typical web project?
Something which gets annoying, or still bearable?

~~~
grey-area
If you mean go, compile time is very fast, even with say 10 kloc you'll still
probably be under a second.

------
babuskov
Is there a socket.io equivalent for Go? In terms of ease of use and at least
providing the same functionality.

~~~
mfenniak
Looks like there are a couple of packages that would solve this problem, but
neither appears to be production-ready or up-to-date.

<https://github.com/madari/go-socket.io>

<https://github.com/igm/sockjs-go/>

------
logn
I wonder how Go compares to SilkJS, since SilkJS is much faster than Node.

~~~
robert-zaremba
SilkJS uses mostly the same libraries and relies on the same VM as Node.js:
V8. So it can't be much faster. The note on how it outperforms Node.js's HTTP
server, which you can read on the front page of its GitHub repository, is
misleading, since it uses multiple processes (the SilkJS HTTP server forks
itself).

