

Benchmarking Go and Python Web servers - supersillyus
http://ziutek.github.com/web_bench/

======
sigzero
"As you can see Go implementations of Web application wins in almost all test
cases."

That is pretty much what I would expect from a compiled vs non-compiled
language.

------
japherwocky
I'm mostly pumped that tornado is holding up pretty well against a compiled
language. :)

~~~
dchest
Don't forget that Go still has a lot to improve w.r.t. code optimization (no
function inlining yet and a simple goroutine scheduler, for example), and that
its regexp implementation is still simple and suboptimal (I heard they plan to
replace it with RE2 later).
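(For the curious: the practical difference between a backtracking regex engine and an RE2-style one is easy to demonstrate even from Python, whose stdlib `re` is also a backtracking engine. The pattern below is a standard pathological example, not anything from the benchmark:)

```python
import re

# A pathological pattern for backtracking engines: the nested
# quantifier forces the engine to try exponentially many ways of
# splitting the run of 'a's before it can conclude there is no match.
# An RE2-style engine, which guarantees linear-time matching, answers
# instantly regardless of input length.
pattern = re.compile(r"(a+)+b")
subject = "a" * 18  # no trailing 'b', so the match must fail

# Each extra 'a' roughly doubles the work a backtracking engine does
# here; 18 characters is already on the order of 100k+ steps.
assert pattern.match(subject) is None
```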

~~~
hendler
Wondering if Go is going to take 5 more years to be "ready" - or at least to
reach a stage where benchmarks come with no caveats.

Couldn't Google fast-track development? Or is it that it was built for
internal purposes and mainstreaming it isn't a priority?

<http://golang.org/doc/devel/roadmap.html>

------
dchest
Here's a profile of http.go
<https://github.com/traviscline/web_bench/blob/master/http.go>:

    
    
            287.36%	runtime.mach_semaphore_wait
             80.46%	syscall.Syscall6
              2.30%	runtime.cas
              2.30%	runtime.memmove
              1.15%	MCentral_Free
              1.15%	MHeap_FreeLocked
              1.15%	bytes.*Buffer·WriteString
              1.15%	crypto/sha256.Init·
              1.15%	hash_remove
              1.15%	runtime.MCache_Alloc
              1.15%	runtime.MHeap_LookupMaybe
              1.15%	runtime.breakpoint
              1.15%	runtime.mach_semaphore_signal
              1.15%	runtime.mallocgc
              1.15%	runtime.xadd
              1.15%	syscall.Syscall
    

It probably shows that the test is mostly measuring the goroutine scheduler
and how long it takes Go to make a syscall.

------
supersillyus
I kinda expected Go to lose. Its HTTP library seems optimized for simplicity,
and I get the impression that very little performance tuning has been done on
the scheduler. I'd like someone to set up a Shootout-style multi-language
continuous benchmark for network-type apps, to help judge the general
suitability of languages for IO- and memory-management-heavy tasks.

------
masklinn
Stop the presses, this just in: a compiled language built for concurrency is
better at producing performant web servers than an interpreted language with a
GIL.

Next in line: is an Nginx in-module application faster than Rails on WEBrick?
We're not sure yet, but we're going to tell you. Stay tuned.

~~~
vegai
I think it's somewhat interesting how small the gap is.

------
jgfoot
It looks like he used web.py's built-in CherryPy server, but if I recall
correctly that server is recommended only for debugging, not for production.

------
udoprog
It's interesting how you can predict the bias from the ordering of languages
in the title. Throw in a native event loop with Twisted and compare.

------
traviscline
Added a basic gevent example here that is apparently about 10% better than
http.go w.r.t. transaction rate.

<https://github.com/traviscline/web_bench>

(notice: I realize this is a highly synthetic benchmark and reading siege
results like this is not good benchmarking)
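(For readers who haven't opened the repo: a minimal sketch of what such a gevent-served app might look like. The route regex, names, and port here are my guesses for illustration, not necessarily what web_bench uses:)

```python
import re

# Hypothetical route in the style of the benchmark: a regex is
# matched against the request path on every hit.
HELLO_RE = re.compile(r"^/hello/(\w+)$")

def app(environ, start_response):
    """Plain WSGI app; gevent's pywsgi server can run it unchanged."""
    match = HELLO_RE.match(environ.get("PATH_INFO", ""))
    if match is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello " + match.group(1).encode("utf-8")]

if __name__ == "__main__":
    # Requires gevent; commented out so the sketch stays stdlib-only.
    # from gevent.pywsgi import WSGIServer
    # WSGIServer(("", 8080), app).serve_forever()
    pass
```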

~~~
dchest
I don't see the use of regexp in gvent.py.

~~~
traviscline
Good call, added it, initial run didn't show a huge difference.

edit: here are numbers from a few runs (transaction rates, gvent.py vs.
http.go):

    gvent.py  http.go
      361.43   386.68
      354.69   397.15
      388.96   377.69
      424.81   430.34

http.go averaged 26% utilization; gvent.py averaged 15% utilization.

I'm embarrassed posting this because of how grossly unscientific it is.

Not quite as fast as I thought initially, but likely faster than the other
Python competitors.

~~~
dchest
Good. Next: the Go version also checks for the "GET" method and whether the
request contains any parameters at all (returning 404 if not). Though I don't
think it will make a big difference.
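For parity, the Python side would need the same guards; in WSGI terms that check might look something like this (a sketch only - the function name and response bodies are hypothetical):

```python
def guarded_app(environ, start_response):
    # Mirror the Go version's extra checks described above: anything
    # that isn't a GET, or that carries no query parameters at all,
    # gets a 404 before the real handler runs.
    if environ.get("REQUEST_METHOD") != "GET" or not environ.get("QUERY_STRING"):
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]
```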

------
BrianLy
I don't see how benchmarks like these are really useful to anyone other than
the author. I care more about reliability than raw performance on a simple
test, so I would rather see a fairly complex application of average quality
being run under load.

------
mnazim
Stuff like this is the reason why people are saying HN quality is going down.

~~~
supersillyus
As submitter, I mostly agree. Microbenchmarks are the celebrity gossip of the
tech world. They are mostly meaningless, but they get more attention and
controversy than they deserve.

And yet I submitted it anyway: I saw the link, found it interesting, and
posted it. I'm a little surprised it made the front page, but if more
interesting content had been available, I assume it wouldn't have.

~~~
mnazim
My comment was not directed at you only. Such things rising to the front page
are, in equal part, our (voters'/commenters') fault too.

A year back, the HN front page was all I needed for my daily dose of
interesting tech stories (I hate Twitter). Now really good content is buried
on later pages most of the time.

------
TillE
I don't see the version of Python that was used.

If possible, try it with PyPy. If it works at all (i.e., no C dependencies),
it should be significantly faster than CPython and may use less memory.

~~~
riobard
PyPy typically uses 2x more memory than CPython.

------
wladimir
I wonder why he didn't compare against gevent-based servers. In the
comparisons I've seen of Python-based HTTP servers, those came out highest in
performance.

~~~
traviscline
See my other comment, gevent appears to outperform in my initial test.

------
petdog
My experience with siege is that it actually measures siege's terrible
performance.

------
st3fan
This is not really fair. Compare Go to Java instead.

~~~
vegai
Yep. And Haskell shouldn't be compared to anything :P

