
TFA (or MFA for My FA?) asks why Pike emphasized Go's concurrency performance when it was almost certainly worse than Stackless Python's, then shows why I inferred that it was worse.



On speed: Rob Pike's video demo compiled 120kloc RTL in 8 seconds (~15kloc/sec). Your C compile took 7 seconds for 25kloc (3.6kloc/sec), or 2.75 seconds (9.1kloc/sec).

On concurrency: you compared a parallelized CSP implementation with a single-threaded one, in a test dominated by communication costs. Single-threaded communication is much faster than potentially contended parallel communication. (A sketch of such a communication-bound test follows below.)

On benchmark hygiene: there is a plethora of different machines, different performance profiles, and different numbers flying around here. The kloc/sec numbers don't mean anything unless they come from the same machine; similarly, the concurrency numbers depend on the degree of hardware parallelism and the kind of workload (the proportion of per-task work vs. communication vs. task startup, and all the other cost variables). Several different benchmarks would need to be run to actually tease out these variables and really figure out which one is better.
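
For concreteness, here is a rough sketch of the kind of communication-bound test in question, written as the usual Go daisy-chain: n goroutines connected by unbuffered channels, each doing almost nothing beyond one receive and one send. The 100,000 figure and the timing harness are my own choices, not numbers taken from this thread.

    // chain.go -- a communication-dominated microbenchmark: n goroutines
    // connected by unbuffered channels, each receiving one value, adding 1,
    // and forwarding it. Runtime is dominated by channel operations, not work.
    package main

    import (
        "fmt"
        "time"
    )

    func stage(in <-chan int, out chan<- int) {
        out <- 1 + <-in // one receive, one send, then the goroutine exits
    }

    func main() {
        const n = 100000 // tasklet/goroutine count used in the comparisons

        start := time.Now()
        head := make(chan int)
        in := head
        for i := 0; i < n; i++ {
            out := make(chan int)
            go stage(in, out)
            in = out
        }
        head <- 0          // inject a value at the front of the chain
        fmt.Println(<-in)  // prints n after it has crossed every stage
        fmt.Println(time.Since(start))
    }

Varying n, the amount of per-stage work, and GOMAXPROCS on the same machine is what would actually separate scheduling cost from communication cost.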

-----


Hmm. I think I may have gotten the compile numbers from another video about Go, and as I'm reaching the end of my stamina for this topic, I'll just assume I got that wrong.

On concurrency, I thought Go right now was configured by default to run goroutines on only a single kernel thread, which would make both of them single-threaded implementations.
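
For reference, a minimal sketch of what I mean by that configuration, assuming the runtime behaviour being discussed (one OS thread by default, overridable via the GOMAXPROCS environment variable or runtime.GOMAXPROCS):

    // gomaxprocs.go -- inspecting and changing how many OS threads the Go
    // scheduler will use to run goroutines. A default of 1 means goroutines
    // are multiplexed onto a single kernel thread, as described above.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // An argument < 1 only reports the current setting without changing it.
        fmt.Println("GOMAXPROCS is", runtime.GOMAXPROCS(0))

        // Allow goroutines to run on up to 4 OS threads in parallel.
        prev := runtime.GOMAXPROCS(4)
        fmt.Println("was", prev, "now", runtime.GOMAXPROCS(0))
    }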

A few people now have reported benchmark numbers on their machines, with both Stackless and Go. In every case so far, Go has been slower than Stackless for the 100,000 tasklet/goroutine case. Of course, if Pike had a MacBook Air with an SSD, then the compile times are absolutely not comparable.

Agreed also about how to tease out the different variables. Still, recall that what I mostly want to know is why Pike stressed that there were no tricks going on under the covers to make the performance fast, with the implication that people wouldn't quite believe how fast it was, when the performance does not seem exceptional compared to other similar languages.

-----


I quote from the Stackless mailing list:

"Go is stackless where Stackless is not. Its goroutines use allocated stacks (starting at 4k in size) and can continue running on different threads where Stackless tasklets cannot. In fact, when a goroutine blocks on a system call, the other goroutines in its scheduler are migrated to another thread."

So Stackless is not really concurrent at all. Given that, it is no wonder the performance is different: the functionality provided by Stackless 'microthreads' is in no way comparable to goroutines.
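
A small sketch of the behaviour described in that quote (my own code, Unix-only, using the raw syscall package so the read genuinely parks a kernel thread): even with GOMAXPROCS at 1, a goroutine blocked in a read on an empty pipe does not stop other goroutines, because the runtime hands them to another OS thread.

    // blocked.go -- with a single logical processor, a goroutine parked in a
    // blocking read(2) on an empty pipe does not stop other goroutines; the
    // runtime moves them onto another kernel thread, as the quote describes.
    package main

    import (
        "fmt"
        "runtime"
        "syscall"
        "time"
    )

    func main() {
        runtime.GOMAXPROCS(1)

        var p [2]int
        if err := syscall.Pipe(p[:]); err != nil {
            panic(err)
        }

        go func() {
            buf := make([]byte, 1)
            syscall.Read(p[0], buf) // blocks in the kernel until a byte arrives
            fmt.Println("reader woke up with", buf[0])
        }()

        // The main goroutine keeps running even while the reader is blocked.
        for i := 0; i < 3; i++ {
            fmt.Println("still making progress:", i)
            time.Sleep(100 * time.Millisecond)
        }

        syscall.Write(p[1], []byte{7}) // unblock the reader
        time.Sleep(100 * time.Millisecond)
    }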

-----



