Rust programs versus Go (debian.net)
144 points by chatmasta 4 months ago | 202 comments



These benchmarks have many flaws.

Consider this thread: https://www.reddit.com/r/golang/comments/51mhzv/on_the_binar...

The binary-trees benchmark would fare significantly better in Go if it were allowed to use an arena allocator like other implementations do. But it's not.

This despite the fact that the Rust version literally uses one: https://benchmarksgame-team.pages.debian.net/benchmarksgame/....

I guess because it's not in the stdlib in Go?

Of course it's not in C's stdlib either, so does C use malloc? Nope. It uses a memory pool too: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

It's sad that these benchmarks get used for anything. They tell you next to nothing significant.


There are many predictable things in life: the tendency of up-and-comer languages to blitz these benchmarks due to careful optimization, and the tendency of people whose pet language has lost on a benchmark or two to get on here and write off the idea that anyone should ever benchmark cross-language.

Some of the benchmarks are pretty ridiculous and flawed. Others look pretty straightforwardly comparable. I have trouble looking at n-body (just fr'instance) and seeing any major difference or cheat on the Rust side (or for that matter, the much more mature C/C++ sides). Leaving aside the obviously ridiculous (the regex one is an absolute festival of 'who brought a fast regex library to the gunfight'), the trend is pretty clear: Go's code generator is pretty ordinary (it's not actually aimed at achieving ultimate code quality at any compile-time cost, so it's working as designed) and GC isn't free.


I find microbenchmarks mostly useless. There are just too many variables in play.

Also they tend to focus on the developer who optimizes code, not the journeyman developer in your company who actually writes code. C or Fortran looks fantastic on benchmarks, but if you forced all your developers to use C I doubt you'd see a similar outcome.

They seem to push developers in the direction of coming up with simple rules of thumb like "Go is 2x slower than C" or "Python is 10x slower than C" which is complete nonsense. It really matters what the code is doing.

In particular, for server programs, developers come to these conclusions without considering the multi-core environment they're working in. E.g., Redis is a really fast key-value store that will use 1 core on a 32-core machine.

And as it turns out, all the tips and tricks for improving performance in these cases work in all the other languages too. So just what are we to conclude from them?

I suppose a nice simple math benchmark can demonstrate that LLVM or GCC is better at optimizing code than Go?


I have a lot of sympathy for your viewpoint, as I think it's abundantly clear that there are major productivity differences between languages. I doubt that anyone would be seriously running around doing an end-zone dance because C beats, say, Python. "Yeah, suck that, Python losers!".

However, I think the fairly simple benchmarks where there looks to be a rough 1:1 correspondence between the structures of some similar Rust, Fortran, C/C++, Go code might demonstrate exactly the point you suggested (facetiously? I'm not quite sure how you are suggesting we compare compiler frameworks to a language, especially a language that can use both frameworks with alternate backends...)

I don't know why this is controversial. The Go optimizer wasn't written primarily with utmost code quality in mind (it's pretty much a lift of the Plan 9 C compiler, which I worked with in 1994). It's designed to produce code quickly. It's also not rocket science that GC can be expensive despite years of claims to the contrary (it's been 'about to be the same or faster as manual memory management' since the mid-90s at least). Leaving aside the wacky benchmarks (binary trees and regex) the results seem pretty consistent with this.

These rules of thumb are OK for what they are worth (not very much). Eventually we do have to think about how much performance can be extracted from a language if we really need it. The irritating thing about this whole benchmarking problem is that there is no natural population of programs to draw from, and there is constant thumb-on-the-scale cheating (who can intrinsic the hardest for Grandpa's ancient Core 2 duo, but no later?) - agreed. Essentially we're seeing a real-life version of the saying "the plural of anecdote is not 'data'". Nonetheless... it feels like we can say some programming languages are faster than others.


I think you'll find the performance per gzipped line of code is a decently robust metric for a performance/productivity tradeoff, although it may have a wide variance.

Performance optimizations tend to involve a lot of specialization, which means more code, which makes this ratio worse. The gzipped size is a proxy for the minimum description length/Kolmogorov complexity, which measures language expressiveness.
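
As a crude illustration of the metric (a minimal sketch of mine, assuming the flate2 crate; the benchmarks site does its own measurement):

    use flate2::write::GzEncoder;
    use flate2::Compression;
    use std::io::Write;

    // Gzip-compress a program's source and return the compressed size,
    // a rough proxy for its Kolmogorov complexity.
    fn gzipped_len(src: &str) -> usize {
        let mut enc = GzEncoder::new(Vec::new(), Compression::default());
        enc.write_all(src.as_bytes()).unwrap();
        enc.finish().unwrap().len()
    }

    fn main() {
        let program = r#"fn main() { println!("hello"); }"#;
        // Divide a benchmark time by this to get a perf-per-gzipped-byte score.
        println!("gzipped bytes: {}", gzipped_len(program));
    }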


APL and co kind of undermine that metric, wouldn't you say?


Kolmogorov complexity is "ideal compression", in that it bundles the axioms and the expression to reproduce the desired output using those axioms. So it really is a perfect measure for this type of thing.

Gzip is an approximation of this metric of course, and the only flaw in the benchmark suite is that it should really include the size of the runtime which provides the programming language's axioms.

APL is also a good example of why it's important to include various programs in the suite for a valid comparison. Some APL programs are longer than more expressive equivalents when APL is ill-suited to the problem, and once you include the axioms.


I know what Kolmogorov complexity is, I was trying to get at something else: removing boilerplate is one thing, but the other extreme isn't all that helpful either. When discussing information theory, one should always remember that white noise has the maximum amount of information, so the theoretical best scoring programming language of your measurement would be indistinguishable from white noise. Hardly productive, right? One of the key things about higher programming languages is that they add structure and patterns - in information theory terms: redundancy - to make it easier for programmers to follow what is going on.

I mentioned APL because programs in it usually consist of a handful of symbols with very little repetition. The symbols/performance ratio is really good for APL. Or to be even more on the nose: think of code golfing languages[0].

[0] https://en.wikipedia.org/wiki/Code_golf#Dedicated_golfing_la...


> so the theoretical best scoring programming language of your measurement would be indistinguishable from white noise.

The compressed program is necessarily shorter than the output it's trying to reproduce, unless the output is itself pure noise. Therefore noise and programs must be distinguishable.

> One of the key things about higher programming languages is that they add structure and patterns - in information theory terms: redundancy - to make it easier for programmers to follow what is going on.

Structures and abstractions don't introduce redundancy, they are actually compression. A while loop is clearly more compressed than an infinite sequence of if-statements repeatedly testing the termination condition. An abstraction, like a list or monad, eliminates significant boilerplate that would be needed to reproduce the output of the list or monad.
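
To make the claim concrete (a trivial Rust sketch):

    fn main() {
        // The loop is the "compressed" form...
        let mut i = 0;
        while i < 3 {
            println!("{}", i);
            i += 1;
        }

        // ...of a (here truncated) unrolled expansion of if-statements
        // repeatedly testing the termination condition:
        let mut j = 0;
        if j < 3 { println!("{}", j); j += 1; }
        if j < 3 { println!("{}", j); j += 1; }
        if j < 3 { println!("{}", j); j += 1; }
    }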

Adding redundancy makes things harder to follow because you need to keep more state in your head. Adding the right kind of compression makes things easier for humans to follow (obviously there are bad kinds of compression, like alpha conversion to random strings).


I always find this argument really strange. How is making a complicated decision (which language is best for this task) easier using less information (the benchmarks are only one facet of the decision)?


The name of the website used to be the language shootout; it was subsequently changed to the benchmarks game to emphasize that it's not a rigorous comparison between languages.

Benchmarking is hard. You have to identify workloads that are realistic and representative, large enough to be sure you're hitting the steady-state performance yet small enough you can run them on a regular basis. And, of course, you have to make sure that you're not embarking on a journey of overfitting your benchmark suite. Microbenchmarks, where you look at a single kernel in isolation, are extremely hard to get right, especially because it can be easy to accidentally test how fast your computer can do nothing.

Benchmarks can be incredibly useful, but an uncurated, gamified set of benchmarks is not going to be in the category of useful benchmarks. There is a risk that by turning a bad benchmark into a programming mantra, you can seriously harm the performance of your code in the future (Duff's device and object pooling are two "optimization" strategies that are now more harmful to your performance than not doing them).


+1 for Duff's device. It's really not good now (compared to the unrolled-loop-plus-a-peel approach), as all those in-edges into the loop body paralyze the compiler's ability to make good choices.

An anecdote I often share is that we had a hotshit performance programmer who often enclosed his simple C versions of the code with:

#ifdef I_DONT_CARE_ABOUT_PERFORMANCE

The punchline: defining this macro and using the plain C version usually improved performance over the giant festival of overly-Pentium-4-focused inline asm.


Do the results of the game differ from your experience when you squint at them from a distance? They line up with mine pretty accurately, but I only have experience with a handful of those languages. Python is horrifically slow even with numpy in my experience: about 11x slower than native for an actual project. Java is 2-3x slower on everything I've used that had native and Java versions of the same software. C# has always felt faster than these benchmarks indicate, though. I'm interested in the gut feel of other people who have experience with Go, and maybe some who are better at using Python? I really wanted to like it.


nit-pick -- The name was changed after the Virginia Tech shooting, because the search results were mostly about gun deaths and porn. Not a bright happy start to my day.

Not "to emphasize" just about the benchmarks game, but more generally about cross-language comparisons -- and I had to pick a name ;-)


>> the regex one is a absolute festival of 'who brought a fast regex library to the gunfight'

Yes, performant libraries are important.


> I guess because it's not in the stdlib in Go?

I think the premise of these specific benchmarks is to use language-idiomatic approaches instead of hacky optimizations. Is an arena allocator idiomatic for Go? What library do people usually use? I tried a Google search and found this one, but it was last updated 3 years ago: https://github.com/couchbase/go-slab


A while back, there was some stuff about Rust and HashMaps and things. It ended up with this comment: https://www.reddit.com/r/rust/comments/5rwwrv/chashmap_effic...

I don't know if that policy has changed in the last year, but last I heard, that's the rules regarding this.

My understanding is that the intention is to use what's usual in each language.


>> Go if it were allowed to use an arena allocator

Go is allowed to use an arena allocator.

>> I guess because it's not in the stdlib in Go?

A popular third-party library implementation would be fine.

For example, couchbase/go-slab except that it doesn't seem to be a live project and there's no indication that it is actually used by the Go community.

>> so does C use malloc?

C using malloc -- https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


So, we're basically looking at the difference between malloc and an arena allocator?


> I guess because it's not in the stdlib in Go?

It's a game, and the sources are available. Nobody is stopping Golang enthusiasts from posting the results with a slab allocator or some other third-party packages.


> Nobody is stopping Golang enthusiasts from posting the results with a slab allocator or some other third-party packages.

You've never witnessed the "game"'s maintainer interacting with people, have you?


What is it like?



Yes, the maintainer is stopping them. People have tried.


Which popular third-party Go slab-allocator or arena allocator was used?


In that case you can set up a separate site with the results. They will be of more value since they'll be more complete.


Oh no, not The Benchmarks Game again :(

This whole thing is meaningless and deeply flawed.

The sources of the various solutions vary a lot in quality: different libraries are used, different settings (in Go vs Java the use of threads is almost never the same).

I suspect the only true end of this benchmarking exercise is to make people comment vigorously on HN and similar forums.

Can we stop posting them?


I’m the OP. I figured this post would either get tons of upvotes or flagged off the first page. The title really is a firebomb on HN lol.

But honestly I find it genuinely interesting. I’m planning to take some time to learn either rust or go this month, so comparisons like this are helpful for me. And I also appreciate the contrarian comments and discussion of the merits (or lack thereof) of the benchmarks.


I really, really like Rust. Yet, I use Go.

This is mainly because, firstly, Go just fits way, way better for my shop. The transition for my fellow employees from Python to Go is pretty straightforward. Secondly, Go is just nicely set up for ease. I had so much hell in Rust cross-compiling a simple https server.. it was mind-numbing. It might be better now, but I was shocked at how difficult it was.

Finally, Go is just nicer to write in when you're.. well, writing it out. I guess I tend to write as I think about data structures, abstractions, general design. I don't draw it ahead of time or w/e, I just write code. Go is very minimal, and is pretty forgiving of quick rewrites/etc. In Rust I found myself writing myself into a corner constantly when all I was trying to do was get an idea out of my head.

With that said, I vastly preferred Rust's feeling of knowing exactly what my program is doing. No question on if something is nil or not, you know it is or isn't (or know it's unknown haha).

I think for me, in another 2 years I'll push again for Rust > Go in my shop, as I imagine the Rust tooling will have improved to the point where it's easier and quicker to write things. Rust was experiencing a lot of Cargo package churn; I spent hours debugging which packages worked together; it was a headache. Even understanding errors was a bit of a headache, despite the compiler trying really hard. Rust is making great strides though, and even though I use Go to get shit done, Rust is probably my favorite language.. that I don't use heh.

edit: Oh yea, and green threads, god do I wish Rust had green threads. I found Rust to be similar to Go in multithreading ergonomics, but I couldn't use them in the same manner as with Go - ie, fire off as many as I want. There are several libraries in development that should take care of this though, I believe.


> Oh yea, and green threads, god do I wish Rust had green threads.

They still don't? I thought they were working on that years ago, no?


They had green threads, but removed them. To my knowledge, there are multiple libraries that effectively implement green threads, but it's still a work in progress. Might be production ready, not sure, but I believe at the very least the ergonomics are a work in progress.


Hmm, sounds like the kind of thing that is hard to get right then. Maybe it's for the best that it isn't in the standard library until the design and implementation stabilises then.

Sucks for early adopters though!


I doubt Rust will ever get green threads - it turns out that OS threads are good enough, and futures (async/await) is how IO will get dealt with at scale. Green threads don’t help compute-bound code, so with a solution in place for IO, they’re unnecessary.


It's quite unlikely that Rust the language will get green threads, yes. We are getting stackless coroutines ("generators"), which are related but not the same thing.

There's a ton of options in this space, and they're all slightly different, with different tradeoffs, so it's really hard to make good comparisons. You have to deeply understand exactly what each language is doing.


1. long ago, we used to have green threads and a runtime

2. we decided to eschew the runtime, and therefore, the green threads.

3. Since then, various libraries have implemented green threads in various ways

4. Tokio is basically green threads today. It really depends on what you mean exactly by "green threads"; but it schedules N tasks on M OS threads, so...
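
To make that concrete, firing off lots of cheap tasks looks roughly like this on present-day Tokio (a minimal sketch, assuming the tokio crate with its "full" feature set; not code from the benchmarks):

    use tokio::task;

    #[tokio::main]
    async fn main() {
        // Spawn 100k lightweight tasks; Tokio multiplexes these N tasks
        // onto M OS threads, much like goroutines.
        let handles: Vec<_> = (0..100_000u64)
            .map(|i| task::spawn(async move { i * 2 }))
            .collect();

        let mut sum = 0u64;
        for h in handles {
            sum += h.await.unwrap();
        }
        println!("sum = {}", sum);
    }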


Thank you for clarifying!

> 2. we decided to eschew the runtime, and therefore, the green threads.

The most important bit I guess, regarding the question of whether Rust will ever have green threads/fibers/trails/whatever (answer: no, because no runtime).


Not built into Rust, yes. But since Rust has no runtime, if you want those things, you can bring them along as a library. The danger there is splitting the ecosystem, but in general the community has been sensitive to this, and once Tokio was determined to be the way forward, the other projects stopped, throwing their weight behind it too.


Did you try to spawn lots of threads? OS threads scale better than you'd think.


Right tool for the right job.


I used to like Go a bit, but never did a full blown application in it, just did enough to get a feel of it. Having tried Rust recently (after not liking the idea of not having classes like in Go) I have to say I prefer Rust for several reasons.

For starters, the tooling for Rust is well done. If I want to install Rust on any machine I'm on, I just go to https://rustup.rs/ and go from there; then I can use rustup to update my tooling or add features to the compiler.

The other thing is the wealth of packages[0] that expand upon Cargo (Rust's build system and package manager), namely the cargo-deb package, which lets me build my Rust projects directly into a Debian package.

Continuing on about tooling, I have to mention IDEs / text editors: CLion w/ the Rust plugin is fantastic, but if you want something fully free, the VS Code plugin is also amazing for Rust. There's a great effort towards building the Rust Language Server; thanks to Microsoft, the underlying language-server protocol is a generic spec that, once implemented for one language, can be used by any editor / IDE that supports langserver[1].

I will say, depending on what you want to do with Rust, there can be a bit of a learning curve. I'm still just bleeding my way through some parts because I don't know what everything means in Rust yet; I know just enough to try different things.

I will say Rust is nowhere near where it could be, but it's at a stage where I think it's definitely usable. They definitely took their time to make sure they got things right, and they got plenty right.

[0]: https://crates.io/

[1]: http://langserver.org/


You mention tooling and IDEs, where Go is better than Rust. And as for Cargo, just use go dep. Problem solved.


Honestly, the tooling in Go seems very poor to me. There are lots of potential solutions, none of which remotely compares to the efficiency and ease of use of Cargo. Golang needs a per-project dependency manager, à la Carton, Bundler, or Pipenv (although Cargo is the best IMHO).


What tooling do you prefer in Go and why? Just curious!


I disagree that the comparison as presented on this page is useful for you. If there were any context or discussion of the results and what they mean in terms of differences between the languages, then maybe it could be. But raw benchmarks collected with no indication of why or how are not. In the regex test, for example, all the patterns are very similar to each other and don't really cover the breadth of what a regex engine can do (specific patterns may influence the results greatly; there are some very simple patterns that send the default Java regex engine into O(n^n) fits). And regex in particular is a very well understood problem space. So something very interesting is going on there for the golang result to be so far off. What is happening? Well, we'll never know from this page. Oh well?


Agreed... which is why I posted it here and it got 80+ comments. So there is a discussion now!


>> Well, we’ll never know from this page.

Apparently, you now know that there's "something very interesting" to investigate! So, investigate?


No reason to learn Rust or Go - learn Rust and Go. I use Go for most "network service" type stuff at my job. I would probably not choose Rust for those, even if I was as comfortable with it as I was with Go (which I'm not, since Go is much, much simpler).

There are many variables to consider when picking a language for a problem, and if speed were the only thing we cared about, we'd just write raw assembly always.


We're even seeing production deployments that combine the two directly; Conduit and Dropbox both come to mind.


> suspect the only true end of this benchmarking exercise is to make people comment vigorously on HN and similar forums.

I suspect you are right and some of the posters suffer from “watching the likes roll-in” addiction


Benchmarks are important, actually. Maybe they are not 100% accurate, but they do tell you that Rust is faster than Go. Since Rust is also more complex than Go, you have to decide if the complexity and learning curve are worth the speed.

Without benchmarks, we all would use python for everything right? :)


Well, no. Some of the presented benchmarks prevent the Go solution from being as efficient as the Rust one by their rules, e.g. prohibiting object pools in Go, but not in other languages. Same with hash tables: they benchmark the default Go hash tables against custom-implemented ones in the C benchmark.


Which, at least for hash tables, seems completely reasonable and representative of most actual experiences with both languages?


No. Standard libraries are a good thing, but if you want to write performant code, you should definitely consider a more specific implementation that is tuned to your actual problem. This is good practice in any programming language.

But the rules of this benchmark page explicitly prohibit that, so languages which don't have hash tables in their "standard" library have a strong advantage, which isn't a good representation of the speeds of well-written programs in those languages.
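
As an illustration (mine, not from the benchmarks): swapping in a problem-specific hash function is only a few lines in Rust, because std's HashMap is parameterized over a BuildHasher:

    use std::collections::HashMap;
    use std::hash::{BuildHasherDefault, Hasher};

    // A toy FNV-style hasher: much less work per key than the default
    // SipHash, at the cost of DoS resistance you may not need.
    #[derive(Default)]
    struct TinyFnv(u64);

    impl Hasher for TinyFnv {
        fn finish(&self) -> u64 {
            self.0
        }
        fn write(&mut self, bytes: &[u8]) {
            for &b in bytes {
                self.0 = (self.0 ^ b as u64).wrapping_mul(0x100000001b3);
            }
        }
    }

    fn main() {
        let mut counts: HashMap<&str, u32, BuildHasherDefault<TinyFnv>> =
            HashMap::default();
        *counts.entry("agggtaaa").or_insert(0) += 1;
        println!("{:?}", counts.get("agggtaaa"));
    }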


In terms of performance, languages which don't have hash tables in their standard library have an advantage in real life too, not just in the benchmark. While what you say is true, in the great majority of real-life cases, if there's a hash table implementation in the standard library, the developer will use that. If there's none, then they will start looking for one, making it more likely that they will choose an implementation more finely tuned to their needs than if they already had a generic one provided in the standard library.


Or, more realistically, the first answer on Stack Overflow, which, as we all know, is the wrong one. The right one is always under the fold, with more upvotes but not selected as right by the asker.

A standard library for a language will most probably get more attention from developers than any external library.


No, a "custom implemented" hash table is not used for the benchmark in C.

A generic hash table from a third-party library is used -- something written to be generally useful, not compromised to look good on a toy benchmark.

https://github.com/attractivechaos/klib/


Yes, benchmarks are important, which is why we should use proper ones. The Benchmarks Game is not one of those.


>> in Go vs Java the use of threads is almost never the same

Seems like you might be suggesting the Go programs don't use multicore but the Java programs they are compared against do use multicore, or vice versa, or …?

Currently just 1 comparison (reverse-complement) shows a sequential Go program and a threaded Java program.


Probably going to get downvoted for that, but could you recommend me some similar forums? :)


Reddit can be decent IMO (e.g. reddit.com/r/golang)


I expect Rust to be faster than Go as a rule because of its somewhat lower-level nature and zero-cost "unmanaged" abstractions, but I didn't expect it to be a full order of magnitude faster than Go for some benchmarks. Are the Go snippets not properly optimized, or is there something else going on?

I think pitting Go against Rust is somewhat inflammatory, but if you look at the Rust vs. C++ benchmark the results are much more similar: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... so clearly it's not that Rust is doing something right; it's the Go implementation doing something wrong. Actually, even Java fares (mostly) better: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Explanations for the few largest offenders:

1. regex-redux

Rust has its own regex engine, which lacks a few of the bells and whistles of PCRE but is much faster. (And it has no sheer performance cliffs where a carefully crafted target string or search expression can DoS the app.) Also, I think the Rust implementation can do less copying.

Go just uses PCRE. (Which is weird; isn't RE2 a Google project?)

2. binary-trees

The benchmark needs to build a massive binary tree. The Rust implementation allocates its tree in a TypedArena, which is basically a bump allocator for objects that are all dropped simultaneously. It requires very minimal metadata or work per allocation. (Not zero, because it does properly call all destructors.) The Go implementation has to use the platform allocator. The results shouldn't be very surprising... (see the sketch at the end of this comment).

3. Mandelbrot

Not sure, but I think the Rust version is designed for autovectorization while the Go one is scalar.

After that the Go runtimes are within 2.5x of the Rust ones.
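
To illustrate point 2: the arena pattern the Rust entry relies on boils down to something like this (a simplified sketch of mine, assuming the typed-arena crate; the real program is linked from the benchmarks site):

    use typed_arena::Arena;

    struct Node<'a> {
        left: Option<&'a Node<'a>>,
        right: Option<&'a Node<'a>>,
    }

    // Each alloc is essentially a pointer bump inside the arena's
    // current chunk -- no per-node free-list bookkeeping.
    fn build<'a>(arena: &'a Arena<Node<'a>>, depth: u32) -> &'a Node<'a> {
        if depth == 0 {
            arena.alloc(Node { left: None, right: None })
        } else {
            arena.alloc(Node {
                left: Some(build(arena, depth - 1)),
                right: Some(build(arena, depth - 1)),
            })
        }
    }

    fn check(node: &Node) -> u32 {
        1 + node.left.map_or(0, check) + node.right.map_or(0, check)
    }

    fn main() {
        let arena = Arena::new();
        let tree = build(&arena, 10);
        println!("nodes: {}", check(tree)); // 2^11 - 1 = 2047
        // The entire tree is freed in one shot when `arena` drops.
    }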


> The Go implementation has to use the platform allocator.

So does the Java implementation, and still it fares much (3x) better than Go.


Because the Java GC has better throughput than the Go GC. The Go GC makes enormous throughput sacrifices by prioritizing latency above all else.


With respect to your comments about regex, could you please share with me where you got your information? I would love to know so that I can go bug whoever it is to correct it.

> Rust has its own regex engine, which lacks a few of the bells and whistles of PCRE but is much faster.

The only true part of this is that Rust's regex engine lacks many features found in PCRE. Mostly, these features revolve around things that are either impossible or not-known/difficult to add to a linear time implementation in an efficient way. The most popular features among these are backreferences and various types of lookaround.

However, it is completely unsubstantiated to say that Rust's regex engine is faster than PCRE. At best, you can say that they are competitive with each other, where one regex engine will do better than the other on various types of workloads. Many such cases are not at all related to the feature sets of regex engines, and probably more related to missed optimization opportunities. (Anyone who has implemented a regex engine knows that the size of the set of missed optimization opportunities is unbounded.)

There are of course some of those cases where it comes down to pathological differences. For example, since PCRE uses a backtracking engine even when it isn't necessary (they do have a so-called "DFA" matcher, but it must be invoked explicitly), that means it is susceptible to exponential behavior when Rust's regex engine isn't. But Rust's regex engine has pathological behavior too. The only difference is that Rust's pathological behavior takes the form of large constants, whereas PCRE's takes the form of exponential growth in the size of the text being searched.

> (And it has no sheer performance cliffs where a carefully crafted target string or search expression can DoS the app.) Also, I think the Rust implementation can do less copying.

If you're talking about PCRE vs Rust's regex, then I see no reason to claim that one does more or less copying than the other. I suspect both do exactly as little as possible. Certainly, neither of them will copy the search text.

> Go just uses PCRE. (Which is weird; isn't RE2 a Google project?)

No. Go has its own regexp engine: https://golang.org/pkg/regexp/

The benchmark showcased in the OP for Go uses PCRE, and that's presumably because Go's PCRE variant is faster than Go's native regex engine. See https://benchmarksgame-team.pages.debian.net/benchmarksgame/... and compare `Go #2` (PCRE) with `Go` (Go implemented regex engine).

Go's regex engine in the standard library is written in pure Go, and was written by the same person that wrote RE2. (I wrote Rust's regex engine, but it was heavily inspired by RE2.) Go's regex engine does not have all the same optimizations as RE2, and in particular, it suffers from very high constant factors in many more cases than RE2 and Rust's regex engine do. While Go's regex engine doesn't exhibit pathological behavior in the form of exponential runtime in the size of the search text, it also can be handily beaten by PCRE in a number of non-pathological cases.


I love how incredibly fair your posts always are. I think I have learned something from every single one. :)


> Not zero, because it does properly call all destructors.

I believe that there is an optimization that allows this to be completely avoided for types with only trivial or no destructors.


>> Go just uses PCRE

Currently, there are 4 Go regex-redux programs and 2 use PCRE.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

>> The Go implementation has to use the platform allocator

… because there doesn't seem to be a Go third-party or stdlib arena allocator.


   binary-trees

   source    secs
   Go        28.80
   Lisp      8.49
You know what I would choose if I needed a garbage-collected language.

Btw, the Lisp version uses no fancy tree allocator, just plain standard Lisp lists.


Before you do that, you might want to research GC 'throughput' vs GC 'latency'. Go chose to optimize for latency, not throughput. And for very good reasons.


And yet Go is much faster than Lisp overall; you shouldn't look at just that specific benchmark.


You should look where the time goes in your app.


> Rust vs. C++ benchmark the results are much more similar

Not really... Try comparing the sources of those programs. For example, spectral-norm uses OpenMP and other things that make the source look really different from what you would write in real life. The Rust program looks much simpler and closer to real-life code.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> > the results are much more similar

Parent is talking about performance, not source code


True, but that performance is gained by writing code in a way most devs wouldn't.


I very much doubt that you actually know what most devs would do.


The JVM has a ton of work put into it, it wouldn't surprise me if Java is faster than Go for quite a few things.


C/C++ are cheating by using Intel-specific instructions, like _mm_cvtpd_ps. I.e., they are written with hardware-specific acceleration. If the standard library were used instead, or if the Rust code were updated to use the same instructions, we would see different results.


That instruction is SSE2. SSE2 instruction set is guaranteed to be supported on all x86-64 CPUs, be it Intel, AMD or VIA.

Similarly, the intrinsics have been supported by all modern C and C++ compilers for decades already. Some compilers even generate SSE code by default for code like double x = 42.0 * y, in place of older x87 floating-point instructions. x87 code is slower and has multiple numerical-stability issues.


A lot of this depends on the rules. For example, Rust (rightfully) cannot use nightly and unstable features. If it could, you could use these too. Rust will have these in the next stable release[1], so we'll see after that ships!

These intrinsics aren’t part of the C or C++ languages, but compilers provide their own, so it’s sorta like a language extension. Since they don’t use the same stability model Rust compilers do, it’s legal for them to use them.

This generally means that C and C++ have a bit more leeway to use less stable/standard things. I think that’s a fine rule, given how the different languages work, but it’s a good example of how tricky all of this is.

[1]: https://doc.rust-lang.org/beta/core/arch/x86/fn._mm_cvtpd_ps... and https://doc.rust-lang.org/beta/core/arch/x86_64/fn._mm_cvtpd...
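
For the curious, once that release ships, calling the same intrinsic from stable Rust should look roughly like this (a sketch against the std::arch API described in those beta docs):

    #[cfg(target_arch = "x86_64")]
    fn demo() {
        use std::arch::x86_64::{_mm_cvtpd_ps, _mm_cvtss_f32, _mm_set_pd};
        // SSE2 is part of the x86-64 baseline, so no runtime feature
        // detection is needed on x86-64.
        unsafe {
            let doubles = _mm_set_pd(2.5, 1.5); // (high, low) lanes
            let floats = _mm_cvtpd_ps(doubles); // two f64s -> two f32s
            println!("low lane: {}", _mm_cvtss_f32(floats)); // prints 1.5
        }
    }

    fn main() {
        #[cfg(target_arch = "x86_64")]
        demo();
    }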


> C and C++ have a bit more leeway to use less stable/standard things

Here’s a link to the GCC header from 2002 that includes that _mm_cvtpd_ps intrinsic: https://github.com/gcc-mirror/gcc/commit/d3ceaee1b851570b269...

If you think something that hasn’t changed in 16 years is unstable, I’d like to hear your definition of stability.

About standardization… I don't think the fact that ISO has standardized some parts of the language but not others is hugely important. For example, by the time ISO adopted large parts of the STL into the new standard, there was already a consensus in the industry that the STL was the standard library.


> If you think something that hasn’t changed in 16 years is unstable, I’d like to hear your definition of stability.

That's not what I'm saying, to be clear. What I'm saying is this:

> I don’t think the fact that ISO has standardized some parts of the language but not other is hugely important.

This is very true for C and C++, but less true for Rust. This means that, depending on how you define the rules, certain things are allowed or disallowed. Rust provides that exact same header, yet isn't allowed to use it. Rust is effectively limited to "what's in the standard," whereas C and C++ are not.

I think that the rules the way they are makes perfect sense. I'm not trying to complain one or the other is hobbled here. I'm trying to point out how language differences can matter when trying to come up with good benchmarks.


> This is very true for C and C++, but less true for Rust

Right, Rust has essentially a single implementation, and its standard is written by the developers of that implementation. C++ standard is set by huge international standard-setting body with headquarters in Switzerland.

That's why the authors of the Rust specs are doing a much better job keeping the spec in sync with the actual language and its libraries.

IMO this difference is mostly about quality of the language standards, but is almost irrelevant to the actual language.


Agreed 100%.


By the way, if you're going to add SIMD intrinsics to the Rust standard, I think you'd better contact Intel and ask them whether they're OK with that.

Or seek legal advice.

According to Google Patents, Intel has 4537 patents related to SIMD. The reason GCC is OK with their intrinsics is that in 2002 they were put there by Intel employees. Intel obviously wants people to use them because this helps Intel sell their CPUs. But having them standardized can potentially decrease Intel's profits by undermining these patents and enabling hardware and software emulators.


Thanks! I am not a lawyer, nor was I directly involved in this, but I'll pass it up the tree.


I'm not a lawyer either, I'm a software developer; but I've been doing this long enough to develop a certain degree of paranoia regarding patents and IP-rights legal BS.


I hear you! Basically same here. I had not come across this one...


I haven't either. FWIW, myself and one of your cohorts at Mozilla had a phone call with Intel (a while ago at this point), and while there were no lawyers on the line, the engineers we talked to sounded very happy about what we were doing.


That’s good to hear.


If you use an LLVM compiler, _mm_cvtpd_ps just translates to LLVM IR and is no longer Intel-specific.


They're using g++ though, which afaik is just GCC, is it not? But good catch altogether.


It’s less Go doing something wrong and more just that they do different things. Go has a garbage collector which will naturally slow things down somewhat, while Rust handles memory (de)allocation at compile time.


The problem is that some of these benchmarks' rules prevent the Go solution from using the same efficient code as its competitors, so the numbers are just not valid.


> Rust handles memory (de)allocation at compile time

You are saying that Rust is doing its runtime memory allocation at compile time? Are you sure you don’t mean compile time management of memory? Two very different different things.


Isn't one of those impossible? Ie, there's only one reasonable interpretation of that sentence?

Not saying being explicit isn't important, just trying to understand you.


Yes, what poster commented is in fact impossible.


I'm still puzzled by the continued belief that a tracing GC will "naturally slow things down somewhat". Why do you think that's the case?


You’re forcing the program to rediscover facts about itself at runtime. This is not free.

That doesn’t mean a GCed lang is always slower than a lang where memory is managed manually. It means that the theoretically optimal implementation using GC has to do more work than the theoretically optimal implementation that manages its own memory.

In the case where the theoretically optimal implementation needs a GC, the unmanaged language could always implement that anyway, but that’s vanishingly rare — typically a GC gets its speedup by using pools, which are also available to unmanaged code.


Nothing in these results should really surprise anyone. Go is a good tool for writing networking services, but it is not (and does not intend to be) a systems language.

For another perspective, look at Go versus Python: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Go is a compiled managed language; unsurprisingly performance is similar to Java or C# (or, say, Haskell or OCaml). There was a lot of frankly dishonest promotion of it as a C-like systems language for performance, but reality wins out in the end.


Being "compiled" (which in reality means "statically compiled") doesn't give you magical gains, you know. It can even be the opposite, since you can't benefit from JIT optimizations.

What makes C, C++ or Rust efficient is not just that they are statically compiled; it's that they have massively tuned optimizing compilers spending a lot of time on compile-time optimizations. Go's compiler is way less aggressive with optimization because they want to keep compile time low (which is probably a good idea in their niche) and because it has been much less of a focus than for LLVM or GCC.

Also, the lack of generics and the pervasive use of dynamic dispatch (with interfaces) prevent many optimizations, but they keep compilation fast, which is what Go developers want. (Please note that this is not an argument in favor of Go developers not adding generics to the language; they could totally add generics but keep the dynamic dispatch to keep the compile time as it is right now.)


Given the way it is being used in Fuchsia for writing critical system components like TCP/IP, WLAN, disk management, and update infrastructure, it wasn't really dishonest.

C code in the '80s was about 80% inline assembly on 16-bit machines, so surely there is plenty of room to improve Go's optimizer if they so wish.

Of course Go will never fit into those PIC microcontrollers with 16 KB of RAM.


Go makes writing networking services a dream for sure, and Rust also seems to have its own plan[0] in the network-services field. It would be awesome to see them compete with each other and come up with something good.

[0] https://internals.rust-lang.org/t/announcing-the-network-ser...


Rust is still figuring out its "async" story. Go figured it out, and went full steam ahead before the language was even made public - day 0, basically.

I'm not holding my breath for Rust dethroning non-perf-critical Go projects anytime soon.


That's true.

I also use Go primarily to write any application that needs parallelism (not just network services).

For me, Rust is an exciting new language that I would love to use if they can finally put all the async/await things together.


To be fair, it was explicitly designed to be a systems language, and its specification says "Go is a general-purpose language designed with systems programming in mind."

This should be removed, and I would say that it gave up on its design goal of being a systems language.


"Systems programming language" has multiple, equally-valid definitions. This no-true-Scotsman rules lawyering about language classification is the least interesting possible discussion about a programming language.


You're right, it's not an interesting semantic distinction, but it's weird to have people going around saying "but it is not (and does not intend to be) a systems language" when it's an explicit design goal for Go, mentioned in line 1 of its specification. OK, not a very interesting discussion.


It's not intended to be a [Rust-style] systems language [in which one might implement a resource-constrained operating system].

It's absolutely intended to be a [Google-style] systems language [in which one might implement a large distributed data system].


The TCP/IP, WLAN, disk management, and system update stacks in Fuchsia sound very much like systems programming to me.


There’s that kubernetes thing of course ;)


I think Go is an excellent systems language, but these benchmarks are not a good measure, for the reasons mentioned by others in this discussion.


They're sorted by highest delta, which makes the results look more significant at first glance.

Also, though one of Go's targets was to get close to C-like performance, they never intended it to be the "performance killer", since the focus is also strongly on simplicity and productivity.

More info: https://golang.org/doc/faq#Why_does_Go_perform_badly_on_benc...


Another angle where Rust is doing well is in peak RAM usage. Its memory usage is generally (although certainly not universally!) much lower in these examples.


Yeah, no doubt! As already written somewhere above, the GC is not cheap.


Before getting heated about the meaning of these results, remember that benchmarks are good for one thing: measuring performance.

Performance is important, but there is a laundry list of other factors that programming languages optimize for. Because factors such as development pleasure, mean time to programmer error, and mean time to grok code are not easily quantified and compared, it's difficult to put two languages head-to-head on these often more important factors.

These results clearly show how Rust and Go compare on a small set of performance problems. And if you're trying to solve any of these problems using either of these languages, and performance is what you're optimizing for — Rust is a clear winner. I wouldn't try to extrapolate these results to mean anything more than that.


  simplicity

  source      brain load
  Rust        95%
  Go          23%


It's true that performance is not always the main metric to consider, but in this case I'm not really sure that Go makes that much of a difference. Take the "regex-redux" benchmark, for instance, and compare the two versions:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

I don't find the code complexity obviously worse for the Rust version, yet it runs a full order of magnitude faster in this benchmark.


Go seems to perform really poorly in regex-redux. Against both Node and Python, Go is way faster (as one would expect) apart from regex-redux.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Does Go just suck at regex performance? Would this be easy to fix by just pulling in a C regex library?


I disagree with the people saying the benchmarks game is useless. It mostly seems to conform to my own experiences with the various languages involved.

The exception is the regex benchmark, which has little to do with the languages and everything to do with the regex library used by the language. If the dynamic scripting languages were required to use regex engines implemented in pure Python/Perl/PHP/JS, their performance in the game would collapse on the regex benchmark.

Go's standard library provides a regex package that makes certain big-O guarantees, but has a significantly larger constant factor than PCRE-like regexes. Benchmarks tend to hit cases where PCRE is faster, but it's not hard to construct a benchmark where the Go standard library will be arbitrarily faster than some of the other languages, because that's a characteristic of big-O differences.

(In general, one should always be careful of claims that some block of code is X times faster or slower than another; the comparison only has meaning if they're in the same big-O class, or at least de facto in the same big-O class, e.g., O(n log n) and O(n log n log n) are basically the same class in practice. It's also important to make sure the problem being benchmarked is large enough to get past the constant factors that may be involved. I'm not saying those aren't important, but if you want an "X times faster" sort of comparison you need to shrink the constant factors into irrelevance first. I've seen cases where people blithely claim some bit of code is 100,000 times faster than some other, but really, it's a big-O difference.)


> Go's standard library provides a regex library that provides certain O() guarantees, but has a significantly larger constant factor than PCRE-like regexs.

While that is true, Rust's regex library makes the same guarantees.

I'd also like to push back on the idea that perf comparisons are meaningless if one implementation has degenerate cases. Pathological behavior in a backtracking engine happens very rarely, and backtrackers typically come with features that are not supplied by O(n) engines. Someone in the PCRE camp might complain that the benchmark isn't fair because Go's engine can't handle backreferences. The different approaches come with different features (backreferences vs guaranteed O(n) running time), but they do have a place where their problem domains overlap. It is not useless to examine how they perform in that area.


"I'd also like to push back on the idea that perf comparisons are meaningless if one implementation has degenerate cases."

That's not what I said. I said, you can't use "X times faster" comparisons if the O() of the two algorithms in question are not more-or-less the same. That does not prevent you from still characterizing performance differences. You just can't do it via "X times faster" statements, because those are only well-defined for things separated by linear factors, and practically defined for things separated by practically linear factors. f(x) = x^2 is not "2 times bigger" than g(x) = x, or any number of "times bigger", whereas h(x) = 3x + 5 can be reasonably said to be "3 times bigger" than i(x) = x + 50, even though it is not the case that h(x) = 3i(x).


>> The exception is the regex benchmark, which has little to do with the languages and everything to do with the regex library used by the language.

Perhaps that's "a truth" about regex :-)

>> If the dynamic scripting languages were required to…

Then we wouldn't be using them as scripting languages!


> I disagree with the people saying the benchmarks game is useless. It mostly seems to conform to my own experiences with the various languages involved.

I think it can only tell you about the performance ceiling, and even then it can't tell you that for languages like Go, where fairly obvious optimizations (like arenas) are prohibited while all manner of clever tricks are permitted for other languages.

Your remarks on performance comparisons were really insightful.


>> obvious optimizations (like arenas) are prohibited

No, not prohibited.

Which widely-used third-party Go library do you suggest people use to implement Go binary-trees programs?


> No, not prohibited.

Yes, prohibited

> Which widely-used third-party Go library do you suggest people use to implement Go binary-trees programs?

What does this have to do with my post?


>> What does this have to do with my post?

You falsely claim that Go arenas are prohibited.

Show that there is a widely-used third-party Go library that provides arenas, which could be used to implement Go binary-trees programs, to support your claim.


The benchmark is using some third party PCRE lib. This is probably necessary because Go's RE2 lacks some features, but it's not a good comparison.


> This is probably necessary because Go's RE2 lacks some features

No, it's because PCRE is faster than Go's standard library regex engine on this particular benchmark. There is another entry for Go (called just `Go` I believe) that uses Go's "regexp" package.

The regex-redux benchmark itself does not require any "fancy" features. It can be satisfied by pure regular expressions.
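
To give a sense of it, the whole workload is variations on counting and replacing, which pure regular expressions handle fine (a stripped-down sketch using Rust's regex crate, not the actual benchmark source):

    use regex::Regex;

    fn main() {
        let seq = ">seq1 description\nGGTATTTTAATTTATAGGGTAAA\nagggtaaa\n";

        // regex-redux first strips sequence descriptions and newlines...
        let stripped = Regex::new(">.*\n|\n").unwrap().replace_all(seq, "");

        // ...then counts matches of a handful of alternations like this one.
        let variant = Regex::new("(?i)agggtaaa|tttaccct").unwrap();
        println!("{}", variant.find_iter(&stripped).count());
    }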


What stuck out to me about the regex-redux benchmark is that PHP is faster than both of them! Is that because PHP is effectively a scripting language over C, especially for well-defined tasks like regex matching?


I don't know personally. In Oct 2017, the PHP version was slower: http://web.archive.org/web/20171027211857/http://benchmarksg...

Today, it is faster: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

As far as I can tell, the source code of the benchmark didn't change at all. The only difference between them is that former uses PHP 7.1 and the latter uses PHP 7.2.

So... I would start at the differences between PHP 7.1 and PHP 7.2. I would also first attempt to reproduce the result locally, to rule out changes in environment in the benchmark game (speculatively speaking).


You aren't really going to experience Rust's complexity when your program is essentially just a single function, so it probably isn't really a fair comparison.


When programs grow in complexity, Rust shines. The extra guarantees are very handy when code is refactored. It's hard to get Rust code to compile, but once it compiles, it usually works (or quickly points out the problem).

I actually use it in parallel with legacy C/C++ code, and I'm just trying to avoid C/C++ as much as I can, because I've never seen a segfault in safe Rust code so far.


Especially when you're just reading some pre-written code. I've rarely (never?) seen any complaints about how reading Rust is hard ... but lots that say writing the code itself can be frustrating.


  more benchmarks

  source    generics
  Rust      100%
  Go        0%

  source    if err!=nil return { nil, err };
  Rust      0%
  Go        100%

  source    undeserved hype
  Rust      100%
  Go        100%


source: good generics implementations

Haskell: 100%

All Other Languages: 0%

source: flawed generics implementations

Haskell: 0%

All Other Languages: 100%


source: good generics implementations

Coq: 142%

Haskell: 91%

All Other Languages: 0%


There's a "matrix moment" where a programmer familiar with Rust no longer sees cascading green characters but rather the program itself.


How about using a language where you can start at this moment?


Sure. Use python.


Sidenote, appending to that list, my NodeJS "brain load" was insane. When I switched from Node -> Go ages ago it was night and day.

Funnily, in some scenarios, I find Rust to be less of a brain load than Go. In others, Rust is way, way worse. As I mentioned in another post, we use Go to get shit done. I hope Rust can take that mantle some day.


I try to get shit done, but end up spending my time writing code templates for //go:generate.


For toy programs you are correct, but Rust guarantees way more than Go in terms of correctness. If you need to reason about, implement, and debug such correctness, I don't think the difference is a factor of 4. In fact, Rust's proposition is that it makes it easier to ensure (less easy to forget about) correctness.


Also, Go wins on readability, because you don't have to put single quotes and HTML tags after every variable.


I think syntax ends up being mostly superficial. The real readability gains come from being able to map().filter().sum() instead of needing 3 for-loops mutating outer variables.
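
For example (Rust, trivially):

    fn main() {
        // One pipeline instead of nested loops and mutable accumulators:
        // sum of the even squares of 1..=100.
        let total: i32 = (1..=100)
            .map(|x| x * x)
            .filter(|x| x % 2 == 0)
            .sum();
        println!("{}", total);
    }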


Which part of a for loop is unreadable?


Right, you use aboriginal angle brackets instead.


What's the point of posting this? Rust and Go are only "competitors" in the sense that every language competes with every other. You might as well post the page comparing Go to Fortran or Rust to Clojure. Rust and Go have different objectives, philosophies, tradeoffs, and intended domains. Posts like this seem like they serve no purpose other than to inspire flamewars. And I say this as a heavy Rust user, so don't think that I'm embarrassed by these microbenchmarks. :P


>> You might as well post the page comparing…

They already have! :-)


Not really that surprising. I spent a good chunk of time trying to make a Go program as fast as a well-written Rust one (tokei), which I wrote about here: https://boyter.org/posts/sloc-cloc-code/ and in the end the GC was what held me back.

I suspect it's possible to get Go programs for the most part close to the performance of Rust, but in the end the lower level and fewer abstractions mean a well-written Rust or C/C++ program will almost always be faster.

I wonder how different these benchmarks would be if the Go runtime were set to have GC disabled (e.g., via GOGC=off), though. A fairly simple thing to try out and see.


GC at the systems level is an idea that has had its chance. 40-something years on, we all mostly use C, with Rust being the only contender to knock it off its perch. It's probably time to give up the dream; GC is only good if you're willing to assume the always-moderate cost of its implementation, relegating it to scripting languages.


Go doesn't really play in the "systems level" field all that much, tho. They even removed the "systems" bit from the Go home page.

Go doesn't compete with C, unless C was the wrong choice to begin with. Nobody is writing an OS in Go. It competes with Python and Node.

I've reduced my Python/Node code by roughly 92.5% after Go reached 1.0 and I couldn't be happier.


The results don't surprise me much; Go isn't that fast. Much more interesting are the results of the Rust vs C++ benchmarks: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... - it looks like in some cases Rust is actually faster than C++, and that's a big win for me.

Also, it would be great to see some Rust vs D benchmarks.


>> Also, it would be great to see some Rust vs D benchmarks.

Make those measurements and publish them!

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Even though Rust apparently "won" here (does anybody really win in such comparisons?), I don't care that much about it, because I don't consider this the main appeal of the language.

It's the compiler error messages, memory safety and the lack of a Garbage Collector that speak to me more than raw performance.


> Even though Rust apparently "won" here (does anybody really win in such comparisons?), I don't care that much about it, because I don't consider this the main appeal of the language.

Completely agree. For me, the primary consideration is developer sanity: a subtle compromise between speed, ease, and clarity. I don't program in Node or Python anymore because, while they have ease, clarity is completely missing. I used Rust for 6 months or so, and while it has speed and clarity, the ease was just not there for me. Go hits a nice sweet spot for me.

I think if Rust managed to improve ergonomics more and more it would really nail the sweet spot. But, there's still work to do.


A garbage collector is actually something you'd want, if it caused no performance hit.


Meh. I might have huge Stockholm syndrome with manual resource management, since I basically started coding with C and have used GC-free languages for most of my life since then, but I never really understood why people like GCs so much outside of scripting languages, where convenience trumps correctness and you just litter your resources around instead of tracking them properly.

Are people using GC languages really thinking "uh, I don't know when I'll stop needing this resource; thankfully the GC will figure that out for me"? Because I certainly never do. Either the resource is scoped and I expect it to be destroyed at the end of a certain block, RAII-style, or it's some longer-lived resource (connection handle, object in a videogame, Window object in a GUI) that I'm going to store in some container and destroy when I no longer need it. If I have multiple references and a complex dependency graph, I might use a reference-counted container, which is effectively a very simplistic GC, but that's the exception rather than the rule, and it's still completely transparent and easy to reason about.

I never, ever feel like I'm missing a GC when I'm coding in Rust or C++. In C I miss destructors and RAII, but that's it. I just don't understand why GCs are so popular: at best they save you a few lines of cleanup code in your destructors; at worst they make your code behave sub-optimally and non-deterministically, because you don't know when your objects are going to be destroyed and your destructors are going to run. Then, if you need to write critical code, you have to carefully step around the GC to be sure your perf will be deterministic.
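
To make that concrete, here's roughly what I mean in Rust (a toy sketch; Texture is just a made-up stand-in for any resource):

    use std::rc::Rc;

    // Stand-in for any resource with a destructor.
    struct Texture { id: u32 }

    impl Drop for Texture {
        // Runs deterministically when the last owner goes away.
        fn drop(&mut self) { println!("freeing texture {}", self.id); }
    }

    fn main() {
        {
            let _t = Texture { id: 1 }; // scoped, RAII-style
        } // <- destructor runs right here, no GC involved

        // Shared ownership for the messy-dependency-graph case:
        let shared = Rc::new(Texture { id: 2 });
        let also = Rc::clone(&shared);
        drop(also);   // refcount 2 -> 1, nothing freed yet
        drop(shared); // refcount 1 -> 0, destructor runs right here
    }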

I genuinely don't get GCs.


From a Java programmer's perspective: I agree that a majority of your code would function just as well with RAII-style deallocation. That said, it's not too rare to throw objects around in queues or hashmaps, and the garbage collector can save you from sloppy code turning into memory leaks.

Also consider that a GC can "solve" a number of other issues. malloc() and free() aren't always cheap in the critical path, especially when you have heap fragmentation. There are solutions without GCs, but compacting generational GCs combat both of these problems without requiring you to even be aware of them. I'm not claiming they're always the best solution, but hopefully I've explained the motivation.


I'm guessing you've never written heavily concurrent code.

C/C++/Rust are totally fine when you have a single thread. It's with concurrency and parallelism that object ownership (who gets to destroy what, and when) becomes a real problem unless you have a GC.


> I'm guessing you've never written heavily concurrent code.

Guess again.

> C/C++/Rust are totally fine when you have a single thread. It's with concurrency and parallelism that object ownership (who gets to destroy what, and when) becomes a real problem unless you have a GC.

Given that one of Rust's main shticks is concurrency and parallelism, it's an odd thing to say. You can always use an Arc<> if you want reference counting between threads, which is sort-of GC, but opt-in and perfectly deterministic.
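
Something like this, say (a minimal sketch):

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let data = Arc::new(vec![1, 2, 3]);

        let handles: Vec<_> = (0..4).map(|i| {
            let data = Arc::clone(&data); // bump the refcount for this thread
            thread::spawn(move || println!("thread {} sees {:?}", i, data))
        }).collect();

        for h in handles {
            h.join().unwrap();
        }
    } // last Arc dropped here; the Vec is freed, deterministically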


There's more to it than that. Having a garbage collector can even be faster, but would you take the faster program if it cost you memory footprint or predictability? It just depends on what you're doing.


The performance comparisons between GC and non-GC languages show otherwise. Also, memory consumption eventually costs CPU cycles.


I'm not really disagreeing with you; performance comparisons between GC and non-GC languages DO tend to show otherwise. But consider this: a common GC strategy in C++ is arena allocation. Faster allocations and deallocations, but more memory overhead. It's still C++, just a little faster in the right situation.
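
The same idea sketched in Rust, since that's the thread's topic (toy code with indices standing in for pointers; real arena crates like typed-arena are more involved):

    // Everything is allocated out of one Vec and freed in one shot
    // when the arena is dropped.
    struct Arena<T> {
        items: Vec<T>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { items: Vec::new() }
        }

        // "Allocation" is just a push; the handle is an index.
        fn alloc(&mut self, value: T) -> usize {
            self.items.push(value);
            self.items.len() - 1
        }

        fn get(&self, handle: usize) -> &T {
            &self.items[handle]
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let a = arena.alloc("hello");
        let b = arena.alloc("world");
        println!("{} {}", arena.get(a), arena.get(b));
    } // the whole arena is freed here, in one deallocation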


Depending on the GC and on what "performance" means for your use case (memory usage? throughput? latency?), they don't necessarily cause a hit. It depends.

For instance, if you never hit the GC memory limit in D, it's faster than manually managing memory, since nothing is ever freed.


Don't forget Go is not a low-level language. It is fast because it is compiled.

Not sure why there is an argument about this; I thought it was obvious to everyone.

Rust is lower-level, gives fewer abstractions, and is designed for performance.

Rust's performance is comparable to C and C++. That was the whole idea.


I wouldn't say that it gives fewer abstractions. The Rust devs are obviously careful about adding abstractions that carry a performance cost, but Rust has quite a few abstractions that Go doesn't have, generics being the most notable one.
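
For instance, a trivial sketch of what monomorphized generics buy you (the compiler emits a specialized copy of the function per concrete type, so there's no boxing and no runtime dispatch):

    // Works for any ordered, copyable type; assumes a non-empty slice.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &item in &items[1..] {
            if item > max {
                max = item;
            }
        }
        max
    }

    fn main() {
        println!("{}", largest(&[1, 5, 3]));       // instantiated for i32
        println!("{}", largest(&[1.0, 0.5, 2.5])); // instantiated for f64
    }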


Apples to oranges. And in the benchmark that is apples to apples, C vs Rust, the binary-trees test is 2x slower in Rust, which is tragic.



This week there was a popular benchmark implementing a treap in various languages; Go was as fast as both the Rust version and the C++ version with shared pointers. It was even faster than those languages on some platforms.

So yeah, benchmarks...

https://github.com/frol/completely-unscientific-benchmarks


I would be interested in results for go-llvm, as a proxy for measuring the intrinsic overhead of Go's design choices vs. Rust's focus on zero-cost abstractions.

Go's clean-room compiler backend has some usability advantages, but it sacrifices the sort of comprehensive optimization that LLVM does.


I haven't heard anybody claim that go-llvm is systematically much faster; I'm assuming I would have by now. Similarly, the gccgo implementation doesn't seem to have any speed advantage, because again, I assume I would have heard of it by now.

The Rust team has some really interesting blog posts about how you can't just wave LLVM at a program and expect the optimizer to work magic; you need to prepare the code for LLVM first. This one, for example: https://blog.rust-lang.org/2016/04/19/MIR.html Bear in mind that not only is the little paragraph about LLVM optimization relevant to my point here, the entire post is ultimately about things the Rust compiler can do that the LLVM optimizer could not. If LLVM optimized code magically and perfectly, MIR either never would have happened, or would have happened later, because the pressure to create it would have been lower.


Link to live dashboard of Rust vs Go activity on GitHub:

https://demo.humio.com/shared/dashboards?token=g79uTaUrFY9ky...


This would be interesting if there were any comparison of what exactly is slower or faster between the languages or the particular implementations. This data provides a great opportunity, but it's just presented as a dumb contest.


>> This data provides a great opportunity, but it's just presented as a dumb contest.

Are you going to do-the-work to take that great opportunity?


Sometimes the most significant aspects of something are unmeasurable.


It would be interesting to benchmark the time required to write these programs as well.


Ah nice, a benchmark.unwrap()!


Why is Java faster than Go?


Java, or more specifically the JVM, has had some of the world's smartest developers tweaking every corner of the code for 20+ years. Java is faster (in some cases) because it's older and has been tested in pretty much every scenario... and it remains wildly popular, so development never stopped.

Similarly, Go and Rust are faster than Java in many cases because their developers have been able to learn from past languages like Java and avoid the same performance pitfalls. Or simply because they didn't have to deal with legacy support.


Because both are very similar languages and Java is way more mature? Go is essentially "Java, from Bell Labs".


Why should Go be faster? Both languages use a garbage collector and Java is highly optimized. Plus Java can be optimized even further at runtime.


Java is not faster than Go.

Edit: Why am I being downvoted? The very benchmark linked here shows that Go is faster than Java in most scenarios: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


If you look at how the benchmarks run, I suspect they would disadvantage Java. Java is known for slow startup times and for an optimising JIT that takes a while to kick in. Go is definitely going to be faster for some command-line application that takes 20s to run, but for a long-running process I think it will be much closer.



Has anyone tried modifying the Go regex benchmark to use https://github.com/BurntSushi/rure-go ? It currently uses a library that isn't even on GitHub anymore.


I had fun with this:

    [andrew@Cheetah benchgame] time bench-native < /tmp/input5000000.txt
    ...
    real    0m20.709s
    user    1m19.839s
    sys     0m0.183s
    [andrew@Cheetah benchgame] time bench-pcre < /tmp/input5000000.txt
    ...
    real    0m10.510s
    user    0m35.362s
    sys     0m0.211s
    [andrew@Cheetah benchgame] time bench-rust < /tmp/input5000000.txt
    ...
    real    0m2.235s
    user    0m3.410s
    sys     0m0.157s
For comparison, this is the timing for the Rust program, unchanged from the benchmark site:

    $ time ./target/release/bench-rust-regex < /tmp/input5000000.txt
    ...
    real    0m1.452s
    user    0m2.643s
    sys     0m0.158s

See https://gist.github.com/BurntSushi/9d35258444fda83de208d31fd... for source code and full results.

And no, I won't submit this myself, although I would find it absolutely hilarious if someone did and managed to get it in. :-)

The PCRE version is definitely sub-optimal. I don't think it's using the JIT, for example.

Enabling SIMD optimizations in Rust's regex crate shaves off another 13% (for the Rust program) and 9% (for the Go program). The SIMD optimizations will be enabled by default in Rust 1.27 for CPUs that support them.


Awesome work, thanks!


This looked fun, so I did my own micro-benchmark on an rpi3, armv7:

        $ for i in bench-native bench-pcre bench-pcrejit bench-rure; do echo -n $i; (cd $i; time ./$i < ../input500000.txt > /dev/null) ; echo; done
        bench-native
        real    0m24,883s
        user    0m56,580s
        sys     0m0,120s

        bench-pcre
        real    0m13,125s
        user    0m27,200s
        sys     0m0,160s

        bench-pcrejit
        real    0m2,688s
        user    0m3,140s
        sys     0m0,120s

        bench-rure
        real    0m2,779s
        user    0m3,300s
        sys     0m0,090s

It looks like PCRE with the JIT enabled is always slightly faster than the rure version, but it's very close.


"Go bindings to Rust's regex engine." - I't would not be in the spirit of the comparison. - Even though most scripting languages also have native regex implementations.


The fastest Go benchmark for regex-redux is using PCRE, which is not a native Go regex engine. It's written in C.


Precisely my point. Most scripting languages use an engine written in C too, as the grandparent said.


And that's not all! We really need to see memory consumption.


Memory usage is hidden on mobile in portrait mode, but displayed in landscape. Odd design choice...


You know what the word "mem" on that page means, don't you? :D


What mem?


In portrait mode on my mobile device that website shows only the amount of time. In landscape it shows more columns of data. That kind of design is quite irritating and not how to do responsive websites right.

Perhaps you were looking at the page in portrait mode on a mobile device as well?


Thank you for sharing this because I would not have assumed otherwise! Yes, I was looking at it in portrait mode from mobile.


You are welcome and I am glad I could help. Someone downvoted that comment of mine so for a moment I thought that perhaps it had not been helpful but am glad to hear that it was.


whoa


In before a specious and infinitely malleable definition of "systems programming language"



