Hacker News
The relative performance of C and Rust (dtrace.org)
326 points by dmpk2k 73 days ago | 148 comments



This was a well-crafted benchmark, but what I found especially interesting and insightful was the "apples to oranges" part: C and Rust encourage distinct approaches and different data structures. I feel that is actually more interesting than implementing the same algorithm and comparing optimizer pipelines.


I agree. Many of us like that C lets us build intrusive data structures easily enough. The price is more complex memory management and locking protocols, and in some cases that price is too steep in actual run-time cost. The apples-to-oranges comparison doesn't tell us whether Rust produces better code than C, but it does show how Rust's design choices impose themselves on the coder and yield better results in the end. When you stay away from unsafe code, Rust makes you write code that is much easier to reason about, especially in a threaded environment. Performance is nice, but the nicer thing is correctness -- being able to get good performance along with correctness, now that is really nice.


I think this is more a result of Rust having good generics.

If C were built with good generics and memory management (not that garbage preprocessor stuff), I expect similarly expressive and well-designed data structures would be built.

C++ is an affirmation of this, because it happens to have good libraries of data structures which are not really very different from Rust's.

I don't think this is so much "Rust's design forces the coder to do this" as it is "C makes it hard to do as nice a design".


Right. In C, correctness is by convention, not enforced. C++ allows "userland" enforcement via RAII. There are, however, pieces of compiler magic partially outside the type system (const, &, &&) - you can't specify such behavior within C++. Rust does more of this magic in ownership. If you just want a particular instruction soup, all of them will get you there. It's a question of ergonomics, convenience and the kinds of compiler guarantees that we want.


C is also expensive in that it's practically impossible to get correctness in large codebases.


The Linux Kernel would beg to differ. I love Rust, I love strong typing, but statements like this are what make people scoff at its evangelists.

People can and do get C right, in safety-critical systems, at large scale, all over the world.


>The Linux Kernel would beg to differ

And it would be wrong in differing. Memory and synchronization bugs of the kind Rust prevents are found in the kernel all the time.


Rust does no such prevention. You are free (and encouraged) to use the unsafe parts of the language for performance-sensitive code. A kernel written in Rust would likely have memory leaks and overflow errors too.


It's all speculative whether a kernel in Rust would or would not have as many bugs as a kernel in C.

As for the idea that a kernel would just end up as a bunch of unsafe blocks, that doesn't seem grounded in reality.

https://os.phil-opp.com/minimal-rust-kernel/

> Note that there's an unsafe block around all memory writes. The reason is that the Rust compiler can't prove that the raw pointers we create are valid.

[..]

> I want to emphasize that this is not the way we want to do things in Rust!

[..]

> So we want to minimize the use of unsafe as much as possible. Rust gives us the ability to do this by creating safe abstractions. For example, we could create a VGA buffer type that encapsulates all unsafety and ensures that it is impossible to do anything wrong from the outside. This way, we would only need minimal amounts of unsafe and can be sure that we don't violate memory safety. We will create such a safe VGA buffer abstraction in the next post.

As is always the response to the "A kernel in rust would be just as unsafe as a kernel in C", the goal is to build safe abstractions.

So you have unsafe, plus some constraints that you enforce at a module level, which wraps the unsafety.

This is not perfect - you'll still write vulnerable unsafe code, I'm sure, sometimes. But the rest of your code can be built on top of it, and now your unsafe code can be the main target of your testing and verification.
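The pattern is easy to sketch outside a kernel. Here is a minimal, illustrative stand-in (a plain heap buffer instead of VGA memory, since the real thing needs bare metal): the raw-pointer writes are confined to one small module, and the public API makes out-of-bounds access a recoverable error rather than undefined behavior.

```rust
// A toy "safe abstraction over unsafe" in the spirit of the VGA buffer
// example: all raw-pointer access lives inside this type, and the public
// API bounds-checks every operation.
pub struct Buffer {
    ptr: *mut u8,
    len: usize,
}

impl Buffer {
    pub fn new(len: usize) -> Buffer {
        // Leak a boxed slice to obtain a raw pointer we fully own.
        let boxed = vec![0u8; len].into_boxed_slice();
        Buffer { ptr: Box::into_raw(boxed) as *mut u8, len }
    }

    // The only unsafe write in the module: guarded by a bounds check,
    // so callers can never write out of bounds.
    pub fn write(&mut self, index: usize, value: u8) -> bool {
        if index >= self.len {
            return false; // rejected, not UB
        }
        unsafe { *self.ptr.add(index) = value; }
        true
    }

    pub fn read(&self, index: usize) -> Option<u8> {
        if index >= self.len { return None; }
        unsafe { Some(*self.ptr.add(index)) }
    }
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Reconstruct the boxed slice so the allocation is freed exactly once.
        unsafe { drop(Box::from_raw(std::ptr::slice_from_raw_parts_mut(self.ptr, self.len))); }
    }
}

fn main() {
    let mut buf = Buffer::new(16);
    assert!(buf.write(3, 42));
    assert!(!buf.write(99, 7)); // out of bounds: an error, not a memory bug
    assert_eq!(buf.read(3), Some(42));
    println!("ok");
}
```

The safe code built on top of `Buffer` cannot violate memory safety through it; only the small unsafe core needs auditing.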

We have no real rust kernel the size of Linux to compare to but I think there's plenty of reason to believe that it would have considerably fewer vulnerabilities and would make auditing far simpler.


Wouldn't the exact same arguments work for C++ too? Modern C++ is just as safe as Rust.


That’s just not correct.


Nice try, Stroustrup


> You are free (and encouraged to) use the unsafe parts of the language for performance sensitive code.

The point of Unsafe Rust is the containment of relative "unsafeness" (which is, however, not equivalent to C/C++'s UB [1]) within a certain boundary. Having fewer things to audit surely helps.

[1] https://doc.rust-lang.org/nomicon/safe-unsafe-meaning.html

That said, of course Rust is not a panacea for memory and synchronization bugs. Rust helps a lot but does not fully solve logical memory leaks and deadlocks for example.


On the contrary, you should spend a couple of hours watching the Linux Kernel Security Summit 2018.

Kernel devs are of a different opinion than yours.

Hence the Linux Kernel Self Protection Project initiative.

EDIT: corrected as suggested.



Thanks


What definition of “correctness” are you using here? Bugs are found and fixed in Linux all the time.


Bugs would be found in a Rust codebase all the time as well. The difference is that a certain class of bugs is going to be absent from safe usage of Rust. (Unsafe Rust gives you just as much rope to hang yourself with as C).


Unsafe Rust has a little less rope than C because there are fewer cases that can lead to undefined behavior (see https://doc.rust-lang.org/nomicon/what-unsafe-does.html and https://doc.rust-lang.org/reference/behavior-considered-unde...). However, once you trigger UB, it's the same.

By the way, Rust devs are working on a mathematical formalization of those rules. Here's a blog post about a part of this effort: https://www.ralfj.de/blog/2018/07/24/pointers-and-bytes.html. (There are similar formalization efforts for subsets of C, that led to software like the CompCert verified compiler)


Rust and C may encourage different data structures, but a benchmark of two programs that happen to use two different data structures doesn't demonstrate this difference. To demonstrate what you claim, you'd have to perform a more holistic language analysis than we see in the article, and you'd in practice have to talk about C++ too.


That Rust and C use different data structures is not a thesis of the article; it is used as a fact in an explanation. To support this fact, the author explains how intrusive data structures are easier in C thanks to the lack of static ownership (and, for the same reason, are much harder in Rust).

To demonstrate the claim that C and Rust have different idiomatic data structures, you would need to rewrite software from C to Rust and from Rust to C and see what is hard and what is less hard.


How is Rust encouraging a different data structure, besides providing a B-tree set and not an AVL tree set?

The C stdlib does not come with an AVL tree. Why would one not implement a B-tree in C for this comparison? It's not as if K&R has "Use binary trees, not B-trees!" on the front cover.


I think he mentioned this briefly in the article. In C it's common to use intrusive data structures like linked lists and AVL trees because that's how the language works: you can embed data structures easily. This wouldn't work with a B-tree because its allocation pattern is very specific. Rust, on the other hand, is very pro-generics: you can write a B-tree once and painlessly have it store whatever anybody wants it to. It's more about what the language makes easy and idiomatic than anything.
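For a concrete illustration (not from the article): the standard library's B-tree containers work for any `Ord` key with no per-type boilerplate, because the generic code is monomorphized per key type. The `Interval` type here is made up for the example.

```rust
use std::collections::BTreeSet;

// Any Ord type can go into the standard B-tree set; the compiler
// monomorphizes the generic container code for each concrete key type.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Interval {
    start: u64,
    end: u64,
}

fn main() {
    let mut set = BTreeSet::new();
    set.insert(Interval { start: 10, end: 20 });
    set.insert(Interval { start: 0, end: 5 });

    // Iteration is in sorted order: the interval with the smallest
    // start comes first.
    let first = set.iter().next().unwrap();
    assert_eq!(first.start, 0);
    println!("{} intervals", set.len());
}
```

Doing the same in C means either writing a B-tree for each payload type, macro tricks, or `void *` indirection with its extra allocation and casts.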


The comment is wrong for C, but could be right for C++. The ordered associative containers in the STL (std::map, std::set) are implemented as red-black trees in practice. They can't be rewritten as B-trees because of some iterator/synchronization/backwards-compatibility mess.

I once did a similar benchmark to the one in the article and found that Rust outperformed C++ because the C++ code was using the builtin std::map. But for reeeeeeally big datasets PostgreSQL beats them both so who cares. :p


There are STL-compatible B-tree implementations in C++, so I would still use one of those instead of std::map. Besides, std::*map implementations are all rather slow.


Rust encourages the creation of more generic code, which makes switching to more complex, non-DIY data structures easier. The difference is even larger after the code is written and you are able to profile it.


Then Heng Li’s kbtree.h from klib is your generic, fast implementation. I’d love to see this repeated using it for a fair comparison.

For example, the benchmark shootout recommends his khash for a hash table.


The point is entirely that it is harder to take a typical C code and change it to use something like this than it is to do the same in Rust. Much harder.

I don't think anybody will find any consistent performance enhancement in Rust that cannot be replicated in C. The problem is how to try it out on real code.


That’s fair. klib (I assume) was not easy to develop, but it is easy to use.


It's easy to use. What is not easy is to take a large code-base that doesn't use it and change it so that it does.


Yes, especially since B-trees tend to be easier to implement too.


Agreed. Comparing compiler pipelines is a poor way to judge how your code will perform in practice, and it's not even that useful for compiler authors, since the optimizations that matter in practice are not the same.


As a meta comment, I really like how active Stepha(e?)n is on internet forums. And the energy around Rust is pretty intense. I feel like, in there, people are unwilling to embrace some of the nuance in the arguments that some people make against Rust.

A person should understand that no language (programming or otherwise) is perfect. Therefore some criticism of any language is valid, and some criticism should be taken seriously, as there may be a real point to it.

There are many posts like this comparing Rust to C. Realistically they are all a little biased towards Rust, since Rust is really a C++ replacement. A more proper comparison would be Zig or D, as a better C, against C.

Just understand that a person defending C is not always an idiot; maybe they have a point. Consider the excessive memory use of any working Rust compiler. That will probably not be remedied anytime soon and is a legitimate complaint. The ideal of how something could be is not how something is. The reality is that C works pretty well most of the time, and Rust works well most of the time. They both fail at some things.


I don't think I've seen anyone claiming that Rust is a perfect language without flaws. Heck I'd hold up all of the huge strides that have been made in errors, usability and constant feedback as a counter example of that.

You see a lot of people holding up Rust precisely because we've been there for the last 5/10/15/20 years. The day I don't have to write a makefile or build yet another CMakeLists.txt is the day I rejoice.

What Rust offers is another option in the native, non-GC'd language space. A space that has very few languages, and even fewer that are shipped at scale. Rust's inclusion in Firefox means that they have to address the robustness, security, performance and usability of the language to a degree that you don't commonly see.

Having just blown 4+ hours today dealing with the linker on a mixed C/C++ project I don't really miss a lot of the baggage that comes with native development these days. Rust gives you the option of dropping down to that level while still preserving a set of sane, opinionated defaults that are pretty well thought out.


Since you made a mention of CMake, does Rust really provide an alternative for building multilanguage projects? When I went to look for examples on how this is managed, it seemed horrendous. For example, some projects manage this by writing a program to download, unzip, and build the source:

https://github.com/elrnv/ipopt-rs/blob/master/ipopt-sys/buil...

Others just assume that the libraries are there, but they still require a program to compile them:

https://github.com/cmr/openblas-src/blob/master/build.rs

As long as all of the dependencies are in Rust, things appear nice. At the same time, unless I'm missing something, it seems like crates manages multilanguage projects poorly. Though I have some major gripes with CMake, I've managed multilanguage projects mostly well.

Is there a sane way outside of CMake to manage a multilanguage project with Rust?


Check out Bazel: http://bazel.build. I've been using it a lot to build several multilingual projects that include Java, C++, embedded C, Javascript, CSS, and even Docker images. There are community Rust rules that I haven't tried yet, but am planning on investigating soon: https://github.com/bazelbuild/rules_rust


Is there a non-Google alternative for those of us who don’t want our builds spied on?


Not so much with Rust in the driver's seat. Cargo, Rust's primary toolchain, only has weak support for pre/post build scripts. It's solely concerned with Rust's own dependencies and compilation. In the couple of projects where I've added Rust to a larger project, it's always been bash or node that coordinates the overall build.


You can call a C/C++ compiler from build.rs. That tooling is currently not very advanced. As far as I know, there is no crate for writing compile recipes as easily as a Makefile or CMakeLists.txt allows.

When I tried this in build.rs, I had to check modification times myself.

There is an opening for a ninja-type crate in Rust.


Indeed. Rust is the only new contender in the field of systems programming: C++ has to contend with decades of negative mindshare (C++11: "I've changed! I promise!"), and D, well, never really took off (because it started off, in the early Internet era, as a commercial compiler?).


There is also Zig, which seems nice as a C replacement but is still highly unstable, and Jai, which also looks like a nice C replacement but has no publicly available compiler yet... So for the time being Rust is the only new systems programming language. (And please don’t say Go or Nim is a contender; they’re good languages in their own ways, but having a garbage collector disqualifies them from that categorization.)


> Jai, which is also a nice C replacement but currently has no publicly available compiler yet...

If there is no compiler then there is no programming language.


Well, there are videos of code being compiled. But nothing verifiable outside of the Jai team.


> Well, there are videos of code being compiled.

There are videos of nazi bases on the moon.


If that counts as existing, then cold fusion exists.



No language that drops C compatibility while having the same manual memory management is an actual contender.

Yes, a language with a GC is a contender, because not all GCs are implemented the same way, and C did not eliminate Assembly either.

So if a systems language with GC covers 95% of use cases, we can happily use something else for the remaining 5%, while enjoying better productivity and safety.

Companies like Astrobe manage to have enough customers to keep their business of selling Oberon compilers for bare-metal deployments alive, just as one possible example.



ATS is in a better state than Zig/Jai lol


> You see a lot of people holding up Rust precisely because we've been there for the last 5/10/15/20 years. The day I don't have to write a makefile or build yet another CMakeLists.txt is the day I rejoice.

Ironically, cargo is one of the areas where the Rust ecosystem falls down completely. Not to mention, there's no stable ABI...

> What Rust offers is another option in the native, non-GC'd language space.

True, and while this is great in principle for embedded development, not using a GC is overrated.

> Rust gives you the option of dropping down to that level

Well, if you're not dealing with the linker, you're not at "that level"


Very strongly disagree that Rust is a C++ replacement and D is a C replacement. If anything, it's the exact opposite: Rust is a C replacement and D is a C++ replacement. If you have tracked the communities so far, you'd see that the Rust community is very sensitive to the issues of memory footprint, performance impact, and the implicitness of the generated asm. There is already a huge osdev community built around Rust; I know of kernels written in Rust. Rust is the perfect language to reliably write drivers, embedded systems or system tools. I cannot think of a language semantically closer to C than Rust. Not even C++ is there. Rust is literally just C with a good type system, and that's what makes it so awesome.


> Rust is a C replacement

well, which of C and C++ has traits, structs with function definitions, generics that work through monomorphization, variants, optionals, reference-counted pointers and pointer-ownership semantics?
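For readers unfamiliar with those terms, here is a tiny illustrative sketch of several items on that list in Rust (the `Area`/`Square` names are made up for the example):

```rust
use std::rc::Rc;

// A trait, a struct with function definitions, a generic function that
// is monomorphized per concrete type, an Option return value, and a
// reference-counted pointer.
trait Area {
    fn area(&self) -> f64;
}

struct Square { side: f64 }

impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

// Monomorphized: the compiler emits a specialized copy for each T this
// is called with, as C++ templates do.
fn largest<T: Area>(shapes: &[T]) -> Option<&T> {
    shapes.iter().max_by(|a, b| a.area().partial_cmp(&b.area()).unwrap())
}

fn main() {
    let shapes = vec![Square { side: 2.0 }, Square { side: 3.0 }];
    let big = largest(&shapes).unwrap();
    assert_eq!(big.area(), 9.0);

    // Shared ownership via reference counting, like std::shared_ptr.
    let shared = Rc::new(Square { side: 1.0 });
    let alias = Rc::clone(&shared);
    assert_eq!(Rc::strong_count(&alias), 2);
    println!("largest area: {}", big.area());
}
```

Each of these has a rough C++ analogue (concepts, templates, std::optional, std::shared_ptr), which is the parent's point.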

> If you tracked communities so far, you'd see that Rust community is very sensitive on the issue of memory print, performance impact and asm implicitness of Rust.

yeah... like C++?


> I cannot think of a language semantically closer to C than Rust. Not even C++ is there. Rust is literally just C with a good type system, and that's what makes it so awesome.

ATS is far closer and indeed compiles to C. Rust's type system is still less expressive than that of ATS.


Never heard of it, link?


"Applied Type System" - a language built for correct low-level programming inspired by functional programming and theorem proving.

http://lmgtfy.com/?q=ats+language

If I remember correctly there were plans to implement GNU Hurd (the Half-Life 3 of kernels) in ATS, but that probably went nowhere. But there is a small osdev community messing with kernels in ATS.


Actually, Rust is clearly both a C and a C++ replacement, but C programmers tend to become quite fanatical, and for some inexplicable reason C also has a quite decent reputation, so the Rust community is focusing its efforts on the much more controversial C++ instead of attacking the holy cow of C.

I think they've also figured out that with time C will be squeezed into smaller and smaller areas of use, as is already happening.

Writing robust code hasn't really found a space to live in the mental model of C programmers. Using C outside a kernel or existing C project today is professional negligence.


> some inexplicable reason C also has a quite decent reputation

if you think like this, you will never replace C...

> Using C outside a kernel or existing C project today is professional negligence.

or embedded development lol


C has been in the process of being replaced for decades. I specifically mentioned kernels and legacy projects, because these are its last retreats; even embedded has lost a large chunk to C++.

Here's a silly prediction: in the next 10 years a significant number of C programmers will retire without replacement. The new generations will rather learn Go, C++ and Rust for low-level stuff and will banish C to pure legacy projects.


I disagree about replacement programmers--at the moment, practically every graduating electrical engineer and computer engineer will have a working knowledge of C. Embedded C has lost some ground to C++, but primarily on ARM and x86 architectures, since for more memory-constrained AVR/PIC/etc. microcontrollers you absolutely avoid virtual inheritance because vtables are untenable, there often isn't an available STL for the chip toolchain anyway, templates balloon the program space, and you are still constrained to using just stack allocation. That right there pretty much means there will always be a space where C is more pragmatic.

I see this pattern a lot on HN. Many aren't aware that there are fields alive and well that require the use of unsafe, deterministic C that can compile to programs on the order of kilobytes.


Fully agree with you.

Although not politically correct, that is the only way to overcome religious resistance against new technologies.


> Actually Rust is clearly both a C and C++ replacement, but C programmers tend to become quite fanatical.

Everybody who creates a new language would like to see it used more and more, thus replacing others.


Fantastic read. Learned a few things about profiling I'm sure I'll use.

Just one tiny gripe: don't use blue and green next to each other in graphs if avoidable. I'm not even blue-green colorblind, and I found them hard to distinguish.

Thanks for the great read.


Thanks for looking out for us colorblind people. But, funnily enough, there is no "blue-green colorblind", and to this person with red-green colorblindness, the colors in the charts are incredibly easy to distinguish. Much easier than the typical colors chosen.


There is such a thing as blue-green colour blindness, though it's a little more unusual (as I understand it). My father is one of those lucky people. His father was completely colour blind.

My father can see the two colours just fine, but if they're adjacent to each other and about the same kind of tone, they start to blur into grey.

One particular example I remember from growing up: he hated the board game "Game of Life" because there are all these greens (ground) and blues (rivers, lakes, ocean, etc.) right next to each other all over the board. It was not a pleasant experience for him to look at.


https://www.color-blindness.com/2007/05/18/mixing-up-blue-an... (read the part titled "Mixing up blue and green") seems to say that people with color-blindness in general may have a harder time with blues and greens. This further supports my experience, given that I'm not color blind and sometimes have trouble with these colors. Green and blue are just really similar perceptually.

All this said, you may be correct that there's no such thing as blue-green color-blind, and maybe that's not the best word to describe what I'm trying to say.


I am not colourblind. I did find those colours difficult to distinguish next to one another.


Sometimes it's the display device, and even the angle viewing it on that device.

Those colors appear quite distinct on my monitor. Others sometimes don't.


This is not really a good comparison: the author acknowledges that the C and Rust programs are using different data structures, and while he tries to vary the problem a bit to explore that difference, it's still not a fairly matched comparison, and the headline leaves the reader with the incorrect impression that we're comparing similar things.

I'm not sure that comparing "Rust" and "C" performance is even meaningful. The latter is a multi-implementation language, and in most implementations, can be written in a way that gets the compiler to generate the same machine code that the Rust compiler does. (Frequently, they both just go through LLVM.)


The fact that the two benchmarks use different data structures is one of the main points of the article. The author (in my opinion) argues that each data structure would be hell to use in the other language.

When comparing languages, you need to include a measure of how idiomatic your programs are.


And in this particular benchmark, they didn't even use the same compiler backend: GCC at -O2 vs LLVM at -O3. This has a large impact, often larger than the frontend language, when measuring programs in native languages.


This is a benchmark comparing different data structures more than different programming languages. Of course the Rust program has fewer cache misses; this is because it’s a B-tree, not an AVL tree.


It is a benchmark comparing what types of data structures are easier to use in the two languages.

Of course it is entirely subjective (the author argues that this is due to a static vs dynamic ownership model) and there is a single data point, which is the author rewriting the code (two if you consider when he/she swaps one of the data structures).


> It is a benchmark comparing what types of data structures are easier to use in the two languages.

Then it's not a performance benchmark, it's a style and usability benchmark.


What is a scenario where you would consider C and also Rust? If Rust is in the mix, wouldn't you opt for C++ before C?


C++ inheritance doesn't map well to Rust's trait system, for one. I think this is an ongoing issue within the `bindgen` community, but I don't know much beyond that. Suffice it to say, some concepts in C++ do not map well to Rust, and so FFI can be hairy in some instances.

One instance of this becoming a complication would be in the GTK+ world with the Rust bindings to GObject. There are some really good articles you could find about their adventures in that realm.


Writing an FFI for some C++ so that C code may call it takes almost as much work as doing the same for Rust. That's less true the closer your preferred dialect of C++ is to C, I suppose.

Meanwhile Rust, in this user's opinion, offers a superior toolset for modeling data and building behavior, namely: sum types and traits.

So if you're looking to extend or integrate with a C codebase, barring some exotic target platform and all else equal, I don't know why you'd opt for C++ over Rust.


> Writing an FFI for some C++ so that C code may call it takes almost as much work as doing the same for Rust

where "as much work" means putting `extern "C"` in front of your functions and having a few `static_assert(is_pod_v<type_that_i_pass_in_argument_to_my_c_api>)` or only using opaque typedefs, i.e. it frankly amounts to nothing


No, "as much work" means that you can't use 99% of C++ in a C FFI. You have to fully instantiate template functions, template structs (you can't pass a `std::vector` across a C FFI), etc. before you can expose them to C.

If you are writing C++ in such a way that this does not matter, then you are basically writing C already and `extern "C"` is all you have to do; but then you might as well have been using C this whole time.


Having done this process for multiple libraries using a lot of fancy C++14-and-later features, I really think that it's not a big deal - at most an afternoon of repetitive work if you have an API with ~100 functions.


If that is all you need, then Rust has bindgen/cbindgen that does the same.

But when your API uses templates/generics and other types that C can't comprehend, you need to write additional wrapper functions operating behind opaque C types. It's not rocket science, and tooling/macros can help a lot, but it's still tedious and error-prone (you're erasing types, redoing error handling, adding manual memory management, etc.).
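For illustration, the opaque-handle pattern being described looks roughly like this when exporting a generic Rust type to C (the `intvec_*` names are hypothetical, not from any real crate):

```rust
// Exposing a generic Rust type (Vec<i32>) to C through an opaque handle:
// C only ever sees a raw pointer, and every operation is a plain
// extern "C" function that re-checks its input pointer.
pub struct IntVec(Vec<i32>);

#[no_mangle]
pub extern "C" fn intvec_new() -> *mut IntVec {
    Box::into_raw(Box::new(IntVec(Vec::new())))
}

#[no_mangle]
pub extern "C" fn intvec_push(v: *mut IntVec, x: i32) {
    // Null pointers are silently ignored; a real API might report an error.
    if let Some(v) = unsafe { v.as_mut() } {
        v.0.push(x);
    }
}

#[no_mangle]
pub extern "C" fn intvec_len(v: *const IntVec) -> usize {
    unsafe { v.as_ref() }.map_or(0, |v| v.0.len())
}

#[no_mangle]
pub extern "C" fn intvec_free(v: *mut IntVec) {
    // Manual memory management re-enters the picture, exactly as in C.
    if !v.is_null() {
        unsafe { drop(Box::from_raw(v)); }
    }
}

fn main() {
    // Exercise the C-facing API from Rust itself.
    let v = intvec_new();
    intvec_push(v, 7);
    assert_eq!(intvec_len(v), 1);
    intvec_free(v);
    println!("ok");
}
```

Note how the generic `Vec<i32>` had to be pinned to one concrete element type before crossing the boundary; that is the type erasure the parent comment mentions.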


Rust interfaces very well with C. When you have a large C codebase it's easy to one day just stop writing any more C, and write all new code in Rust without skipping a beat. There are successful examples of this like librsvg.

OTOH interfacing with C++ is very hard. D might be the only language that has seriously tried it; from Rust's perspective, C++ has a largely overlapping but incompatible set of features, and it's a weird dead end.


Rust is a much simpler language than C++. Even more if you care about the final binary properties, like for auditing, or performance reasons.


This comparison is disingenuous. They’re using a completely different data structure, getting a slight difference in performance, and chalking this up to language differences. Sure, it might be harder to implement a B-tree in C than it is in Rust. But then you can just use C++. Don’t make an apples-to-oranges comparison.


> [They] are chalking this up to language differences

That didn't happen. The data structure difference is discussed and the possibility of the performance difference being down to this is clearly acknowledged. Also discussed is the reason for the different data structures, which is actually the most important take-away.


> Also discussed is the reason for the different data structures, which is actually the most important take-away.

I agree; this would have been a great stand-alone piece had it focused on this instead of performance.


Suggestion: actually read the article.

It's one of the few high-quality technical articles that pass through HN... if you don't get hung up on the title.


Please don’t do this here; I did read the article and this is my response to it. If you have a meaningful comment to make on my analysis, please reply with that instead.


OK, read the article starting with the phrase "But that would be overlooking something important".

Normally, "read the article" is a bit non-HN-friendly, but when you call someone out as disingenuous for not saying things that they in fact clearly call out and discuss over multiple paragraphs, you are earning a "please read the article" response. They're not being disingenuous about what they are doing if they clearly explain what that is and then have a (IMHO) balanced pro-and-con discussion about it.


Regardless of the person's reaction, they do make a very good point. There is a weird thing that happens when you criticize Rust on HN or Reddit, where lots of people jump in to prove that any criticism is invalid. I like Rust well enough. I like C too. I have my issues with both, but when someone does raise an objection it can be hard watching people pile on them afterwards.

A point of fact is that the two data structures are different. There was even an article on the front page today about how minor changes to data structures can dramatically impact performance.

Anyway, my main point is that there should be a space to discuss the relative merits and issues of languages without it devolving into petty fighting, which even you, the OP and the other commenter are engaging in.

Please everyone try to use the principle of charity when reading a post on this site. There is a good chance the person you are responding to has a reasonable background and isn't totally ignorant. Instead of playing on pedantry, try to understand where they are coming from and understand a one-off post on an internet forum is not the strongest argument they could make.


> There is a weird thing that happens when you criticize Rust on HN or Reddit

Can I ask where this impression came from? The original, now-downvoted comment that started this thread wasn't criticizing Rust, it was criticizing the article. If people disagree with that person's criticism of the article (which is itself not a criticism of C, and acknowledges as much), I'm afraid I don't see how that contributes to any perception that people are unjustifiably leaping to defend Rust.

lossolo 73 days ago [flagged]

> Can I ask where this impression came from?

Are you serious now? I mean, you are a very active Rust activist on both HN and Reddit, and you know exactly what the Rust Evangelism Strike Force is and where it came from. I've seen multiple threads on HN and Reddit where people do exactly what the OP wrote. I don't know if it's denial or ignorance on your side, or whether you are writing this with premeditation.


The "Rust Evangelism Strike Force" is a dumb meme (but I repeat myself) that gets used sarcastically, and was originally coined by people who were themselves critical of Rust. If you have links to instances of people doing exactly as OP wrote, please paste them here; I am mod of /r/rust, where we have had a "No zealotry" rule ("Stay mindful of the fact that different technologies have different goals and exhibit fundamentally different tradeoffs in pursuit of those goals.") since long before cat-v.org concocted the RESF, and I am happy to rein people in if their heads get too big. :)

As to whether my above question is serious, the answer is yes, because truth be told I see more people complaining about Rust Evangelism than I see instances of evangelism in the first place, to the extent that I wonder if that itself has become a meme by now.

lossolo 73 days ago [flagged]

> where we have had a "No zealotry" rule ("Stay mindful of the fact that different technologies have different goals and exhibit fundamentally different tradeoffs in pursuit of those goals.")

I never wrote anywhere that you encourage that kind of behavior. There is a difference between encouraging a problem and pretending it does not exist. It seems that you are just biased, and what you are doing here is whitewashing a community you are part of.

> If you have links to instances of people doing exactly as OP wrote, please paste them here;

I would rather spend time with my family than search the whole of HN and Reddit for obvious things that have been mentioned here multiple times by multiple people, but I found a couple of links quickly (not for you, but for other readers) [1][2]. And these are only two examples that I found quickly. They are examples of things that you have surely read and are aware of. Another example is some of Animats's posts in a C++/Rust debate thread on HN, in which he was describing his experience and technical details while criticizing Rust and was downvoted to hell.

Here you have another person writing the same thing, and dbaupp acknowledging that the problem exists [3].

These comments didn't come from a vacuum, and denying that the problem exists will only make it worse.

[1] https://www.reddit.com/r/programming/comments/5fyhjb/golangs...

[2] https://news.ycombinator.com/item?id=17088181

[3] https://news.ycombinator.com/item?id=14178950


It's fairly clear from the context of the comment that he was asking where the impression came from in this thread. I have no doubt he was interested because he believes it is somewhat unjustified and self-perpetuating, and if this thread is any indication, there may be some truth to that.

So, to restate the question clearly and unambiguously: prior to kibwen's question, what leads someone to think this is a matter of Rust evangelists taking issue, and not the fairly common HN interaction of someone calling out someone else for, in their eyes, misinterpreting or ignoring an aspect of something and then denigrating it for that lack?


Well, first let me get this out of the way: if plenty of people do what the OP was complaining about, I would welcome him complaining about them, but not about people who didn't do it.

Now, I'm having trouble understanding what that Rust Evangelism Strike Force is, and what nefarious goals it could have. Do you have a pointer? (Also, is it a joke group?)


We're the Rust Evangelism Strike Force and we'll convince you to write more secure code!


Ok, sure, I’ll accept that calling the article itself disingenuous was a bit harsh. But it does seem like it sent the message that Rust was faster than C, when it is clearly not a fair comparison (as the author mentions themself). I’m actually relatively happy with that part, but it comes at the end and it’s clearly spawned discussion here that proves my point; look at the other comments: the other top-level one says that I should rewrite everything in Rust because it performed better here.


> look at the other comments: the other top-level one says that I should rewrite everything in Rust because it performed better here

That comment appears to be downvoted, and in addition the winking emoticon at the end suggests that it was meant to be tongue-in-cheek in any case.


It also says you should rewrite things in rust "anyway ;)", not "because it's faster".


It was the highest-voted top-level comment when I posted that.


It was probably one of the most recent comments. New comments get a temporary rating boost to give them a chance to be read and earn upvotes.


The article very clearly calls out the data structure difference.

Your statement that they simply are "chalking this up to language differences" is false. It's not surprising that you received a negative reaction.


Despite the author’s claim that they are not trying to make a comparison, and their lengthy discussion on the details why such a comparison is not apt, we still ended up with a title that suggests a comparison and comments that focus on the two languages in relation to each other.


That is a fair point. The title may be a bit misleading.

Although not entirely, as I'm sure you would agree we can't claim that the different data structure is the only reason the Rust program was faster.


Which is exactly why it's a terrible benchmark -- it doesn't tell you anything about what you care about. Which is one reason why it's a terrible article.

Another reason is that the author seems to have put a lot more engineering into the Rust program than the C program. Most of the word-count of the article is devoted to extra engineering on the Rust program! (The whole thing about sorting out lumps, etc). It's reasonable to infer that this also mirrors the situation prior to the first benchmark. If you put more effort into optimizing one program than another, you'd expect the higher-effort program to be faster, all else being equal.

It's embarrassing that HN thinks this article is worth posting. There might be something to write about here, somewhere, but it would have to be framed in a very different way in order to be honest. Also, it would be a much shorter article.


The author appears to disagree with your assessment in the previous article that is linked at the start:

Because I had written my Rust naively (and my C carefully), my hope was that the Rust would be no more than 20% slower — but I was braced for pretty much anything. ... my naive Rust was ~32% faster than my carefully implemented C.


I don't think the author was claiming that this is a rigorous benchmark, but I can see how it might look that way. I agree that it could have been framed a bit better, but I also don't think the author was trying to make any bold declarations.

The technical aspects of the article are interesting and well written, but only if it is not viewed as an exercise in rigorous benchmarking.

The context here was that the Rust program was a reimplementation of an existing C program, as part of an effort to learn Rust.


Unfortunately, that's not entirely accurate. The problem with language-level benchmarking is that the benchmarks are only as good as their implementations, and while I agree that different languages encourage different kinds of idioms, it seems only fair to use the same data structure/technique when the language offers that choice (as Rust does with intrusive data structures).

The Rust program was not a faithful reimplementation, as the author concedes. I expected a closer comparison given the somewhat broad title of the article. Claiming the perf benefit came out of the thinking encouraged by the language is okay, but debatable.


Can you think of a simple, unambiguous, descriptive title?

How about "Why Rust BTreeMaps beat C AVL Trees"?

I don't know if that would be an adequate title, but it doesn't claim that different data structure is the only reason and it doesn't generalize to a claim about C and Rust.


Isn't that to some extent also a benchmark of gcc vs llvm?


Could the lumpiness be caused by the overflow and subsequent split of the root of the BTree (which cascades all the way down)?


Did I read the article correctly that the C code was compiled with -O2 and not -O3? Why?


-O3 isn't always strictly faster than -O2. There are different tradeoffs made about memory usage and binary size which interact with CPU caches in interesting ways.

I can't speak for the author but at my work we found -O2 to give better results on our OpenGL ES rendering engine.


cargo --release does not compile at -O3 either; it's at -O2.


I believe that’s wrong.

cargo build --release --verbose would tell you for sure; I'm not at a computer right now.
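For what it's worth, the release profile's optimization level can also be made explicit (or overridden) in Cargo.toml, which sidesteps the question of what the default is; a minimal sketch:

```toml
# Pin the optimization level for `cargo build --release` explicitly.
# `cargo build --release --verbose` shows the -C opt-level passed to rustc.
[profile.release]
opt-level = 3
```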


TL;DR: Using dtrace to analyze the performance of AVL Trees in C vs BTreeMaps&HashMaps in Rust. Spoilers: Rust wins every time, and you should rewrite your software in Rust anyway ;).


Note that the data structure is different: AVL trees vs. BTrees. The interesting part for me was that because of modern cache hierarchies, BTrees appear to be superior to AVL trees even for in-memory data structures, while before I'd thought of them as a special-purpose on-disk data structure for databases.


AVL trees were more competitive when the memory hierarchy was shallower. These days an L1 cache hit is dirt cheap, a RAM fetch is ~200x as expensive, and following a pointer creates a data dependency. B-trees give you more L1 cache hits and fewer pointers to follow. Back in the 1990s cache hits and RAM fetches were much closer to parity, and there weren't as many pipeline stages to fill, so data dependencies weren't as important. Go back even farther and you have completely uniform memory that could be accessed in a fixed number of cycles no matter what the address.

There are a lot of good books from these eras, and there are a lot of good books that teach with the uniform memory model. Most algorithms classes don't talk about the hierarchy at all (even though it distorts the true asymptotic runtime of many algorithms).


Agreed. It blew my mind to learn that tiled matrix multiplication is more efficient than untiled, but that you can only show tiled is faster in a hierarchical memory model (for tiled, you have arithmetic intensity that scales with the size of your fast memory rather than being a fixed constant). Yet in the uniform memory model both are O(n^3).
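The tiled loop structure being described can be sketched as follows (a Rust sketch for row-major square matrices; the naive version is just the three innermost loops without the blocking, and `TILE` is an assumed tile size you would tune to your cache):

```rust
// Blocked (tiled) matrix multiply: C += A * B for n x n row-major matrices.
// Each TILE x TILE block of A and B is reused while it is still cache-resident,
// which is exactly the arithmetic-intensity argument made above.
const TILE: usize = 32; // assumed; tune to L1 cache size

fn matmul_tiled(a: &[f64], b: &[f64], c: &mut [f64], n: usize) {
    for ii in (0..n).step_by(TILE) {
        for kk in (0..n).step_by(TILE) {
            for jj in (0..n).step_by(TILE) {
                // Work on one tile; min() handles the ragged edge when
                // TILE does not divide n.
                for i in ii..(ii + TILE).min(n) {
                    for k in kk..(kk + TILE).min(n) {
                        let aik = a[i * n + k];
                        for j in jj..(jj + TILE).min(n) {
                            c[i * n + j] += aik * b[k * n + j];
                        }
                    }
                }
            }
        }
    }
}

fn main() {
    // Smoke test: all-ones 3x3 matrices; every product entry should be 3.0.
    let n = 3;
    let a = vec![1.0; n * n];
    let b = vec![1.0; n * n];
    let mut c = vec![0.0; n * n];
    matmul_tiled(&a, &b, &mut c, n);
    assert!(c.iter().all(|&x| x == 3.0));
}
```

Both versions do identical arithmetic, which is why the uniform memory model cannot distinguish them.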


Makes total sense. The insight is that memory acts like the disk used to, and the caches act like the memory. BTrees worked well for spinning disks because they'd load a whole array (node) of BTree child pointers in one disk read. But the same works for caches: they'd load the whole array into the cache when the first element is accessed.

A silly idea I had: wonder if it would be possible to auto-adjust the B factor of a BTree (the number of children per node) at runtime based on the cache line size.
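A back-of-the-envelope sketch of that idea, assuming a 64-byte cache line (a real implementation would query the line size at runtime rather than hard-code it; the names here are illustrative):

```rust
// Pick a B-tree fanout so that one node's key array fills a whole number
// of cache lines, so a single miss loads a usable batch of keys.
const CACHE_LINE: usize = 64; // assumed; query at runtime in practice

fn fanout_for<K>(lines_per_node: usize) -> usize {
    let key_size = std::mem::size_of::<K>().max(1);
    // Keys that fit in `lines_per_node` cache lines, with a floor of 2
    // so the tree still branches.
    ((lines_per_node * CACHE_LINE) / key_size).max(2)
}

fn main() {
    // u64 keys: 64 bytes per line / 8 bytes per key = 8 keys per line.
    println!("u64, 1 line:  {}", fanout_for::<u64>(1));
    println!("u64, 4 lines: {}", fanout_for::<u64>(4));
    println!("u8,  1 line:  {}", fanout_for::<u8>(1));
}
```

As the reply below notes, cache line size only sets a lower bound; the best node size in practice is found experimentally.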


Auto adjusting would be interesting, even better (though harder) would be to do it experimentally rather than based on cache line size. Cache line size only determines the minimum block size that doesn't waste cache capacity, after all.


Ultimately the point was that Rust discourages intrusive data structures, and not using an intrusive data structure turns out to be a win even though experienced C programmers might believe otherwise. Now, of course, an apples to apples comparison would be nice, but I'm sure that after some experience with Rust one might end up writing similar code in C and the difference might be very small.

The point really is that Rust foists better design choices on the programmer.



It always surprises me how unaware people are of the cache efficiency of b trees.


I thought the tl;dr was: Implementing B-trees as an embeddable library in C is hard, therefore Rust is faster.


Rust makes hard problems easier, but easy problems harder. BTrees are a great example of that sentiment.


Let's look at some problems in C which should be easy:

1. Storing an array and its length together and always checking it correctly such that you don't have a buffer overrun.

2. Freeing memory, not leaking memory.

3. Using a union correctly, e.g. always accessing the right variant.

4. Printing unicode characters.

5. Macros which don't conflict with their scope (are hygienic)

All of those problems seem like they should be easy, are incredibly easy in rust, and are not fully solved problems in C.

I don't think Rust makes easy problems harder, precisely because its type system and macro system and such are all so good compared to C's.

It turns out having proper sum types instead of C's garbage unions, and having proper string support, and so on make easy problems easy too!
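Point 3 in the list above is easy to demonstrate: a Rust enum is a tagged union, so the compiler tracks which variant is live and there is no way to read the "wrong" field as with a bare C union. A minimal sketch:

```rust
// A sum type: each value is exactly one of these variants, and the tag
// is managed by the compiler rather than by convention.
enum Value {
    Int(i64),
    Text(String),
}

fn describe(v: &Value) -> String {
    match v {
        Value::Int(n) => format!("int: {}", n),
        Value::Text(s) => format!("text: {}", s),
        // Omitting a variant here is a compile error, not a latent runtime bug.
    }
}

fn main() {
    println!("{}", describe(&Value::Int(42)));
    println!("{}", describe(&Value::Text("hi".into())));
}
```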


The problem that Rust makes hard: linked lists.


Doubly linked lists. Singly linked lists and one-way trees work fine with the single ownership model.

As I've said before, the two places where Rust has problems with expressive power are backlinks and partially initialized arrays. Those are routine cases which are forced to unsafe code.
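To illustrate the distinction: a singly linked list needs no unsafe code at all, because each node simply owns the next one. It's only the backlink (a child pointing back at its parent, or a doubly linked list) that fights the single-ownership model. A sketch:

```rust
// Safe singly linked list: each node owns its successor via Box.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

// Prepend by taking ownership of the old head.
fn push(head: Option<Box<Node>>, value: i32) -> Option<Box<Node>> {
    Some(Box::new(Node { value, next: head }))
}

fn sum(mut head: &Option<Box<Node>>) -> i32 {
    let mut total = 0;
    while let Some(node) = head {
        total += node.value;
        // Following forward links is fine; a backlink to the parent
        // would require Rc/Weak, raw pointers, or unsafe code.
        head = &node.next;
    }
    total
}

fn main() {
    let mut list = None;
    for &v in [1, 2, 3].iter() {
        list = push(list, v);
    }
    assert_eq!(sum(&list), 6);
}
```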


One of the places you most often need to use backlinks and other unsafe code in Rust is implementing fancy data structures. Unfortunately that happens to be what a lot of people think of as "programming" because they come up a lot in computer science classes... Fortunately most production code seldom needs new fancy data structures; maps, sets, tuples, structs and reference counting take you a very very long way. For less common things like graphs there are usually crates you can use off the shelf.


Please, these are not interesting or useful talking points. You always get the same responses:

Partially initialized arrays are a solved problem- anything more would just be baking `Vec` into the language.

And backlinks are nowhere near "routine."


Backlinks ARE a very interesting point about Rust. Just because you don't find yourself using them all that often doesn't mean they aren't interesting to the design and expressivity of the language.

The issue is a direct consequence of Rust's strict aliasing rules, which I feel any programmer would benefit from grokking.


They're routine in CS classes. Curiously, I've never needed to use them in real life.


Except really common things like LRU caches use them.


I think it's important to note that in the case of most data structures, you only need to do the hard work of verifying your unsafe code once, after which you can package it up in a crate.

I guess many C programmers might be used to reimplementing data structures for their programs, but I don't think that's necessary in a majority of cases.


Why would libraries be more common in Rust than C?


Use of libraries may be more common in rust than c because of many possible factors:

1. Rust's standard library is large and contains data structures. People are good at centralizing on one thing if it's the standard one; if there are 10 competing string libraries, people might say "ah well, let's write our own". That also sometimes means using one dependency will pull in others (e.g. in C you might want someone's regex library, but it might be incompatible with your string library because it uses a different one; a stdlib helps ensure libraries have a standard set of types to talk about, and C barely has that).

2. Rust's package manager (cargo) and registry (crates.io) are really good at making finding and adding dependencies much easier. If I want to add a random library from GitHub to my Rust project, it's 1 line in my Cargo.toml (whether it's on crates.io or not). In C, I have to figure out whether they're using cmake, meson, autotools, or any of a dozen other things, possibly vendor the code into a third_party folder, and possibly spend several hours hacking on my build system to link against it.

3. Rust's community has made an effort to standardize on certain crates and encourage their use; see the rust-lang nursery. The closest to this in C is stuff like gnulib, I think, which is an ancient effort pushing garbage code that suffers severely from the lack of good dependency management. I know of no recent similar efforts for C.

4. Rust has generics and a better type system in general which makes it much easier to implement generally usable libraries.

It's important to remember that C also predates the internet, and many c codebases also predate modern package management. Legacy practices have a habit of propagating themselves.
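Point 2 in the list above can be illustrated with a Cargo.toml fragment; the crate name and URL here are made up for illustration:

```toml
# One line to depend on a library straight from a Git repository,
# whether or not it is published on crates.io.
[dependencies]
some-regex-crate = { git = "https://github.com/example/some-regex-crate" }
```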


No, it's not harder than in C++. Compare: https://gist.github.com/stjepang/07fbf88afa824e11796e51ea2f6... , https://gist.github.com/stjepang/fc20ea97030a2153148117dddeb... . In Rust, it even works faster (with array bounds check!).


Please just use an `Option<usize>`, not that wonky NULL thing.

If you're worried about the slightly more memory that uses, you can use `Option<NonZeroUsize>` and subtract 1 for all indexing operations.
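A sketch of that suggestion: store the index shifted by one in a `NonZeroUsize`, so `None` means "no link" and, thanks to Option's niche optimization, the field is the same size as a bare `usize`. (The `Link` wrapper here is illustrative, not from the gists above.)

```rust
use std::num::NonZeroUsize;

// An optional index into a backing Vec, no bigger than a plain usize.
#[derive(Clone, Copy)]
struct Link(Option<NonZeroUsize>);

impl Link {
    const NONE: Link = Link(None);

    // Store index + 1 so that index 0 remains representable.
    fn some(index: usize) -> Link {
        Link(NonZeroUsize::new(index + 1))
    }

    fn get(self) -> Option<usize> {
        self.0.map(|n| n.get() - 1)
    }
}

fn main() {
    // The niche optimization keeps Link the size of a usize.
    assert_eq!(std::mem::size_of::<Link>(), std::mem::size_of::<usize>());
    assert_eq!(Link::some(0).get(), Some(0));
    assert_eq!(Link::NONE.get(), None);
}
```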


klib's kbtree header/macro library seems to provide rather nice B-trees in C.

https://github.com/attractivechaos/klib/blob/master/kbtree.h


I think that TL;DR misses lots of nuance; I'd suggest a better TL;DR would be "the features and idioms of different languages can lead to different preferred solutions to the same problems, which means that we should be careful not to unnecessarily restrict our perception of new languages in terms of the familiar idioms of old ones." Or maybe: "performance benchmarks are hard, and even carefully-crafted comparisons can miss the point if they paper over important practical implementation concerns; an apples-to-apples comparison can unintentionally mislead if it precludes the consideration that perhaps what you actually wanted was an orange after all". Importantly, the article explains why Rust is faster on this specific benchmark, but goes out of its way to note that benchmarks should always be interpreted with appropriate caution, and that nobody should come away with the notion that this implies any sort of strict dominance of Rust over C.


So it's called "The relative performance of C and Rust" to put the emphasis on the data structures? :)


As suggested in the introduction, one would think it was called "the relative performance of C and Rust" because this article is a corollary to a previous article and conceived as a response to an inquiry regarding that previous article--specifically, to answer the question "in your given benchmark, what explains the relative performance of C and Rust?" :P The basic answer: "data structures"; the insightful answer: "data structures that were originally chosen to play to the different strengths and weaknesses of their respective implementation languages, giving us food for thought as to how language design can shape the solutions that users of that language will tend to arrive at, and the performance implications thereof".


Obviously "Why Rust BTreeMaps beat C AVL Trees" is way too long and misleading a title ;-)


Just because that would indeed focus on the specific sub-portion of the article that you think is most important doesn't necessarily make it better than a title that is meant to encompass the issues surrounding the whole act of benchmarking, specifically between languages that might encourage different idiomatic solutions, and then drills down to what that means in this specific instance.

In other words, what you took away from the article might not have been what other people did, or what the author meant to express. It surely wasn't what I thought was important (but neither was "rust is faster/slower than C").


Might there be a C library that could be used?

https://old.reddit.com/r/programming/comments/9jsqb3/the_rel...


> The C version was compiled with GCC 7.3.0 with -O2 level optimizations; the Rust version was compiled with 1.29.0 with --release.

Isn't rust based on LLVM?

It would make more sense, for a true apples to apples comparison, to use clang instead of GCC.

Not that I'm pro-C or anti-rust or anything but the beginning of the article goes on about previous tests not being fair.


It also insists on how this data is unfair and only relevant to the system in question (encompassing the OS, compilers, programmer skill/talent/interest/errors, and so on).

At the end the author never states that one is faster (yes, the title is misleading, but it is then clearly clarified in the text) the main part is the discussion about intrusive data structures.



