Hacker News
Sayonara, C++, and Hello to Rust (thecodedmessage.com)
90 points by mooreds 5 months ago | 165 comments

Trait objects are still no replacement for the areas in which subtype polymorphism excels, and Rust's generics are a pittance compared to the completeness and expressiveness of C++ templates. I know that sounds like flamebait, and yes, I've read STL source code in all its underscored and abbreviated glory, and yes, I've read that one snippet of the std::ranges proposal that people like to throw around. But the thing that always gets me is that most Rust proponents I've talked to simply define Rust's problem areas out of the equation - an oft-repeated mantra is that Rust 'pushes you to a good design', as if good design is inherently whatever Rust is capable of, and the areas where it is less capable are simply inherently poor designs anyway.

Sure you can do everything you'd want to do in C++ in Rust, but the developer experience when actually writing code is agonizing most of the time. I deeply miss C++ templates in most other languages, excepting fully dynamic languages, and D. Rust is, to me, more of a replacement for places I'd write C, and be in a C mindset. C++ (well-written, modern C++) is so much more than C, and so much more than people give it credit for. A true C++ contender would have to pass not just the C bar but the C++ bar as well, rather than looking at C++'s strengths over C and saying "actually those are bad things" like it's been popular to do for the last 10 years.

Speaking of replacing C, Rust is not without its strengths: the one place I find Rust truly a joy to write is OS dev, and similar low-level projects, in which the concepts and limitations of the language map quite nicely to the expectations of the system. It seems like the obvious candidate for these types of applications (including embedded development), but there's little surrounding literature - it always baffles me that most people seem to be using Rust to write ... web services?

Definitely. As someone with years of 'advanced' C/C++/Python experience trying to learn Rust, I'm finding Rust's 'opinionated' ways of doing things very painful for things like fast prototyping: you need to impl 'default' traits on structs, or create builders (which is almost as bad as C/C++'s split of declaration and implementation, in that you have tightly coupled things in different parts of the code), stick unwrap() everywhere so as not to handle all errors correctly at this stage, etc., etc. That is, you don't really know what you want yet - you're seeing what works, how the data could be organised, refactoring in the early stages - and with so much in flux at this stage, the compiler warns about all sorts of things. I don't want to fix them yet, as it might be a waste of time if things change yet again, so I just end up shoving #[allow(dead_code)] above all the enums or something, so I can actually see the errors among all the warnings.
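The friction described above looks roughly like this in practice (a hypothetical sketch; `Config` and `Stage` are made-up names):

```rust
// Hypothetical prototype code: everything is still in flux.
#[derive(Debug, Default)]
struct Config {
    retries: u32,
    verbose: bool,
}

#[allow(dead_code)] // silence the compiler for variants not wired up yet
enum Stage {
    Parse,
    Transform,
    Emit,
}

fn main() {
    let cfg = Config::default();        // derive(Default) instead of a hand-written builder
    let n: u32 = "42".parse().unwrap(); // unwrap() to defer real error handling
    println!("{:?} n={}", cfg, n);
}
```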

Once you move past the prototyping stage on to full on production coding, I totally buy that in a lot of cases, the safety that Rust provides (if it compiles, it will almost certainly either panic or work, ignoring logic bugs), is a big win, but I'm not convinced yet for experimentation when prototyping...

I personally love Rust for prototyping for one particular reason: even if it's not as quick as what you'd do in dynamic languages (JS for me, for most people it would be Python), it comes with an enormous benefit: iterating on your prototype several times until you eventually reach the final version is much, much quicker than in any other language I know, simply because the compiler will catch almost any mistake you make during your refactoring.

So the first draft is going to take maybe twice as long, but every subsequent iteration on your prototype will be much faster. I find the payoff time to be pretty quick (one or two iterations are often enough), but obviously if you're doing a true one-shot that you're never going to touch again, it won't be worth it (that being said, in my experience such situations are pretty rare, and most one-shots end up being way more than what they were supposed to be in the first place).

You can argue that (and I'd generally agree in most cases above a certain complexity size) about dynamic vs any static language though?

Yes, it's exactly the static vs. dynamic trade-off, but with one of the best static programming language designs.

C++ is probably at the other end of the spectrum, since it's too easy to accidentally cause a memory error during refactoring. C# and Java are a bit in between: the automatic memory management limits the footguns, but the type system is much less powerful than Rust's, so there are still a lot of issues that can come up during a refactoring. I have only very limited OCaml experience and no Haskell experience, but I think you'd get most of Rust's advantages with those, except the thread-safety part, which is my favourite part of Rust but is also situational.

Haskell is quite thread safe, I suppose it is at least as good as Rust in this respect. Immutability by default and garbage collection really help to cut away the ceremony around threading.

> I suppose it is at least as good as Rust in this respect.

(Safe) Rust has Data Race Freedom, which means you get Sequential Consistency. As I understand it, Haskell does not promise you data race freedom, so if you mistakenly implement a data race, you no longer have Sequential Consistency and you can't reason about non-trivial programs at all.

I think Haskell, like Java, and unlike C or C++, promises you still get a meaningful program if you have a data race, but it may be tricky to debug the mess.

Edited to expand: Suppose you're writing a program to process 16GB of data, the result will be a floating point value between 0 and 100.0 as a result of this processing. You realise it's embarrassingly parallel and so you spin up 16 threads to solve the problem in parallel on your CPU cores.

Mistakenly instead of processing 1GB per thread, you process the same 1GB on all sixteen threads, the processing involves a lot of twiddling data, and so now it's full of data races. What is the result of your buggy program?

In Rust, the program doesn't compile, or depending how you wrote it, exits immediately reporting that you can't go around processing the same data from different threads.
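A minimal sketch of both versions, shrunk from 16GB to 16 bytes (all names are invented; the buggy variant is left as a comment because it is rejected at compile time):

```rust
use std::thread;

fn main() {
    let mut data = vec![1u8; 16]; // stand-in for the 16GB buffer

    // The buggy version does not compile: sixteen closures would each
    // need `&mut data` at the same time, which the borrow checker rejects.
    //
    // thread::scope(|s| {
    //     for _ in 0..16 {
    //         s.spawn(|| data[0] += 1); // error: `data` borrowed mutably more than once
    //     }
    // });

    // The fixed version: chunks_mut hands each thread a disjoint &mut slice.
    let sums: Vec<u32> = thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks_mut(1) // one "1GB" chunk per thread
            .map(|chunk| s.spawn(move || chunk.iter().map(|&b| u32::from(b)).sum::<u32>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });

    assert_eq!(sums.iter().sum::<u32>(), 16);
}
```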

In C or C++ maybe it deletes all your data, more likely it simply exits instantly with no result, spins forever, or outputs something completely spurious.

In Java you might get an Exception reporting that you mustn't do this, but you might get a bogus but real-looking answer like 46.2; now good luck figuring out why.

I don't know for sure about Haskell, but I think you get a similar situation to Java minus the exception.

> (Safe) Rust has Data Race Freedom, which means you get Sequential Consistency.

I don't think this is correct. Safe Rust includes atomics that are data-race free but need not be sequentially consistent.

Unsurprisingly core::sync::atomic is not in fact implemented in Safe Rust.

However you can indeed use it from Safe Rust, and you can choose Orderings which do not give you Sequential Consistency if you're comfortable with that, or more likely because you are implementing an algorithm whose authors assure you this is fine (I hope they're correct). I think that's a very modest caveat to the overall promise of Sequential Consistency, but sure, it is a caveat. If you write code that explicitly asks for Ordering::Relaxed and are astonished you don't get Ordering::SeqCst I guess I apologise for confusing you.
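For the record, a small sketch of that caveat in entirely safe Rust (`count_to` is a made-up helper): the counter stays exact because every access is an atomic read-modify-write, but `Relaxed` explicitly opts out of sequential consistency:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// Safe Rust, no data race: every access goes through the atomic.
// The Ordering is still a choice: Relaxed gives data-race freedom
// but not sequential consistency; SeqCst gives you both.
fn count_to(n_threads: usize, per_thread: usize) -> usize {
    let counter = AtomicUsize::new(0);
    thread::scope(|s| {
        for _ in 0..n_threads {
            s.spawn(|| {
                for _ in 0..per_thread {
                    // Relaxed keeps the count exact (it's an atomic RMW),
                    // but imposes no ordering on surrounding memory operations.
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    });
    counter.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(count_to(4, 1000), 4000);
}
```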

On the other hand, if one decides to use 16 processes to handle the data across a NUMA shared memory segment, to be able to scale it across the cluster more easily, the Rust type system is of little help.

To make this work you're going to need an (unsafe) wrapper that lets all your processes get write access to the same 1GB of shared memory. If you write a wrapper that ensures safety, you'll get back to the same place where the faulty program reports that you can't all map the same 1GB of data and twiddle it, and if you don't it's on you for introducing unchecked unsafety.

Assuming every single OS process that has access to the NUMA segment is written in Rust and originates from exactly the same build, which is impossible to represent in the type system.

> I think Haskell, like Java, and unlike C or C++, promises you still get a meaningful program if you have a data race

This is true (when you insist on mutable shared memory), but in Haskell the safety guarantee is in fact not so far from Rust's. The Haskell community was aware of concurrency issues very early on, and developed a dozen composable abstractions to solve these problems. (IMHO Rust is the only one with a truly novel solution in recent years.)

In Haskell, either you have to use `unsafe` functions to share a chunk of memory, or you don't share memory at all (as mutability is a pita in Haskell). On top of these unsafe functions, you can build other safe abstractions, just like Rust. These abstractions are often available as higher-order functions in libraries, making them highly composable and reusable.
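The same layering exists in Rust, for comparison: `split_at_mut` is implemented with `unsafe` inside the standard library but exposes a safe API (a minimal sketch; `halves` is a made-up helper):

```rust
// Safe API over an unsafe core, the pattern described above.
// std's own split_at_mut does exactly this: two disjoint &mut views
// into one slice, with the disjointness proven once, inside the stdlib.
fn halves(buf: &mut [u8]) -> (&mut [u8], &mut [u8]) {
    let mid = buf.len() / 2;
    buf.split_at_mut(mid) // unsafe inside std, safe to call
}

fn main() {
    let mut data = [0u8; 8];
    let (lo, hi) = halves(&mut data);
    lo.fill(1);
    hi.fill(2);
    assert_eq!(data, [1, 1, 1, 1, 2, 2, 2, 2]);
}
```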

E.g., the canonical memory-sharing data type is `TVar`, which must be accessed from the `STM` monad [1], guaranteeing both atomicity and data race freedom. (BTW, STM stands for software transactional memory if you aren't familiar with it.) A major offender is `IORef`; fortunately, Haskell provides atomic operations on `IORef`s.

GHC Haskell also offers free parallelism for all pure computations via the `Eval` monad [2].

> Mistakenly instead of processing 1GB per thread, you process the same 1GB on all sixteen threads

This is highly unlikely in Haskell. Haskell emphasizes composable combinators, so a program that does not share memory would probably look like

    fmap (foldr accumulateFunc initValue) . mapConcurrently processBatch . split batchSize $ input
(mapConcurrently here is a real function [3].) I'm sure C++, Java, and Rust have similar abstractions, but imperative languages make it too easy for programmers to do `for` loops, making it much more error-prone.
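For what it's worth, a rough Rust analog of that pipeline using only std scoped threads (rayon's `par_chunks` would be the idiomatic crate choice; all names here are invented):

```rust
use std::thread;

// Rough Rust analog of: foldr acc init . mapConcurrently process . split n
fn main() {
    let data: Vec<u64> = (1..=100).collect();
    let batch_size = 25;

    let total: u64 = thread::scope(|s| {
        data.chunks(batch_size)                                      // split batchSize
            .map(|batch| s.spawn(move || batch.iter().sum::<u64>())) // mapConcurrently
            .collect::<Vec<_>>()                                     // spawn all before joining
            .into_iter()
            .map(|h| h.join().unwrap())
            .fold(0, |acc, x| acc + x)                               // foldr accumulateFunc
    });

    assert_eq!(total, 5050);
}
```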

Several array libraries provide safe parallelism. The interface is similar to Rust's rayon, but the way they work differs greatly. For example, in Repa [4] you can do

    computeP $ traverse inputArray newShapeFn (\getElem curIdx -> ...)
Here `traverse` results in a "delayed" array: the results are not immediately available, and what you are doing is just composing array operations. `computeP` forces the computation to happen _in parallel_. Since GHC is very capable of optimizing intermediate data structures away, this code can compile to very efficient machine code -- there is no mutability at all, yet you get very safe and efficient parallelism.

PS: what Haskell is really bad at so far is safe in-place updates for arbitrary data types (like custom ADTs), which is solved by Rust cleanly. GHC 9.0 added Linear Types [5] to address this problem. Haskell and Rust are both bad at safe in-place updates of arrays, vectors, etc., and are likely to be bad for a long time -- their indices are arbitrary integers, which cannot be analyzed statically at compile time. Haskell probably can solve this problem completely when it gets dependent types [6], whereas Rust is unlikely to support DT any time soon, if ever [7].

[1] https://hackage.haskell.org/package/stm-

[2] https://hackage.haskell.org/package/parallel-

[3] https://hackage.haskell.org/package/async-2.2.4/docs/Control...

[4] https://hackage.haskell.org/package/repa

[5] https://github.com/ghc-proposals/ghc-proposals/blob/master/p...

[6] https://github.com/ghc-proposals/ghc-proposals/blob/master/p...

[7] https://github.com/rust-lang/rfcs/issues/1930

You shouldn't compare C++ templates to Rust polymorphism. The latter is decidable, the former is not. Compare C++ templates to Rust macros.
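A small sketch of that distinction (hypothetical names): a generic is checked once, at its definition, against declared bounds; a `macro_rules!` macro, like a template, is a syntactic recipe whose errors only surface per expansion:

```rust
use std::ops::Add;

// Generic: checked at definition. Without the Add bound, this
// function would not compile, no matter how it is called.
fn sum2<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

// Macro: like a C++ template, a syntactic recipe. Errors in the
// expansion only surface at the use site, per instantiation.
macro_rules! sum2m {
    ($a:expr, $b:expr) => {
        $a + $b
    };
}

fn main() {
    assert_eq!(sum2(2, 3), 5);
    assert_eq!(sum2m!(2, 3), 5);
    // sum2("a", "b");   // rejected by the declared bound: &str has no Add
    // sum2m!("a", "b"); // rejected only inside the expanded `$a + $b`
}
```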

Btw. Who likes C++ templates? They are such a roundabout way to program the compiler that it regularly hurts me. You need to understand a set of arbitrary rules (steered, it seems, by the implementations of the existing compilers rather than by language design) to make the compiler do what you want, across many lines of code. And only fellow template lovers will have a chance to understand your intention. It is admittedly powerful, but compared to any decent macro implementation (Lisp, Haskell, OCaml, Rust) it is so much worse...

I mean, as you could probably tell from my original post, I like C++ templates. The point is not to constantly write templates in your calling code; the point is to architect a library with templates that affords flexibility and dynamism, so that the calling code is easy to write, read, and reason about. Consider, for example, the sol2[0] example usage code vs. the actual source code itself[1].

0. https://github.com/ThePhD/sol2 1. https://github.com/ThePhD/sol2/blob/develop/include/sol/func...

> Btw. Who likes C++ templates?

Saying I like them is a stretch, but I prefer to write a C++ template rather than a macro (C++ or Rust) any day.

C++ macros are completely unlike Rust macros so lumping them together makes no sense.

Templates are not very much like macros. You cannot perform a syntactic rewrite at the point of instantiation that replaces the template use, and they are integrated into the semantics of e.g. overload resolution in a way syntactic macros obviously can't be.

I don't see why decidability should render them incomparable.

> they are integrated into the semantics of e.g. overload resolution in a way syntactic macros obviously can't be.

Exactly, they are macros that can only be invoked via some weird implicit entry points. Proper macros could be used very much in the same way, of course, if the compiler recognizes their output correspondingly. You could, for instance, use macros to generate traits/type classes to enhance overloading. Why someone would prefer this code generation to be so implicit, I don't understand.

> I don't see why decidability should render them incomparable.

Of course you can compare them if you want to point out that particular difference. But if you claim that templates are more powerful, the obvious answer is: Sure, because they're not type safe. Templates simply aren't a type system extension, they're more like macros.

C++ is a fully compiled language, and will not compile code that has type errors (unless you hack your way out of the spirit of the type system). Templates are no exception - they are type-safe once instantiated, because at that point they transform into normal C++, which is type-safe. Templates themselves are not types and are not meant to be; they're meant to be templates, yes, more akin to macros, but not at any cost or deficit because of it. Frankly I don't understand the criticism; I don't care if I get a compiler error during template evaluation or during compilation or during linkage, a compiler error is a compiler error. Code that works works, and code that doesn't doesn't.

> Frankly I don’t understand the criticism; I don’t care if I get a compiler error during template evaluation or during compilation or during linkage, a compiler error is a compiler error.

You might want to consider a setup where the library author is not the same person as the library user. Don't you think it's important which of them sees the error? That's why linker errors, compiler errors, and template errors are not equal. Also, if you make an error in a template or its instantiation, you might not even get an error but an infinite (i.e., eventually aborted) computation. If you don't see the difference between these things, I cannot help you.

> Sure, because they're not type safe. Templates simply aren't a type system extension, they're more like macros.

Could you elaborate on the ways in which C++ templates are not type-safe? Does the upcoming Concepts feature alleviate any of it?

A type safe language has a deterministic, terminating algorithm that can prove the absence of certain errors in a program. For instance, you cannot use integers instead of functions in a well-typed Haskell program, whereas you can make this mistake in a Python program.

C++ templates are not type safe because they need to be evaluated before you can make any statement about their successful, correct application. This evaluation might not terminate. Macros have the same issue, of course, but they explicitly give you a Turing-complete programming language, so it feels more natural.

Templates are just what their name says: recipes to generate types or functions.

They work very well and are quite pleasant to work with.

I don't really see what's so complicated about them.

The complication of templates arises from the fact that you need to know the inner workings of a C++ compiler instead of just a language definition. I know that C++ is specified in a very implementation-oriented manner and I also know that many developers don't see the difference, but design ain't implementation.

If you have proper macros you don't need to know the compiler internals. You have an API that generates code with the language semantics you already know. By definition that's simpler. It's also more powerful, btw.

That is just not true. Compiling C++ is a very complicated task; I'd guess there are fewer than a hundred people in the world with the experience of implementing a standard-conforming C++ compiler, and there are a lot more developers comfortable with templates.

What exactly is your perceived problem? You need to have the template definition in order to instantiate it; that is pretty logical. Is this kind of basic common sense what you call "knowing the inner workings of the compiler"?

Where shall I start? Template specialization order? Recursive template definitions for variadic arguments? Conditional template arguments? std::enable_if ?

Template specialization works under the same rules as overloading with some restrictions (this is actually how it is formally defined).

The first step prior to overload resolution is name lookup, which discards template candidates whose declaration refers to dependent names on template parameters which wouldn't exist for that implicit instantiation.

The only order that matters is what specializations are visible at the time of the instantiation, and adding more specializations after that is more likely than not to lead to ill-formed programs (due to the ODR rule that you must satisfy), so you can assume it doesn't happen.

enable_if, recursion or whatever else you're referring to are merely trivial applications of the rules.

I don't think it's unreasonable to have to understand how overloads are selected in order to program in C++. The semantics were not designed to match an implementation but rather to make the most sense. This has nothing to do with compiler internals; if anything many compilers implemented it incorrectly for decades.

I don't think Rust polymorphism is decidable: https://sdleffler.github.io/RustTypeSystemTuringComplete/

Well, shit. They broke the language. I still hope that a reasonable subset of the type system remains decidable, though.

Re OCaml: For what a C++ template does, you'd use a parameterized module (“functor”), not a macro. The OCaml macro equivalent would be a PPX rewriter, which is a rather heavy-handed approach to metaprogramming.

No. An ML-style functor is well-designed and can be type-checked at the site of definition. If C++ had functors I would be very happy, but functors are not "zero cost" abstractions... maybe they can be in a compiler like MLton, though.

I was referring to the use case, not the implementation details. ML-style functors may not be zero-cost abstractions, but they are a way to implement polymorphism, just like C++ templates. You may be able to achieve that with a PPX rewriter, but I'd rather use a functor.

You realize that C++ templates are just a compile-time Lisp-like language interpreter, right?

I think you're right about the power and expressiveness of C++ templates, and about how "modern C++" is a joy to code in, but I suspect you have the privilege of working in a codebase that does not have much legacy C++ to maintain.

Most large C++ codebases that have been running in production for decades are unfortunately a Frankenstein's monster: a mix of old-style C-with-classes code full of malloc and free footguns, heavily OOP-driven, prefer-inheritance-over-composition era code full of virtual destructor footguns, and the modern, nicer RAII std::unique/shared_ptr memory-management-based C++ code. And a lot of template and macro magic thrown in across each of those eras.

So you can get really productive in modern C++ pretty quickly and it can be an enjoyable experience. But to maintain a large old C++ codebase you are going to have to grok a lot of stuff. So much that I really can't find too many people who can hold it all in their head. I certainly can't.

Finally, I'm not sure if there was a hint of derision about Rust being primarily used for web services, but I think it's a great thing if the next generation of web services is developed in something as efficient as Rust rather than more resource-hungry counterparts like Python or JS. Good for the balance sheet. Good for the environment. Good for the web devs, who get expressive, easy-to-understand code with a lot of guard rails built in.

Until Rust gets the compilation story sorted out, what you save on the computer center gets spent in developer workstations.

Do you mean compile time?

> what you save on the computer center gets spent in developer workstations.

um, that cat is already out of the bag. Unless you work on mobile or web frontend, most backends are big and ugly, have a lot of dependencies, and running them needs ALL the RAM anyway :)

C++ is slow as well.

True, except for the tiny detail that the usage of binary libraries is common in C++ culture.

Building from scratch Gentoo style not so much.

I seldom compile third party dependencies.

You don't recompile those every time.

Indeed, only every time I checkout something from github, update cargo dependencies or switch workspace.

I would not use Gentoo if I had to compile the world to, say, update PHP.

> it always baffles me that most people seem to be using Rust to write ... web services?

I think it's the whole x-rewritten-in-rust-now-faster hype train. Although the limitations you mention (what's easy and what's hard to do) do have a more or less automatic impact on performance, fueling that train.

I think of it like this:

Programmer level 1: Makes something work in a higher level language

Programmer level 2: Makes something work in a lower level language

Programmer level 3: Makes something fast in a lower level language (in most cases just a few simple optimisations on top of #2)

Programmer level 4: Makes something fast in a higher level language (object pooling, understanding how the GC and VM works to avoid GC pauses at the wrong time and all that)

So with Rust (or yes, C++, though it's a bit more of a foot gun IMHO), level 2/3 people can write code that's efficient. What I think is somewhat unique about Rust is that it keeps programmers from making lots of common mistakes working with lower level languages - like it's designed for level 1/2 folks.

Writing web services in Rust is easy, and I don't understand the OP's confusion about it. Web services are highly concurrent and benefit from async, both of which are areas where Rust excels - and Rust looks like JavaScript most of the time when writing these, so it is pretty clear what to do here if you need scale and performance.

Then there's the plethora of other advantages you get from its great type system and inference etc.

I agree completely, it's a very strange objection - Rust is one of the very best languages for web servers. I wish I could use it at work instead of Django.

I think some C/C++ devs take a perverse pride in their language not being suitable to write web servers, because webdev is not real programming anyway (excepting the 'systemsy' parts like nginx).

Well, Rust is great at systems, great at web servers, pretty good at one-off scripts, even pretty usable for frontend. Every time I hear someone pigeonhole it as just a systems language, I think: you're missing out.

It definitely wasn’t meant as a criticism - I think Rust is uniquely suited to writing web applications as well. I just find it funny that writing web apps is easy enough in Rust that it almost minimizes its opinionatedness, since you don’t really need to worry about difficult lifetimes or ownership when it’s JSON in, JSON out, and the database is just an externality, and you don’t have hard performance requirements despite Rust being made for performance. It seems like an impedance mismatch but it works out.

> compared to the completeness and expressiveness of C++ templates.

The error messages in particular - so very expressive.

> the one place I find Rust truly a joy to write is OS dev, and similar low-level projects

That's what Rust actually is useful for. And what C++ should have been used for. It has grown into the abomination it is because people (well, yes, me too ;) abused it as a language for 'everything'. I do actually fear that the same will happen to Rust.

Btw. most people use Rust to write web services because most people actually write web services ;)

> I deeply miss C++ templates in most other languages, excepting fully dynamic languages, and D.

Have you tried Zig? Comptime templating is one of its selling points.


I loooove Zig - my only gripe with Zig is that it feels somewhat arcane at times, and the documentation was borderline absent last I checked. I follow Andrew Kelley on twitter and am fascinated by his exploits. Crystal has a similar macro system that I enjoy, from what I've used of it. Rust macros are similarly satisfying in what you can accomplish with them, but the implementation is so immeasurably difficult, it's almost never worth it to bother, from a general design standpoint.

As far as C replacements go, Zig has my vote for the most approachable but also the most well thought-out. I couldn't sing its praises enough.

Would love Zig if it had RAII.

RAII works fine in Zig

Show me a RAII code example in C++ and I'll show you the Zig equivalent.

Note that in `good` below, one can have many such lock guards (lk1, lk2, etc.) without any explicit lock release expression in the body of the function.

    std::mutex m;
    void good()
    {
        std::lock_guard<std::mutex> lk(m); // RAII class: mutex acquisition is initialization
        f();                               // if f() throws an exception, the mutex is released
        if(!everything_ok()) return;       // early return, the mutex is released
    }                                      // if good() returns normally, the mutex is released

    var m: std.Thread.Mutex = .{};

    fn good() !void {
        m.lock();
        defer m.unlock();

        try f();
        if (!everything_ok()) return;
    }
It's still the RAII pattern. The explicit call to deinitialization does not disqualify it. I can overcomplicate it with a LockGuard class too:

    const LockGuard = struct {
        mutex: *std.Thread.Mutex,

        fn init(mutex: *std.Thread.Mutex) !LockGuard {
            mutex.lock();
            return .{
                .mutex = mutex,
            };
        }

        fn deinit(lg: *LockGuard) void {
            lg.mutex.unlock();
        }
    };

    fn good() !void {
        var lk = try LockGuard.init(&m);
        defer lk.deinit();

        try f();
        if (!everything_ok()) return;
    }
In this case it also demonstrates an improvement over C++, namely what happens when an error occurs during a constructor. In C++ throwing an exception in a constructor is essentially UB; in Zig it's a regular function call just like anything else.

Thanks for the informative snippet, but explicit expressions involving defer are not what RAII means. In RAII, the resource is automatically tied to the object's lifetime.
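In Rust terms, the distinction looks like this: `MutexGuard` releases the lock in its `Drop` impl, so every exit path unlocks without any explicit release expression (a minimal sketch; `good` is a made-up function):

```rust
use std::sync::Mutex;

// RAII in Rust: MutexGuard unlocks in its Drop impl, so every exit
// path (early return, normal return, panic unwind) releases the lock.
fn good(m: &Mutex<i32>) -> Option<i32> {
    let mut guard = m.lock().unwrap(); // acquisition is initialization
    *guard += 1;
    if *guard > 100 {
        return None; // early return: guard dropped, mutex released
    }
    Some(*guard)     // normal return: guard dropped here too
}

fn main() {
    let m = Mutex::new(0);
    assert_eq!(good(&m), Some(1));
    assert_eq!(good(&m), Some(2));
}
```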


Regarding errors in a C++ constructor: https://isocpp.org/wiki/faq/exceptions#ctors-can-throw

Templates are definitely more powerful than Rust generics. And "enterprise" Rust code can easily become so infested with traits that it's as unreadable as C++ template spaghetti. The STL is a special level of unreadable. But "enterprise Rust" might be worse than "enterprise C++" at times.

I am curious, what are some actual things you have done with templates that you couldn't do with generics? I'm pretty firmly in the camp of "most of C++'s strengths over C are bad". So I'm struggling.

I'd maybe say C++ templates are better for math, which is true: Rust generics are not great for complex math. However, Eigen is a nightmare monster that absolutely ruins my compile times, so I'm not sure whether this counts as a win or a loss for templates. Maybe a bit of both.

Rust is still super bad at UI, even if there are some imgui-like Rust libraries doing cool things. I've never seen a C++ GUI library that didn't suck horribly, but at least it was possible.

Traits, macros and crate features actually.

Subtyping is mostly a bad idea so losing that doesn't matter.

(White on black in Courier? Please.)

After a year of writing Rust for a client for a virtual world, some comments.

For this problem, the server is constantly feeding you UDP packets full of changes to the scene, and you're constantly fetching assets from other servers using HTTP. All this has to happen while updating the screen at 60FPS or so. It takes several CPUs and a GPU working hard to show a detailed 3D world properly.

* Safety is a big deal. I haven't had to go into a debugger in the last year. That's a huge win.

* There's a lot of concurrency involved, and being free of race conditions is a big help. Rust does not help with deadlock avoidance, though. That's the next frontier in concurrency - static deadlock avoidance analysis. There are people working on this for Rust, but it's a hard problem.

* Rust has a big problem with certain data structures. In particular, if you have a single ownership tree, you cannot get from a child to a parent. If you want to have backlinks, you have to reference count everything. The language needs something like "single owner with weak backlink, made safe at compile time". That's a hard problem. I keep ending up with refcounts forward and weak refcounts backwards. With refcounts, if you borrow something twice, you get a panic at run time. The compiler cannot catch those at compile time. So borrowing has to be very short term.
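The pattern described in this bullet, sketched minimally (`Node` is a made-up type): strong references downward, weak references upward, with the double-borrow panic left as a comment:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Strong refs down the tree, weak refs back up, as described above.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,      // backlink: weak, or we'd leak a cycle
    children: RefCell<Vec<Rc<Node>>>, // ownership: strong
}

fn main() {
    let root = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(vec![]),
    });
    root.children.borrow_mut().push(Rc::clone(&child));

    // Child -> parent navigation via upgrade().
    assert_eq!(child.parent.borrow().upgrade().unwrap().value, 1);

    // The runtime hazard: a second borrow_mut() while a borrow is
    // live panics at run time; the compiler cannot rule it out.
    // let b1 = root.children.borrow_mut();
    // let b2 = root.children.borrow_mut(); // panics: already borrowed
}
```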

* Rust wants functional programs. If what you're doing fits into that model, everything goes smoothly. When there's a lot of state, it's harder. The state here is a whole 3D scene, so it's inherent in the task. In server side web stuff, most of the state is in the database, so the application programmer doesn't have to deal with maintaining its consistency.

* You can paint yourself into a corner with data structures. Figuring out where you need reference counts, where you need locks, and where you need interior mutability has to be right. And you can run into problems such as "that struct can't have the 'send' attribute because it has an element which, deep down, contains a reference count". Make a mistake and rework is difficult. Data structure design is a puzzle-solving problem.
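A minimal sketch of the Send problem mentioned above (hypothetical types): a struct stops being Send as soon as it contains an Rc anywhere, and the rework is switching to Arc:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Rc is not Send, so any struct containing one, however deeply,
// cannot cross a thread boundary; Arc is the reworked version.
#[allow(dead_code)]
struct SceneRc { mesh: Rc<Vec<f32>> }   // !Send, by containment

struct SceneArc { mesh: Arc<Vec<f32>> } // Send

fn main() {
    // let s = SceneRc { mesh: Rc::new(vec![1.0]) };
    // thread::spawn(move || drop(s)); // error: `Rc<Vec<f32>>` cannot
    //                                 // be sent between threads safely

    let s = SceneArc { mesh: Arc::new(vec![1.0, 2.0]) };
    let handle = thread::spawn(move || s.mesh.len());
    assert_eq!(handle.join().unwrap(), 2);
}
```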

* "Async" has most of the same locking restrictions as threads. It's not like Javascript, where you never explicitly lock. Unless you have some huge number of threads, async isn't much of a win. It's really for the use case of a server with a huge number of slow clients. You probably shouldn't be using Rust for that anyway. That's Go's ideal use case.

I have some arguments with the Rust designers. But really, it's a big advance over C++. It's not a panacea. As I keep saying, do your web server stuff in Go. Go has a more user friendly memory allocation model, a more user friendly threading model, and well-exercised libraries from Google for things web servers do. Rust is for the hard problems.

> being free of race conditions is a big help

Maybe this is a nitpick, but the need to say it suggests you might not understand something important. (Safe) Rust has data race freedom, not freedom from race conditions. The most likely reason to muddle those is not knowing the difference, which matters because race conditions are an unavoidable phenomenon in computer systems (and indeed the real world), whereas data races are something very strange and difficult to reason about.

If you put the cat out the front door, then walk to the kitchen and close the kitchen door, you can't be certain the cat is still outside, it might have run around and back inside. That's a race condition. Rust doesn't inherently care about those, so if you do then you need to program carefully to handle them. Data races aren't like that, they don't have a clear real world analog, perhaps they're as if you could discover that sometimes while you're putting the cat outside, somebody else is simultaneously putting the same cat inside your house - except, that doesn't make much sense. Nor do data races, which is why it's important that Rust prevents them while many languages just make that your problem.

> Rust has a big problem with certain data structures. ... The language needs something like "single owner with weak backlink, made safe at compile time". That's a hard problem.

GhostCell and its variations (QCell, LCell etc.) are meant to address this without resorting to Unsafe Rust. But it's a bit of a hack so far, and yes it would be nice to have a more elegant, language-level solution.

GhostCell is interesting. I need to try that in some test cases. Apparently you can make data structures with backlinks, but the whole structure may be immutable once made. You need to change at least two links as an atomic operation for that. It's hard to express that via a type system hack.

Then there's the problem of you got there by following links, and now you want to mutate something. Chasing links is fine if they're read only, but if they're mutable, you have to make sure you don't have a loop of mutable references. Again, this is a non-local analysis kind of problem.

Go and try it. I did, and productivity is much higher than in C++ for me.

For one, pulling down a package is a one liner. It just works, and I haven't come across a special one that does its own thing yet. The ecosystem seems pretty complete, too. I've yet to have to write my own component: everything from serialisation to async runtime, queues, config parsers, rate limiters, hmac, logging and so on is there.

The big thing though is debugging. In C++ it is possible to write bugs that manifest in very strange ways when you run the program. It eats up days to debug, and you're never sure if it's actually solved, because there's a large surface that might contain the bug. With Rust, it will complain at you at compile time. This can be confusing, but you'll figure it out eventually and then it's fixed.

All those crates are actually the main thing bothering me about Rust. I've seen some fairly trivial code bases that pull down north of 1 GB of barely necessary dependencies (a couple of wrappers and utilities).

It's not quite as bad as NPM land where some people add a dependency for left padding a string, but dunno...

Probably a fairly unpopular take, but I kinda always liked that about C/C++ code bases (that avoid hidden dependencies): It makes you realise the cost of dependencies :P

So while Rust eliminates a lot of foot guns from C/C++, IMHO it introduces a major one.

Pay attention to cargo features for crates. (Those are flags for crates, which also affect the dependency set needed by the crate affected.)

A lot of crates have a bunch of selectable features. Their defaults usually lead to many dependencies needed. If you carefully select those, you can cut down on dependencies a lot.
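For example, a hypothetical Cargo.toml trimming default features (the crate names are real, but the version and feature picks here are just illustrative):

```toml
# default-features = false drops the crate's default feature set;
# "features" then opts back in to only what you actually need.
[dependencies]
serde = { version = "1", default-features = false, features = ["derive"] }
tokio = { version = "1", default-features = false, features = ["rt", "net"] }
```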


Great tip! I wasn't quite aware; basically like compile flags with C/C++ deps (implemented somewhat similarly in Gentoo, I'd say).

A recent update also made the resolution of such features smarter in the case that the same dependency is used by multiple dependencies, so the size has been reduced. There's also tooling for investigating your dependencies, automatic git bots to make suggestions, etc.

Built into `cargo`, I'd recommend `cargo tree`, `cargo tree -i` and/or `cargo tree --duplicates` to figure out where certain types of dependency tree bloat are.

There's tools that show you the dependency tree, forwards and backwards. I was pleasantly surprised that it existed and was so simple.

I've seen cpp codebases where people just copy paste the libs because they don't like struggling with cmake. Then there's boost, which is huge. Maybe a bit less relevant in recent versions but still, once you need it...

From a security perspective you may have a point, if it's security critical maybe think about that. But then also security benefits from rust in other ways.

My main focus is ergonomics: how simple is it to grab a lib, how often do unforeseen issues happen? With CPP I just get this feeling every time that this is a cave entrance, and adding this lib that I know I need might just eat up a whole day of googling. It doesn't happen every time of course, it just has happened enough times that I'm fearful.

With Rust it hasn't happened yet. Something in general about it is that errors seem to make sense. You get a message, and the hint is an actual hint that helps solve the problem. This goes for using libs as well as the language in general.


I'm more and more finding myself either looking for less-popular and more minimal crates (that don't need async, or tens of other further deps), or even writing my own things for certain simple stuff (the latter often reinforced by wanting to experiment and learn).

I'm more concerned that many 'critical' (because there's nothing like them in the standard library, e.g. 'rand', 'chrono') and well-recommended crates are still not at version 1.0.0, so technically they could change the API... (they won't/can't, but it seems a very odd situation to me).

What makes you say they can't?

Well, I'm assuming it would break a lot of code...

That's what semantic versioning is for?


Pretty much everyone's using chrono and rand, as there's not really anything else...

I'm not sure what that link is supposed to mean in this context. Yes, they should have released 1.0 sooner. That doesn't change the fact that they may want to make a breaking change in the future and bump the major version number.

Tokio is the de facto async runtime, used by many (though not as many as rand and chrono surely, but still), and they've bumped major versions. Some projects still use old versions, which is actually much more problematic since multiple async runtimes usually can't exist simultaneously, but things are still mostly fine overall.

It probably needs some more time to catch up with NPM...

I don't think that will happen. Rust is much harder than JavaScript, so anyone who wants to write dumb one-liner packages is probably going to still be writing JavaScript and anyone who has managed to learn Rust will already be good enough not to need them.

Unlikely to ever happen.

Learning Rust is a major time investment. Basic Javascript? Not so much.

They'll just draw different crowds.

Comparing Cargo package management to NPM is just silly. You can pull all the crates from crates.io and scan them for memory unsafety with Cargo and Rudra. I don't think anything similar is possible to do with C++ for example, let alone JS.

The alternative would be to arbitrarily slow down library development by including them in the standard library, which is silly for many reasons (i.e. it would hurt the ecosystem, it would hurt the actual library development, and it would hurt std development itself).

npm automatically does a "security audit" when you install packages. it's not a panacea, but a pretty big help.

The reason you typically rewrite all those things in C++ is because they're performance-sensitive, and for performance you need control.

There are also correctness concerns; it's easier to ensure code is correct if you write it than if you take it from the Internet.

How long ago did you change, and what kind of software are you working on?

Most of a year intensively, was keeping an eye for a long time before.

Real time multiplayer game side project, and a trading application.

So stuff that needs at least some performance and garbage collection would be something with a real cost.

I can't help but think that with the way modern languages/programming are going, we're making a rod for our own backs!

Don't get me wrong, I've read about Rust and its capabilities vs things like C++ and yes, I can see benefits. That's not what I mean.

It's in 10 years when we clone a repo on Github and it pulls down 2000 dependencies: will they all still work?

Everything nowadays seems to require massive dependencies!

I know we've long used libraries for different bits of functionality rather than reinventing the wheel but it just seems like we're giving up masses of control/agility for convenience that may kick the shit out of us later!

Just a thought...

> in 10 years [...] will they all still work?

Most likely no. This is why Rust has its own repository of immutable packages on crates.io. Depending directly on Github is indeed a terrible idea.

> If you are a systems programmer, if you are used to C and C++ and to trying to solve systems programming types of problems, Rust is magical, just like when you learned your previous favorite programming language.


> If you are not, Rust is overkill for your task at hand and you shouldn’t be using it. I earnestly recommend Haskell.

That's... unusual advice. There's a lot to be learned from Haskell, but if I was recommending languages for the application space, there are other languages I might mention first.

Haskell has the downside of being lazy and thus very unintuitive. OCaml, with multicore might be a better choice, if the community gets a tad more vibrant.

But in general, I get the sentiment. For high-level projects, use a high-level language. And if you look at the sad state of JS, Python, et al., there is plenty of room for improvement.

Maybe it would even work if someone strapped Rust's type system on go or go's GC on Rust.

> Haskell has the downside of being lazy and thus very unintuitive.

Haskell uses monads for lots of stuff, which can be unintuitive too. From what I've seen, multicore OCaml will allow you to write programs in direct-style (instead of monadic-style) and still be async, which would be great. I hope multicore OCaml will be that "garbage collected Rust" that many people seem to want.

I had to (ask my team to) port a system implemented in Haskell to Java because we couldn't find knowledgeable developers (people who knew Haskell + NLP). This was around 2006-7, not sure if it changed since.

Any important codebase should rely on a language for which sufficient talent and community support (StackExchange...) exists, otherwise you add yet another risk to your project.

Wait a second. Your team was capable of porting from Haskell but incapable of maintaining an existing Haskell code base? Are you sure they just didn't want to use Java because of their own personal reasons?

I agree with pretty much every point besides the author's desire for significant whitespace. Please no. Whitespace is for people and readability, not the compiler.

Significant whitespace is one of the things that irks me the most about the likes of Python, I'll take C-style braces over significant whitespace any day.

I actually prefer the Rust approach here as well, removing the single-statement shortcut makes sense, as all it does in my view is cause issues, and I have it baked into my code styles for it not to be used.

Yes, that is the sane choice.

Seeing this made me realize my own honeymoon period with Rust is fading, if not already over.

What in the article indicates that Rust deserves only a passing attraction? Or more generally: ...why?

I've been using it since around 2015 and still loving it. I haven't earnestly tried to use it for a frontend project, but I've used it in many other types of projects and found it very enjoyable.

If I interpret you correctly you've switched to Rust and then after a while disliked it?

Would be very interesting to hear your experience. Most of these articles like OP, come off exactly what you say: just switched, and now a big fan.

Not everybody whose Honeymoon is over immediately gets divorced.

You might go from believing that you've found somebody flawless to discovering that actually two weeks with them on a sandy beach or in a hotel room was maybe more exposure to their specific quirks than you are OK with, and you're quite glad to spend a few hours every day not with them - without that meaning you wish you were with somebody else instead.

For example, you might start out very happy that Rust's assignment operators have no value, so lots of trivial C and C++ mistakes can't happen. But as the honeymoon wears off, you realise two things: firstly, that once in a while it would be convenient for an assignment to yield a value, so while on the whole it's better to be rid of that, you do miss it; and secondly, that some other mistakes are needlessly still present. Why didn't Rust fix those too?
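A small sketch of what "assignment operators have no value" means in practice: an assignment expression has type `()`, so the classic C mistake of `if (x = 5)` can't type-check.

```rust
fn main() {
    let mut x = 0;
    // In Rust, assignment is an expression of type `()`, not the assigned
    // value, so chaining or testing an assignment is a compile error.
    let unit: () = (x = 5);
    assert_eq!(x, 5);
    assert_eq!(unit, ());
    // let y: i32 = (x = 6); // would not compile: expected i32, found ()
}
```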

The more I see these kinds of articles, the more I am convinced that Rust is the future. Coming from a non-technical background and being a Python developer, I wanted to learn Rust but am struggling to get comfortable with it. The main reason is that I am not in a position to write anything practical with it apart from going through the book and reading tutorials. Gotta put more wood behind this one. I hate to speculate, but will most of the modern stack be Rust, Python, and JS?

No.. there is no "language x is the future". There are many languages for many purposes. Learn multiple, see what differentiates them. All of this hype needs to end.. seriously. Terms like "modern stack" should be forbidden.

This all reflects in job descriptions. They just mention a ton of buzz tech, no matter whether they actually use it, or whether you would actually use it in the job. No description of what you will actually do. This focus on the tech stack is superficial.

I agree with you. Sometimes it's just so hard not to use the tech buzz words.

There’s an idea that you should work in the highest level language you can get away with for the task at hand.

E.g. if you were writing a device driver, you might try to choose c over asm wherever you can. If you were writing an enterprise app, you might choose java over c++. Etc etc

The idea is that you don’t want to give up productivity boosters like garbage collection unless you’re forced to.

But it’s just an idea and not everyone buys into it. Otherwise we’d all be writing in the ultimate programming language (lisp).

That's an interesting take. Reminded me of this https://twitter.com/brettsky/status/1455600734813052930?s=20

I think python is probably the more pragmatic choice of ultimate programming language - certainly easier to hire python talent than lisp talent.

I agree with all 4 of those points in that tweet.

You can do some pretty twisted things from the comfort of Python, e.g. years ago I was able to create an LD_PRELOAD-able shim in Python: https://github.com/CraigJPerry/pyshim

> Otherwise we’d all be writing in the ultimate programming language (lisp).

I like Lisp, but I'd argue that languages like Python and Ruby are higher level. Also easier to read.

Yeah, I think I have to agree with easier to read - although now that I'm used to Lisp syntax, it's less clear cut than it used to be for me. I really like Lisp when it's written in an immutable style; it becomes really easy to divide and conquer a code base to understand only the bit you care about editing. But it's not an absolute thing - I still have reservations about macros, even though I've personally not yet been bitten.

Higher level - I think that has to go to Lisp, and it's because of macros. You can write your own programming language; Lisp is more like a toolkit for creating the ideal language for solving your problem in that regard.

I mean, I don't find Lisp hard to read, the parens disappear in my mind and it looks almost like Python-esque white-spaced code if you space things correctly albeit with weird function names. But it is at a lower level than Ruby/Python if you don't use macros, to make it "higher level" you use macros, but at that point you're in DSL territory, you can always just add more abstractions to any language really and write DSLs in plenty of languages (something like Rails which uses a bunch of metaprogramming could almost be called a DSL). Lisp IMO is in a strange spot where idiomatic enterprise-y Common Lisp is between Java and Ruby/Python, Lisp + Macros or something like Clojure is higher level. I think Ruby/Python are winning almost just because of their usage of readable and common sense English words.

> The problem? Convenience. Who wants to type std::unique_ptr<Foo> when instead you can write Foo *? Why are the somewhat-deprecated options the easy ones to write? Why isn’t it something like std::raw_ptr<Foo> with some convenient notation for std::unique_ptr?

It would be a terrible idea for Foo* to mean std::unique_ptr<Foo>. If you are typing out "unique_ptr" so many times in your C++ code, you're probably doing something very wrong. The C++ unique_ptr controls memory ownership, and you typically only want one owner of a resource. Sometimes it is necessary to transfer ownership of the resource, but it is (or certainly ought to be) a rare occasion in comparison to how often you need to access the resource. I could understand if you are passing shared_ptr around all over your code (although I wouldn't recommend that), but unique_ptr??

Also, raw pointers have not been deprecated in C++. Raw pointers are not a problem when you get them from a unique pointer. Raw pointers can be a source of errors when a single pointer is used to index into multiple objects, but for this C++ now has std::span<T> so you can avoid most of those common errors.

> Who wants to type std::unique_ptr<Foo> when instead you can write Foo *?

struct Foo { using Ptr = std::unique_ptr<Foo>; };

Foo::Ptr foo;

Codebases that do that are very rarely const correct, however.

(Personally, I don’t think typing unique_ptr<Foo> or unique_ptr<const Foo> is such a big deal. But then, I also consider having “using namespace std” the only somewhat sane way to use the STL.)

> But then, I also consider having “using namespace std” the only somewhat sane way to use the STL.

I hope you never ever do that in a header :)

+1. In modern C++ you can use auto most of the time anyway :)

The thing about auto compared to Rust's inferred types is that when Rust isn't sure it won't infer anything and punts the problem back to you.

  let sandwiches = inventory.filter(delicious_cheeses).map(make_sandwich).collect();
... won't compile because Rust can't infer what sort of container sandwiches is. Is it a vector? A list? Something custom?

When C++ can't be sure in some places auto is obligatory anyway, so, too bad, you get what you're given. There are rules for what you get, but you might have reasonably intended something else, and so the effect is surprising.
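For instance (a hypothetical numeric stand-in for the sandwich example above), annotating the binding or using the turbofish resolves the ambiguity explicitly, rather than the compiler picking a container for you:

```rust
fn main() {
    let inventory = vec![1, 2, 3, 4];
    // collect() alone is ambiguous; annotate the binding...
    let evens: Vec<i32> = inventory.iter().filter(|&&n| n % 2 == 0).cloned().collect();
    assert_eq!(evens, vec![2, 4]);
    // ...or use the turbofish to say which container you want.
    let evens2 = inventory.iter().filter(|&&n| n % 2 == 0).cloned().collect::<Vec<i32>>();
    assert_eq!(evens, evens2);
}
```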

Angle brackets don't look good, but explicitly stating that the pointer is a unique_ptr rather than forcing the reader to figure out where Foo::Ptr is defined (presumably a different file) in order to look up the actual type has a lot of value.

using FooPtr = std::unique_ptr<Foo>;

Does the job in an idiomatic way.

I think you forgot to say typedef.


CTAD also allows you to write

   std::unique_ptr foo = std::make_unique<Foo>();

> I plan on writing several posts about Rust features, why they’re an improvement upon C++ features, and why Rust is a better, more modern programming language.

I’m always impressed that Rust turns people into such devoted evangelists after relatively little experience with it. This is a testament to its quality.

When I see someone very devoted to a programming language, I see it as a sign that they might have problem with their perspective, not as a testament to the language's quality.

Rust might be great (I don't know) but this aspect of the community makes me less likely to try the language, not more.

I think a lot of people don’t realize that for people at the bottom of the stack Rust is almost a revolution.

We’ve had basically no alternative other than C, with a lot of failed attempts at half-supported C++. A lot of us suffered the horrible experience of working with severely lacking C++ compilers in the embedded space. Unless you were working with ARM or on top of Linux, C++ was a bad idea.

C++ on bare-metal microcontrollers works quite well. I've done that. You have pretty much the same restrictions as C or Rust in that space, even though they look a bit different due to how C++ works. But then again, you also need to learn which parts of C or Rust you can use in that environment.

Let me guess, you’ve done it with ARM microcontrollers.

Pretty much until ARM, support was terrible.

The codebase was originally written for V850 and quite mature by the time I joined. It was ported to ARM.


(I didn't care a bit about this aspect when I approached Rust.)

Because a community with a strong devotion to a new tool is evidence of a strong reality distortion field and the reality is then decidedly less rosy than the followers of the new creed proclaim.

> is evidence of a strong reality distortion field

How so?

The reality is that programming with a language like Rust or C++ takes a higher level of expertise, those are more sophisticated tools than web technology.

You need to invest a lot more to use the tool well, but then the tool can do a lot more.

That just sounds like snobbery; why even compare them at all.

> but then the tool can do a lot more

Can it? How do you measure that Rust/C++ do more than JS/TS given that they're used to solve completely different problems?

Rust/C++ are not specific to certain types of problems, they can do anything.

That's a whole paradigm shift from using languages that are only fit for a given application domain.

When reading the Rust Book and coming from a C++ background, it feels like every chapter brings a solution to an issue you have encountered previously. So yeah I think this gives C++ programmers hope to not lose too much hair anymore! (C++ made me bald the first 5 years!)

If JavaScript frameworks taught us anything, it's that people tend to fall for "new and shiny" much more willingly, than for anything related to quality.

I'm not too sure that's still the case in the javascript world. React is coming on to 10 years and still remains the most beloved framework.

That’s not really what was happening in the JavaScript world though.

It was a new ecosystem trying to figure itself out, and it takes quite a few attempts to do that. Both react and vue are near 10 years old, express is over 10 years, so it’s not “new and shiny” chasing.

Rust will go through the same thing before the dust settles.

Rust is not that new and shiny anymore.

Maybe that’s why they named it Rust. It sounds old and boring.

For a bit of trivia, I believe it’s named after a fungus.

It has a new edition every year though :)

No, it does not. 2015, 2018, 2021. The 2021 changes were so small that few projects actually required any changes to be compatible coming from 2018.

The 2021 changes could be applied with an autofix tool - “cargo fix --edition”.

Most of these changes are small paper cuts that make sense to fix, but they’re breaking changes so they need to be opted into explicitly.

For example, earlier, using a field x of struct b in a closure would capture ownership of the whole of b. In Rust 2021, the closure will only take ownership of b.x. This makes sense, but it's a breaking change and therefore needs an opt-in.
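A sketch of that 2021 disjoint-capture change (struct B here is a hypothetical example): the closure captures only b.x, leaving b.y usable afterwards.

```rust
fn main() {
    struct B { x: String, y: String }
    let b = B { x: String::from("x"), y: String::from("y") };
    // Rust 2021: the closure captures only the field b.x it actually uses.
    // Under the 2015/2018 editions it would have moved all of b, and the
    // use of b.y below would not have compiled.
    let take_x = move || b.x;
    println!("{}", b.y); // fine in the 2021 edition
    assert_eq!(take_x(), "x");
}
```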

The Edition system is one of Rust’s biggest strengths IMO. Some languages never make breaking changes, even small ones, because it would break too much code. Other languages make breaking changes but expect users to migrate, placing a burden on them. The Rust edition system leaves it up to the user, on a per-crate (package) basis. Staying on the old edition is perfectly acceptable, as is moving to the new one.

Further reading - https://doc.rust-lang.org/edition-guide/rust-2021/

*every three years, actually. The current one is 2021, the one before that was 2018 and the (implicit) first one 2015.

I think that's at least somewhat particular to the web, though.

Plenty of 'C++ people' have been doing C++ for decades.

It's the first new language I've learned (in let's call it 30 years of programming, not counting Turtle Logo or self-taught BASIC) which made me want to rewrite my programs.

I quite like several languages I learned, Java, Python, even C# is rather nice in some ways, others didn't make a dent, Scheme I had actually forgotten until I was asked about old Scheme code I wrote, ML was nice but too abstract for me... I am learning PowerShell at the moment for some reason, I can't say I'm fond.

However I never had any inclination to go back and rewrite software. The software already works, why change language? But I've already spent a bunch of time doing that in Rust because it's so much nicer than the C I wrote a lot of software in previously.

It's not just the language itself, which is pleasant, the tooling is excellent as well. I spent part of Sunday pair-debugging a friend's Perl AoC solution (it worked for the test input but not real inputs, that's a fun story) and they lamented that early on they'd forgotten to have revision control. When you ask Rust's cargo for a new project, it automatically creates you a git repository (if git is available) for the project, and if you didn't say you're making a library it gifts you a "Hello, world." example program to start with.

The error messages are excellent. The developers actually put effort into further refining the already excellent error messages. After a long day writing SQL Server queries for which the response is typically "Syntax error" if you got anything wrong like this is still 1975, it's really nice to write a line of Rust that's wrong and have the compiler explain why it's wrong.

The documentation is lovely. The official documentation for the standard library is in the exact same style as auto-generated docs for your own code. Because why wouldn't you do that? Java more or less did this and I appreciated it then, but writing such docs for Java is much more painful, and yet the results are less useful. C# is similar. I believe Swift got nice documentation behaviour in a recent version but I don't know Swift.

That’s a very kind interpretation.

"The problem? Convenience. Who wants to type std::unique_ptr<Foo> when instead you can write Foo *?"

I think this is an imagined problem. If you use Foo * for ownership, you will need to write new Foo() + delete foo. I don't really think that's more convenient than just writing std::make_unique<Foo>(). It's more a matter of old habit, I think.

Still, if C++ was designed as a new language today, I think there's a chance that unique pointers would have made their way into the native language specification, together with for instance std::tuple, instead of being library features, like they are today.

Apparently the author never heard of C++ modules.

> Apparently the author never heard of C++ modules.

I've been programming C++ professionally for 15 years. My work currently runs on Windows, macOS, and Android. With a little Linux on rare occasion.

I've been hearing about C++ modules for, I dunno, 7 years? Thus far they have had literally zero impact on both my job and my decision matrix. I do not anticipate they will actually be something I can use until at least 2024. I will be mildly surprised if C++ modules are in broad use in 2026.

I would have said similar things about Rust until recently. I definitely can't replace all my C++ code with Rust. But I can use Rust to write new CLI tools and write certain libraries.

Just like I happen to use modules in 2021 with Visual C++.

Also, clang has supported Apple's/Google's point of view for quite some time with header maps...

That's cool I guess. Modules do have partial support in all the major compilers. I'm not sure what that venn diagram looks like.

If you wanna argue that modules are ready for primetime and businesses should start converting their codebases to use them then go for it. But it's a pretty obvious fact that almost no one is using modules outside of hobby projects. I don't think I've seen a single significant C++ based project linked on HackerNews that uses modules. I'm sure a few did and I didn't notice.

Header files in C++ are hugely problematic. It's why the module system was designed. That system is not ready to actually be used by the vast majority of C++ programmers on the vast majority of C++ projects. Even if you happen to be using it right now in your hobby projects or even your job.

I hope someday modules are ready and make my life better. It'll be a few years at bare minimum. I'm not fully convinced C++ modules will be standard convention 10 years from now. I think there's a real chance they wither and die on the vine. I hope not! But it's definitely a possibility.

Just as much as I would be arguing that Rust is ready for all kinds of C++ workloads.

We were discussing a specific claim you made that was unreasonable. No moving the goal posts to unspecific claims that haven’t been made yet!

I am not moving the goal posts at all; you're the one implying I was arguing that C++ modules are ready for prime time.

Well they are as ready for prime time, as Rust is for a large majority of C++ use cases.

The goal posts have moved too far for me. Good luck.

Even though I've heard of them, I'll believe it when I see it usable in practice. Just like "export template".

Same goes for the gcc maintainers


Granted, it is not yet all there, but so it is with ISO-driven languages.

This article conveniently ignores that C++ modules exist, and that forward declarations are necessary in many scenarios.

Then the rest of the article is just nonsense, complaining that you have multiple ways of doing something and the like.

C++ modules "exist." They exist in the standard, there are experimental implementation in compilers and there is little to no support yet from build systems (including cmake).

I sincerely hope that it turns out OK, but it's not something usable in production yet.

They work quite well on Visual C++ and MSBuild.

Good to know, I hope other toolchains catch up soon.

By that standard the whole of Rust is experimental, and therefore not usable in production.
