Rust is more than safety (steveklabnik.com)
242 points by anp on Dec 28, 2016 | hide | past | web | favorite | 292 comments

For me, Rust is a return to performance without compromising on high-level abstractions (e.g. iterator methods). Rust is very expressive, yet it has your back on both performance and safety. Honestly, I feel safer writing Rust than I do Java. Even Java has its footguns, especially with mutable state being shared across threads. I can't really comment on C++ as I have never written it professionally.

Rust not being a garbage-collected language leaves it free to focus on more important problems as well. Not knocking the difficulty of implementing the borrow checker, but once you have GC in a language you spend forever tuning it to fit your users' workloads. Unless you're Go, where you optimize for latency at the cost of CPU, but that seems like a very acceptable tradeoff to make given how Go is most commonly used.

I've never seen a single language with so much promise for both systems programming and service/application development.

Edit: Forgot to mention, every time I tried to get into C++, I was always reminded how good dependency management is in Java. Rust is even better than Java in that realm; Cargo is an amazing piece of software. In that context I like Rust for the same reason I like Go: opinionated tooling. I might not always agree with the decisions, but at least it's opinionated and easy to work with.

> Honestly, I feel safer writing in Rust than I do Java. Even Java has its footguns, especially with mutable state being shared across threads.

Apart from big differences between Java and Rust like memory management (not just GC vs. lifetimes but also stack allocation), approach to threading, and Cargo vs. Maven, Rust is so much nicer in the seemingly little things like:

* UTF-8 everywhere instead of UTF-16 everywhere.

* Unsigned integers.

* Bytes from I/O being unsigned by convention.

* Ability to bake plain old data into the data segment of the executable with genuinely no run-time initialization.
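The last point can be sketched with a toy example (names are hypothetical, just for illustration): a `static` of plain old data ends up in the binary's read-only data segment, with no run-time initialization at all.

```rust
// A lookup table baked into the executable's data segment: the bytes
// are mapped in with the binary, with no run-time initialization.
static CRC_SEED: [u8; 4] = [0xDE, 0xAD, 0xBE, 0xEF];
static GREETING: &str = "hello"; // UTF-8, also stored in the binary

fn main() {
    assert_eq!(CRC_SEED, [0xDE, 0xAD, 0xBE, 0xEF]);
    assert_eq!(GREETING.len(), 5);
}
```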

If you're in a stateless server environment, GC almost never needs tuning. Stateless servers are close to ideal for a generational GC. Caching can muck things up, but if you implement that at a higher level, you're in a good place.

That's fair, so long as all of your requests have similar allocation profiles. I work on a service whose requests vary in complexity, from basically free (tens of KBs) to tens or hundreds of MB of allocation. G1GC with a low pause target eliminated a lot of the necessary tuning, I'll admit. This could also be a "grass is greener" mentality, but I do believe that the upfront development effort of managing memory as you go, versus deferring to GC, leads to more predictable results. Of course, if your algorithms suck, nothing will save you. :P

And when you start managing memory you will discover that GCs are more powerful than you thought, because your simple memory allocator cannot beat 20+ years of engineering in the JVM.

GC enthusiasts say this, but I have never seen it actually be true.

The reason is that whoever wrote the GC for your language has to solve an extremely general problem for an extremely large body of users with very different use-cases.

A memory management system for a particular program only has to solve the problems of that program, which is a tremendously simpler thing to do.

General-purpose GCs are like the F-35 or the Space Shuttle: due to the broad nature of the demands on them, they are very complicated, much more expensive, and perform more poorly compared to specific solutions.

>>> A memory management system for a particular program only has to solve the problems of that program, which is a tremendously simpler thing to do.

I don't like to sound rhetorical, but writing a memory management system for anything non-trivial (that is, any scenario where you both allocate and deallocate memory) quickly becomes a difficult problem.

Besides, could you give us an example? I'd like to know about it. Mine is a video game where we were allocating and deallocating thousands and thousands of small objects. Soon you end up with memory so fragmented that you can't allocate a big chunk...

That's almost the canonical example. Say for a particle system you use a fixed-block allocator whose block size is the size of your particles. Now your allocations are quick, since there's no fitting to do, and memory doesn't fragment, since the blocks are all clumped together.
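A toy Rust sketch of such a fixed-block pool (all names hypothetical; a real engine would do this with raw memory, but the free-list idea is the same):

```rust
// Toy fixed-block pool: every slot holds exactly one Particle, so
// allocating is popping an index off a free list and freeing is
// pushing it back. No fitting, no fragmentation among particles.
struct Particle { x: f32, y: f32 }

struct ParticlePool {
    slots: Vec<Particle>,
    free: Vec<usize>, // indices of unused slots
}

impl ParticlePool {
    fn with_capacity(n: usize) -> Self {
        ParticlePool {
            slots: (0..n).map(|_| Particle { x: 0.0, y: 0.0 }).collect(),
            free: (0..n).collect(),
        }
    }
    // O(1): pop a free index.
    fn alloc(&mut self) -> Option<usize> {
        self.free.pop()
    }
    // O(1): push the index back; the slot gets reused as-is.
    fn dealloc(&mut self, idx: usize) {
        self.free.push(idx);
    }
}

fn main() {
    let mut pool = ParticlePool::with_capacity(2);
    let a = pool.alloc().unwrap();
    let _b = pool.alloc().unwrap();
    assert!(pool.alloc().is_none()); // pool exhausted
    pool.dealloc(a);
    assert_eq!(pool.alloc(), Some(a)); // slot reused, no fragmentation
    pool.slots[a].x = 1.0;
}
```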

Another common trick in video games is a single-frame arena allocator for objects that are transient from frame to frame: you just move a pointer forward to allocate, and reset the pointer at the end of the frame.
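A toy sketch of that single-frame arena (again, names hypothetical; a real one would hand out typed pointers, but the bump-and-reset mechanism is the whole trick):

```rust
// Toy single-frame bump arena: allocation just advances an offset into
// a fixed buffer; reset() at end of frame rewinds the offset, freeing
// everything at once.
struct FrameArena {
    buf: Vec<u8>,
    offset: usize,
}

impl FrameArena {
    fn new(size: usize) -> Self {
        FrameArena { buf: vec![0; size], offset: 0 }
    }
    // Bump-allocate `n` bytes; returns the start offset, or None if full.
    fn alloc(&mut self, n: usize) -> Option<usize> {
        if self.offset + n > self.buf.len() {
            return None;
        }
        let start = self.offset;
        self.offset += n;
        Some(start)
    }
    // End of frame: everything is "freed" by rewinding one pointer.
    fn reset(&mut self) {
        self.offset = 0;
    }
}

fn main() {
    let mut arena = FrameArena::new(64);
    assert_eq!(arena.alloc(16), Some(0));
    assert_eq!(arena.alloc(16), Some(16));
    assert_eq!(arena.alloc(64), None); // would overflow the frame budget
    arena.reset();
    assert_eq!(arena.alloc(64), Some(0)); // whole buffer available again
}
```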

> I don't like to sound rhetorical, but writing a memory management system for anything non trivial

Writing a generic memory management system is non-trivial.

Designing and implementing a memory management strategy with an understanding of the application semantics--especially when you can modify the semantics, or enlarge or narrow the relevant context, as appropriate--is much easier. Trivial in many cases.

Writing a GC for the JVM is difficult because 1) the JVM has far fewer degrees of freedom, and 2) it has to solve a much more difficult problem. All those awesome-sounding automatic optimizations JIT compilers can implement pale in comparison to the overall difficulty of the underlying problem.

By way of analogy, look at formal software verification. The underlying computational complexity of the algorithms is extremely high. The only way to make it work is to have a human refactor the big problems into many smaller problems, including smaller problems that encapsulate other smaller problems. Only humans can do that, and it will stay that way for quite some time.

Manual memory management is the same way--a human can do better than a generic collector because only the human can refactor the program. Only a human can say, "this part is too complex, with too many lifetime and ownership interdependencies, so I'm going to refactor it so it's easier to use a more elegant memory management strategy". Generic collectors like those in the JVM can't do that--they have to work with the software they're given.

If you're having trouble with memory management, perhaps it's because you haven't yet figured out that you don't need to play by the same rules as an automatic garbage collector. And you certainly don't need to be, nor should you be, restricting yourself to a malloc/free-like interface.

That's an odd example, given that most video games are written in manually memory-managed languages for performance reasons. Typically an allocation pool would be used for that case, because there's some bound on the number of simultaneously allocated objects and each object has a homogeneous memory footprint.

It's also odd because you're replying to a well-known game designer.

That said, there are allocation-heavy workloads for which GC is well suited.

>>> It's also odd because you're replying to a well-known game designer.

Argument from authority! :-) I worked on optimizing the Outcast game's 3D engine (ok, that was around 20 years ago), and I clearly remember other devs on the team struggling with memory allocation.

In the end, they sorted it out without resorting to a full-fledged GC. But since our case was simple (loading resources to build a world, then releasing resources that are no longer useful), I concluded that it's not easy in general.

In the end, it all depends on the nature of the problem, and one can optimize for any of them (as is customary in video games). I know that.

If I know my allocation patterns I can guarantee my arena/fixed block allocator will beat the JVM, every time.

Nothing is absolute, there are always tradeoffs. If everything was a one-size-fits-all solution then Software Engineering wouldn't exist as a career.

Rust's allocation strategy beats the JVM hands down in every benchmark. 20+ years of engineering cannot cover the flaws of VM and GC languages. Writing efficient stack-allocated software will always trump GC/VM solutions.

No, nothing can beat manual memory management in performance. GCs don't have a clue where and when they should make which trade-offs, but programmers do.

GC can surpass the performance of manual memory management in certain situations, assuming that the manually managed code is forced to allocate for some reason. GCs almost universally use far more memory than manual schemes, though (I'm having a hard time imagining a counterexample).

That depends on the allocator you are using. Rust bundles jemalloc by default, which organizes deallocations/allocations such that they are very efficient.

You mean a select few programmers do. And even if you've struck gold and all of your programmers are from the 90's, so that they know what it means to manage memory manually, the landscape - the code - always changes, and so will the trade-offs. The deal is that you want to deliver software quickly and make sure it performs adequately; in regular business, nobody needs "perfectly".

It really depends on what sort of business you're in.

If you're doing application development then UI performance is usually good enough using modern runtime environments and libraries. When this is true being able to release features quickly is the most important metric.

But if you're working on a 3d game engine, writing a web browser, writing code on an embedded platform or writing a database then performance itself is a feature. Being lazy with allocation or your choice of algorithms is a really expensive mistake that will hurt your bottom line.

> Unless you're Go, then you optimize for latency at the cost of CPU, but that seems like a very acceptable tradeoff to make given how Go is most commonly used.

Here's how I see the use of GC in Go: It allows the programmer to more easily get to the point where there's a correct, running version of the system. Once you're at that point, you then optimize as much as you need to reduce GC pressure. Emphasizing latency at the cost of throughput is exactly the right tradeoff for this purpose. First you get it running, then you get it correct, then you make it fast.

Large pauses are more likely to be a barrier to program development than being a bit too slow.

Not everyone has to tune GC. I never had to. I work on a government project which now serves the whole country (a small cluster with JBoss servers); nobody tuned GC there, because there was no need (except the -Xmx setting, of course).

I wrote a few simple programs in Rust, but its memory management model is much harder to use than GC (no thinking about memory at all, just don't leak) or ARC (just think about cycles). While I understand why it's necessary to achieve higher performance, it's harder on the developer.

It might have uses for multithreading, but honestly, I never had any problems with that. It's usually better to write single-threaded code and avoid multithreading except in a few carefully written places. So it solves a problem I never had.

That's my perspective; I've mostly done enterprise web development and iOS development. Things probably look different from systems programming.

Rust's guarantees aren't just about multithreading: http://manishearth.github.io/blog/2015/05/17/the-problem-wit...

> It's usually better to write single-thread code and avoid multithreading except in few carefully written places.

implies that you did have some problem with multithreaded code, which is why you ultimately chose to avoid it.

Stuff like rayon in Rust lets you sprinkle multithreading over your codebase very easily without needing to worry at all, because Rust keeps it safe.
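rayon's `par_iter` makes this a one-liner; the safety it relies on shows up even with plain scoped threads from std (a minimal sketch, stdlib only, names hypothetical):

```rust
use std::thread;

// The compiler enforces what the threads may share: each scoped thread
// reads a disjoint slice, and the partial sums come back via join.
// Handing both threads a &mut to the same data would be a compile
// error, not a runtime race.
fn parallel_sum(data: &[i64]) -> i64 {
    let (left, right) = data.split_at(data.len() / 2);
    thread::scope(|s| {
        let l = s.spawn(|| left.iter().sum::<i64>());
        let r = s.spawn(|| right.iter().sum::<i64>());
        l.join().unwrap() + r.join().unwrap()
    })
}

fn main() {
    let data: Vec<i64> = (1..=100).collect();
    assert_eq!(parallel_sum(&data), 5050);
}
```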

> but its memory management model is much harder to use

IME it takes some getting used to but once you do it's pretty automatic and you don't have to think about it much. YMMV.

I can see where my statement was ambiguous, I'd intended to say that the GC developers spend tons of time tuning their implementations to fit their users' workloads.

I will say that "government project which serves the whole country now" doesn't really specify how many requests you're doing or what kind of pauses are acceptable. If it's a traditional website, a 300ms GC pause might be invisible. A 2s pause might even be invisible.

For a service doing 30,000 requests/s (~1-2k/s per machine), a single STW pause of 2s is _very_ disruptive. Even 300ms is high. A 300ms pause is 600 queued requests @ 2k/s. We certainly tune this service to eliminate most STW GC. We were even hit by http://www.evanjones.ca/jvm-mmap-pause.html .

If you're into safety, then you should probably go with a functional programming language with low-cost abstractions. Rust isn't an expressive language; its only benefit compared to other general-purpose languages is its memory management.

I haven't developed this thesis fully, but I've been thinking that Rust's model is a better version of functional programming, and we lack a good term for it (since it's obviously not a pure functional language). The point of functional programming is to avoid shared mutable state by eliminating all side effects and using a rich type system that's hopefully easy for the programmer to use. Rust avoids unsafe shared mutable state, without requiring the avoidance of all side effects, and by using an even richer static-analysis system that tracks exactly what side effects are safe.

Of course, there are a lot of useful things a pure functional programming language gets you, like Haskell's implicit IO scheduling and threading, that Rust doesn't. But for many use cases where functional programming languages are great, they're great for specific reasons that Rust is also great at.

I've had similar thoughts. Rust does a really good job of asking, are you really sure you want that shared mutable state? It's possible to break out but I've found the friction involved has guided me towards better solutions.

It's worth noting that Haskell has "shared xor mutable" via the ST monad, which effectively provides mutable memory cells that aren't allowed to "escape" a scope, in much the same way as `&mut`.

Also having actual purity knowledge (something Rust punted on before 1.0) can sometimes be useful. Although honestly 99% of the time it's only for the benefit of the compiler, which isn't a big deal if you have the tools to write code that's efficient from the get-go.

e.g. list fusion is enabled by purity, but is largely uninteresting in Rust because lazy iterator chaining already orders operations and avoids intermediate lists just like list fusion.

A common point I've seen is that "Functional programming sees the aliasing vs mutability issue and declares that the solution is to avoid mutability altogether. Rust goes in a different direction, saying that only aliasing XOR mutability is allowed".

It's solving the same problems, and the solution ends up having many similarities with functional programming.
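Concretely, a minimal sketch of aliasing XOR mutability (the commented-out line is the kind of thing the borrow checker rejects):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    {
        // Aliasing: any number of shared borrows may coexist...
        let a = &v;
        let b = &v;
        assert_eq!(a.len() + b.len(), 6);
        // ...but `v.push(4)` here would be a compile error:
        // mutation while aliased.
    }

    // Mutability: exactly one &mut, with no shared borrows alongside it.
    let m = &mut v;
    m.push(4);
    assert_eq!(v, [1, 2, 3, 4]);
}
```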

When people keep writing "aliasing XOR mutability", don't they actually mean "aliasing NAND mutability"? And if yes, is there a reason to use XOR here? And if no, isn't it the case that if it's worth communicating, it's worth communicating clearly?

Can you elaborate on what makes you think Rust isn't an expressive language? And what features does your «functional programming language with low-cost abstractions» have that you miss in Rust?

In my experience Rust is really close to OCaml in terms of expressiveness. I don't know what features Scala or F# have that Rust doesn't.

The only major thing I can think of is higher-kinded types, as in Haskell. They are indeed cool, and there is work underway to add HKT to Rust (though nobody is sure yet whether it's possible).

I don't know a great deal about Scala or F#, but I know Scala has higher-kinded types as well as implicit function arguments, which are things that Rust does not have. OCaml has parameterizable modules and functors, tail-call optimization, and presumably other important features.

I'm much more familiar with Haskell, which definitely has a lot of things that Rust does not: not just higher-kinded types but sophisticated type-level programming, type families, functional dependencies, existential types, etc. Additionally it has lazy evaluation, a very sophisticated green-threading system, software transactional memory, and so on, not to mention purity. Haskell has a bunch that Rust does not -- of course the reverse is true as well, but it's worth mentioning.

At the end of the day, though, Rust is meant to address use cases that no mainstream functional language does, and it is very successful at that, while also having a ton of really cool features which make it very exciting for FP enthusiasts.

Thanks!

Rust is very expressive. The main benefits of functional programming languages are captured by Rust's Iterator trait and the ad-hoc polymorphism that traits allow.

I don't know that traits in Rust are an example of ad-hoc polymorphism, since they're fully type-safe.

I don't think that means it isn't ad hoc. Type classes in Haskell were originally introduced as a form of ad-hoc polymorphism, for example. (One of Wadler's great papers, btw; worth a read.)

See also: https://en.wikipedia.org/wiki/Type_class

Of course, Haskell type classes and Rust traits aren't the same. But they are quite similar!

"ad-hoc polymorphism" is an informal term; traits are sort of both ad hoc and parametric.

I've sometimes imagined someone writing a paper about specialization and naming it "How to make parametric polymorphism less parametric" after Wadler's paper.

Here's an example: `Stdin` and `File` both implement the `Read` trait. Therefore, you can write a function that takes the traits that you want to use as input parameters.

    fn do_something_with<S: Read>(source: S) {
        // do something with the input source
    }

Then you can do the following:

    let stdin = io::stdin();
    do_something_with(stdin);

    let file = File::open(path).unwrap();
    do_something_with(file);
You're free to narrow down/expand the types/methods that can be used with the function if you add more traits.

As I understand the term, "ad-hoc polymorphism" refers to dispatch based on types (as opposed to "parametric polymorphism", where one code path works for any type). Nothing to do with type-safety.

A big non-safety selling point of Rust for me is a systems language with the tooling and ecosystem of a modern dynamic language. Dependency management and composition is a necessary part of complex programs, but a tedious, painful timesink in almost all systems languages. While CMake and similar tools improved the situation for C++, it's still an enormous pain and a barrier to sharing and reusing code. You are often forced to re-invent the wheel -- complete with your own bugs and maintenance obligation -- simply because getting a dependency to be built and detected cross-platform may be an even larger cost.

Cargo is a wonder. It's more limited in scope than something like CMake, but for the vast majority of use cases that are just pulling in a library or binding from the same language, it's spectacular.

Related to this, having an opinionated linter built into the tooling creates a common, readable dialect for the ecosystem. This is especially important for verbose-syntax languages like Rust or C++. There are C++ libraries I can't understand simply because special-snowflake formatting combined with complex template syntax renders them illegible.

> A big non-safety selling point of Rust for me is a systems language with the tooling and ecosystem of a modern dynamic language.

I've reached this point from the opposite direction, as a frustrated Rubyist: I really crave the nice things that come from having a statically typed, compiled "systems" language, but I'm not prepared to lose the kind of tooling and ecosystem that I currently have with Ruby to get them.

Last time I checked, cargo was still not handling external dependencies; the packages simply wrap system-wide dependencies that you have to manage yourself. Same problem as with all CPAN-derived package managers.

This only applies if you are using system libraries; most crates do not rely on external dependencies. Regardless, cargo is a build tool for Rust, not a replacement for your distribution's package manager. If you want a fully static binary, use the musl target and build your external dependencies against musl. If you want distribution integration, use a makefile or another tool.

It is intentionally not handling external dependencies. That's not what it's for. Many libraries and wrappers will compile the library if missing on the system (eg: rust-curl as a practical example will compile curl and statically link it if needed).

CPAN did it exactly like that; many packages also bundled C libraries, etc. It's a very fragile mess, and a mistake later package managers refused to learn from. It's especially pronounced once you have a CI/CD system, which forces you to pretty much abandon such package managers and only use them to explore/discover new packages.

>forces you to pretty much abandon such package managers and only use them to explore/discover new packages

Given the unbelievable popularity of Rubygems.org, PyPI, and NPM, I think this is an unfounded assertion. Furthermore, Rust programmers aren't afraid to rewrite libs in Rust when a widespread external dependency becomes annoying, e.g. with FreeType: https://www.reddit.com/r/rust/comments/44btaz/introducing_ru...

It's up to the implementer of the wrapper; some will compile a version if they can't find one on the system, for example.

One of my favorite things about Rust is one of the practical applications of the safety. Specifically, it's that I can write multithreaded code without fear because the compiler won't let me get it wrong. It's far far far too easy to screw up multithreaded code if you're using any kind of shared data, and Rust is the only language I know of that truly makes it safe without compromising on performance.

As a trivial example, some time ago I fixed a subtle threading bug in fish-shell. The code used RAII and lock guards, which is good, but in this particular case the lock guard was created using the wrong lock. So it was locking, it just wasn't locking the correct lock, meaning the data it was mutating was subject to a data race. As I fixed that, I found myself wishing the program had been written in Rust, because that sort of bug simply won't happen in Rust.
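In Rust that class of bug is hard to even express, because a `Mutex` owns the data it protects, so the guard and the data can't get out of sync (a minimal sketch; `Stats` is a hypothetical name):

```rust
use std::sync::Mutex;

// The mutex *owns* the counter: the only way to reach the data is
// through this particular mutex's guard, so "locking the wrong lock"
// while mutating this field is not expressible.
struct Stats {
    counter: Mutex<u64>,
}

fn main() {
    let stats = Stats { counter: Mutex::new(0) };
    {
        let mut n = stats.counter.lock().unwrap(); // guard borrows the mutex
        *n += 1;
    } // guard dropped here, lock released
    assert_eq!(*stats.counter.lock().unwrap(), 1);
}
```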

That's one thing I hate about Go: how easy it is to shoot yourself in the foot with it.

Really, it has a decent way to manage inter-thread communication - channels. But the language still permits me to read/write a "non const" global variable from a goroutine.

This is horrible, especially when refactoring.

Each goroutine should get its own scope (unless explicitly defined otherwise).

More than once I've also had to kick myself for making the rookie mistake of closing over a loop variable. So something like:

    for _, v := range values {
      go func() {
        // Do stuff with v
      }()
    }
Instead of:

    for _, v := range values {
      go func(v string) {
        // Do stuff with v
      }(v)
    }
It's rare enough, and subtle enough, that every instance tends to result in 5-10 minutes of puzzled debugging with stdout printing statements until I realize what's going on. What does Rust do here?

I like to say that Go's strictness is unevenly distributed. Unused imports are illegal, but Go is perfectly happy to let you shadow variables, have closure use loop variables, or reassign the built-in values ("nil", "true", etc.). It's like a parent who locks the scissors away in a drawer but doesn't mind leaving drain cleaner on the kitchen table.

> What does Rust do here?

This code results in a compilation error:

    for v in values {
        spawn(|| {
            // Do stuff with v
        });
    }
Error output as follows:

  error[E0373]: closure may outlive the current function, but it borrows `v`, which is owned by the current function
   --> <anon>:7:15
  7 |         spawn(|| {
    |               ^^ may outlive borrowed value `v`
  8 |             v;
    |             - `v` is borrowed here
  help: to force the closure to take ownership of `v` (and any other referenced variables), use the `move` keyword, as shown:
    |         spawn(move || {
You can play with the code here: https://is.gd/f4guCw

As the error message says, the real problem with this code, given Rust's semantics, is that the closures are being given pointers to memory that might not be valid by the time the closure gets around to executing (unlike in Go (or any other GC'd lang), the mere existence of a pointer is not enough to keep memory alive). And as the help text at the bottom of the error message describes, one solution is to have the closures themselves assume ownership of the data via the `move` keyword on closures.

Excellent, thanks. Great error message.

Go might have some complex concepts initially and be a big language, but its error messages are impressively helpful and precise.

I'm very thankful to that level of dedication.

The error message above was from Rust, not Go, is that what you meant? :P

Fortunately, that mistake isn't possible in Java with lambda expressions or anonymous classes. It is a compile error if any captured variables are not "effectively final".

Prior to Java 8, which introduced lambdas, variables used with anonymous classes were required to be final. That restriction is now relaxed as the compiler can infer it.

Interestingly, the for-each loop variable is effectively final, unless you explicitly modify it, so this code is legal (and correct):

  for (String v : values) {
      executor.execute(() -> System.out.println(v));
  }

I like that in C++11, you specify what should be captured and how it should be captured (by value or by reference).

Rust is the same, though it's less flexible[0]: it doesn't have capture lists, only [&] and [=], and they're very slightly different:

* the default is similar to [&] but will infer the capture mode and use the "simplest" possible one (reference, mutable reference or value) depending on use on a per-value basis.

* `move` closures are similar to [=] but use Rust ownership semantics, so they copy Copy values and move non-Copy values; they can capture external references (mutable or not) by value, so if you need to explicitly tweak a capture, that's the one you'd use.

[0] OTOH it's more readable
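A sketch of the two modes side by side (toy example, names hypothetical):

```rust
fn main() {
    let label = String::from("total"); // non-Copy
    let factor = 3;                    // Copy

    // Default capture infers the simplest mode per variable: reads only
    // need shared borrows, so both `label` and `factor` are borrowed.
    let show = |n: i32| format!("{}: {}", label, n * factor);
    assert_eq!(show(2), "total: 6");
    assert_eq!(label.len(), 5); // fine: `label` was only borrowed

    // `move` captures by value: `factor` is copied (it's Copy) and
    // `label` is moved, so the closure owns its environment.
    let owned = move |n: i32| format!("{}: {}", label, n * factor);
    assert_eq!(owned(3), "total: 9");
    // `label` can no longer be used here: it was moved into `owned`.
}
```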

I'm not sure that "less flexible" is accurate; we can accomplish the same things, but through different means.

It might have been unclear, but I meant the closure syntax/capture itself is less flexible. You can get the same result, e.g. by using a move closure and declaring references outside the closure, then capturing those by value -- but I'm sure you could also do that in C++.

Ah, yes. Cool :)

I can never decide if enforcing (effectively) final variables is a good feature or a bad one. Sure, you cannot shoot yourself in the foot with these limits; on the other hand, you lose all the power of real closures.

You can easily modify variables outside of the stream. For example, to increment a counter declared outside of the lambda, use a final AtomicInteger or similar class instead of an int. For any other value, use a final class that wraps it.

The point of enforcing final variables is to prevent programmers from accidentally modifying things they don't want to. It does not prevent programmers from modifying variables intentionally.

I know very well that I can use inner mutability to work around that restriction, that doesn't change the fact that it is a workaround, nothing more.

It's not a workaround. It was designed to work that way on purpose. The people who developed this feature could have easily prevented programmers from using any variables outside of the stream, final or not, but they chose not to because they wanted to allow programmers to use final variables in this way.

Furthermore, it does not cause you to lose "all the power of real closures" like you said it does. All you lose is the ability to use a closure around non-final variables, which is a trivial drawback in any use case I've ever come across. You lose only a little bit of the power of closures.

There are no final variables in Go. Anybody who pretends Go has a good type system is a liar. Go's type system is broken. That doesn't make the language bad; it makes it a missed opportunity.

>There are no final variables in Go.


const isn't a final variable: you can't have a const pointer or a const array in Go.

Shadowing a variable really ought to be an error. At least, you shouldn't be able to hide a local variable with another local variable. Yes, sometimes you have to write "vv" in the inner loop instead of "v", and not feel as l33t. Deal with it.

(A shadow variable problem in C just turned up in firmware for a surface mount reflow soldering oven I have. The variable name is "avgtemp". This may explain why some ovens scorch PC boards.[1] Read the discussion. Note that none of the people writing about the issue understand that "static" would confine the scope to one file, and that uninitialized variable declarations at top level are global to the whole program. That's probably because they came up from Arduino land, where people are not taught to think about that stuff. Arduino land is really full C++ using gcc, but it's not taught that way.)

[1] https://github.com/UnifiedEngineering/T-962-improvements/iss...

>Note that none of the people writing about the issue understand that "static" would confine the scope to one file

That seems a bit surprising, unless they did not know C even reasonably well. IIRC (though I haven't used C much lately, I did a lot with it earlier), that is a not-too-advanced feature of the C language. I think it is covered in the K&R C book, near the middle or in the latter half (don't have it handy right now to check).

I don't think the parent is talking about problems with shadowing.

They wrote "but Go is perfectly happy to let you shadow variables...", although their main issue was with closures.

Whoops, right you are!

I feel like Go just got this specific issue wrong: 99% of the time you want a new variable in the range loop.

I think Go's strictness is very practical -- it's strict where they saw real problems and the cure was easy.

Something like threading race conditions is a real problem, but not easy to fix.

Go does give you a pretty good runtime tool with the race checker, though. It would quickly catch something like the fish bug of grabbing the wrong lock.

>Something like threading race conditions is a real problem, but not easy to fix.

It is. It's called channels.

Sure, you could only use channels and you'd never get races, but in practice that'd be unwieldy and slow, which is why people write traditional lock-based sync stuff in Go all the time, and depend on old-fashioned debugging (and the race-checker).

Rust really fixes this (with its ownership system), but it wasn't easy.

> Sure, you could only use channels and you'd never get races

Go allows sending pointers to non-locked structures over channels, so it's quite easy to "only use channels" and still get races.

You can also hit that issue if you spawn multiple goroutines sharing the same initial lexical environment if they make use of a mutable structure from that environment.

If you're accessing mutable data from multiple goroutines, then you're not only using channels! :)

Of course you are. Go has essentially no support for immutable data structures, and even if you send large structures by value (which can get expensive) they themselves probably embed pointers to mutable data, and you're back at square one.

I didn't say it was a good idea (in fact, if you read far enough back up the comment chain, you'll see I was saying it's a bad idea), but you could, if you wanted to, restrict yourself to only communicating between goroutines via channels of immutable data and be confident that your code had no data races.

>unwieldy and slow

So force one to do it consciously (kind of like unsafe).

You should use go vet


(There seems to be a documentation error.)

And try the race detector:


Though of course, it's hard to argue that it would be nice to prevent these from compiling.

Indeed, it's invaluable. I use "go vet" and have enabled all of the Gometalinter linters [1] that make sense.

"go vet" never warned me about goroutine closure errors, not sure why, maybe I was running an old version. But I'm glad that it's supported.

That said, some of the things that "go vet" catches should, in my opinion, be errors.

[1] https://github.com/alecthomas/gometalinter

Well, that's an "easy" bug.

What about not-shadowing a "non-loop" variable?

Take a look at this question:


It's easy to mess up during refactoring, and there's pretty much no reason to allow it.

Rust will prevent you from doing this, mostly. Basically, the borrow checker will either force you to move the value, preventing it from being used elsewhere, or copy/clone the value which prevents any issues.

I don't know Go, so I don't really understand what the issue is with the code you posted. In Rust, the closure used to spawn a new thread needs to own its environment; if it does then there's no problem. If it doesn't that's a data race and you have a compile time error about it.

The error in the Go snippet is that "v" is a single memory location shared throughout the loop.

The closure doesn't get a copy, so what usually ends up happening is that every goroutine gets the last value (since the goroutines are probably not scheduled until the end of the loop).

This problem doesn't just affect loops, of course: A goroutine can access any local environment outside its scope, and local variables can mutate.

The worst surprise I ever encountered was this (simplified):

    func makeWorkersDoComplicatedStuff() {
        ch := make(chan string)
        defer func() {
            if ch != nil {
                close(ch) // (defer body elided in the original; presumably cleanup like this)
            }
        }()
        for i := 0; i < numWorkers; i++ {
            go func() {
                for {
                    select {
                    case s := <-ch:
                        _ = s // ...
                    }
                }
            }()
        }
        ch = nil

        // ... More stuff ...
    }
What will happen here is that the goroutines will all block forever: "ch" becomes nil, and in Go, receiving from a nil channel blocks forever (an interesting design choice in such a strict language).

Rewriting and refactoring this was trivial enough, but catching it was time wasted. Lessons learned: (1) Be scrupulous about closure environments, (2) be super careful about nil channels, and (3) avoid defers whose concerns don't fully encapsulate the function body (a defer in the middle of a function is often a code smell).

> The error in the Go snippet is that "v" is a single memory location shared throughout the loop.

In Rust this is not true. However, the same concept applies - say it was a variable from outside the loop. In that case, you would get a clear compile time error about moving the value into the thread's closure more than once.

A similar error would apply if you iterated over a container by reference (instead of by value).
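To make that concrete, here's a minimal sketch (my example, not from the thread) of the Rust equivalent of the Go loop: iterating by value moves each element into exactly one thread's closure, so there is no shared loop variable to race on.

```rust
use std::thread;

fn main() {
    let vals = vec![String::from("a"), String::from("b"), String::from("c")];
    let mut handles = Vec::new();
    // Iterating by value: each `v` is moved into exactly one closure.
    for v in vals {
        handles.push(thread::spawn(move || {
            println!("{}", v);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // Iterating over `&vals` instead would not compile: the borrow of
    // `vals` can't be proven to outlive the spawned threads.
}
```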

I had a relatively simple CLI tool written in Go that nevertheless had a data race bug that I hit about once a month. I wasn't able to track it down until Go actually introduced a data race detector, at which point I found it immediately. The race occurred when I spawned a task and watched the task's stdout and stderr. I assumed it used one goroutine that listened on both pipes, but it turned out that the stdlib actually used one goroutine per pipe, meaning the shared buffer that I'd captured in both callback functions was being raced on. Of course, the stdlib didn't document how many goroutines it used in that scenario. So yeah, footgun, meet foot.

I was going to leave basically this same comment -- I like Go, but the fact that it doesn't have nice locking support always gets to me.

(Almost every Go program I've written uses goroutines so I end up using locks in almost every program I write)

There's no way to do this without language support or sacrificing perf with interfaces, so I get why nicer locks don't exist; I just wish they did.

Ooh, a chance to talk about Rust in the context of fish.

I did the initial introduction of pthreads to fish 1.x, when it was thread-oblivious, and it was very difficult because there was a lot of global data (it's a shell, after all). Races and deadlocks were initially very common, and Rust could have helped with at least the first problem. It would have been very valuable.

More generally, the Rust pattern of having a lock own the data it protects is really nice. This can be more-or-less implemented in C++11 via lambdas, which should solve the wrong-lock problem. (I didn't use that technique at the time because I was targeting C++98.)
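For readers unfamiliar with the pattern, a minimal sketch (hypothetical, not fish code): the Mutex owns the data, so the only way to reach it at all is through lock(), which makes "grabbed the wrong lock" unrepresentable.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The Mutex *owns* the Vec; there is no way to touch the data
    // without going through lock() first.
    let jobs = Arc::new(Mutex::new(Vec::<String>::new()));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let jobs = Arc::clone(&jobs);
            thread::spawn(move || {
                jobs.lock().unwrap().push(format!("job-{}", i));
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(jobs.lock().unwrap().len(), 4);
}
```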

It's fun to think of a shell in the context of Rust. One place where Rust's guarantees cannot provide much help is signal handling. Signal handlers require global data to do anything, and signals are incompatible with most of Rust's ownership machinery for globals, primarily locks.

Code that runs post-fork is similar: it must not allocate memory, and there's no protection against that in Rust (or in C++).

A second place is the system interface. fish often has to dip below the surface-level APIs. It can't use getenv() or strerror(), and instead has to access `environ` and `sys_errlist` directly. Many of Rust's safe interfaces would not be usable, and we would have to write replacements using unsafe code.

fish also uses shared memory (mmap, shm_open) in a few places. Rust is supposed to protect against data races, but I don't see how it can do so in the context of shared memory.

Last and probably most importantly, shells make use of a lot of the horrible termios stuff, which tends to vary across systems and make heavy use of the C preprocessor. This will be gross anywhere, but especially gross in anything that's not C or C++. I first attempted to write a shell in Go, and this is why I gave up.

Overall IMO Rust could provide a lot of value for a shell, but probably will require a fair amount of unsafe code at the edges and fiddling to support legacy interfaces.

We've not had any issues developing the Ion shell in Rust -- performs as well as Dash. Basically, you can't come at Rust with the mindset of object-oriented programming. Make your shell event-driven and you'll do perfectly well. Queue your signals and act on them at a later time when you can.

I've been particularly fond of Rust's Iterator trait which has been quite valuable for efficient parsing in Ion.

> I don't see how it can do so in the context of shared memory.

Basically, it does not let you access shared memory without _some_ kind of synchronization primitive in safe code. That might be an atomic variable, or maybe a mutex, whatever.

If two threads independently mmap the same file, what is the thing that Rust forces them to synchronize on?

The de facto memmap crate for Rust makes it unsafe to access the contents of a memory map: https://docs.rs/memmap/0.5.0/memmap/struct.Mmap.html#method....

This doesn't actually answer your question, but the presence of `unsafe` will at least warn you that you need to put some thought into what you're doing.

mmap can't be exposed with a safe interface, in my understanding.

Of course, you can access `/proc/self/mem` safely so technically it's all possible in safe code.

But that's not something Rust can prevent.

That's a known bug: https://github.com/rust-lang/rust/issues/32670

This bug was posted on April 1st, being a bit more ha-ha-only-serious than the emoji-based error handling one that the core team put on. AFAIK, it's the only WONTFIX safety bug in Rust's issue tracker.

FWIW, you can avoid allocation in Rust by limiting yourself to libcore, which doesn't know about allocation. Stick the #![no_std] attribute on your crate and now you can't use anything from liballoc or libstd without opting back in explicitly.

I just picked up a book on Rust yesterday and am looking forward to working with it. I'm glad to hear that Rust makes concurrency easy, since that is a huge requirement in my current project. Since you obviously know more about it than I do (I'm currently a chapter-one-level neophyte), does Rust's ease come via the compiler catching errors for you, or are there mechanisms in the language itself that protect you from the pitfalls of concurrent programming?

There are other languages that make concurrent programming easy and less error prone. Elixir is the one I'm probably most familiar with. It achieves this ease, in part, because it doesn't allow you to store state in objects, the way, say, Java does. Instead, you pass data through a function chain that will always produce the same result, unlike object-oriented code. There's more to it than that, of course, but that's a good starting point for understanding how Elixir helps with concurrency.

If you don't mind me asking, what about Rust makes concurrency less error prone?

I read this quote somewhere and it's helpful:

The problem with threading is shared mutable state. Most functional languages solve the issues with threading by eliminating "mutable": they only allow shared, immutable state. Rust solves the issues with threading by enforcing that state is shared xor mutable: you can have state that is mutable but not shared, or state that is shared but not mutable.
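A minimal sketch of "shared xor mutable" in practice (my example, not the original poster's):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let shared = &v;          // "shared, not mutable": any number allowed
    // v.push(4);             // error[E0502]: cannot borrow `v` as mutable
                              // while it is also borrowed as immutable
    println!("{}", shared[0]);

    let exclusive = &mut v;   // "mutable, not shared": exactly one allowed
    exclusive.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```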

> does Rust's ease come via the compiler catching errors for you, or are there mechanisms in the language itself that protect you from the pitfalls of concurrent programming?

Yes: the Rust compiler catches violations of "the rules preventing the pitfalls of concurrent programming"[1] and responds with (very helpful) error messages. Rust has a powerful type system and a borrow checker[0]; it does not have special mechanisms in the language related to concurrent programming. The powerful type system + borrow checker happen to additionally solve the pitfalls of concurrent programming.

[0] Both the powerful type-system, and the borrow checker, are extremely useful for other reasons not related to concurrent programming.

[1] "The rules preventing pitfalls of concurrent programming" are not defined by the compiler, but rather by the stdlib. Similarly, [session types](http://munksgaard.me/papers/munksgaard-laumann-thesis.pdf) are a cool way to solve "the pitfalls of protocol programming". Again, there is nothing explicit in the language regarding protocols; they are solvable by a Rust library because of the combination of the powerful type system + borrow checker.

Thank you. That is very useful.

Jessica Kerr posted on Twitter: "GOTO was evil because we asked, 'How did I get to this point of execution?' Mutability leaves us with, 'How did I get to this state?'" That feels relevant to this discussion.

> quote somewhere

I've said this a lot. Here's one version of that talk, IIRC: https://vimeo.com/144809407

Thanks. I will watch this.

You might enjoy reading https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h...

It's from before 1.0, so some of the method names have changed, and scoped threads are in an external library instead of std, but the principles are all the same.

Thank you!

Which book did you pick up?

A pre-release copy of Programming Rust, by Jason Orendorff and Jim Blandy.

Clang provides annotations that can do this for C++.

Which annotations are you thinking of?

Cool, was just curious. Thanks.

> the compiler won't let me get it wrong. It's far far far too easy to screw up multithreaded code if you're using any kind of shared data, and Rust is the only language I know of that truly makes it safe without compromising on performance.

While I am, in general, a fan of Rust's focus on safety, I think this particular feature (data race prevention) may actually be somewhat problematic in terms of making code safer. I'm worried that it may be sort of analogous to "all-wheel-drive" being marketed as a winter driving safety feature that ends up (perhaps apocryphally) causing more accidents because it instills a sense of overconfidence that results in drivers neglecting more important/effective safety practices (reduced speed, snow tires, etc.). I think it's beneficial to read one of the Rust issue threads, "Rust does not guarantee thread-safety #26215"[1], about why they stopped referring to Rust as "thread safe".

Data races only occur in the context of "casually" sharing objects between asynchronous threads. That is, accessing a shared object from asynchronous threads directly, instead of through a "fail-safe" access control mechanism. Some programmers may be of the position that directly accessing shared objects is perfectly fine in some contexts. In those cases Rust's data race safety feature is a plus.

But data races are really just a subset of race conditions, and Rust doesn't prevent those other race conditions. The practice of directly accessing shared objects is prone to both (low-level) data races and (higher-level) "non-data race" race conditions. I'm worried that the larger effect of touting/marketing Rust's data race safety is to "(over-)legitimize/condone" the practice of "casually" sharing objects asynchronously, resulting in the neglect of prudent access control mechanisms (even if only by the inexperienced), and an increase in "non-data race" race condition bugs.

So, in the interest of public safety, perhaps all-wheel-drive cars should be bundled with some sort of warning/notice that prudent winter driving practices (and speeds) should render all-wheel-drive almost irrelevant as a safety feature. And perhaps an analogous one for prudent asynchronous object sharing practices and Rust's data race safety.

[1] https://github.com/rust-lang/rust/issues/26215

And a related article on safer asynchronous object sharing in C++ (shameless plug): https://www.codeproject.com/articles/1106491/sharing-objects...

> I think it's beneficial to read one of the Rust issue threads, "Rust does not guarantee thread-safety #26215"[1], about why they stopped referring to Rust as "thread safe".

No they didn't. That issue was closed as WONTFIX and rust-lang.org still says to this day "… and guarantees thread safety".

More generally, Rust can't protect you from logic errors, but it does more than just guarantees freedom from data races. The very issue you referenced has a discussion on this topic, about how the phrase "thread safety" isn't well-defined, but that Rust does give you a stronger notion about consistency in a multi-threaded world than just freedom from data races.

I genuinely don't understand your all-wheel-drive comparison. You seem to be arguing that the consistency guarantees Rust provides are actually bad because it will trick users into thinking that they don't have to give any thought at all to logical races in threading. And that's nonsense. Users have to think about that regardless of the consistency guarantees the language provides.

The fact that Rust does most of the heavy lifting for you makes it a lot easier to reason about the logical races, because you know you don't have to even consider the consistency issues that Rust protects you from, which means you have much less complexity to reason about.

In addition, most of the synchronization mechanisms that you need in order to share mutable values across multiple threads will tend to protect you from logical races too. For example, if you have a value that you want protected by a lock, you can't just stick the lock in the value and lock/unlock it in every method, because that doesn't help you share the value itself across threads. So instead you'd probably wrap the value itself in a Mutex, which you can now share easily (e.g. via Arc), and now the Mutex guards the whole value instead of just guarding every function call, meaning you won't have logical race issues when calling several methods on the value in a sequence.

As someone who's been in the system programming space for a while and played with Rust for ~1.5 years, here are my highlights.

1. I haven't written any multithreading code yet but I love the borrow-checker since it does a fantastic job of dealing with mutable state. I find I get better architected programs with single-ownership long before I start writing Rust and I love to see the compiler enforce it.

2. ADTs. Simply the best way to represent State + Data bar none. Combine with pattern matching and it's just sublime.

3. Iterators. Never seen a language do them so well with so little overhead. The focus on zero-cost abstractions really shines here.

4. Cross-platform support. This is a huge one. Having dealt with MSVC/GCC/LLVM/custom compiler toolchains, I can't stress how wonderful it is to have a consistent compiler that can build other targets on a different host.

5. Cargo, other people have covered it, enough said.

6. The community. Truly one of the better ones out there who not only want to help but also provides releases like clockwork with clear semantics on stability and APIs.

FWIW, the borrow checker has nothing to do with multithreading; all the guarantees it enforces are important in single-threaded contexts.


It's a prerequisite for safe scoped multithreading, but that's about it.

Fair enough; it's a common enough refrain that I wanted to point out specifically that there are lots of benefits to it outside of thread safety.

> 3. Iterators. Never seen a language do them so well with so little overhead. The focus on zero-cost abstractions really shines here.

Try some D, maybe? :)

Haven't had a chance to play with D yet, but a quick perusal shows quite a few things missing from their iterator lib (zip, enumerate, cycle, collect). I probably just need to spend some time with it at some point.

That said, the examples look much closer to Rust than to C++11 (seriously, did they think through how painful std::transform() is with all the begin/end pairs?).

One thing that not a lot of people know is that you can take an iterator of Result<T, E> values and collect::<Result<Vec<T>, E>>() them, which is a really elegant way to either bail on the first error or get a collection of values.
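A minimal sketch of that trick:

```rust
fn main() {
    // All items Ok: collect() yields Ok with the whole collection.
    let ok: Result<Vec<i32>, _> =
        ["1", "2", "3"].iter().map(|s| s.parse::<i32>()).collect();
    assert_eq!(ok, Ok(vec![1, 2, 3]));

    // Any item Err: collection stops and the first error is returned.
    let bad: Result<Vec<i32>, _> =
        ["1", "x", "3"].iter().map(|s| s.parse::<i32>()).collect();
    assert!(bad.is_err());
}
```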

D does have zip, enumerate, and cycle (https://dlang.org/phobos/std_range.html). It doesn't have a universal collect like Rust, but you can create arrays from ranges using array(). And if the target container/collection type is an OutputRange, you can use the put() function (https://dlang.org/phobos/std_range_primitives.html#.put) to create collections from ranges.

The biggest flaw in Rust's marketing, as far as I can tell, isn't noting that it's safer than C or C++, or that it's a systems programming language. It's the simultaneous (implicit and explicit) confrontational bashing of languages that aren't Rust, especially C and C++, and the egregious ego-inflating comparisons (x-in-Rust is y-times-as-fast-as-x-in-C-or-C++).

I've only written a few relatively small ("toy") applications to completion in Rust. I have one application I'm working on in Rust at work, which is in an advanced stage of development and that will replace a significant part of our production-deployed current implementation (in python). I also have one application I'm working on in my spare time that is not a toy (although it is for entertainment purposes). My view is that for a good number of tasks Rust is nicer to use than C++, and probably I'd reach for Rust before C++ in instances where I'd normally choose C++. It would depend heavily on the context when comparing Rust vs. C. The safety argument simply doesn't register most of the time.

No, this isn't because "I don't have problems writing safe C or C++ code," (although I certainly expect the above comment to be misinterpreted uncharitably in this way) this is because the kinds of safety guarantees Rust provides simply don't register on the list of reasons to explicitly choose it (over C or C++) in a large number of contexts. Even in contexts where it does matter, the current state of affairs is that those terrible languages, C and C++, run the world well enough. There are some pretty good examples of terrible safety flaws that may have been avoided with Rust's guarantees. These aren't universally applicable to all problems to be solved in programming.

My advice, for what it's worth, is to stop hyper-focusing on safety and definitely stop the oblique insults to C, C++ (and other languages--some notable GC'd ones come to mind, from recent posts and comments I've seen around here). Rust has so much more to offer than an implementation detail (which is what "safety" is).

> The safety argument simply doesn't register most of the time.

Just like Steve said:

> Rust is safer than C or C++. (“I never have problems with safety in the C++ code I write” is a common reply.)

The CVE database provides a pretty convincing argument that people can't reliably write safe code in C/C++. And improving safety over C/C++ specifically is Rust's raison d'être (you know, being developed by Mozilla, which has a pretty large C/C++ codebase where safety is an important aspect). The bashing isn't based on people merely disliking C/C++. It's based on the fact that we have observed 30 years of C/C++ being used in the industry, and yet we still discover security holes caused by the same underlying flaws in the languages.

Yes, Rust may have other features to offer, too. But safety is a big one and is thus mentioned so often.

C++ can be safe. Just use only safe pointers and vectors.

The thing is that rust does two things:

1. Safety

2. FP style code ("magic")

I wouldn't mind #1, but #2 kind of turns me off.

> C++ can be safe. Just use only safe pointers and vectors.

Defaults matter. In C++ safety is opt-in: you have to explicitly choose safety, and the default is unsafe code. Rust is the opposite: safety is opt-out, and you have to explicitly use `unsafe` to write unsafe code. Take a guess which strategy makes junior developers write better code.

>Take a guess which strategy makes junior developers write better code.

An org can mandate use of only safe pointers.

Yes, but my experience at enterprise level, is that no one does.

Last I checked it's still quite easy to invalidate a std::vector's iterator in "safe" C++. Is that no longer the case?

Still possible. There is no "safe subset" of C++. There is a safer subset, which is usually safe. One usually ends up using lots of std::shared_ptr anyway.

Safe pointers aren't guaranteed safe. If you move from a unique_ptr, it's left holding nullptr, which can still be dereferenced. You'll get zero help from lints, because the null moved-from state is defined behavior per the standard, yet dereferencing it is still UB. In Rust, use-after-move is a compile-time error.
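For contrast, the Rust side of that claim in a minimal sketch (my example): use-after-move is rejected at compile time, not left as defined-but-dangerous.

```rust
fn main() {
    let p = Box::new(5);
    let q = p;            // ownership moves from `p` to `q`
    // println!("{}", p); // error[E0382]: borrow of moved value: `p`
    assert_eq!(*q, 5);
}
```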

C++ can be safe, and I usually make sure to use those features when I need to integrate C++ code within JVM/.NET applications.

However that is a personal option, I seldom have seen such options in enterprise code.

I haven't noticed this "bashing" you speak of, and I've kept a close eye on every Rust thread on HN the last few years.

That said, I think that in 2016, whatever one can say about the unsafety of C and C++ deserves to be said, loudly, and I think it needs to be heard. Programs shouldn't be susceptible to buffer overruns, race conditions or null-pointer crashes. Users shouldn't be disrupted, and servers shouldn't be compromised. In other words, we should expect more, and I think giving C and C++ some competition is a good idea.

I agree that Rust's marketing has been too focused on safety, though. For me, it's a few bullet points down on the list. I'm much more excited by ADTs, pattern matching, solid generics, and the performance potential that can come from zero-cost abstractions and flexible approaches to memory management.

Interesting comment. I'm not sure Rust's safety guarantees are what many would consider merely an "implementation detail". Aren't the merits and caveats in any programming language (not with regard to its overall ecosystem), just implementation details? Or is my line of thinking not correct about that?

I'd agree with you on this point if a language really marketed its relatively simple safety facilities — like, say, C++11's std::shared_ptr — but relatively high-level (or perhaps "mid-level") guarantees provided by the standard or by the implementation strike me as features of real substance. Something substantial enough that I would take it into account in choosing a language, among many other things, and I think Rust's are relevant to a large number of systems.

From what I can tell Rust is being marketed well enough to create a lot of enthusiasm. I don't know if that's necessarily an indication that it's being marketed properly (by whatever definition of "properly" makes sense).

Safety is definitely a property of implementations at least as much as it is a property of languages. The baseline measure of that is compiler error messages. Classically bad error messages (as in, what you'd get in many environments circa 1980) just terminate and say "error code 100", and then you'd go look in your reference manual to figure out what error code 100 meant, because the system was too crude to have actual error printouts. Maybe you'd use a machine code monitor, or roll your own trace logs. So you'd be mostly in the dark as to what was wrong and how to fix it, and debugging was default-hard rather than default-trivial, shaping the entire nature of the programs you wrote towards easy wins that would not break catastrophically.

Rust, on the other hand, has invested so much into its error technology. It tells you what the offending lines are, the type of error, and gives a best-effort to point out the underlying issue causing the error. It guides you towards a code style that best exploits the nature of the errors, allowing you to take "risky" moves that only the compiler can successfully guarantee, and in-the-large, to an architecture that follows the grain of the compiler instead of fighting with it.

The safety features of Rust are core to the design of the language, that's what makes it so effective. It essentially takes part of what makes the design of pure functional programming so safe and makes it highly performant and pragmatic for a systems programming context. An implementation detail would be like the intricacies of better code generation, which is not at all key to Rust.

Granted, it's "C fast," but the whole point of the safety features is that they are intrinsic and pervasive to how the language functions so that the programmer is not allowed to make certain kinds of mistakes that C and C++ are totally fine with and will generate crashing code for. It's not just about making unsafe programming practices safe through better compiler practices, it's about stopping the unsafe programs before they reach the compilation step.

> It's not just about making unsafe programming practices safe through better compiler practices, it's about stopping the unsafe programs before they reach the compilation step.

I would like to add to the end of that: "...unless you explicitly tell the compiler to do it." I can't think of anything that you can do in C that you can't do in Rust unsafe blocks, you just have to tell the compiler explicitly.

Yes, the unsafe parts are always explicit so you never forget about the gravity of working with unsafe code.
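A minimal sketch of how explicit the opt-in is (my example):

```rust
fn main() {
    let x: i32 = 42;
    let p = &x as *const i32; // creating a raw pointer is safe
    // Dereferencing it outside an `unsafe` block is a compile error;
    // the opt-in is always visible in the source:
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```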

This. I've noticed the same exact thing. I even wrote the same thing about the Rust community insulting other languages here and on Reddit a couple of months ago. I don't know why, but whenever someone has a different opinion that shows flaws of Rust, or points out trade-offs, they get attacked by Rust fans here on HN or on Reddit, sometimes personally. It doesn't matter if they only post facts and numbers; it will get downvoted to hell.

But it's not like this everywhere. The Rust community on their official IRC channel is great. I was there, discussing different languages and solutions, comparing Rust to C++, discussing what is great and what is not so great about both. We did it without any bias or bad emotions: a great technical discussion in a great atmosphere. But if you write here, for example, that in one case C++ made a better design decision than Rust, then you need to get ready for a war. It feels like some people live in a bubble, and whenever you say something that might get them out of it, they become very defensive.

Not all of them, though. For example, a discussion with burntsushi, who advocates Rust here on HN, was great, and it's an example of behavior that should be followed. He recognizes the trade-offs and flaws of the language, because we all know there is no such thing as an ideal programming language. He has no problem writing that case 1 is better in Rust than in language Y, but case 2 is better in language Y than in Rust. It would be great if other known Rust advocates set the same example he does. His educated, long answer showed me something I wasn't aware of about Rust's regex implementation, which is a lot better than the Go or C++ ones. And he did that without bias or bashing other languages.

The discussion he referenced, if you haven't seen it: https://news.ycombinator.com/item?id=13267537

Compared to C++ even Java looks good.

A sales pitch asking others to help you with your sales pitch. That's a good marketing concept. It's similar to "ordinary people say why X is great" ads.

Beyond the safety issue, I'm still not clear on how to think about Rust. It's mostly an imperative language at the language level, but it has so much functional programming stuff in templates that it looks like a functional one. It's often hard to visualize what all that stuff is doing. Did that ".collect" generate a big intermediate array? How can you tell without looking at the generated code? What kind of development environment would help?
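On the ".collect" question specifically: adapters like map and filter are lazy, so no intermediate array is materialized; nothing runs until a terminal call like collect() or sum(), and collect() allocates only the one collection you ask for. A minimal sketch (my example):

```rust
fn main() {
    // map/filter build a lazy pipeline; sum() drives it in a single
    // pass with no intermediate Vec allocated along the way.
    let total: i32 = (1..=10)
        .map(|x| x * 2)
        .filter(|x| x % 3 == 0)
        .sum(); // 6 + 12 + 18
    assert_eq!(total, 36);
}
```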

Rust uses closures very heavily for routine functions. That's a big change in thinking for C programmers.

The module system seems clunky. There are modules, and crates, and TOML files, and module statements, and required directory structure. Adding a module seems to require editing about three files. The management of imported names seems unnecessarily obtuse, too. Having both "extern" and "use" seems overkill. Most languages get by with a single "import" statement.

I see all this as an acceptable price to pay for safety, but it's rough on many programmers.

> The module system seems clunky. There are modules, and crates, and TOML files, and module statements, and required directory structure. Adding a module seems to require editing about three files. The management of imported names seems unnecessarily obtuse, too. Having both "extern" and "use" seems overkill. Most languages get by with a single "import" statement.

You have to edit two files - the new module and its parent. Crates are a separate concept from modules, and the toml file is only relevant for cargo. I do not find there are more concepts than in the module systems of other languages.

That said the Rust module system is empirically more challenging for new users than it ought to be and we'd like to improve it. "extern crate" declarations are actually unnecessary if you're using cargo and people have toyed with removing them.

Though I knew how to use them, I didn't understand how Rust closures could work until I read these articles:



Basically, a closure is just an anonymous trait, and traits are implemented with vtables. This reminds me of how you can think of Java's lambdas as anonymous classes (though they're not implemented that way).

Close. :-) Closures are anonymous types which implement one of the Fn traits (which are not anonymous). Traits are usually not implemented with vtables; they are usually monomorphized and statically dispatched (this includes most higher order functions). If you need dynamic dispatch, they can be cast into "trait objects" with vtables. This is a pretty big deal - static dispatch is why functionalish abstractions like map and filter can compile to a very tight loop.
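A minimal sketch of the two dispatch styles (my example): the generic version is monomorphized per concrete closure type (static dispatch, no vtable), while the trait-object version goes through a vtable (dynamic dispatch).

```rust
// Static dispatch: a fresh copy of this function is generated for each
// concrete closure type, so the call can be inlined.
fn apply_static<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(x)
}

// Dynamic dispatch: a "trait object" carrying a vtable pointer.
fn apply_dyn(f: &dyn Fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

fn main() {
    let double = |x| x * 2;
    assert_eq!(apply_static(double, 4), 8);
    assert_eq!(apply_dyn(&double, 4), 8);
}
```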

Ah, thanks for catching that subtle-but-important mistake. :)

And you wonder why C programmers have trouble with Rust.

No one in their right mind is wondering that. Going from "go ahead, do whatever you want" memory management to Rust's ownership system or object oriented programming to trait based programming isn't something anyone excepts to happen without some perseverance.

"Expects", I think.

Ask who really needs Rust. People who write code that has to work and causes real trouble when it doesn't. That includes embedded system programmers. They're typically into C and can use an oscilloscope. They seldom know C++ and have never heard of Haskell. But they can lay out a circuit board and get it to work.

Rust needs to be accessible to those people.

Almost none of the above is something you actually need to know when programming in Rust.

You just need to know how closures work at a high level, which is pretty straightforward. Rust closures are basically the same as C++ closures.

If you're really worried about the generated code: generally, iterator code with closures compiles down to the same thing as the imperative code. This isn't a hidden complexity unique to Rust; C code these days is pretty far removed from the generated asm too.

I strongly disagree. Though it is possible to fudge through working Rust code without understanding the principles that govern Rust, which is probably more than you can say for C++, not understanding these things is going to make life a lot harder when you do run up against them.

An example: https://www.reddit.com/r/rust/comments/5iwyt9/how_to_accompl...

You might not need to know what "vtable" means, but the differences between, for example, `T: Fn()` and `T = Fn()` are pretty important.

Rust's closure model is literally just C++'s with an extra dose of memory safety on top.

> It's mostly an imperative language at the language level, but it has so much functional programming stuff in templates that it looks like a functional one.

Not everything can be neatly boxed in "imperative" and "functional". It's a language without purity, but with many other concepts that make a functional style work very well with it. Swift is similar. IIRC so is Scala. Depending on your POV you may classify this as functional or not; folks ascribe different meaning to the terminology here.

> Did that ".collect" generate a big intermediate array? How can you tell without looking at the generated code?

I don't understand this logic at all. How is this different from any function call in C? You don't know what that function call does without looking at the code or documentation.

This is no different from asking whether operator<< on cout allocates.
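For what it's worth, `collect` is the only step in a typical chain that allocates; the adapters before it are lazy. A quick sketch (my own example, not from the thread):

```rust
fn main() {
    let v = vec![1, 2, 3, 4];

    // `filter` and `map` are lazy adapters; nothing is materialized yet.
    // The single allocation happens at `collect`, which builds one Vec.
    let doubled_evens: Vec<i32> = v.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * 2)
        .collect();
    assert_eq!(doubled_evens, vec![4, 8]);

    // Consumers like `sum` never build an intermediate array at all.
    let total: i32 = v.iter().map(|&x| x * 2).sum();
    assert_eq!(total, 20);
}
```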

> and TOML files

This is like make/cmake/ninja files for C. Those are worse, especially since each codebase does its own thing.

> required directory structure.

No. You can set it up however you like; there's a default directory structure that's easier to use. You can redirect modules to whatever path you want using `#[path]`.

> Adding a module seems to require editing about three files.

Two files.

In C you have a header and c file anyway, so the totals match up.

I agree that the module system itself is a bit confusing -- we need to get better at teaching it, but you're being rather inaccurate here overall.

> There are modules, and crates, and TOML files, and module statements, and required directory structure.

I'm a fan of Rust but I also think Cargo seems pretty inelegant considering how carefully considered everything else in the language is. Maybe I'm just biased against Tom Preston-Werner.

What would you have chosen instead? At the time, we weren't super big fans of TOML, but it was the least bad choice. It's grown on me.

I get what you mean by "least bad". I guess I'm wary of all configuration formats. Would plain old INI have been an option or is it too poorly specified?

Too poorly specified, yeah :/

It solves the following problems for me:

1. Fast program execution. Sometimes you just want the speed that a scripting language cannot provide. Although I have to admit that many times Python is fast enough. I also pick D sometimes to write fast "scripts".

2. Reliability. Rust is a language that puts an emphasis on correct and reliable programs. The compiler does a great job at pointing me at pieces of code that I should think about again. It is similar to Haskell in that "if it compiles, it is likely correct". You pay upfront for long "talk to the compiler" sessions, but from experience I'd say that it pays off.

3. A language that is a joy to use. There are some really great languages that I love to use. They include Clojure, Lua, Python and Rust. The first two languages are mostly the products from single inventors with good taste plus a couple of contributors. The last two are the product of their respective communities. The Rust community tries to bring together the best ideas and lessons from different programming languages and has a democratic development process that tries to create a language and an ecosystem that is nice to use and plays well together with Rust's goals.

4. A new language with stable releases. I like to try new programming languages from time to time. It is cool that you can pick between stable, beta and nightly, depending on your stability needs. Furthermore, you can count on Rust being around for longer because Mozilla is supporting the language.

Rust left a big impression on me mostly because I was amazed that there exists a functional language in the systems programming world! There, C is the absolute king; even C++ doesn't cut it sometimes (when we come close to the embedded world). Writing in Rust really feels like "programming" and not "systems programming", because of the way you approach problems that are bound relatively close to the metal. And the community is amazing. So I just hope I gain more knowledge and time, so I can start playing with it properly.

Disclaimer: I haven't programmed a lot in Rust, because I am busy learning other things atm (student, heh), but I have been following it for 2 years now, reading a lot about it, and occasionally I write some small programs.

If you like that concept, then check out this functional language in system programming that didn't require a LISP machine:


The House and MirageOS operating systems also put Haskell and Ocaml to work for this respectively. House even did drivers in Haskell with a modified version of hop. Here's their site:


"'[Rust is] Technology from the past come to save the future from itself'

He goes on to elaborate a bit:

    'Many older languages [are] better than new ones. We keep forgetting already-learned lessons.'"
Loving how this Graydon person thinks already. Same stuff pjmlp and I are always saying. Except he built something that took off massively with Mozilla's help. Extra credit there to both. Now, what might happen if this kind of thinking was applied all over the stack? I need 100 volunteers.

Start with VMS-style versioning for critical files (esp system or application), clustering, distributed locking, and an app API to leverage all that easily. Put that in the good Linux or BSD distros. Maybe they'll stop eating my files or the services popular on HN will crash less. If they get that done, we're adding Microsoft's approach to drivers that work if they reject that of MINIX 3 or Genode. We can also use TLA+, Coq, SPARK, and/or Rust since those were things in the past that worked too on the kinds of problems this effort will introduce.

Systems and services should get really reliable after this. I won't even care about Linux kernel crashes as it will be a VM that goes down with my critical work checkpointed into another on same machine on a microkernel. A little driver reset happens then new one gets focus while old VM resets in background then joins cluster. For some extra money, this happens with physical separation in a desktop with two motherboards, one on each side, similar to old SGI Octanes. :)

Reminds me of Don Syme, creator of F#[1] on getting generics into .NET:

>It was seeing and experiencing polymorphic programming in OCaml and SML that made us persevere with the long and torturous process of convincing Microsoft that they should add such an "experimental and academic" feature as generics to their flagship runtime and languages. Of course we were in reality just making 1970s ideas work in practice, but at least now even Visual Basic has generics.


>Loving how this Graydon person thinks already. Same stuff pjmlp and I are always saying.

Rust was bootstrapped by a compiler written in OCaML.

Like you and pjmlp, Graydon is a massive fan of fanatically strong typing, and most of that stuff was actually in many older languages that lost because C was an optimal solution at the time.

However, at scale, C is no longer an optimal solution. Thus Rust can start eating C/C++'s lunch, and people will stop using C for what it does badly, and using it for what it does well (namely, writing small programs that do one thing and do it well, doing embedded work, kernels, and other bit hacking, and bootstrapping better languages).

"Rust was bootstrapped by a compiler written in OCaML."

Another wise choice. I gave them credit for that, too, when they told me.

"because C was an optimal solution at the time."

Whoa, we've talked about this before. It's closer to an optimal solution if you're squeezing max performance out of old hardware with a higher-than-assembly language. It wasn't optimal on ease of compilation, safety, large programs, or even performance optimization for the general case. That all takes different design decisions than the personal preferences of its designers when building on Richards' work. It was also achieved by Wirth shortly after, to the point that C's remaining benefit could've been had as an option or tweak to Wirth's approach. Easier to tweak his for C's benefits than vice versa, since his was actually designed and simple. Final proof of that was when C's inventors did that in their ideal language: an Oberon-2 improvement called Go.

"Thus Rust can start eating C/C++'s lunch, and people will stop using C for what it does badly, and using it for what it does well"

This is true. Again, though, due to social reasons more than tech. People have been able to do that with GPL'd Ada. Rust takes off, though. The likely reason is a combo of developers preferring its syntax/semantics, how the community runs, and primarily Mozilla's effort [like Sun's Java & Microsoft's C# before it]. Like with Richard Gabriel and the startups, details of execution matter more than how great the idea is.

>It's closer to an optimal solution if you're squeezing max performance out of old hardware with a higher-than-assembly language.

That was what was happening when C got popular.

Depending on the OS.

On MS-DOS, there was hardly any difference between the code quality generated by the Borland C and Pascal compilers.

But by that time, C was already very popular.

Not really.

I only bothered with C in 1993, yet I started coding in 1986, used Turbo Basic, Turbo Pascal 3, 5.5 and 6.0, alongside TASM before that year.

Similar experience to many other Portuguese fellow coders.

We only bothered with C either at the university level or because some had access to UNIX at work.

My first UNIX experience, the teacher carried a PC desktop with Xenix into class and we would take turns to use it.

I also think you might need to separate the circumstances of Portugal from UNIX and C in general. Same with other countries. I'm curious how much it spread in general versus other solutions plus at organizations with enough influence to determine a victor. I think Portugal's case is the rare one here.

Might be, but in those days there was hardly any usable data to validate this, so we are left with personal experience.

However, by that time, in the industry and academia, UNIX was starting to take hold, as was C.

And on the PC, while UNIX hadn't yet arrived, C was rising in popularity: if you look at what systems programs, major applications, and videogames (probably the 3 most popular applications on the PC at that point) were written in, the two most popular languages, AFAICT, were Assembler and C.

C and Pascal were the Unity of MS-DOS, real games used Assembly.

C's adoption for games started to increase around the time MS-DOS 5.0 got released, especially with Watcom C, the 386, and DOS extenders.

Doom was probably one of the first successful C based games.

Of course on Windows 3.x commercial games were all C.

Yes, but it was starting to catch on. In fields less obsessed with performance, like applications software, it had already taken hold: Word was written in C in 1983, and DOS itself was in C as well, IIRC.

No they weren't.

MS-DOS was written in Assembly, just like Word for MS-DOS.

Word got rewritten in C when it was ported to Windows.

EDIT: You can see early versions here


And the Lisa OS and the early versions of Mac OS were written in a mix of Assembly and Object Pascal.

http://www.folklore.org/ (Search for Pascal)


Ah. Sorry. Got that one wrong. My mistake...

So maybe not in the early 80s, but by the 90s, when PCs had progressed beyond the point of assembler being a common language, C took off, partly due to limited resources, partly because that was what all the Big Iron folks were used to at that point.

For me the main selling point is that it allows me to do code with performance constraints without having to resort to a programming language without modern package management, modern documentation or a modern ecosystem.

I would have jumped at any chance to do modern programming that way, and dodge C and C++.

It is a double blessing that besides satisfying these rather basic constraints, it's actually a great programming language as well, improving safety and reliability, having a concise but expressive syntax, a powerful type system, even a novel solution for enforcing some safety in shared memory situations.

I've done some rust tutorials, and I think the language is really interesting, but the reason I've never used it is I don't know what I would use it for, other than system programming.

I've never had a work related problem where I searched for a way to solve it, and someone had said: "Oh, if you use rust it's easy, because of x,y and z."

I'm sure if I were writing something like sshd from scratch today I'd use rust, but other than system-y sorts of applications like that, I don't know what I'd use it for.

I can relate to this. I'm a web developer and Rust (mostly the ecosystem) is not ready for web dev (the future looks bright though; tools like Diesel[1] or Rocket[2] look incredibly promising).

I recently found a place where Rust really shines: CLI tools. Rust's multi-platform story is great, and the clap[3] crate makes it really straightforward to create helpful CLI tools. The blog post «Using Rust for scripting»[4] is really insightful in this regard. Now I use Rust any time I would have used Ruby to automate stuff with a little script. It's obviously more verbose, but the type system and error-handling story help a lot.

[1]: http://diesel.rs [2]: https://rocket.rs [3]: https://clap.rs [4]: http://www.chriskrycho.com/2016/using-rust-for-scripting.htm...

I think that's perfectly fair. Rust isn't the best language for everything. Not everyone writes databases, codecs, filesystems, kernels, protocol implementations, etc.

Hi Steve,

We've met at Rails conf, in Tel Aviv many years ago. You seemed nice, so I'm motivated to try to give you my 2c.

I believe Rust can be more than safety but right now it isn't as you are imagining it.

I started off in the demoscene in the early 90's, and so I've had my share of assembly, C, and C++ as a kid. I've built demos, games, generative 3D art with 8-64kb executables, without access to the Internet or any form of proper knowledge, and it's funny how scared I am of attempting this again today when I'm much older, and much, much more experienced. I think one of the scary factors is the time-to-value ratio: I'm used to several orders of magnitude more productivity, and have much less time.

I think the most important takeaway from my background story that you can adopt is - make Rust playful. Fun is a base element for curiosity; it's what drives us when we're the most receptive - when we're kids - to learn, to experience, and to understand what will be in our toolbelt for the next dozen years.

Make rust playful. Make a game framework. Make it build on a mobile device. Make it build on a Pi (without docker, and the entire diarrhea of toolchain you have to use). Make it build for the Arduino. Cross compile it to Javascript (for fun!).

Often I try my hands at C and C++ again. When I do that, I'm not disappointed. I see a comforting place where I've spent my childhood at (and with C++ - some of my professional life). I'm happy with how C++ evolved and how CMake works. I'm not happy because productivity didn't change for the better that much. It still requires around the same amount of time to build things in C and C++ as it did years ago.

Where I do see a change is with Go. It gives me the same comfort, with an insane productivity factor. It also had fun baked in from the start - it can run on mobile, it can run games, and it can compile to what ever I want without a fuss.

I hope my thoughts will make a real change. I really do think this is the missing part with the Rust story.

Hey Jon,

Not Steve here, but I was wondering if you were being sarcastic here:

> Make rust playful.

Why am I asking if you're being sarcastic?

> Make a game framework


> Make it build on a mobile device


> Make it build on a Pi... without diarrhea

    curl https://sh.rustup.rs -sSf | sh
> Make it build for the Arduino

(An ARM based Arduino, AVR [which powers the original Arduinos] support is being worked on now)


> Cross compile it to Javascript


I'm not being sarcastic at all, and after reading my post again I can't see how it can be read that way. My apologies if my english isn't that good.

I'm familiar with piston, and by "making it build on a Pi" - my mistake, I meant make it cross-compile for a Pi. The rest of your examples are quite "it can be done" and not "it should be done".

The only thing I can say is - why not organize a game competition around piston? I'm sure Mozilla has the funds. This is what I mean by make it fun and make a game framework. I'm sure that other than piston you could find other Rust game frameworks, but I'm sure you can't find one that's as accepted as PyGame or LOVE. And then, the Cocos2d-x C++ port is being pitched "for performance" - why can't that be Rust?

Since you mentioned the demoscene, I think you may enjoy this fantastic 56kb demo written in Rust: https://www.reddit.com/r/rust/comments/597hhv/logicoma_elysi... :)

Wow! fantastic!

Hey there! That was a fun conf :)

I've said "Rust has a lack of whimsy" several times recently, we'll see what we can do :)

I am a systems programmer working at a company with a largely C++ codebase. We don't use Rust.

It has nothing to do with the reasons mentioned in the article or this thread.

Here are a list of requirements for us to use Rust:

  - Does it support all of the following architectures? x86_64, ARM, MIPS, powerPC
  - Does it support soft float?
  - Can it link against uclibc instead of glibc?
  - Can it link against arbitrary C libraries?
  - Can I use a C library's header.h file without having to hand-recode it to Rust?
  - Will the code I write now continue to compile 5 years from now?
Does Rust score a resounding "YES" on ALL of the above questions?

It mostly scores a "YES", though with footnotes. AFAIK all of these things are being worked on.

> Does it support all of the following architectures? x86_64, ARM, MIPS, powerPC

It does: https://doc.rust-lang.org/book/getting-started.html#tier-2. Granted, while tier 2 support is pretty good, it isn't the same level of support as the tier 1 platforms.

> Does it support soft float?

From running `rustc -C help`:

        -C            soft-float -- use soft float ABI (*eabihf targets only)
> Can it link against uclibc instead of glibc?

It can link against musl, but I'm not sure whether that would enable uclibc. Probably?

> Can it link against arbitrary C libraries?


> Can I use a C library's header.h file without having to hand-recode it to Rust?

You can use something like rust-bindgen (https://github.com/Yamakaky/rust-bindgen) which will work with a minimal amount of cleanup afterwards in my experience. Still not a resounding yes, though.

> Will the code I write now continue to compile 5 years from now?

Yes. See https://blog.rust-lang.org/2014/10/30/Stability.html for more about the plans/policies/etc.

Great, thanks for the reply and links.

Sounds to me like the current state is "doable," but not "solid." I can convince myself to try it out, but not my whole company. But, maybe this will improve over time.

That seems like a fair summary to me. Several of these items (better interop with existing libraries, for example) came up in the recent 2017 roadmap RFC conversation, and I would expect them to improve in the future.

Also, if you do try Rust and find other issues that would be blockers for your company to pilot a project, I know that the teams have been very eager to get feedback from "systems" shops on what would be needed to get them trying Rust out commercially.

1. Yes. https://forge.rust-lang.org/platform-support.html

2. Yes, though it's kind of awkward.

3. There were some patches in the queue, I forget if/how they landed.

4. Yes.

5. Sort of. You have to rely on a tool to do it; they work reasonably well but aren't perfect yet.

6. Unless you're relying on something with a soundness hole, which we reserve the right to fix, and will be trivial to upgrade for you.

All of that will come after rust gets an ISO standard (like C and C++). Joking aside, C++ will be around forever and to a lesser degree, C will be too. They have 45 years of experience and revision. 50 years from now they'll probably still run the world just as they do today. C++11 and beyond is just as safe as any language. Rust came to be about 10 years too late.

> C++11 and beyond is just as safe as any language.

That's just not accurate. This is UB, despite being done entirely using modern C++ API:

    #include <iostream>
    #include <vector>
    using namespace std;
    int main() {
        std::vector<int> v;
        v.push_back(1);
        for (int i : v) {
            v.push_back(i);  // reallocation invalidates the iterators driving the loop: UB
            std::cout << i << std::endl;
        }
        return 0;
    }
I ran it on https://code.sololearn.com/caR4o14MOCWr/#cpp and got this output:


For me the best thing about Rust is that I can create beautiful APIs that are hard to misuse.

With ownership rules I can enforce logic of how my API is supposed to be used, e.g. "you can call this method only once", "you can use that object only until you call destroy() on the parent" can be enforced at compile time. Same with levels of thread-safety of my code. Users of my libraries don't need to RTFM to use them correctly.
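A minimal sketch of the "call this method only once" pattern (the `Request` type here is made up for illustration): `send` takes `self` by value, so the compiler rejects a second call on the same value.

```rust
struct Request {
    body: String,
}

impl Request {
    fn new(body: &str) -> Request {
        Request { body: body.to_string() }
    }

    // Taking `self` by value moves the Request into `send`,
    // so the caller cannot use it (or call `send`) again.
    fn send(self) -> usize {
        self.body.len() // stand-in for actually sending the request
    }
}

fn main() {
    let req = Request::new("hello");
    let bytes_sent = req.send();
    // req.send(); // compile error: `req` was moved by the first call
    assert_eq!(bytes_sent, 5);
}
```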

Personally I don't have any use for Rust, other than occasionally dabbling on it, but I look forward to its adoption.

Because Rust appears to be one of the ways for newer generations to rediscover how systems programming was done in safer systems programming languages, before UNIX alongside C enjoyed adoption across the industry.

Also due to all the work on ownership management, that has already influenced future design decisions on C++, Swift, D, ParaSail and Pony.

And eventually with time, even those of us that don't use it, might enjoy having a safer stack to work with.

I like how "high level" Rust feels. It doesn't feel like C or something of that ilk.

Yes, the relentless focus on zero-cost abstractions is fantastic.

Could you please elaborate? In my opinion zero-cost abstraction is nothing but a catchy meme.

Could you provide an example, so I would try to implement such a wonder in one of classic languages I know, like SML or Scheme?

Here's a good example, fresh off of the first result from Google:


A simple one is Option. An Option<&T> is a pointer to some type T. Option is an enum, & is a reference, which has all of Rust's safety checking. This will all compile down to a regular old pointer, just like in C. Even though you have null checking and protection against dangling pointers, etc. It's all at compile time, zero run-time overhead.
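You can check that claim directly with `std::mem::size_of`: on a 64-bit target both come out to 8 bytes, because `None` is represented by the (otherwise impossible) null pointer.

```rust
use std::mem::size_of;

fn main() {
    // A reference can never be null, so Option<&T> reuses the null
    // bit pattern for None ("niche" optimization): no extra tag byte.
    assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());
    println!("Option<&u32> is {} bytes", size_of::<Option<&u32>>());
}
```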

So, typed pointers with an analog of the option type. Thank you.

Could we say that Swift or even SML with typed refs has similar safety and zero-cost abstractions?

Determining whether an abstraction is zero-cost or not is often buried deep in a language's implementation, so you'd need an expert to determine whether a given abstraction in Swift or SML qualifies. But regarding closures specifically, I can say that it's very unlikely that either Swift or SML have a zero-cost implementation, since having stack-allocated closures is an immense and deliberate decision in language design (AFAIK only C++ and Rust do this).

Similar safety, yes absolutely. Zero-cost abstraction not that I know of although I believe Swift has pretty good support for them and the long-term goal is to be in the same space as Rust (https://news.ycombinator.com/item?id=13077178)

While they may have some features that are zero-cost, it's not a defining feature of either language, and so they don't take it as far as Rust does. This was only one example, there are many more that those languages do not do, for various reasons.

> In my opinion zero-cost abstraction is nothing but a catchy meme.

It has a specific meaning: an abstraction which compiles to the same machine code as a low-level, unsafe, non-generic C-like implementation would. Without any additional pointer accesses or sanity checks required at runtime. The canonical example would be an iterator over a vector which compiles to the same machine code as a traditional C for loop.
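Concretely, the claim is that these two functions (my own toy example) should compile to comparable machine code; the iterator version pays nothing at runtime for its abstraction:

```rust
// High-level version: iterator adapters and a closure.
fn sum_of_squares_iter(v: &[i32]) -> i32 {
    v.iter().map(|&x| x * x).sum()
}

// Low-level version: an explicit index loop, as you might write in C.
fn sum_of_squares_loop(v: &[i32]) -> i32 {
    let mut total = 0;
    let mut i = 0;
    while i < v.len() {
        total += v[i] * v[i];
        i += 1;
    }
    total
}

fn main() {
    let v = [1, 2, 3, 4];
    assert_eq!(sum_of_squares_iter(&v), 30);
    assert_eq!(sum_of_squares_loop(&v), 30);
}
```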

> [Rust is] Technology from the past come to save the future from itself

So Rust is the Terminator but in reverse.

This blog post was inspired by this comment: https://news.ycombinator.com/item?id=13266627

A language's error-handling story is a central part of the experience of using that language. Especially if that language is a systems language.

Rust's error handling still seems unnecessarily verbose to me. A specific error type is only necessary if the program can handle the error in question. Rust has acknowledged this but I still don't want every function call in my code to be prefixed with "try!" or suffixed with "?".

Rust error handling is also inefficient: in C++, if no error happens, then no code is run, whereas in Rust, even if no error happens a return code must be checked.

I will sadly continue to happily use C++ until Rust improves its error-handling story.

> Rust error handling is also inefficient: in C++, if no error happens, then no code is run, whereas in Rust, even if no error happens a return code must be checked.

Just because you don't have to write the code doesn't mean that it doesn't exist! Exceptions are not a zero cost abstraction. Every C++ function call that can throw adds several instructions of overhead to save information about the call stack and check the return value, so even when you don't have an exception you still have the same amount of overhead as all Rust errors. Options are simple types that can often be optimized to the size of T in Some(T), and they're on the stack, so the worst case is you need an extra usize slot in your stack.

In the case of success, Rust is at least as efficient with errors as C++ but far more efficient when errors do occur.

> Every C++ function call that can throw adds several instructions of overhead to save information about the call stack and check the return value

That's not how it works (unless you're using 32-bit Windows). I wrote a long explanation using an old gcc on Linux at https://stackoverflow.com/questions/307610/how-do-exceptions...

Basically, calling a function that can throw doesn't save any information (other than what it already has to save to do the call), and it doesn't check any return value (other than any check on the return value you wrote yourself). Instead, the exception machinery uses compiler-generated tables, keyed to the return address, to direct the execution flow to the correct landing pad.

In C++ with the Itanium ABI, exceptions only add overhead when thrown, not on every function call. In Rust, the result of every function call is checked.

Agreed on this, but this is ignoring the much more subtle effect of unwinding: it introduces optimization barriers around every function that "might" throw.

As a basic example:

    x = 1
    some_call_that_may_throw()
    x = 2
With unwinding, the `x = 1` write must be preserved if whoever catches can observe it (and if that's unclear, it must be preserved conservatively).

The C++ STL is riddled with complex machinery to try to work around this sort of thing because everything can throw. Copies, moves, you name it! Not even reallocating an array can be done with realloc unless non-throwingness can be proven.

If you try to make throwing "opt in" like Java, then you end up with nasty "throwingness polymorphism" problems and end up with constructs like Swift's "rethrows". By contrast, error propagation through normal values naturally composes because it's really easy to write code that generically handles return values.

Good point. Some responses:

Function calls themselves can force something like "x = 1" if x is observable from the callee.

In the case where "x = 1" is not observable from the catch block, then there is no effect. In cases where you would use "try!" in Rust, in the equivalent C++ cases the catch block wouldn't be able see local variables since it would be in a parent function call.

I think LTO largely eliminates this issue altogether since function bodies are visible to the optimizer in that case.

I see, my knowledge of exceptions is quite out of date then. Regardless, both of those work out to the same performance because Rust's single compare will be optimized away by the branch predictor except when errors occur. In the error case, C++ exceptions would have a lot more overhead that you would have to explicitly opt into in Rust on a case by case basis (so no need to disable errors altogether).

Not every platform has a branch predictor, like low-cost embedded platforms, where bare metal languages like C++ are commonly used.

Error-case performance is negligible as it's much more rare (cf. Amdahl's law). If errors are common in a specific section of code, in C++ you have the option of using simple error-checking. You have no such counter-option in Rust.

> Not every platform has a branch predictor, like low-cost embedded platforms, where bare metal languages like C++ are commonly used.

You're moving the goal posts. Itanium processors do have branch prediction so comparing C++ exceptions on Itanium to Rust on embedded architectures is meaningless. In embedded Rust, the worst case scenario is a compare between a constant and a register or a byte on the stack. Most non-ARM embedded platforms I've written firmware for barely have a three cycle multiply so a one cycle compare will be negligible except in the most extreme of cases (where you'd just write inline assembly anyway, no matter the language). On those platforms, you spend more time restoring registers than comparing the result of a call. On ARM processors, conditional execution behaves like a branch predictor on short compare jumps so there's again no overhead (except with value types, in which case Option adds 1-4 bytes for the None case on the stack).

Either way, Rust's error checking overhead is irrelevant. If your embedded environment is so resource constrained that Rust's error checking is too much overhead for you, then you certainly can't use C++ exceptions. They are too unpredictable without prohibitively expensive physical testing and tedious tuning of worst case code paths (unless you don't care about hard real time guarantees at all). If it's not, then it doesn't matter whether you use Rust errors or C++ exceptions because the overhead is a rounding error. We're not talking about something like reading memory from cache vs RAM or static vs dynamic dispatch where the difference in performance can be 10-1000x, we're literally talking about single digit cycles here.

> If errors are common in a specific section of code, in C++ you have the option of using simple error-checking. You have no such counter-option in Rust.

Yes, you do. You can unwind the stack manually and jump to the error handler, just like the C++ compiler does for you with exceptions. This is how the panic macro is implemented so all you have to do is modify its implementation (compiler-builtins crate) and create your own catch_unwind function. With the amount of time you'd spend typing "if (result != null)" or implementing an option type in C++, you can write your own exception implementation using Rust macros and traits. Rust developers made the right decision: "simple" error checking is the default with syntactic sugar to ease its use and if you really need those few extra cycles, you can make your own crate or compiler plugin that will give you the same features without polluting the rest of the ecosystem with a default error handling mechanism that is opaque, complex, and unsuitable for resource constrained real time systems.

The Itanium ABI is basically the reference C++ ABI across platforms; it doesn't only apply to Itanium.

Rust's error checking overhead is not universally irrelevant; it's not uncommon for functions to be called in tight loops.

I won't address your other comments since they are subjective and/or dependent on circumstances.

The `?` operator is just syntax sugar for:

    let value = match attempt_something() {
        Ok(value) => value,
        Err(error) => return Err(SomeErr::Kind(error)),
    };

That way you only need to type

let value = do_something().map_err(SomeErr::Kind)?;

It was called for by the community because error checking is something that can happen multiple times within the same function. Simply passing the error up the chain and handling all your errors in a central location is highly convenient, especially when matching requires that you handle all cases.

Additionally, in no way is C++ more efficient at error handling than Rust. Rust is able to perform this with zero allocation costs.

I understand exactly what "?" does, I just don't want to have to type it every time I call a function that can fail. Additionally I don't want to have to distinguish between which functions fail and which don't, I'd rather assume all functions fail and make sure my code is robust enough to handle failure at any point. This is my preference.

C++ is more efficient at error handling than Rust. In the common case of no error happening, Rust must check a return code, but C++ doesn't have to because of how exceptions are implemented in the Itanium ABI "zero-cost exceptions".

There is no cost associated with Result/Option when no error is returned. The compiler is smart enough to understand when failure is not an option. Only when an error occurs does Rust act on it.

Additionally, it's nonsense to state that you don't want to handle errors when they occur, but want to write software that can handle failures at any point.

Software libraries should handle their errors properly in the event that an error is possible in a function so that users of that library can handle them. Binary applications should handle errors when they occur versus randomly throwing them into the air and hoping for the best.

The `?` operator is highly convenient for propagating errors from wherever they occur to wherever they need to be handled. You do not have to use the `?` operator if you have other means of handling the error. For example, maybe you want to use a default value in the event of a failure:

let value = attempt_value().unwrap_or(default);

Or maybe you want to keep trying until it succeeds:

    let value = loop {
        if let Some(value) = attempt_value() { break value }
    };

It's up to you how you handle your errors. The Option and Result types have a lot of methods for handling, such as and_then and or.

You're being too enthusiastic/wrong: there is a cost to Result and Option even in the good case. There still has to be a branch to check if there actually was no error. Result and Option are similar to error codes in C and have a similar cost/benefit trade-off, although of course with all the ADT/type-safety goodness.

Hmm no. Rust always issues a compare instruction after a function call to verify if it was successful or not. In a function that can fail depending on runtime conditions, the compiler is unable to elide such a check.

Also I don't think my error handling method is nonsense as it seems to work for all the code I write. If I don't handle an error at local function scope, my intent is to handle it higher up the stack.

The best part is that you can use Into/From conversion traits and use a different error type all through the call chain. A bunch of functions returning error type A and calling functions with error types B, C, D, and E can all just pass through the errors even though they return a different type. You implement the conversion trait once for each error and then it just works for all functions with no other boilerplate.

With this method, do you have to implement the From trait for every pair of errors that can be propagated up the stack?

I have almost the exact inverse view: when I write C++ code, I shed a tear if I can't disable exceptions and use something very analogous to Rust's Result type. I can't tell you how many times I've written some version of 'class Result<T> { union {int, T} }' (obviously, that's illustrative, given all the extra work that has to go into writing a proper tagged-union in C++) for use with C++ code at jobs I've had in the past. Rust's implementation here is great, in my opinion.

Just to clarify: in this specific case, I think we're talking about interface, not implementation.

Your preference is fair, since interface is a subjective issue. Personally, I find it more convenient to simply assume all code can encounter an error. Given that assumption, I find it redundant to add extra syntax to fail on unhandled errors.

As an aside: I'm curious as to why you'd implement a Result type in C++ as a tagged union where you have to manually branch on the tag, and not as an abstract class with two children.

The union avoids requiring an allocation for, say, returning a Result.

I am not sure what using an object vs using a struct with an enum + a union has to do with allocation. My C++ is rusty (no pun intended).

The equivalent of a function returning Result<T, E> in Rust would be returning something like unique_ptr<ResultBaseClass<T,E>> because the two subclasses have different sizes and thus can't be returned directly (or else one will be object slicing). The struct version stores everything inline from the start (it has a single static size) and so can be returned without needing a pointer.

My C++ skills have atrophied worse than I thought - I completely forgot you can't return an abstract class by value.

I find it easier to reason about.

Interesting. I tend to view ADT as syntax sugar for a shallow class hierarchy of small immutable objects with no encapsulation or behaviour.

> Rust error handling is also inefficient: in C++, if no error happens, then no code is run, whereas in Rust, even if no error happens a return code must be checked.

In this sense Rust's error handling is just like C (though harder to forget to check). Would you say C++ is more efficient than C in error handling?

Yes I would. In modern C++ we use exceptions and RAII for automatic cleanup.

Exceptions have high cost so I don't see where you are coming from. Rust's RAII goes further along than C++'s. C++ is playing a game of catchup to steal the move semantics from Rust.

According to http://stackoverflow.com/a/13836329/1517969 modern c++ exceptions are free when no throw occurs.

In addition, Rust Results/Options are free when no throw occurs. So the argument is moot.

No they aren't, you must issue a compare instruction to verify the result was successful.

Do you know whether branch prediction makes them effectively free, in places where the result is almost always successful?

Exception handling in C++ is absolutely zero-cost when no exception occurs. Look up the Itanium ABI. In Rust, even if no error occurs, the return value must still be checked.

Additionally, move semantics were borrowed from C++, not the other way around.

You know what I would like? (Yeah, speaking of entitlement :) )

A "safe C". No classes, no functional programming (except function pointers, maybe anonymous functions), no exceptions, just plain C but

1. Memory safe.

2. Optional array overflow detection (possibly unset by an operator in code, so [] is safe, but a^[] is unsafe). It's nice so that if you _really_ need speed in a certain block and you know it's safe, you can turn it off.

Maybe something like Go but with "manual" (c++ safe pointer style) memory management.

Rust seems (to my untrained eye) too similar to C++ (it's quite a complicated/powerful language).

I mentioned a few in this thread on top of Cyclone already referenced:


Note that, while C developers ignored Cyclone, Rust built on its safety techniques. It's not that people aren't making the safe C's. It's that C developers at large don't care about them. So few write them. Then they die.

More work in CompSci, industry, and FOSS is therefore going into static analysis or automatic transformations of C into safe code.

EDIT: You might also like a version of Modula-2 with C-like syntax. It was a small language for writing OSes, like C, but with modules & safe-by-default features.


There are safe C dialects already. The problem is they are not compiling into C, so you can't leverage the whole C ecosystem with them. Although I would suggest a different road - C code generation with meta languages, you'll get all the benefits of C with total control and all the high-levelness and safety you might need.

I second that recommendation. It's what PreScheme, Ada, Chicken, and Nim did among others. All brought benefits in practice.

You really have pretty much spec'ed Go, except that when you want something manual you either use some []byte hacks or use the unsafe package to speak to some C code. Even when you want manual memory management it's rare that you need the entire program manually managed; usually there's some sort of hot data structure or code or something that you'd like to take over but that doesn't mean you want to fiddle with every last little character string manually.

In particular, in the modern era, you're not going to find very many other languages that have anonymous functions but almost no functional programming capability like Go has. If you want to keep FP sealed away from your program... you know, for whatever reason, but to each their own... Go's got you about as well covered as you could hope for. Almost anything else that might come out is probably going to have FP constructs in it. (Especially after watching Go get raked over the coals for not having them.)

I don't even understand why you'd want to remove functional programming from a low level language like Rust. Its focus on zero cost abstractions and LLVM's optimizations mean that functional patterns like map or filter are most often inlined with your function at the call site, combining their overhead with the caller's stack and optimizing the code down to a single loop that is far less likely to have a cache miss or branch misprediction. This gives you complete control over performance and it's really easy to write your own iterators/operators to tweak the final result.

Functional programming gets a bad rap for performance and clarity (of the generated assembly) because most of them have gigantic runtimes like the JVM or Haskell but the way it is implemented in Rust is quite a bit different.

>I don't even understand why you'd want to remove functional programming from a low level language like Rust

Because I want a simple to read language?

It's not all about performance. C++ doesn't have bad performance and is safe, yet isn't used in Linux, nginx, or OpenBSD.


I've heard here how every team has their "subset" of the language that they use.

So you're trying to unseat C from its perch. Fine. I understand, it's too easy in C to have memory errors and pwn computers.

But if you want to fix that by forcing people to switch to a different paradigm, it aint gonna happen.

Out of the top 10 programming languages on GitHub, most (by far) are imperative.

If someone made a safe C, there could be hope in convincing Linus to slowly switch.

Now? There's 0 chance of that happening.

> Because I want a simple to read language?

So you want to make a language simpler to read by removing a paradigm that produces much more readable and less error prone code for a wide variety of very common use cases? I'm sorry but that makes no sense. You're saying that imperative code is always more readable, no matter what the problem is, and that is fundamentally untrue. Otherwise we wouldn't have a bazillion programming languages, paradigms, and syntax styles.

C++ doesn't have bad performance, it has worse performance. Linus has made some very public comments about why he doesn't want C++ in the Linux kernel and it isn't because it's hard to read or slow, but because it's too complicated especially when some of the more advanced features interact. Rust was built to avoid many of these downsides.

Rust doesn't force you to switch to a functional paradigm and I have no idea where you got that from. I explicitly said "remove functional programming from a low level language like Rust" not "switch to a functional programming language altogether."

There's a zero percent chance of a language switch in the kernel happening regardless of how similar a language is to C, so there's no sense in designing a language specifically to appeal to Linus. :P After all, if Linus isn't satisfied with C, he's more than capable of writing his own language (see also git).

> C++ doesn't have bad performance and is safe, yet isn't used in Linux, nginx, or OpenBSD.


Because those communities, logically due to the UNIX influence, are biased towards C.

C++ is heavily used on Android and ChromeOS native subsystems, Mac OS X drivers, Windows, Genode. Symbian and BeOS were written in C++.

>Because those communities, logically due to the UNIX influence, are biased towards C.

Not Linux (Warning: Typical Linus rant - http://harmful.cat-v.org/software/c++/linus).

Yet even Linus does use C++ nowadays:


There are a few attempts like Cyclone: https://en.wikipedia.org/wiki/Cyclone_(programming_language)

I typically explain that Rust's safety has side effects that are beneficial outside the obviousness of merely being safe. Namely, Rust's safety allows for migrating runtime checks into compile-time checks when constructing state machines around the ownership mechanism. Additionally, Rust's safety allows for additional compiler optimizations, because that safety can make more guarantees about the safeness of certain internal compiler flags. These have a speed benefit in addition to the already existing convenience benefits of not having to fight with broken code. I'd much rather fight a borrow checker than to fight through gdb's and lldb's to figure out why everything's totally broken.

Then I go on to explain that there's a lot of great features in Rust that make it more convenient than C/C++ in a wide range of general purpose programming tasks. The pattern matching, sum types (Option/Result), error handling strategy, and my favorite, the Iterator trait.

For the performance-minded C/C++ software developers, I explain that the safety that Rust provides via lifetimes and ownership allows me to comfortably perform extreme performance optimizations that would otherwise be incredibly nasty and error prone in C/C++ codebases. I can feel safe knowing that if it compiles, it's almost always going to work how I expect.

Then there's the whole test-driven development mentality that Rust provides with `cargo test` and `cargo bench`, and the data-oriented approach that Rust takes via protocol-oriented programming (traits), featuring ad-hoc polymorphism and generics, versus the inferior object-oriented approach. It's a bit more complicated to explain that, though.

Then there's the whole community/tooling/documentation pitch. I think this is one of the more important aspects of Rust as it demonstrates that Rust has a future and is more than just a niche. It has momentum which cannot be destroyed. It's a language invented during the age of the Internet and takes full advantage of that.

Rust does not employ guerrilla warfare tactics like other languages have done in the past. You can think of C/C++/Java/etc. as long-running empires. Guerrilla warfare-style tactics simply don't work against them. Emerging languages have always targeted C/C++/Java/etc. through guerrilla warfare tactics -- a loose community that mostly does their own thing. Rust is ambitiously establishing an empire of its own through its community -- a plethora of official projects and services: rustup, cargo, crates.io, docs.rs, users forum, internals forum, reddit thread, official documentation, docs team, an official book, GitHub hosting, etc. It's a recipe for success.

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact