It feels great, once one gets used to the borrow checker's messages.
One thing that could make the language better (and was mentioned in the post) is faster compilation.
Having programmed in Go, this may be one of its best points: just have a watcher that recompiles the program on change (and maybe runs the unit tests). Though it can be argued that not all types of programs benefit from such a workflow, it's still one of my favorite things.
Yes, compilation speed is one of the most important things for us. It's a lot harder for Rust than it is for Go, because Rust has a much more sophisticated suite of optimizations (necessary for Rust's goals) and zero-cost abstractions. You can do a lot better if you're willing to trade runtime performance for compilation speed, and we aren't. But I'm confident that we can get there with incremental compilation and more codegen/typechecking improvements.
There are plenty of situations where trading runtime performance for compilation speed is useful as a workflow tool. Not every build needs to be optimized to the fullest extent possible, either by Rust or by LLVM.
1. The language is designed for zero-cost abstraction, which means we have more work to do to make it compile fast. For example, any Rust compiler must solve subtyping constraints on lifetimes, regardless of the optimization level.
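To make the point concrete, here is a minimal sketch of the kind of lifetime subtyping constraint the compiler must solve on every build. The function and variable names are hypothetical; the point is only that proving both inputs outlive the returned reference happens at -O0 and -O3 alike:

```rust
// The compiler must verify that both borrows live at least as long as 'a,
// regardless of optimization level. This analysis cannot be skipped.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("borrow");
    {
        let b = String::from("checker");
        // The result may borrow from either `a` or `b`, so it must be
        // used while both are still alive -- a constraint the borrow
        // checker solves even in a debug build.
        let result = longest(&a, &b);
        println!("{}", result);
    }
}
```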
2. Getting an optimizing backend and a non-optimizing backend to work well is more work than just getting a non-optimizing backend to work well.
IIRC, you usually claim that the majority of the compilation time is actually spent in LLVM optimizations, not Rust typechecking. If that's true, it should be trivial to simply turn off LLVM optimization passes for dev builds.
Sure, turning off LLVM optimization passes helps a lot, usually speeding up the compile by more than 2x. Though note:
1. There are some technical issues in LLVM that prevent Rust's -O0 from being truly -O0 (LLVM's FastISel not supporting the invoke instruction being the main one here).
2. Go's compiler doesn't have much of an optimization IR at all. From what I understand, it compiles straight from the AST to Plan 9 assembly. The equivalent in Rust would be a backend that went straight from the AST to LLVM MachineInstrs (like the Baseline JIT in SpiderMonkey). Such a backend would be the ideal way to get fast whole-program compilation but would be a non-starter for optimization, so nobody has focused on it given how much work it would be. Incremental compilation would be a better use of people's time than maintaining an alternate backend, because only incremental compilation gives you algorithmic speedups (O(size of your crate) → O(size of the functions that changed since your last rebuild)).
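The algorithmic speedup from incremental compilation can be sketched with a toy model (names and the hash function here are purely illustrative, not rustc internals): cache per-function results keyed by a hash of the source, and redo work only for the functions whose bodies changed since the last build.

```rust
use std::collections::HashMap;

// Toy hash for illustration only -- not what rustc actually uses.
fn toy_hash(s: &str) -> u64 {
    s.bytes()
        .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

// "Compiles" a set of function sources, skipping anything whose hash
// matches the cache. Returns how many items actually needed recompiling:
// O(changed functions) instead of O(size of the crate).
fn compile(sources: &HashMap<String, String>, cache: &mut HashMap<String, u64>) -> usize {
    let mut recompiled = 0;
    for (name, body) in sources {
        let h = toy_hash(body);
        if cache.get(name) != Some(&h) {
            cache.insert(name.clone(), h); // pretend-recompile on change
            recompiled += 1;
        }
    }
    recompiled
}

fn main() {
    let mut cache = HashMap::new();
    let mut sources = HashMap::new();
    sources.insert("main".to_string(), "fn main() {}".to_string());
    sources.insert("helper".to_string(), "fn helper() {}".to_string());
    println!("cold build: {} recompiled", compile(&sources, &mut cache));
    sources.insert("helper".to_string(), "fn helper() { /* edit */ }".to_string());
    println!("rebuild: {} recompiled", compile(&sources, &mut cache));
}
```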
It still takes longer to compile Rust code than many people would like. That's why it's being worked on.
At -O0, Rust is _very_ slow. It's a common meme that people jump into IRC or the users' forum and ask "Why is this Rust code slower than Ruby?" Considering Rust is often chosen specifically for performance, this sometimes makes your application unusable.
Yeah, definitely. The compiler already has a few features recognising this: there are various levels of optimisation (0 through 3), and, more significantly, there's parallel codegen, where the compiler internally divides a single crate into multiple compilation units and optimises/runs code generation on them in parallel (this reduces how much each compilation unit sees, and so reduces optimisations like inlining etc.).
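For reference, both knobs mentioned above can be set per build profile. This is a hedged sketch: `opt-level` and `codegen-units` are documented Cargo profile keys, but the defaults and useful values vary by toolchain version.

```toml
# Cargo.toml -- illustrative values, tune for your project
[profile.dev]
opt-level = 0       # skip most LLVM optimisation work for fast iteration
codegen-units = 16  # more units = more parallel codegen, less inlining

[profile.release]
opt-level = 3
codegen-units = 1   # one unit lets LLVM see and optimise the whole crate
```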
While not every build needs to be optimized to the fullest extent possible, every build needs to be checked for correctness to the fullest extent possible. And since Rust does so much more than Go in terms of correctness, it's probably always going to be a no-contest between the two.
These kinds of checks take up very little of the overall build-time. Frankly, everything is dwarfed by LLVM optimization passes, which usually takes up about 50% of the time itself.
It's certainly true that the Rust compiler does quite a lot of analysis (more than Go), e.g. non-trivial type inference and borrow checking. In fact, a no-op build of libcore takes 12s for me, with 5s in type checking, 0.8s in borrow checking, and less than 3s interacting with LLVM (~1s of which is LLVM actually running). Turning on optimisations pushes the LLVM time out to 4s. Similarly, libstd takes 8s to build without optimisations and LLVM only runs for 1.5s (type checking itself is about the same).
(The plans to make the compiler more parallel and more incremental would improve these parts without affecting the quality of the generated code at all.)
Fair enough, and also, all of this changes every release, so it's possible the kinds of numbers I'm seeing are due to my projects and the time at which I last paid attention to this.