If you're making type-level changes that won't alter functionality, or just want to make sure the whole compiler will build, you can run `x.py check -i` to get the equivalent of the `cargo check` you'd use outside of the compiler tree.
If you know ahead of time that you're not changing the code generation/ABI of the compiler (metadata, mangling, that sort of thing), then you can also do something like `x.py --keep-stage 0` and work entirely in stage 2 (you wouldn't pass `--stage 1` with this option). Alternatively, you can pass `--keep-stage 1` and run the stage 1 tests; this will save you a rebuild of std and test, which is often unnecessary after a compiler change.
However, even with these suggestions/hints, building the Rust compiler can be rather slow.
It shortens the time between when you have something to test and when you remember to test it. It saves you wall-clock time, if not CPU time.
(I highly recommend combining watch with an editor or IDE that saves all buffers at the same time, instead of one at a time.)
If the compiler is built in Rust, I'd guess it would efficiently use several dozen cores and gigabytes of RAM.
Since LDC uses DMD's frontend with LLVM as a backend, that would suggest LLVM is the bottleneck here. Like LDC, Rust also uses LLVM as a backend. It's well known that LLVM isn't exactly the nimblest of backends, but it makes up for it with high-quality binaries.
There are tentative future plans for an alternative Rust backend based on Cretonne that, like DMD, would produce slower binaries in exchange for faster compilation: https://internals.rust-lang.org/t/possible-alternative-compi...
On the machine I am typing this message from, building DMD using DMD in optimized mode takes 58s, plus another 2s for druntime and 7s for Phobos (the two parts of the D standard library, for those not familiar).
A release build of LDC using LDC (CMake/Ninja), on the other hand, takes about 90s, which includes several versions of the runtime (only one is built for DMD by default), the JIT support libraries, and a few ancillary tools. This is with debug info enabled; disabling it speeds up the build a bit further.
Since these are different codebases and the LDC build makes better use of the available cores, these are obviously not directly comparable if what you are talking about is compiler performance. However, a release build of DMD using LDC on the same machine takes about 45 seconds – i.e., it is faster and produces a faster binary.
Take these numbers with a grain of salt, obviously, as this was hardly a controlled benchmark. Your statement just conflicts with my experience working on both compilers, and the timings hopefully illustrate why.
 A 2015 MacBook Pro (i7-4980), so hardly anything out of the ordinary.
The C++ version of it, using Gtkmm, builds in a couple of seconds, or a few minutes optimized.
Although I guess the Pango bindings are the ones to blame, as they seem to take ages to build.
Could you file a bug against Pango? 15 minutes for a build is an extreme outlier.
> 150 seconds
It takes less time the second time, due to disk caching effects. Though my build machine still doesn't have an SSD because I'm lazy. An SSD would make it click along significantly faster.
DMD does all the usual data flow analysis optimizations - strength reduction, copy propagation, loop unrolling, dead store elimination, live range analysis, etc.
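To make one of those concrete, here is a hedged sketch of strength reduction as a manual source-to-source rewrite in Rust (illustrative only; this is not DMD's code, and a real optimizer does this on its IR, not on source):

```rust
// Before: a multiply inside the loop, the classic strength-reduction target.
fn unoptimized(n: u64) -> u64 {
    let mut sum = 0;
    for i in 0..n {
        sum += i * 8; // multiply recomputed every iteration
    }
    sum
}

// After: what strength reduction produces — the multiply becomes a running add.
fn strength_reduced(n: u64) -> u64 {
    let mut sum = 0;
    let mut i_times_8 = 0; // maintains i * 8 incrementally
    for _ in 0..n {
        sum += i_times_8;
        i_times_8 += 8;
    }
    sum
}

fn main() {
    // Both forms compute the same result.
    assert_eq!(unoptimized(10), strength_reduced(10));
    println!("{}", unoptimized(10)); // prints 360
}
```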
The optimizer / code generator is the same one that Digital Mars C++ uses, which was developed in the 1980s (!). Up until 2000 or so, the code generated was about the same as gcc's, though DMC++ ran several times faster.
Since 2000, all my attention has been on the D front end, and so the optimizer wasn't getting the devoted attention it had earlier.
Just recently I finished converting the DMD optimizer from C to D, and am doing the code generator. This should make the code much more tractable.
GCC 4.0, which introduced the SSA-based GIMPLE IR, was released in 2004, roughly the same time you mentioned that the optimization quality started to diverge. This is not an accident. At the time, lots of compilers thought they could continue to compete with GCC without doing SSA-based optimizations. GCC crushed them all, one by one.
I've also written a loop unroller, but it can be improved.
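For readers unfamiliar with the transformation, here is a hedged sketch of what loop unrolling does, written as a manual rewrite in Rust (again illustrative, not the implementation referenced above): the body processes four elements per iteration, so the loop-control overhead is paid a quarter as often.

```rust
// Unrolled-by-4 sum: four adds per loop-control check, plus a cleanup
// loop for the leftover elements that don't fill a full chunk.
fn sum_unrolled(xs: &[u32]) -> u32 {
    let mut total = 0u32;
    let chunks = xs.chunks_exact(4);
    let rest = chunks.remainder(); // leftover tail, fewer than 4 elements
    for c in chunks {
        // unrolled body
        total += c[0] + c[1] + c[2] + c[3];
    }
    for &x in rest {
        total += x; // cleanup loop
    }
    total
}

fn main() {
    assert_eq!(sum_unrolled(&[1, 2, 3, 4, 5, 6]), 21);
    assert_eq!(sum_unrolled(&[]), 0);
}
```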
My evaluation was based on comparing the code gen output of the various compilers, not looking at how they were achieved.
As for SSA, DMD's optimizer is based on an intermediate representation that is a binary tree. A binary tree is a form of SSA: each operator node is a value assigned once and never modified.
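To illustrate the point (a hedged sketch in Rust, not DMD's actual IR): in an expression-tree representation, every operator node is constructed exactly once and never mutated afterwards, which is the single-assignment property.

```rust
// A toy expression-tree IR. Each node is a single, immutable definition.
enum Expr {
    Const(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// An evaluator only ever reads nodes; nothing is reassigned.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Const(c) => *c,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // (2 + 3) * 4
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Const(2)), Box::new(Expr::Const(3)))),
        Box::new(Expr::Const(4)),
    );
    assert_eq!(eval(&e), 20);
}
```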
No worries. I'm used to people telling me I can't do things :-)
D, on the other hand, was always designed for fast compilation (if I recall correctly), and so it compiles quickly. It's important to us, but not as important as other things. You can't be the best at every task, so you have to prioritize. For example, take two implementations of D: DMD and LDC. As I understand it, DMD uses its own codegen, so it compiles quickly, but produces slower binaries. LDC, based on LLVM, compiles more slowly, but produces faster binaries, for the exact same code. My understanding is that LDC is still faster to compile than rustc, but these are the kinds of tradeoffs you have to make.
I can iterate faster in UWP C++/CX projects than it takes to compile my dummy Gtk-rs project from scratch.
I guess the relevant factors are the incremental compiler and linker, binary libraries (which Cargo doesn't support), and whatever the Pango bindings are doing during their compilation.
But Rust is a template-heavy language, and when it comes to compile times of template-heavy code, Rust is an order of magnitude faster than Clang in my experience (having worked a lot in and with Boost).
The benchmarks used by the computer magazines were all dead code, and so the flow analysis deleted them. The journalists sadly concluded, and wrote, that DC was an utterly broken compiler :-(
This problem persisted until a year or two later when Borland and Microsoft caught up. They had much better marketing teams (!). The benchmarks got changed.
And coffee, as a build already takes around 5 to 20 minutes depending on which part of the compiler you're touching.