The internals of testing in Rust in 2018 (jrenner.net)
188 points by djrenren 7 months ago | 45 comments



Looking at this, I can now see why a test function nested in another function (see [0] and [1]) doesn’t work. I’ve tried dabbling with the Rust compiler to see if I could help fix this, but it takes 30 minutes to compile rustc and I couldn’t figure out how to reduce those compile times (each time I changed 1 line and wanted to test, I had to wait 30 minutes to build). How do people work on rustc (or internal Rust libraries, like libsyntax) without waiting for 30 minute builds after every minor change?

[0]: https://github.com/rust-lang/rust/issues/36629

[1]: https://github.com/rust-lang/rfcs/issues/612
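
To illustrate the limitation from those issues, here's a minimal sketch (hypothetical code):

    #[test]
    fn top_level() {}      // collected: a module-level item

    fn outer() {
        #[test]
        fn nested() {}     // compiles, but the generated test harness never
                           // sees it: collection only walks module-level items
    }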


Depending on what you're doing, there are often a few different ways to reduce compile times. First of all, if you're going to be running (small) tests, it may be worth enabling incremental compilation by passing `--incremental` or `-i` to x.py. This makes the resulting compiler slower, but the build itself faster.

If you're making type-level changes that won't alter functionality, or just want to make sure the whole compiler builds, you can run `x.py check -i`, the in-tree equivalent of `cargo check`.

If you know ahead of time that you're not changing the compiler's code generation or ABI (metadata, mangling, that sort of thing), then you can also do something like `x.py --keep-stage 0` and work entirely in stage 2 (you wouldn't pass `--stage 1` with this option). Alternatively, you can pass `--keep-stage 1` and run the stage 1 tests; this saves you a rebuild of std and test, which is often unnecessary after a compiler change.
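
To recap those invocations (a sketch; exact subcommand spellings may have shifted with the build system):

    ./x.py build -i                # incremental build: slower compiler, faster builds
    ./x.py check -i                # typecheck only, like `cargo check`
    ./x.py build --keep-stage 0    # reuse earlier stages when codegen/ABI is unchanged
    ./x.py test --keep-stage 1     # reuse stage 1 std/test and run the stage 1 tests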

However, even with these suggestions/hints, building the Rust compiler can be rather slow.


Hi, author here. I won't lie, it's rough. I definitely built this whole website and wrote the blog post during my "compile breaks". On the bright side, typechecking is super fast, so I only really have to wait when making algorithmic changes. Still, for anything that does code generation, you'll be tweaking the algorithm a good deal.


There's a `cargo watch` command that does what other languages with slow compilers or test frameworks do: run all your tests the moment you save.

This narrows the time between when you have something to test and when you remember to test it. It saves you wall clock time if not CPU time.

(I highly recommend combining watch with an editor or IDE that saves all buffers at the same time, instead of one at a time).
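
For example (assuming the cargo-watch plugin from crates.io is installed):

    cargo watch -x test    # re-runs `cargo test` on every save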


I don't believe this would work with the rustc compiler because, although it uses Cargo to build, it has a special process that bootstraps the compiler by building itself.


Would building the compiler on cloud instances help?

If the compiler is built in Rust, I'd guess it could efficiently use several dozen cores and gigabytes of RAM.


A debug build of the D compiler on my (ordinary) machine takes 24 seconds. The optimized build takes 132 seconds.


I presume you're measuring the speed of building DMD? An optimized build of DMD (measured by `time`) takes 150 seconds on my machine. Meanwhile, a build of LDC on this same machine takes a little over 21 minutes (not counting LLVM, which was pre-installed).

Since LDC uses DMD's frontend and LLVM as a backend, that suggests LLVM is the bottleneck here. Like LDC, Rust also uses LLVM as a backend. It's well known that LLVM isn't exactly the nimblest of backends, but it makes up for it with high-quality binaries.

There are tentative future plans for an alternative Rust backend based on Cretonne that, like DMD, would produce slower binaries in exchange for faster compilation: https://internals.rust-lang.org/t/possible-alternative-compi...


There seems to be something very strange going on with the way you are building LDC.

On the machine I am typing this message from [1], building DMD using DMD in optimized mode takes 58s, plus another 2s for druntime and 7s for Phobos (the two parts of the D standard library, for those not familiar).

A release build of LDC using LDC (CMake/Ninja), on the other hand, takes about 90s, which includes several versions of the runtime (only one is built for DMD by default), the JIT support libraries and a few ancillary tools. This is with debug info enabled, disabling it speeds up the build a bit further.

Since these are different codebases and the LDC build makes better use of the available cores, these are obviously not directly comparable if what you are talking about is compiler performance. However, a release build of DMD using LDC on the same machine takes about 45 seconds – i.e., it is faster and produces a faster binary.

Take these numbers with a grain of salt, obviously, as this was hardly a controlled benchmark. Your statement just conflicts with my experience working on both compilers, and the timings hopefully illustrate why.

---

[1] A 2015 MacBook Pro (i7-4980), so hardly anything out of the ordinary.


I have a very basic Gtk-rs based project with one dialog that takes about 15 minutes for a clean, unoptimized debug build from scratch.

The C++ version of it, using Gtkmm, is done in a couple of seconds, and in a few minutes when optimized.

Although I guess the Pango bindings are the ones to blame, as they seem to take ages to build.


> Although I guess the Pango bindings are the ones to blame, as they seem to take ages to build.

Could you file a bug against Pango? 15 minutes for a build is an extreme outlier.


I've emailed you; let's discuss where I should actually file the bug, since after doing a timed build I'm not so sure it's Pango.


> I presume you're measuring the speed of building DMD?

yes

> 150 seconds

It takes less time the second time, due to disk caching. My build machine still doesn't have an SSD because I'm lazy; an SSD would make it click along significantly faster.


The level of optimization they are not doing must be staggering.


Not really. The two main reasons LLVM and GCC do a better optimization job are that they have a better function inliner and loop unroller. Neither is particularly computation-intensive.

DMD does all the usual data flow analysis optimizations: strength reduction, copy propagation, loop unrolling, dead store elimination, live range analysis, etc.

https://github.com/DigitalMars/Compiler/blob/master/dm/src/d...

The optimizer / code generator is the same one that Digital Mars C++ uses, which was developed in the 1980s (!). Up until 2000 or so, the code it generated was about the same as gcc's, though DMC++ ran several times faster.

Since 2000, all my attention has been on the D front end, and so the optimizer wasn't getting the devoted attention it had earlier.

Just recently I finished converting the DMD optimizer from C to D, and am now doing the same for the code generator. This should make the code much more tractable.


Respectfully—and I truly mean this!—this is nowhere near enough to attain parity with GCC and LLVM. I don't see any SSA form, to name just one of many issues.

GCC 4.0, which introduced the SSA-based GIMPLE, was released in 2004, which is roughly around the same time you mentioned that the optimization quality started to diverge. This is not an accident. At the time, lots of compilers thought that they could continue to compete with GCC without doing SSA-based optimizations. GCC crushed them all one by one.


I forgot to add SROA to the list. DMD does a limited form of SROA that I wrote as a proof of concept; it should be generalized.

https://github.com/DigitalMars/Compiler/blob/master/dm/src/d...

I've also written a loop unroller, but it can be improved:

https://github.com/DigitalMars/Compiler/blob/master/dm/src/d...

My evaluation was based on comparing the code gen output of the various compilers, not looking at how they were achieved.

As for SSA, DMD's optimizer is based on an intermediate representation that is a binary tree. A binary tree is a form of SSA: each operator node is a value assigned once and never modified.

> Respectfully

No worries. I'm used to people telling me I can't do things :-)


Is the intermediate representation documented/described somewhere?



Well, I did look at it, but it was difficult to figure out how control-flow is structured. Is every “basic block” a single “el”, or is the CFG structured inside the binary tree? How are variables handled? Are there load expressions? Do you use something similar to phi-nodes to merge values from different branches? How do you do analysis? Do you walk over the expression tree and store the results (e.g. in constant propagation) on the nodes themselves? Do you use a separate cache/hashmap?


Basic blocks form a linked list (`block*`), and each basic block has a tree of `elem`s. Control flow is done with the basic blocks' "predecessor" and "successor" lists.
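
For illustration, a rough Rust rendering of that shape (my own sketch, not DMD's actual types):

    // Blocks in a list, each holding expression trees ("elems"),
    // with control flow recorded as predecessor/successor lists.
    enum Elem {
        Leaf(i64),                      // a constant or variable slot
        Op(char, Box<Elem>, Box<Elem>), // operator node: computed once and never
                                        // mutated, the SSA-like property above
    }

    struct Block {
        elems: Vec<Elem>,  // trees evaluated in this block
        preds: Vec<usize>, // indices of predecessor blocks
        succs: Vec<usize>, // indices of successor blocks
    }

    fn main() {
        // Two blocks: block 0 computes 1 + 2, then falls through to block 1.
        let blocks = vec![
            Block {
                elems: vec![Elem::Op('+', Box::new(Elem::Leaf(1)), Box::new(Elem::Leaf(2)))],
                preds: vec![],
                succs: vec![1],
            },
            Block { elems: vec![Elem::Leaf(0)], preds: vec![0], succs: vec![] },
        ];
        println!("{} blocks", blocks.len());
    }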


We're doing tons of optimizations, trust me. :) Working on compile times is a constant, high-priority request from users. rustc is also a Rust project, and so we want Rust programs to compile faster too. It's just non-trivial. If you've got a suggestion to make rustc compile quickly with little effort, we're all ears!

D, on the other hand, was always designed for fast compilation (if I recall correctly), and so it compiles quickly. Compile time is important to us, but not as important as other things; you can't be the best at every task, so you have to prioritize. For example, take two implementations of D: DMD and LDC. As I understand it, DMD uses its own codegen, so it compiles quickly but produces slower binaries. LDC, based on LLVM, compiles more slowly but produces faster binaries, for the exact same code. My understanding is that LDC is still faster to compile than rustc, but these are the kinds of tradeoffs you have to make.


As I replied in another comment, even C++ still compiles faster for non-template-heavy code.

I can iterate faster in UWP, C++/CX projects than what it takes to compile my dummy Gtk-rs project from scratch.

I'd guess the factors are the incremental compiler and linker, binary libraries (which Cargo doesn't support), and whatever the Pango bindings are doing during their compilation.


> even C++ still compiles faster for non-template-heavy code.

But Rust is a template-heavy language, and when it comes to compile times for template-heavy code, Rust is an order of magnitude faster than Clang in my experience (having worked a lot in and with Boost).


I think cargo supporting binary packages and sharing build dependencies between projects (cf. Maven's .m2 directory) would mitigate a /lot/ of Rust's build-time issues.


That only helps on the first build, really.


It brings the cost of trying an idea out down considerably. I've had a few times where I think "i know, I'll just try this idea out" and then I make a new project, add a few deps in Cargo.toml, hack some code, and then build. Oops, the first build takes half an hour. Then I think I should try new ideas out in Go or python. Alas.


That’s fair!


Also CI builds...


I have worked in many C++ codebases, and compile time was a problem _everywhere_, very probably the #1 problem. If you want to replace C++, you need to fix its most important problem. No, it's not "safety".


I love how the ambiguity has caused BOTH compiler teams to comment on what they're optimizing.


Amusing anecdote: back in 1982, I took a two-week compiler course from Hennessy and Ullman at Stanford. I decided to implement the data flow analysis techniques in my C compiler. It appeared as Datalight Optimum C.

The benchmarks used by the computer magazines were all dead code, and so the flow analysis deleted them. The journalists sadly concluded and wrote that DC was an utterly broken compiler :-(

This problem persisted until a year or two later when Borland and Microsoft caught up. They had much better marketing teams (!). The benchmarks got changed.


Well, at least today you'd be able to start a Twitter war and with a few strategically placed Reddit and HN posts you'd shame them into fixing their benchmarks :)


In those days, what could you do? Write a strongly worded letter to the editor? Have the magazine print a retraction two months later for their feature article? Not going to happen.


    ./x.py test src/test/ui --stage 1

And coffee, as that already takes around 5 to 20 minutes, depending on which part of the compiler you're touching.


Compiling GCC takes even longer.


But incremental builds only take a few seconds when bootstrapping is disabled...


You can do the same on rustc by passing `--keep-stage 0`. This doesn't work for codegen-like stuff, but for a lot of changes it does.


The accessibility of testing code in Rust is one of the best parts of the tooling IMO.


I find this pretty nice in D: you add `unittest {}` blocks, which are compiled into or out of your binary with a compiler flag. If you add appropriately formatted comments, ddoc also turns them into docstrings with the test as example usage.

https://dlang.org/spec/unittest.html


Rust takes a similar approach. For unit tests, you use the `#[cfg(test)]` attribute (/ pragma / directive) for conditional compilation, and `#[test]` to mark a function as a unit test, which is run whenever you run `cargo test`. Also, any Rust code in Markdown fences in documentation is, by default, also run by `cargo test`; you can disable running an individual code block by marking it `rust,no_run` instead of `rust`.
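
A minimal sketch of what that looks like (the crate name `mycrate` is made up):

    /// Adds two numbers.
    ///
    /// ```
    /// assert_eq!(mycrate::add(2, 2), 4);  // run by `cargo test` as a doc test
    /// ```
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    #[cfg(test)]        // compiled only when testing
    mod tests {
        use super::add;

        #[test]         // collected and run by `cargo test`
        fn adds() {
            assert_eq!(add(2, 2), 4);
        }
    }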


Shameless plug: you can have D-ish testing blocks with a macro I published on https://crates.io as `adhesion`: https://github.com/erichdongubler/adhesion-rs


Doc tests are their own thing, right? When I do `cargo test`, it looks like it's doing two passes: one for unit tests (like described in the article), and one for doc tests.


Yeah, doc tests are part of rustdoc; it essentially strips out the comments and generates test functions using libtest. I'm not very familiar with the internals, but they're here: https://github.com/rust-lang/rust/blob/master/src/librustdoc...
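
Roughly (an illustration, not rustdoc's literal output), a doc example with no `fn main` of its own:

    /// ```
    /// assert_eq!(2 + 2, 4);
    /// ```
    pub fn demo() {}

gets extracted and compiled as a standalone test program along the lines of:

    fn main() {
        assert_eq!(2 + 2, 4);
    }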


They still use this infrastructure; think of it as a specific way to construct this kind of test.



