The Rust compiler is still getting faster (blog.mozilla.org)
324 points by nnethercote 30 days ago | 96 comments



The Rust compiler now has features that few or no C and C++ compilers have: incremental compilation within a single translation unit, pipelined compilation, multi-threaded and lazy query-based compilation, ...

Implementing each of these features has required whole-program refactorings of a large-scale codebase, performed by a few individuals while hundreds of other developers were simultaneously evolving the software and implementing new features.

Having been a C++ programmer for over 10 years, I'm convinced that none of these refactorings would have paid off in C++, because of how time-consuming it would have been to track down all the bugs introduced by them.

Yet they do pay off in Rust because of my favourite Rust feature: the ability to refactor large-scale software without breaking anything. If a non-functional change compiles, "it works", for some definition of "works" that's much better than what most other languages give you (no memory unsafety, no data races, no segfaults...).

Very few programming languages have this feature, and no low-level programming language except for Rust has it.

Software design is not an upfront-only task, and Rust lets you iterate on a software design as you better understand the problem domain, or as the constraints and requirements change, without having to rewrite things from scratch.


The Rust compiler team continues to set ambitious goals. Last I checked, they wanted to extract much of the compiler front end into independent crates that can be reused by the Language Server. That's a pretty daunting refactoring, but the way they're going, it seems achievable.


The one thing I’ve always wanted from a language runtime is for the data structures used by the compiler to be exposed in the stdlib. E.g. every compiler uses control flow graphs; so why doesn’t every language give me a batteries-included digraph ADT, that has every feature you’d need to implement a control flow graph, such that the compiler’s CFG is just a stdlib digraph instance? It’d be a great boon to writing your own compilers, interpreters, JITs and static-analysis tools in said language.

(One language that does some of this is Erlang, but AFAIK the stdlib data structures like digraphs, sets-of-sets, etc. aren’t actually the ones the compiler uses, but are rather there for use by static verification tools like Dialyzer. Which means that the Erlang digraph doesn’t know how to topsort itself, even though there’s a module in the Erlang compiler application that does topsort on digraphs. Still feels like being a second-class citizen relative to the runtime’s favoured compiler.)
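To make the wish concrete, here is a minimal sketch (purely hypothetical, not any existing stdlib API) of what a batteries-included digraph with a built-in topsort might look like in Rust; the type and method names are invented for illustration:

    use std::collections::HashMap;

    /// Hypothetical "stdlib" digraph: nodes carry arbitrary payloads.
    struct Digraph<N> {
        nodes: Vec<N>,
        succs: HashMap<usize, Vec<usize>>, // node index -> successor indices
    }

    impl<N> Digraph<N> {
        fn new() -> Self {
            Digraph { nodes: Vec::new(), succs: HashMap::new() }
        }

        fn add_node(&mut self, payload: N) -> usize {
            self.nodes.push(payload);
            self.nodes.len() - 1
        }

        fn add_edge(&mut self, from: usize, to: usize) {
            self.succs.entry(from).or_default().push(to);
        }

        /// Kahn's algorithm; returns None if the graph has a cycle.
        fn topsort(&self) -> Option<Vec<usize>> {
            let mut indegree = vec![0usize; self.nodes.len()];
            for targets in self.succs.values() {
                for &t in targets {
                    indegree[t] += 1;
                }
            }
            let mut ready: Vec<usize> =
                (0..self.nodes.len()).filter(|&n| indegree[n] == 0).collect();
            let mut order = Vec::with_capacity(self.nodes.len());
            while let Some(n) = ready.pop() {
                order.push(n);
                for &t in self.succs.get(&n).into_iter().flatten() {
                    indegree[t] -= 1;
                    if indegree[t] == 0 {
                        ready.push(t);
                    }
                }
            }
            if order.len() == self.nodes.len() { Some(order) } else { None }
        }
    }

    fn main() {
        // A tiny diamond-shaped "CFG": entry branches, both arms rejoin at exit.
        let mut cfg = Digraph::new();
        let entry = cfg.add_node("entry");
        let then_bb = cfg.add_node("then");
        let else_bb = cfg.add_node("else");
        let exit = cfg.add_node("exit");
        cfg.add_edge(entry, then_bb);
        cfg.add_edge(entry, else_bb);
        cfg.add_edge(then_bb, exit);
        cfg.add_edge(else_bb, exit);
        assert!(cfg.topsort().is_some()); // acyclic, so a topological order exists
    }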


> why doesn’t every language give me a batteries-included digraph ADT, that has every feature you’d need to implement a control flow graph, such that the compiler’s CFG is just a stdlib digraph instance?

Rust does give you access to its internal data-structures on nightly. These change quite often, so you will need to update code that uses them pretty much every week.
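A minimal sketch of what that looks like in practice (the crate name and setup details below are an assumption; they are unstable and change over time, which is exactly the point):

    // Needs a nightly toolchain with the compiler's own libraries available
    // (e.g. via the rustc-dev rustup component); everything below is unstable.
    #![feature(rustc_private)]
    extern crate rustc_data_structures;

    use rustc_data_structures::fx::FxHashMap;

    fn main() {
        // FxHashMap is the compiler's own hash map (a std HashMap with a faster,
        // non-DoS-resistant hasher).
        let mut uses: FxHashMap<&str, u32> = FxHashMap::default();
        uses.insert("basic_blocks", 4);
        println!("{:?}", uses);
    }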

Why don't many languages do this in some stable form? Because that sets the internal data-structures and APIs of the compiler in stone forever, which makes it infinitely harder to improve the compiler and implement new language features.


Java and .NET do in some form.

Then you have Eiffel, Smalltalk and Lisp variants.


Rust does expose _all_ of its compiler-internal data-structures though, not just some subset of them.

Scheme and Lisp, for example, only expose their AST. If you only care about the AST, you can access it from stable Rust, and there are great libraries for working with it, doing AST folds, semi-quoting, etc.
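For example (an ecosystem crate, not the compiler's own API; the version and feature flags shown are an assumption), the syn crate parses Rust source into an AST on stable:

    // Cargo.toml (assumed): syn = { version = "1", features = ["full"] }

    fn main() {
        let src = "fn add(a: i32, b: i32) -> i32 { a + b }";
        // Parse a whole file's worth of items into syn's AST on stable Rust.
        let ast: syn::File = syn::parse_file(src).expect("parse error");
        for item in &ast.items {
            if let syn::Item::Fn(func) = item {
                println!("found function: {}", func.sig.ident);
            }
        }
    }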

The OP wanted to work on the CFG graph. You could write a library to compute the CFG from the AST, but you don't have to because Rust exposes this as well. The CFG data-structures are much more tied to the intermediate representations of the Rust compiler, and these do change over time as new features are added to the language.

Some tools do use the compiler-internal CFGs though. For example, rust-clippy is a linter that's built on top of most of the compiler-internal data-structures, not only type checking but also CFGs. rust-semverver, a tool that detects semver-breaking changes between the last released version of a library and the current one, is built on top of the type-checking data-structures and can deal with all kinds of generics (types, lifetimes, etc.).

These tools are typically tied to particular versions of the nightly compiler and do break often, but rust-clippy, for example, is distributed via rustup, so you always get a version that works with whatever nightly compiler you have, and you also get a version that "magically" works with a stable Rust compiler.

Other people have built all sorts of tools on top of this, from Rust interpreters to instrumentation passes that compute the maximum stack size requirement of embedded applications, e.g. by using the CFG to find the deepest possible call chain of a program and the size of the stack frames along it.


I want to make a distinction between three kinds of data structures.

1. Implementation-specific thing that has no formal equivalent. Structs of structs of structs, with business logic intermingled with the structure. Plenty of examples of this. I’m not suggesting anyone put these in the stdlib; that’d be silly.

2. ADT with no formalism behind it, that implements a particular set of behaviours “for best performance”, with no guarantee about the implementation or the time/space complexity, and a set of exposed operations that only state what they do, not how they do it, such that you can’t guess what algorithm a given operation is going to use.

Example of #2: Objective-C’s NSDictionary. It only guarantees that it “acts like” a dictionary; it doesn’t guarantee that it’s O(1), because being O(1) with a high constant can actually be worse for performance in some cases. So instead, it makes no guarantees, and tries to optimize for performance by actually switching between different implementations as the data held reaches different size thresholds.

I’m not suggesting anyone put these in the stdlib either, if they currently live in a specific application, because this widens the scope of their stability requirements. Right now only the compiler maintainers can optimize this ADT into doing something entirely different, and then ensure that the compiler (the only consumer) is happy with the result; if the ADT were available in the stdlib, they wouldn’t be able to test all consumers, so they’d have to be much more conservative with their changes, lest they break an edge-case. (Changing how sorting an array ADT works, for example, can break code that assumed that sorting either nearly-sorted or entirely-randomized sets was optimal.)

And, of course, sometimes in a compiler the Control Flow Graph ADT will be of this type. If it is, fine, don’t export it. But usually—because of how compiler maintainers interact heavily with compiler theorists—it’s instead the third kind:

3. Formal data structures, where the name comes from a paper which defines the expected behaviour and a base-level implementation; where either that implementation, or another paper’s pure improvement on said implementation (without changing any of the semantics) is what you can expect to find. As well, the names of all algorithms implemented against the ADT are also well-defined in papers, such that you can know what algorithm you’re using by the name of the function. (Usually, in these cases, you’ll have multiple algorithms that do the same thing differently living as sibling functions in the same module.)

Examples of #3: the geometric primitives and index search-tree types that PostGIS exposes to C-level code. Any implementation of vector clocks. Or, my favourite: regular expressions (i.e. deterministic and nondeterministic finite automata.)

Also, example to make the naming point: for a particular use-case, the use of a segment tree might be an optimization over use of an interval tree. But in code that uses these types of formal data structures, you’d never find an ADT named “IndexTree” that could happen to be either; instead, you’d expect separate SegmentTree and IntervalTree implementations, and then maybe a strategy-pattern ADT wrapping one or the other. But the concrete implementation of the formalism is very likely to exist, because people tend to just sit down and implement these things by translating pseudocode in papers into modules; and then tend to want things to stay that way, because keeping a 1:1 mapping from the module to the paper, and then linking the paper in module-doc, is one of the only good ways to get new maintainers to understand what the heck has been implemented here.
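As one concrete instance of such a paper-shaped module (a minimal sketch for illustration, not a claim about any particular stdlib), here is a bare-bones segment tree for range sums, translated more or less 1:1 from the textbook description:

    /// Iterative segment tree over i64 sums: leaves hold the input values,
    /// each internal node holds the sum of its two children.
    struct SegmentTree {
        n: usize,
        tree: Vec<i64>,
    }

    impl SegmentTree {
        fn new(values: &[i64]) -> Self {
            let n = values.len();
            let mut tree = vec![0; 2 * n];
            tree[n..].copy_from_slice(values);
            for i in (1..n).rev() {
                tree[i] = tree[2 * i] + tree[2 * i + 1];
            }
            SegmentTree { n, tree }
        }

        /// Point update: set values[i] = v, then fix the ancestors.
        fn update(&mut self, mut i: usize, v: i64) {
            i += self.n;
            self.tree[i] = v;
            while i > 1 {
                i /= 2;
                self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1];
            }
        }

        /// Sum over the half-open range [l, r) in O(log n).
        fn query(&self, mut l: usize, mut r: usize) -> i64 {
            let mut sum = 0;
            l += self.n;
            r += self.n;
            while l < r {
                if l % 2 == 1 {
                    sum += self.tree[l];
                    l += 1;
                }
                if r % 2 == 1 {
                    r -= 1;
                    sum += self.tree[r];
                }
                l /= 2;
                r /= 2;
            }
            sum
        }
    }

    fn main() {
        let mut st = SegmentTree::new(&[1, 2, 3, 4, 5]);
        assert_eq!(st.query(1, 4), 2 + 3 + 4);
        st.update(2, 10);
        assert_eq!(st.query(0, 5), 1 + 2 + 10 + 4 + 5);
    }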

It’s #3 that I would suggest is a good candidate for stdlib inclusion. These data structures don’t change in a way that breaks their guarantees, because their existence is predicated on a particular formalism, and nobody cares about improvements to the performance of a formalism that break the formalism. (Fun analogy: nobody would care about improvements to speed running a video game that required changing the code of the game. You’re trying to optimize that game, not some other one!)


> It’s #3 that I would suggest is a good candidate for stdlib inclusion

You started arguing that the Rust compiler should expose its internal data-structures, which it does, and somehow ended arguing that the Rust standard library should expose spatial data-structures for geometry processing.

I have no idea how you got from one to the other, but writing a huge wall of text full of disorganized thoughts shows very little appreciation for the time of those participating in this discussion thread, so I won't interact with you anymore.


This is unnecessarily mean, and ironically you're the one showing little appreciation since you didn't even read derefr's initial post closely enough. They argued that the data structures (maps, graphs, etc) and algorithms used by the compiler could be included in the stdlib, not that the particular internal instantiations of them should be accessible. The whole point was the compiler uses a directed acyclic graph, so why isn't there a nice directed acyclic graph in the standard library?


The argument is correct in that the concept of vectors, deques, hash-tables and graphs hasn't changed much and an API for them could be exposed.

The standard library exposes an API for some of them, and you can access the internal data-structures of the Rust compiler by adding a line to your program to import them (requires unstable Rust) or by adding a line to your Cargo.toml to load them from crates.io.

The argument is flawed in assuming that these can be set in stone in any practical way. Rust binaries do not run on a whiteboard; they run on real hardware, which means that there are thousands of different ways to efficiently implement these data-structures depending on what your precise use case is, and most of them affect their API design. For example, Rust is lucky enough to provide a hash table with a Google SwissTable-compatible API, while C++'s std::unordered_ containers are not, and cannot be upgraded.

The argument is also flawed in assuming that Rust is a static language. It isn't: the language is evolving, and as it evolves, the requirements on the internal data-structures change, resulting in changes to the algorithms and data-structures being used (not only to their implementations).

For example, at the end of last year Rust shipped non-lexical lifetimes, which use a completely different borrow-checking algorithm, data-structures, etc. The old ones just are not compatible with it.

Then there is also the fact that many people work on improving the performance of the Rust compiler. This means that the graph data-structures used are often not generic, but exploit the graph structure and the precise type of the graph nodes. As these data-structures are parallelized, made cache-friendly and allocation-free, or made to exploit platform-specific features like thread-locals and atomics, their APIs often change. Also, often somebody just "discovers" that there are better algorithms and data-structures for implementing one pass, and they just change them.

Putting these in the standard library would incur a massive cost, since it makes them almost impossible to evolve inside the Rust compiler, adds a huge maintenance burden, etc. And for what value? If you want to use precisely what the Rust compiler uses, you can already add a line to your program to import it from somewhere that makes it clear these are not stable. If you want general graph data-structures, chances are that your graph won't look like the ones used by the Rust compiler. There are hundreds of crates for manipulating graphs on crates.io, depending on how big the graphs are, whether you can exploit some structure, the common operations that you want to do with them, etc.
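For example, with one such crate (petgraph; the version and API shown here are an assumption based on its documentation), a general-purpose digraph with a topological sort is a few lines away:

    // Cargo.toml (assumed): petgraph = "0.4"
    use petgraph::algo::toposort;
    use petgraph::graph::DiGraph;

    fn main() {
        let mut g: DiGraph<&str, ()> = DiGraph::new();
        let parse = g.add_node("parse");
        let check = g.add_node("typecheck");
        let codegen = g.add_node("codegen");
        g.add_edge(parse, check, ());
        g.add_edge(check, codegen, ());
        // toposort returns Err(...) if the graph contains a cycle.
        let order = toposort(&g, None).expect("graph is acyclic");
        for idx in order {
            println!("{}", g[idx]);
        }
    }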

Sure, on the whiteboard, all of them probably have the same or similar complexity-guarantees, but on real hardware constant factors make a big difference for real problems.


These are not disorganized thoughts, and derefr is still arguing for the same thing as before.


Salsa is an example of logic they are writing for the Rust compiler that is being kept in a generic crate for anyone to use.

See https://github.com/salsa-rs/salsa


Surely the risk there is exposing implementation details in the standard library? A separate library that's shared with the compiler may make sense, but going into the stdlib tends to come with a promise of stability.


The thing is a) these structures change all the time and b) published APIs shouldn't change all the time. Which is why Erlang uses a public model and a private model and why most don't bother.

Doesn't mean I wouldn't like it myself, but it is a genuine drag on continued compiler development.


There are business-logic data structures, and then there are formal data-structures from compiler theory. I only really want access to the latter.

The nice thing about these formal data structures (or formal ADTs, I should say, though the papers themselves never tend to think of themselves as introducing an ADT) is that successive papers that find better algorithms, or make small changes to the data structure, pretty much always hold the set of operations required of the data structure constant (since that’s the only way to be able to compare these successive implementations on their performance/complexity.) So a “family” of papers about a given data structure is really a family of papers about an implicit ADT whose defined interface is the set of operations from the first paper that later papers compare themselves on.

Personally, I feel like all “formal” data structures of this type could benefit from stdlib inclusion (e.g. interval trees; ropes; CRDTs; etc.) but that’s just a personal preference.

But when a compiler is a built-in part of a runtime, with its modules already exposed as stdlib modules in a sense (being there as modules in a package namespace that you don't have to retrieve, and which is always locked to the version of the compiler installed), such that people can actually reach in and use the compiler's ADT for formal data-structure Foo (and people do!), and yet this growing dependency doesn't make the language maintainers worry that they're creating an implicit stdlib module by exposing the compiler internals this way while forgetting to track its usage for breaking changes... then I start to feel like it's a bit ridiculous.

To me, language projects should be run under the Microsoft philosophy: you don’t get to decide what your stable interfaces are. If your customers are all using and depending on feature X, then feature X should be treated as a stable interface. It’s up to you as a language maintainer to discover your stable interface requirements, and promote the things people are “treating as stable”, into being managed as stable in a language-evolutionary project-management sense.


> should be run under the Microsoft philosophy: you don’t get to decide what your stable interfaces are

The philosophy only works because private APIs are hidden enough to reduce the effort required in maintaining backward compatibility. The fact that MS works (or rather, worked; it was much more vital in the Windows 95 transition) hard to maintain compatibility even for private dependencies doesn't mean that "private" doesn't mean anything.

I think a bigger (and certainly more visible) element of the MS philosophy is in maintaining compatibility for public, stable APIs. For example, look at how Windows 10 still has all these old Control Panel dialogs. That's not incidental compatibility with undocumented APIs; it's designed for extensibility, the maintenance of which holds the whole platform back to a certain degree.

I worked on the Delphi compiler and runtime library. Versioning was an exceedingly important concern. Patch releases couldn't break APIs because that could affect our partner market: if we affected the dependencies of third-party components, those third parties would need to re-release. The ecosystem only worked because there was a clear separation between public and private. You need private APIs because you need the flexibility to change things; if you can't change things, you must create everything perfectly the first time, and that's just not possible.

You might want to get access to the compiler internals, but if you build something nifty with that access, and that nifty thing gets widespread use, you will hold back the entire ecosystem. You will be the cause of warts like Windows 10's 20 year old Control Panel dialogs.


I don't think it's that clear.

If you'd like to do topological sorting of a fixed set of tasks, you'd use one algorithm. Online topological sorting (that is, delta-based algorithms), as used in databases, would use a different algorithm and also has different performance constraints.

I could see that in the long run we'll eventually figure out how to abstract all of the "complete/delta" stuff. I don't think we have figured this out yet.


> Why doesn’t every language give me a batteries-included digraph ADT?

I think such a type would be less useful than you'd think, for precisely the same reason why a linked list in the stdlib is pretty much useless: almost always, you don't want the stdlib allocating nodes; you want an intrusive data structure instead. Really, the problem is that the compiler needs to store extra data with the nodes that the stdlib can't know about, but you don't want two separate types `stdlib::GraphNode` and `compiler::ControlFlowNode` because you usually need to be able to convert between these types in both directions. (one direction can be handled by embedding one of the types in the other; but the reverse direction will require overhead for an extra pointer, or horribly unsafe pointer arithmetic)

Of course in Rust, there could still be a digraph trait and the stdlib could still provide generic algorithms.
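A minimal sketch of what that could look like (hypothetical trait and names, not a real stdlib or compiler API): the graph only has to enumerate successors, node storage stays with the caller, and the generic algorithm is written once:

    use std::collections::HashSet;
    use std::hash::Hash;

    /// Hypothetical digraph abstraction: just enough to walk the graph.
    trait Digraph {
        type NodeId: Copy + Eq + Hash;
        fn successors(&self, n: Self::NodeId) -> Vec<Self::NodeId>;
    }

    /// Generic algorithm against the trait: nodes reachable from `start`
    /// (plain iterative depth-first search).
    fn reachable<G: Digraph>(g: &G, start: G::NodeId) -> HashSet<G::NodeId> {
        let mut seen = HashSet::new();
        let mut stack = vec![start];
        while let Some(n) = stack.pop() {
            if seen.insert(n) {
                stack.extend(g.successors(n));
            }
        }
        seen
    }

    /// Example "CFG": basic blocks are indices into an adjacency list; a real
    /// compiler would keep its own arena-allocated node types instead.
    struct Cfg {
        succs: Vec<Vec<usize>>,
    }

    impl Digraph for Cfg {
        type NodeId = usize;
        fn successors(&self, n: usize) -> Vec<usize> {
            self.succs[n].clone()
        }
    }

    fn main() {
        // Block 0 -> 1 -> 2, plus an unreachable block 3 that also jumps to 2.
        let cfg = Cfg { succs: vec![vec![1], vec![2], vec![], vec![2]] };
        let live = reachable(&cfg, 0);
        assert!(live.contains(&2) && !live.contains(&3));
    }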

Though it's also not so rare in compilers that nodes are members of multiple graphs simultaneously (with different edges in each, e.g. control flow nodes are typically not just part of the control flow graph, but also belong to a dominator tree). It's non-trivial to create a graph abstraction that can handle all these cases while remaining efficient (you don't want to put nodes in a HashSet just to check whether a graph algorithm already visited them), so it's not surprising that compiler developers don't bother and just write the algorithm directly for their particular data structures. In the end, most graph algorithms are only about a dozen lines, much simpler than the abstractions that would be required to re-use them across completely different graphs.


> In the end, most graph algorithms are only about a dozen lines, much simpler than the abstractions that would be required to re-use them across completely different graphs

I guess that depends on the language; in dynamic, pure-functional languages (like Erlang) there’s nowhere you can go for further efficiency beyond “digraph expressed as labelled V and E sets held in dictionaries”, so the data structures themselves actually are quite generic, and you would never really need a separate specialization of them.

You’re probably right that in systems-level languages where you’re effectively programming a pointer / RAM word machine, you can get better specialized data structures per use-case, so writing the self-hosting compiler isn’t going to just “throw off” a handy reusable stdlib digraph library as a side-effect.

But in the cases where you’re not using a systems-level language—i.e. the cases where the graphwise algorithms you’re writing are being written against the same stdlib ADT anyway, even if you reimplemented it compiler-side—it doesn’t make sense to me to hide these algorithms away inside the compiler application package. They’re generic algorithms! (And since the ADT is a formal one from a paper, they’re probably very stable ones, too!)

As a restatement of my original premise: if your digraphs are generic, and topsort is a generic function on digraphs with only one obvious implementation given the digraph ADT you’re operating against; and given that you’ve implemented topsort for these stdlib digraphs; why aren’t you putting that algorithm in the stdlib, as a function of stdlib digraphs?

(And it’s not just a question of only putting the most common stuff into the stdlib. Erlang has lots of very arcane digraph operations only useful in proof-verification sitting around in the stdlib in the digraph_utils module. There’s no reason that operations only useful in compilers wouldn’t belong in there equally well. But instead, they’re in the compiler. Kinda weird, if you ask me.)


Yet Boost.Graph (a C++ library) is a generic library of graph algorithms that, in the spirit of the STL, abstracts away the graph representation. Whether the graph is represented via direct pointers or adjacency matrices (however represented), it can be adapted to work very efficiently with the library.


> you don't want the stdlib allocating nodes; you want an intrusive data structure instead.

Don't C++'s std::list and Rust's std::collections::LinkedList effectively implement the moral equivalent (insofar as memory layout is concerned) of an intrusive list, though?


Yes, but they hide the memory layout as an implementation detail, thus negating most advantages of an intrusive list.

You can't call `std::list<Node>::erase()` with a Node* , and you can't write a function that converts from Node* to `std::list<Node>::iterator` in standard C++, even though it's really just a matter of subtracting a fixed number of bytes from the pointer value. So you instead need to store the `std::list<Node>::iterator` in the `Node`, wasting memory.

You can kind of cheat by always using `std::list<Node>::iterator` instead of Node* , but that means you can't use normal methods, because the `this` pointer will lack access to the iterator. And you always need to pass around a pair of `std::list&` and iterator, because the iterator alone isn't sufficient to delete the node or insert elements before/after it. Usually the code ends up being a lot simpler if class Node just reimplements a linked list.

And that's still the simple case where each node is only contained in a single data structure at a time, which (at least in compilers) is rarely sufficient.


There's boost::intrusive::list which IME actually works quite well. There is a node type which elements derive from or contain, so you can get an iterator from an element, as well as the other way round. To include an element in multiple lists, just aggregate the node type multiple times.


Boost.Intrusive is beyond awesome. You can add nodes not only to multiple lists, but also to maps, sets, and hash maps to build multi-indexed data structures (and with delete hooks all indices can be kept consistent).


> E.g. every compiler uses control flow graphs; so why doesn’t every language give me a batteries-included digraph ADT, that has every feature you’d need to implement a control flow graph, such that the compiler’s CFG is just a stdlib digraph instance?

By the same token, I'd like to see dynamic languages with a tracing JIT show what types are used at runtime. There was an article posted to HN a few years back, where some researchers noted that most dynamic-language servers seem to go through an initial startup period where there's some dynamic type shenanigans, then settle down to a state where the types are basically static.



> Yet they do pay off in Rust because of my favourite Rust feature: the ability to refactor large-scale software without breaking anything

I think crater also deserves some credit here - given how much Rust source is in Cargo, it's very useful to be able to run a refactored compiler against all those packages and see which don't compile or start failing their unit tests.

[1] https://github.com/rust-lang/crater


> ... the ability to refactor large-scale software without breaking anything ... and no low-level programming language except for Rust has it.

Ada has had the same characteristics since '83. Thanks to strict typing (among other features), Ada programmers have enjoyed "safe" code refactoring for decades.

But it is nice that new languages like Rust are finally picking up similar ideas and design choices.


Ada languished for years with expensive, proprietary compilers and a community that largely ignored open source culture. Is it any surprise that Rust is having more success breaking into the modern mainstream software world?


I fail to see how your parent implied surprise.


I don't feel the comment was made in an adversarial way. Just pointing out some reasons why Ada didn't become widespread. There are plenty of decent languages with great high-level constructs and features that failed to gain/maintain meaningful industry penetration after C hit the stage. I'd say Ada is just another victim of that success, along with the issues he mentions. It's unfortunate, but that's how it is. Maybe Rust can reach a wider audience in a way Ada was unable to, while providing similar safety guarantees.


Visual C++ is part of those few.

Rust is still far from Delphi, Eiffel, .NET Native experience though.

Although it is great that it keeps improving.


Is there a blog post about Visual C++ being multi-threaded end-to-end? (e.g. doing parsing, type-checking, etc. of a single TU in parallel using all cores?)

That would be super interesting to read, because it is mainly a C++ compiler, and some C++ features like two-phase lookup and macros make it quite hard to do things in parallel. You have to do things in a certain order, but I suppose that if it's query-based as well it will work.

The only option I know of is /CGTHREADS, but that only uses multiple threads for optimizations and code generation, which is something the Rust compiler has been able to do for a very long time (in Rust these are called codegen-units, and LLVM supports this, so it is quite trivial for a frontend to do so as well).


As the documentation says: "The /MP (Build with Multiple Processes) compiler flag specifies the number of cl.exe processes that simultaneously compile the source files. The /cgthreads option specifies the number of threads used by each cl.exe process." I've never used /CGTHREADS, but /MP is working well (except for the precompiled header file).


The documentation also says that /cgthreads is the number of threads used by the code generation and optimization passes of the compiler, not by the whole compiler.

A compiler needs to do a lot of stuff beyond code generation and optimizations (e.g. in debug builds optimizations are even often disabled). For C++ you have overload resolution, template argument deduction, template instantiation, constexpr evaluation... and well parsing, tokenization, type checking, static analysis (e.g. for warnings), etc.

Parallel optimizations and codegen is trivial when compared with an end-to-end multi-threaded compiler. All LLVM frontends for all programming languages do parallel optimizations and parallel codegen, it's only an LLVM option away. You enable it, and it's done.

> The /MP (Build with Multiple Processes)

This is one process per translation unit, each single translation unit is then compiled with a single thread until optimizations and codegen.

Often you need to finish compiling/linking some translation units before continuing.

The Rust compiler has pipelined compilation: compilation of a translation unit starts as soon as what it requires from its dependencies is available, before its dependencies have finished compiling. It also compiles each translation unit itself using multiple threads end-to-end, so that if one of your translation units is bigger than the others, you can speed up the compilation of that one by throwing more threads at it.


That's for different source files though, right? Try having a 10Mb auto-generated C++ file, see how fast it compiles, regardless how powerful your computer is.


No need to autogenerate anything: expand all header files into a .cpp file, and you easily end up with > 10Mb files.

Visual C++'s pre-compiled headers are quite amazing and work really well, avoiding this almost completely.


The Visual C++ MSDN blog has a couple of entries on how they refactored the compiler to use an AST and added incremental compilation and incremental linking.

But as far as I remember it isn't fully multi-threaded across all phases.


Visual C++ may be great, but in practice, if you use it with MSBuild, you can't even build modules in parallel without manually tuning how many cores to allocate.


Might be, but it already helps and there is IncrediBuild as part of the package.


In my experience VC++ is an order of magnitude slower than GCC or clang. It might have been a bad build system, though.


Only when incremental compilation, incremental linking and pre-compiled headers are disabled.


It's not just the compiler but also the system SDKs that play a significant role in the compile times.

In an apples-to-apples comparison (ie: same pre-processed code content) from circa 2015, Visual Studio's compilation was about the same speed as clang and faster than GCC, without the use of incremental compilation / incremental linking / PCH.

It's very easy to dig yourself into a hole when developing for Windows because Microsoft has historically had terrible discipline for minimizing the amount of code brought in through the Windows platform includes as well as the language headers. PCH's ease the penalty significantly but are a bitter pill to swallow if you're trying to cut down on the number of symbols exposed to each translation unit.


Kind of true, but the common use of distributing binaries means that one can also take care to modularize the application across lib, DLL or COM modules, thus reducing even more what actually gets compiled from scratch.

Naturally one can do that across UNIXes as well; they just historically seem not to have invested that much either in incremental linking or in a proper pre-compiled header model, hence no one really uses them. At least that is my perception.

In any case modules are in now, so let's see how they evolve.


Wow, this is really encouraging. I was debating whether to use Rust or Kotlin for developing a new programming language, and after reading your comment, I'm leaning a little bit closer to Rust. I had previously broken down the pros for Kotlin to 6 points:

1. You get access to all of the JVM libraries.

2. You don't have to work within the confines of the borrow checker.

3. Kotlin “Common” targets 3 platforms: LLVM, JVM, and JS (whereas Rust only targets LLVM).

4. Kotlin is probably a more terse language, and suitable for doing algorithm / problem-solving interviews, and therefore a good one to be fluent in.

5. Kotlin's greater industry traction means that it might be more useful professionally. Outside of Android, I've also heard of servers/back-ends being written in Kotlin.

6. ANTLR is well-documented, compared to LALRPOP (the best existing Rust parser generator), and ANTLR targets/generates code for several mainstream languages (versus only Rust with LALRPOP). ANTLR is also probably a more useful skill to have for future jobs/projects.

But despite all the pluses of using Kotlin, I'm still leaning a bit closer to Rust, because of all the good things I'm hearing about it.


This is great! Many dev hours are spent waiting for the compiler; every second counts!

The second-order effects are even worse. After a minute, the programmer will start thinking about other things, ruining flow.

If compiles regularly take 5min, devs will leave their desks (and honestly, who can blame them for it).


I just use `cargo watch -x test -d 5` to run all my tests after I pause modifying code for 5 seconds.

My editor (emacs) uses `cargo watch -x check -d 0.5` to run `cargo check` (which is blazing fast for incremental edits) to type check all the code and show "red squiggles" with the error messages inline.

So my interactive workflow with Rust is only edit-"type check"-edit-"type check", where "type check" often takes less than a second.

Asynchronously in the background, the whole test suite (or the part that makes sense for what I'm doing) is always being run. So if a test actually fails to run, I discover that a little bit later.

I don't know any language for which running all tests happens instantaneously. With Rust, if anything, I write fewer tests because there are many things I don't need to check (e.g. what happens on out-of-bounds accesses).

This is the best workflow I've ever had. In C++ there was no way to only run type checking, so one always had to run the full compilation and linking stages, where linking took a long time, and you could get template instantiation errors quite late, and well, linking errors. I don't think I've ever seen a Rust linking error. They probably do happen, but in C++ they happen relatively often (at least once per week).


> In C++ there was no way to only run type checking

gcc has -fsyntax-only. Despite the option name, this also includes type checking and template instantiation. AFAIK it reports all compiler errors, though it skips some warnings that are computed by the optimizer (e.g. -Wuninitialized).


Is there an easy way to tell CMake or Makefiles to use it?

I never invoke clang or gcc directly. When using cargo, I use `cargo check` instead of `cargo build`. But in C or C++ depending on the project `make check` might not exist, or it might build all tests and run them, or do something else entirely like checking the formatting using clang-format.


> - CMake
> - Easy

Pick one. I'm sure there is though.

It wouldn't be hard with make but then again I'm much more familiar with it than cmake.


So I managed to get it to work on the command line using `CXXFLAGS="$CXXFLAGS -fsyntax-only" cmake ... && make`!


One issue with that is that, unless there is magic handling of the flag in CMake, it might still attempt (and fail) to run the linking stages. You might need custom targets just for this.


Dump a compilation database with -DCMAKE_EXPORT_COMPILE_COMMANDS=1 and wire up a script to call those commands but with the syntax-only flag.


> In C++ there was no way to only run type checking

-fsyntax-only on GCC and IIRC clang as well.

If you use Emacs you can integrate it w/ flycheck.


I do use emacs! Can you integrate it with make and cmake via emacs somehow?

Maybe doing `CXXFLAGS="$CXXFLAGS -fsyntax-only" CFLAGS="$CFLAGS -fsyntax-only" make ...`?


You can easily get linking errors when using bindgen to link to C libraries (usually in the form of -sys crates). In fact, I once spent a fair bit of time trying to figure out how to link to FFmpeg (libav*) on Windows, without success.


> start thinking about other things, ruining flow.

I've found it a helpful practice as a programmer to be intentional about this.

With a little awareness, you can identify situations where it really would be best to busy-wait while something compiles. (If the wait is not all that long and switching tasks harms focus.)

And with a little mental discipline and practice, you can train your mind not to wander. You don't totally blank your mind out, but you also don't let unrelated thoughts distract you. Just continue to think about the same thing you were when the compile started. Don't shift gears mentally, just ease up on the mental gas pedal.

It's so easy to let anxiety or guilt about wasting 2 minutes lead you into giving up focus, which is more precious. It's a false economy, and thus doing something else with those 2 minutes is actually more of a temptation than a smart idea.

(Of course, it's better if the tools are just fast! But sometimes you can't have that.)


Reading this reminded me that I'm only here because I was waiting for a compilation & flashing to finish. I will not tell you how many minutes ago.


Incremental compilation has helped a lot here. Even if the first compilation takes a while, the next one will be much swifter.

It's especially useful when only making minor changes to the code (which is pretty common).


A faster compiler is nice, but you know what's faster? Not having to compile anything. I'm also looking forward to crates.io serving precompiled crates (https://www.ncameron.org/blog/cargo-in-2019/)


I'm a Rust noob but I think skunkopalypse was making some good points. I wish people didn't vote down his/her comment to death and actually replied to it so I could learn why he/she wasn't right.


I too think that precompiling crates won't really solve the problem.

If you compare the compiled artifact (rlib) of a crate with the source code, you'll quickly see that the compiled artifact is much larger than the source code: libglutin-0c732c31a1d003fb.rlib is 8.1 MB, while glutin-0.22.0-alpha1.crate is 53 KB.

Most people have shitty internet. And often there's nothing you can do about it because you have shitty ISPs. You can buy a computer with good CPUs, and those are usually cheaper than good internet for a year, at least in many rural areas of the US. It's not just the US; some other countries have it even worse.

Now of course if you have good internet and a bad CPU it's a good deal, so there should definitely be an option to use it, maybe even with autodetection. But I think cargo has too much dependency on the internet, not too little. There should be no manual input required to turn off precompiled crate downloads if it is faster to compile the crates locally.


Looking forward to it.

If I would enjoy compiling everything from scratch I would be using Gentoo.


I'm curious to see what direction they take this.

Will they have a trusted compile farm, only supporting a subset of targets, or involve some kind of distributed trust model supporting whatever people use? Will it be greedily populated with a specific subset of targets / features or lazily populated based on combinations people actually use?


Ew, gross.

Crates.io is already a security trainwreck in progress. Do we really need to add even more attack vectors?

It's also solving a non-problem. I modify source code downloaded from crates.io zero times per day, so I compile each crate only once. Compile times matter for code I write myself: I modify (and therefore compile) that code dozens of times per day.


Does LLVM still take up much of the overall time spent by the Rust compiler? I was thinking of getting involved over there as the most effective way to make speed-ups happen.


I remember reading that it does, but largely because the LLVM IR generated by rustc is verbose. If less IR were generated, LLVM would have less work to do.


I think that's one of the eventual purposes of MIR (by both optimising Rust upstream from the LLVM IR generation and having a much simpler language to generate IR for), but I don't know if there's been work towards that yet.


I thought it was two factors (1) unoptimized IR and (2) large translation units (crate rather than file).


> Does LLVM still take up much of the overall time spent by the Rust compiler?

The faster the Rust front end gets, the larger the share of the overall time LLVM takes.


Would it be possible to write a rust compiler that skips llvm entirely?


Cranelift is being created as an alternative to LLVM, targeting a different set of requirements. Last I heard talk about Rust using it, the thought is that it would fit in well for fast debug builds but not be a good fit for optimized release builds.

See https://github.com/bjorn3/rustc_codegen_cranelift#not-yet-su... for progress


Probably yes; the LLVM linker is really slow.


Really? I seem to recall lld being comparable to cp in speed, with the caveat that compacting debug strings can take a long time (if you enable that option).

Reference: https://fosdem.org/2019/schedule/event/llvm_lld/


Depends on the platform and the build type.

For example, on macOS in debug builds, the compiler and linker are reasonably fast, but then 2/3rds of the compilation time is spent in "dsymutil", presumably chewing through megabytes of the debug info.


Rust is only using LLD on ARM targets.


It's used on wasm as well.


I wish compiling the Rust compiler itself was faster. It literally takes days to compile on my old laptop.


It is! The article is about the performance of the compiler, not the machine code it generates.


Try on arm! You'll wish you hadn't! >.<

I think I spent 3-4 days compiling rust, almost forgot I started it at the time.


Wait, what language were you compiling it on?


Rust has been bootstrapped for a long time now.


Maybe they should make compiling the Rust compiler faster. Right now it takes longer and uses more CPU and memory than compiling a whole LLVM toolchain, then using that toolchain to compile a whole kernel and OS.


Compiling the Rust compiler also requires compiling LLVM, so by definition it will always take longer than that.


[flagged]


Gee, I wonder why the developers of a very complex, high performance browser and now also of a low level systems language with the following motto: "A language empowering everyone to build reliable and efficient software." didn't think of that!

They probably just forgot to un-comment

// let compileSpeed = fast;

in src/compiler/main.rs

Their resources were limited and they chose to focus on more important things at the time. Now they're focusing on compilation speed. And those more important things Rust offers aren't really offered by any other mainstream language.


For most software I would agree with you. Usually you have some CRUD application where people don't know how to index and use hashing.

But the Rust compiler is an amazing static analyzer. Running one on C++ to get the same level of memory safety will often take hours on sizeable projects.


Yes that's true. Coverity and similar are even slower.

But then Rust errors out on a lot of things a C++ analyzer wouldn't, and a lot of those things are actually reasonable code.

For example, I discovered recently that you cannot swap two pointer variables in Rust without using unsafe. Surely there is some deep reason for that, that is really important until they can figure out how to fix it in some future Rust version, which they undoubtedly will.

It's not that swapping is an obscure operation, many important and fundamental algorithms depend on it.

I'm probably overly cynical there, but I sometimes feel that all these arbitrary restrictions that make code hard to write make people feel better about Rust, because if they get their code to actually compile, it's a sense of achievement that makes them proud. Kind of like a journey to find yourself or something like that.

But then I see that there is actually some progress in Rust towards relaxing some of these restrictions (e.g. non-lexical lifetimes in the borrow checker). If my theory above is true, that would actually lower Rust's popularity over time. True? Probably not. But it's an interesting thought.


>For example I discovered recently that you cannot swap two pointer variables in rust without using unsafe. Surely there is some deep reason for that, that is really important until they can figure out how to fix it in some future Rust version, which they undoubtedly will.

Wouldn't this work? https://doc.rust-lang.org/std/mem/fn.swap.html

>But then i see that there is actually some progress in Rust to relax some of these restrictions (e.g. non lexical lifetimes in the borrow checker). If my theory above is true that would actually lower Rust's popularity over time. True? Probably not. But it's an interesting thought.

I'm not sure. Sometimes these restrictions are there because it's too difficult to decide what should happen. I don't think it's malevolence on the language creators' side. And I do think usability is always better.

I find that I'm running into way fewer such situations now than last year though, so I think a lot is happening here.


>Wouldn't this work? https://doc.rust-lang.org/std/mem/fn.swap.html

If you click the [src] link on that page, you'll see that the implementation of std::mem::swap uses unsafe. I'm no Rust expert, but I'm also not aware of a way to swap pointers without unsafe.


Ah I thought you meant without using unsafe yourself.

The goal of unsafe is to reduce the attack surface where you need to check your code more thoroughly. So by encapsulating this in a library function that's well tested you don't need to create such a section yourself. But if you do need unsafe it's not wrong, and if only a small part of your code needs unsafe then it's much easier to verify that your code is doing what it should.
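For the record, a small safe-Rust example of the swap in question (just std::mem::swap; the unsafe lives inside the standard library, not in user code):

    fn main() {
        // Swapping two owned pointers (Boxes) needs no `unsafe` in user code.
        let mut a = Box::new(1);
        let mut b = Box::new(2);
        std::mem::swap(&mut a, &mut b);
        assert_eq!((*a, *b), (2, 1));

        // Even raw pointer variables can be swapped in safe code: moving raw
        // pointer values around is safe; only dereferencing them is unsafe.
        let (x, y) = (10, 20);
        let mut p: *const i32 = &x;
        let mut q: *const i32 = &y;
        std::mem::swap(&mut p, &mut q);
        assert_eq!(unsafe { (*p, *q) }, (20, 10));
    }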


A static analyzer which requires correct code to be rewritten in a certain way to satisfy the analyzer does not deserve to be called amazing.


Don't all static analyzers do that? Or they're just ignored, which happened in almost every large project I've ever seen.


Exactly. C++ code has so much variability that 90% of static analyzer warnings are worthless. So people ignore them and create terrible unsafe code.


My comment had nothing to do with C++ or any other particular language, you're using it as a red herring to divert attention from the fact that the Rust analyzer requires changing the way you write code, making it not awesome, but ordinary.

And speaking of C++, it's obvious that Rust programmers take a very adversarial stance towards C++, but you should be instead thankful that it exists, because without it many of those marketing articles would lose their meaning.

"Slow compiler is now less slow, but probably still slower than those of other languages you know" doesn't quite have the same effect.


C++ is the obvious benchmark language because it's widespread and has the same benefits (manual memory management, native, high control).

Any high-reliability, statically checked language requires you to write code in a certain way. C++ is just the most prominent and meaningful example here.

Ada would be another good comparison, but I haven't worked with it on large projects, so I can't say much about compile speed. It has a lot of similarities in being a safer C++. It felt similarly restrictive to Rust, though.

I think the fact of the matter is that if you want safety, you need some kind of restrictions and your compiler needs to do some extra work (Ada and Rust), or you need to look at the whole toolchain that is necessary to achieve safety (in C++ this is very long if you also run your tests with static analyzers, ASan and MSan, which remove some of the bugs Rust tells you about).


    - correct
    - cheap
    - fast
Pick two




