On the other hand, the error messages I ran into along the way were quite good at pointing me toward the fix, which saved me more time than the extra compile time cost me, relative to the Python-based alternative I had been trying to set up before that.
We would also need to pay the cost of building, hosting, and distributing all of that...
There are other middle grounds too. There's certainly interest, it's just not trivial. If it was, we'd do it!
Contrast this with C and C++, where it's common to #ifdef in an entirely different program depending on the flags.
It's certainly possible to do something like this in Rust, but in practice it's rare.
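For reference, here's a minimal sketch of what the Rust equivalent looks like, assuming a hypothetical "simd" Cargo feature (purely illustrative, not from any real project):

    // Each cfg branch is a complete item; only the one matching the enabled
    // features gets compiled, which is why CI usually needs to build the crate
    // under several feature combinations to catch breakage in the other branch.
    #[cfg(feature = "simd")]
    fn sum(xs: &[f32]) -> f32 {
        xs.iter().sum() // stand-in for a vectorized path
    }

    #[cfg(not(feature = "simd"))]
    fn sum(xs: &[f32]) -> f32 {
        xs.iter().sum() // plain scalar path
    }

    fn main() {
        println!("{}", sum(&[1.0, 2.0, 3.0]));
    }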
Good CI configuration can definitely help to catch these issues, but it's obviously an extra effort for projects to set up. The alternative could be to simply avoid feature flags and always compile everything, which depending on the project might or might not be the better option.
On the other hand, I'm guessing that by "custom flags" you're referring to compiler-level flags that influence codegen, which rustc doesn't have that many of, and most of the ones that rustc does have are for controlling things that people might reasonably expect to be nontrivial work to change in the first place (e.g. linker/symbol options, cross-compilation/platform options, LLVM/gritty optimization/instrumentation options). Of the rustc codegen options that ordinary users might want to play around with, I see only two: one for turning off arithmetic overflow checks (which makes all integer arithmetic wrap, like the wrapping types and methods from the stdlib), and one for determining the general strategy of what happens when a panic occurs. The latter is unlikely to cause problems because the default behavior is a superset of the configurable behavior, and for the former, any problems that stay silent for users with that misconfiguration would manifest as panics for everyone else, so there's a good chance the problem would be fixed upstream regardless.
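To make the overflow-checks point concrete, here's a minimal sketch (the function is made up for illustration; by default, debug builds have the checks on and release builds have them off):

    fn bump(x: u8) -> u8 {
        // Panics on overflow when overflow checks are enabled,
        // wraps to 0 when they are disabled.
        x + 1
    }

    fn main() {
        let max = std::hint::black_box(u8::MAX); // keep the value out of const-folding
        println!("{}", max.wrapping_add(1));     // always 0, regardless of the flag
        println!("{}", bump(max));               // 0 or panic, depending on the flag
    }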
The most valuable company in the world has significantly more resources than we do.
I'm not sure that's really relevant.
* Apple has resources, lots of them.
* Apple wants to promote the use of Swift, and making it easier and more convenient is a good way to do that; binary dependencies reduce the complexity of the build process because you don't need to build your dependencies.
* Apple has a vibrant ecosystem of small closed-source shops; binary-only distribution is useful for those, as well as for Apple itself.
* Promoting binary and eventually dynamically linked dependencies might mean the ability to dedup' on-system dependencies.
And yes, of course it is, given that Apple's resources are vast, to say the least.
I understand that very well. My point is that they wouldn't be doing it if they didn't want to, and if they really want to (for the reasons I outlined) they have the resources to make it happen, essentially regardless of the constraints or hardware breadth.
Apple also relies heavily on dynamic linking in general, so binary dependencies are likely going to dynamically link their own dependencies, thus removing a lot of the variability that would otherwise require recompilation.
Apple is doing this because Swift is their next-gen systems programming language.
Swift supports static linking just fine as well.
Really, it is all a matter of which demographics Rust wants to be present in.
And with Rust now being adopted by Microsoft and Google, I just see this need only increasing.
It just looks like everything is source code if you don't take the effort to read through all the dependencies.
It doesn't force companies at all; only those that are comfortable shipping source libraries end up adopting such languages.
I used to work for a company that shipped encrypted Tcl source code and provided the necessary interpreter hooks to access the code in its encrypted form.
Yes, and the others are left behind and don't get to participate in (and infect) the ecosystem. Sounds like a win to me.
> I used to work for a company that shipped encrypted Tcl source code and provided the necessary interpreter hooks to access the code in its encrypted form.
Surely... if the interpreter can decrypt the code then so can the user? Minification and obfuscation are of course still possible, but the whole encryption thing seems pointless.
"Infect" the eco-system? That is not the way make business.
There's that, but there are also issues like ABI stability (and the lack thereof), compilation flags, hosting (infrastructure and its cost), distribution mechanisms, …
There's an issue on cargo dating back to 2015 (#1139), but a lot of effort is needed to think through the problem and then actually solve it.
I think it is only a matter of time until cargo gets its "Maven".
Incidentally, one of the Swift announcements at WWDC was support for binary packages in the Swift Package Manager.
Of course, if Cargo supported pre-compiled dependencies, I am guessing that it would be smart enough to only recompile the dependencies that are using non-default features.
What is it that triggers such a "full" build though? Obviously in some CI scenarios you might start from scratch, but in an edit/compile cycle, you'd never be hitting those 5 minutes, correct?
If you clean your build folder, things will need to be rebuilt from scratch. Likewise if you change your compiler version.
If it does, then the long compile times are almost never encountered by either developers or CI. So are they really problematic?
How hard is the caching to set up, especially in a CI setting?
Cargo caches by default, and you can specify the path where it stores artifacts. Most of the cache story is about how your chosen provider specifies build steps, caching directories, etc.
Rust critic: Build times are comparable to C++!
On one hand, compile times suffer because compilation units are larger and less can be done in parallel. But in the future, incremental builds will probably become quicker, because it's easier to tell what's actually changed and what that affects.
On the other hand I tried introducing Rust for a small part of a larger ecosystem and the cold compile times were so bad we rewrote the functionality in C. It shaved minutes off our CI build times, which costs actual money.
Yea, we have this issue (as a shop now using Rust for all of our backend).
I have a couple of hacks in place to cache the majority of the build thankfully, so we only need to recompile our own source code unless something else changes. When our build cache works, our builds take ~60s. When it doesn't, ~15m.
What did you wind up doing to cache your builds? I've tried a few different hacks but none have stuck.
As far as what we did to cache, nothing fancy - using Docker build layers. I add my Cargo files (lock/toml), include a stub source lib.rs or main.rs to make it build with a fake source, and then build the project.
This builds all the dependencies. It also builds a fake binary/lib for your project, so you need to strip that from the target directory. Something like `rm -rf target/your?project?name*` (I use ? to wildcard _ and -)
If you do that in one layer, your dependencies will be cached with that docker image. In the next layer you can add your source like normal, compile it, and you'll be set.
We lose our cache frequently though because we're not taking special care to centralize or persist the layer cache. We should, for sanity.
Do you use digests (@sha256:...) in the Dockerfile source image (FROM ...) to ensure you are always using the same layer?
If not, that is probably the reason why your cache is failing so often.
I think what's happening is our garbage collection on old Docker layers is being too aggressive. But because it works so well between commits, pushes to CI, etc., I don't worry about it. The majority of the time I want a cache to work, it works. So it's been a low priority thing for me to fix haha.
edit: Oh, and I forgot, we may have CI jobs running on different machines. Which of course would also miss caches, since we're not persisting the layers on our registry. I'm not positive on this one though, since like I said it never seems to fail between commit pushes _(say to a PR during review, dev, etc)_. /shrug
Alternatively it is also quite common to use dynamic libraries, or stuff like COM, XPC.
Rust is slower overall, which is why people tend to complain. And if you start messing around like in C++, then you get even crazier times.
But in neither case is it a dealbreaker compared to other languages. Go proponents claim compilation speed is everything, which is suspicious. I do not need to run my code immediately if I am programming in C++ or Rust. And if I am really doing something that requires interactivity, I should be doing it another way, not recompiling every time...
I've worked in C++ code bases with just a few 100k loc where one starts architecting the software to avoid long compile times. Think about how insane that is: you choose to write and structure code differently as punishment for the sin of writing new code, not to improve the software's performance or add new features.
The worst example of this is the pimpl pattern. You make the explicit choice to buy shorter compile times by hiding everything behind a pointer dereference that is almost guaranteed to be opaque to the compiler, even after LTO, so the only "inlining" you may see is from the branch predictor on a user's machine. That's bonkers!
Messing with the type system means using it for things you really should not use it for in any reasonable project. For instance, some of the Boost libs with their overgeneralizations that 99% of users do not need.
I think they're talking about the STL.
Recompiling fast helps the most when learning to program, but not for actual applications with some complexity.
Many applications do not even produce meaningful output just from being run; for instance, they may take a long time to compute something meaningful.
Run the code (with some debug logging statements, very likely), find a small mistake, make a few characters worth of correction, press your IDE's keyboard shortcut for recompilation and re-running.
If the last step is slow you can lose momentum and motivation to iterate quickly on the problem at hand.
Sure it doesn't apply to a lot of projects, that's true. But it's not charitable to claim it only applies when learning.
With an n-tier web application you won't often be able to do that anyway.
2) They use sccache locally for caching binary build artifacts.
3) In-office they use things like distributed compiles or a beefy compile machine on the network, rather than necessarily the one they are sitting at.
% curl -Lo mozilla.zip https://hg.mozilla.org/mozilla-central/archive/tip.zip
% unzip mozilla.zip && cd mozilla-central-*
% find . -type f \( -iname '*.cpp' -o -iname '*.cc' -o -iname '*.cxx' -o -iname '*.c' -o -iname '*.h' -o -iname '*.hpp' -o -iname '*.hxx' -o -iname '*.hh' \) | wc -l
% find . -type f -name '*.rs' | wc -l
% find . -type f -iname '*.js' | wc -l
Some examples of prior art in the context of strongly typed compiled languages with compilers designed for interactive development from the get-go:
Mesa/Cedar environment at Xerox PARC, and how Oberon and its descendants used to be integrated with the OS (Native Oberon and others)
Energize C++ and Visual Age for C++ version 4, although quite resource intensive for their time.
Eiffel, with its MELT VM in EiffelStudio for development, and AOT compilation via system C and C++ compilers for deployment.
C++ Builder and Delphi environments, although still batch, provide a similar workflow.
It doesn't need to compile dependencies because the public parts of units are a serialized symbol table with full definitions, rather than simple object files.
And, LOL, just now finally read the readme, didn't even know I could archive the cache over the network.... #foreverN00b that's gonna be awesome.
This saves me building the build-script dependencies, building the build script, and running the build script. In addition, by default, these are built and run in the same mode (debug/release) as the target binary, making build scripts even slower when doing release builds.
Wrote https://github.com/crate-ci/codegenrs to help with this.
It frustrates me that I've never seen a good code-gen story for Python:
- Some do runtime code-gen which is hard to learn from, debug, and verify
- Some do install-time code-gen, slowing down install and making it hard to inspect and verify
- Some check codegen in but without any story to ensure it does not drift from the generator (a drift check like the one sketched below is one way to close that gap)
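As a rough sketch of that last point (not codegenrs's actual interface; the path and generator here are made up): make the generator an ordinary binary that regenerates the checked-in file, and give it a --check mode that CI runs to fail the build if the file has drifted.

    // Hypothetical codegen drift check. Run normally to regenerate the file,
    // or with `--check` in CI to fail if the checked-in copy is out of date.
    use std::{env, fs, process};

    fn generate() -> String {
        // Stand-in for the real generator (parsing a schema, etc.).
        "// @generated -- do not edit\npub const ANSWER: u32 = 42;\n".to_string()
    }

    fn main() {
        let path = "src/generated.rs"; // hypothetical output location
        let fresh = generate();
        if env::args().any(|a| a == "--check") {
            let on_disk = fs::read_to_string(path).unwrap_or_default();
            if on_disk != fresh {
                eprintln!("{} is out of date; re-run the generator", path);
                process::exit(1);
            }
        } else {
            fs::write(path, fresh).expect("failed to write generated file");
        }
    }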
It's not everything. It's projects that use a lot of monomorphized generics, or compile-time macros, etc. And to be sure, this situation is improving over time - as new generics-related features are added, hopefully more of the redundant code that gets output when using these features will simply be cleaned up automatically, without adverse impact on compile times.
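To illustrate the monomorphization point with a toy example (hypothetical functions, nothing from a real codebase): the generic version is compiled and optimized once per concrete type it's instantiated with, while the dyn version is compiled once and pays for dynamic dispatch instead.

    use std::fmt::Display;

    // Monomorphized: rustc emits (and LLVM optimizes) one copy of this
    // function for every concrete T it is instantiated with.
    fn describe_generic<T: Display>(value: T) -> String {
        format!("value = {}", value)
    }

    // Type-erased: compiled exactly once, at the cost of dynamic dispatch.
    fn describe_dyn(value: &dyn Display) -> String {
        format!("value = {}", value)
    }

    fn main() {
        // Three separate instantiations: i32, f64, &str.
        println!("{}", describe_generic(1i32));
        println!("{}", describe_generic(2.5f64));
        println!("{}", describe_generic("three"));
        // One function handles all of these.
        println!("{}", describe_dyn(&4u8));
        println!("{}", describe_dyn(&5.5f64));
    }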
An incremental release build where nothing has changed (I just ran `touch src/lib.rs`) takes 25 seconds. Compiling the VM from scratch including all dependencies takes 1 minute and 28 seconds.
That's not too bad, but it could (and in my opinion should) be much better. Debug builds are usually faster to compile, but tend to run too slowly to be useful for anything but debugging some VM bug.
cargo clean; cargo build;
Commenting out two lines of code in one of the services + cargo build;
cargo clean; cargo build --release
change 2 lines, cargo build --release
I don't find this egregious.
ChromeOS Linux, which is Debian
Intel i7, 4 cores
Building LLVM from source, I was told in another forum, took 26 minutes across 16 cores on a top-end machine (i.e. several hours on one core). I guess I don't need to build LLVM(?)
Not fast by any means but big difference between 6 hours and 1.
I can build LLVM, Rust and Firefox in less time on my R7 1700.
How is Chromium so bloated??
We're showcasing the Libra project to start with and would be up for adding other projects that the community is interested in.
Unless run-time linking is faster, they both result in the same delay between the end of code generation and the first execution of your code. With static linking you pay that cost once, in the compiler, instead of at every program launch, of which there will be at least one.
With dynamic linking the cost of linking is deferred until runtime, and parts of the linking process can be done lazily, or not at all, as functions are actually called.
Anecdotally Google's build system Bazel makes good use of the same thing for tests. The article also mentions that issue:
That includes every time you rebuild to run a test.
I don't remember the details since it was a long time ago, but dynamic linking is a huge win here because of the lazy linking. The first function call is slow but the rest are fast. Loading is faster because you never call most functions.
Rust seems to have a static dependency problem similar to Google's from 10 years ago (a big pyramid shape rather than using dependency inversion).
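Roughly what I mean by dependency inversion, as a sketch with made-up names (modules standing in for crates): callers depend on a small interface, and only the top of the build wires in the heavy implementation, so changes to the implementation don't ripple rebuilds through everything above it.

    // Callers depend only on the small `storage_api` interface; the heavy
    // `postgres_impl` is chosen once at the top of the dependency graph.
    mod storage_api {
        pub trait Store {
            fn get(&self, key: &str) -> Option<String>;
        }
    }

    mod postgres_impl {
        use super::storage_api::Store;
        pub struct Postgres; // pretend this pulls in a large dependency tree
        impl Store for Postgres {
            fn get(&self, _key: &str) -> Option<String> {
                Some("from postgres".to_string())
            }
        }
    }

    mod app {
        use super::storage_api::Store;
        // Depends only on the trait, not on postgres_impl.
        pub fn lookup(store: &dyn Store, key: &str) -> String {
            store.get(key).unwrap_or_else(|| "missing".to_string())
        }
    }

    fn main() {
        let store = postgres_impl::Postgres;
        println!("{}", app::lookup(&store, "user:1"));
    }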
edit: related post about the Chrome/Ninja build:
Here's one cool hack. You can, by flipping a flag, instead build this tree of libraries as shared objects. The result isn't something you'd ship to users, but it speeds up linking time during development considerably as you're passing smaller subsets of the files to the linker at a time.
This leads to a cooler optimization. When you change one source file deep within the tree, naturally you need to rebuild its object file, then the library, and then rebuild any binary that depends on the library ...
So instead, when building a shared object we write out both the shared object and "table of contents" file (generated with readelf etc.) that lists all the functions exposed by the resulting library. Then make dependent binaries only rebuild when the table of contents changes.
> rustc is notorious for throwing huge gobs of unoptimized LLVM IR at LLVM and expecting LLVM to optimize it all away.
is seen as a problem with Rust, rather than LLVM. Isn't the whole point of having something like LLVM to have a high-quality backend where all optimizations only have to be implemented once, and then they just light up for all the languages that target it?
Furthermore, like Rust, D has a mature LLVM backend (LDC). When compiling D programs with DMD and LDC at -O0, the LLVM-based compiler only took slightly longer.
- Episode 1: The Rust Compilation Model Calamity https://pingcap.com/blog/rust-compilation-model-calamity
- Episode 2: Generics and Compile-Time in Rust https://pingcap.com/blog/generics-and-compile-time-in-rust
- Episode 3: Rust's Huge Compilation Units https://pingcap.com/blog/rust-huge-compilation-units
It might be a good idea to implement a cheap/fast optimization pass built around rustc's assumptions, specifically for rustc, to clean up the mess. Keeping the IR that rustc emits concise and clear should really be a goal, in my opinion. Obvious code is easier to audit.