~12 min for a full release build?
~5 min for a full dev build?
What looks like several seconds for an incremental build?
I suppose it's good that a (former?) core contributor and large user of Rust holds the language to such high standards, but this doesn't seem especially bad.
I'm kind of curious how much faster compiling a large Go codebase is. How fast, e.g. does the entirety of Kubernetes compile? I'd imagine it's probably under a minute, but is it several seconds?
I was curious too, so I just tried it. On my 6-core previous-gen MacBook Pro, I get 2m18s. That's for SLoC = 3296486 (assuming all deps are vendored and non-deps are not-vendored, which I think is true), so about 16K SLoC/s.
Or to put it another way, if Rust compiled as quickly as Go, we'd expect to compile a release build of TiKV in 2m.
And to be honest, I have no idea what the compiled Go code looks like, any more than I know what actual instructions are executed by the Python interpreter.
"The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?"
At the moment, Go has no generics, which translates to "slow programmers", but not "slow compilers". It's likely that the simplicity of the Go language as a whole -- which leads to many complaints -- is a major factor in the fast compile times.
EDIT: Lest it seem like I'm bashing Go here, the quote above is from one of the core Golang developers (in 2009 no less), and is mirrored in the generics proposal from another core Golang developer. It's meant to be a shorthand for, "Generics do actually save programmer time and effort, and Go programmers are working harder than necessary because the language doesn't have generics yet." The point of this post is to point out that generics have a cost (slow compilation time) and that fast compilation time has a cost (no generics -- at least, not without inventing a new way of doing generics).
It's possible to love something while still seeing its deficiencies and wishing its improvement. In fact, I'd argue that's the only way to truly love anyone or anything.
Here's what I mean. Say you have a Vector class that can operate on ints or floats. You could make that a generic, in which case the compiler can either (a) duplicate the code for each type (monomorphize) or (b) do dictionary passing and get slower runtime. But if your language doesn't have generics, you have exactly the same problem: you as the programmer must (a) duplicate the code for ints and floats or (b) use an interface and get slower runtime. Not having generics doesn't solve anything. It just means that you, the programmer, have to do things that the compiler would otherwise do for you.
(^) Of course, you then have the burden of keeping multiple monomorphized implementations consistent when you make a change that needs to apply to all of them.
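To make the trade-off concrete, here is a minimal Rust sketch (illustrative names, not from any project discussed here) of the two strategies the compiler can pick from:

```rust
use std::fmt::Display;

// (a) Generics: rustc monomorphizes, emitting separate machine code for
// show::<i32> and show::<f64> -- fast static calls, but more code to
// compile and larger binaries.
fn show<T: Display>(x: T) -> String {
    format!("{}", x)
}

// (b) Trait object: one compiled body, dispatched through a vtable at
// runtime -- roughly what an interface call gives you in Go.
fn show_dyn(x: &dyn Display) -> String {
    format!("{}", x)
}

fn main() {
    println!("{}", show(42));       // instantiates show::<i32>
    println!("{}", show(2.5));      // instantiates show::<f64>
    println!("{}", show_dyn(&42));  // no new code generated per type
}
```

Without generics, the programmer reproduces exactly these two options by hand: copy-pasted per-type functions, or an interface with dynamic dispatch.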
Also note that one of these assertions is measurable, and the other an unsubstantiated opinion.
I think the assumption implicit in that statement is "generics over value types compiled AOT". But that's not the only way to do it.
The rest of the time you're dealing with references anyway, so there's no boxing.
But that's because your language is already leaving some performance on the table!
If you want a language that's as fast as possible, you want something like
GenericContainer<Foo> foos = ...;
for(var f in foos) f.DoSomethingFooish()
I don't think you can have a generics system capable of achieving that level of performance w/o also bringing w/ it the downside of more code generation & extended comp time.
(P.S. Also, Java is getting support for user-defined value types, right? How will those interact w/ the generics system?)
In C++ you see std::vector with large-ish values all the time, even when it doesn't really have any memory layout justification because that way you get semi-automatic memory management and with pointers you don't. This can easily lead to large amounts of pointless code bloat, hurting icache hit rates, compile times, binary sizes and more, even in cold paths where memory layout is the least of your concerns.
Not sure yet how generics in Java and value types will interact. There have been some prototypes of generics specialisation so it'll probably end up like C++ but, hopefully, with way less use of value types - restricted only to places where they make a real improvement. That'll be a lot easier to measure in Java because as long as you stay within the value-allowed subset of features you will be able to convert between value and non-value types transparently without rewriting use sites. So you can just toggle it on and off to explore the tradeoffs between code generation and pointer indirection.
A clean dev build for one of my more experimental projects that uses generic heterogenous lists takes five minutes to compile for fewer than two thousand lines of code and a single test case. In that project, however, I'm pushing Rust's type resolution algorithm to its limits (basically as a giant constraint solver looking for a unique solution).
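For the curious, the kind of type-level structure that stresses the solver looks something like this toy heterogeneous list (a generic illustration, not the actual project code) -- each nesting level is another layer the trait solver must peel off at compile time:

```rust
// A heterogeneous list built from nested generic pairs.
struct Nil;
struct Cons<H, T>(H, T);

// A trait the solver resolves recursively, one Cons layer per step.
trait Len {
    const LEN: usize;
}
impl Len for Nil {
    const LEN: usize = 0;
}
impl<H, T: Len> Len for Cons<H, T> {
    const LEN: usize = 1 + T::LEN;
}

fn main() {
    type L = Cons<i32, Cons<&'static str, Cons<f64, Nil>>>;
    println!("{}", <L as Len>::LEN); // resolved entirely at compile time
}
```

Add more traits with overlapping bounds and the search space the compiler explores grows quickly.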
12 mins for a full build of 2M lines of code on a single box is quite reasonable, and it is great if they are pushing for better performance.
Because of aggressive caching it is hard to really do a clean build; the closest you can get is touching one of the 'god' headers included everywhere.
Heck, if you've ever had a complex codebase with LTCG, I've seen cases where linking alone took ~25 minutes.
As a baseline, a non optimizing compiler for a simple language should be able to do 1 million lines of code per second. Of course, most languages are not simple.
Incremental compilation is an essential operation. I probably do it more than 100 times a day. Just like editor responsiveness, it can almost not be fast enough, and if it takes too long it can bring me out of the flow. I would say that over 0.1 seconds any speed improvement is welcome. More than 3 seconds is definitely a nuisance. More than 15 seconds is extremely frustrating when dealing with certain kinds of code.
I've bisected large codebases for bugs several times where compiles take hours.
I ask because there's various unexpected things that can make large codebases compile more slowly out in the wild that don't show up in smaller codebases (as well as non-linear scaling of certain components), often making simple extrapolation of how quickly a smaller codebase compiles to a larger one inaccurate.
One million lines is a good if arbitrary point where a lot of those can be sussed out.
It seems to be discussing the speed of building Kubernetes on their build-servers, targeting multiple architectures at once. Unfortunately all the build servers are internal to Google so I don't know exactly what's being included (or what's being run), but this comment (https://github.com/kubernetes/kubernetes/issues/27444#issuec...) details the breakdown of time spent:
30m of build
7m on cluster startup
12m on tests
7m on cluster teardown
Of course, not everything can be done in parallel.
If we keep complaining about Rust build times being "slow" they'll keep making it better!
Glad a core contributor has such high standards though.
Shame on you.
(AFAICT there's no easy way to measure the LoC actually compiled - but one rough way to estimate it would be to take all the .o files listed in the build process and count the LoC in the corresponding .c files and the .h files they depend on).
I take it you can't just run it through the preprocessor then count the lines of code that are output from that?
The equivalent to compiling code in machine learning is training a model. Even on good hardware you can spend hours training a single model. Some of the really big pre-trained models like BERT can take days being trained on a farm of top-of-the-line purpose-built GPUs, which is why people almost never re-train them from scratch without similarly huge amounts of computing power and very specific needs.
Edit: yes I know this article was written by someone who knows what they're talking about (and as they're a steward of the project I understand why they're calling this out for improvement, and somewhat exaggerating how bad it is). I was talking about the occasional comments I see on hacker news about how people aren't even bothering to try rust because compile times are "unusably" bad. 5 mins for a dev build of 2 million lines of code is not anywhere near reason to not even bother trying the language. No personal project or learning scratchpad is ever going to come close to 2 million lines. And it's not worse than many other languages that are used for large projects.
On common laptops that you buy at the shopping mall, not compiler rigs.
I am not doing anything special, other than making all my third-party artifacts binary dependencies, not overusing templates, and using incremental compilation and incremental linking.
Cargo currently doesn't do binary dependencies, so already here I have to wait for the world to compile.
And while incremental compiling is already supported, lld still isn't supported on stable.
Plus I do have experience using other AOT compiled languages with modules support, since back in the day.
So we know what we are talking about, and plenty of people on the Rust team are quite aware of the problem; however, not everything can be fixed at the same time.
Wow. And C++ projects can be quite slow to compile.
My experience is the opposite. Most rust devs don't work on very large projects from what I can tell, so compile times tend to stay manageable. It's a small percentage that work on projects large enough to hit this problem, and when you hit it it hurts.
This typically makes the first build of any project considerably slower than for other languages, since you have to also build all your dependencies, but I think it allows for a really simple and predictable way to work, not to mention support any kind of architecture, across all packages.
It's obviously a trade-off, but I think it's a trade-off which is clearly worth it.
If I plan to do some Rust coding on the go (no pun intended), better do a full build at home before packing.
A 5m C++ build turns into 30m Rust one, for the same project, ported across languages.
Not to mention that it means on a large team project everyone is compiling the same stuff over and over again, given that cargo does not yet support code cache servers.
Yes there are some workarounds like sccache, but they are extra stuff one needs to install.
That's a bit dramatic. You typically only compile your dependencies once during the project lifetime.
If you have done an initial full build on your netbook, you'll only get incremental builds from there on, just like with C++.
But unlike C++, dependency management isn't hell, and you are almost always guaranteed to have your dependency supported on the target you are building, OOB, without any fiddling or setup.
It's a trade-off, but I definitely think the Rust-team made the right decision here.
Compiling from source also does not solve the problem that a crate author hasn't taken a specific OS into account.
Yes, I am being a bit dramatic, but these kinds of issues do impact adoption, and there are many industries where shipping source is just not an option.
Come on. What do you find most developer friendly?
1. "cargo build && done"
or 2. Try to uncover what dependencies this project really has, then proceed to map out what the packages (and dev-packages) for those dependencies are called on the Linux distro you are using (Debian, Ubuntu, Fedora, Arch, etc.), not to mention what they are called on your specific version of that distro, round up and install all that stuff... and then try ./configure yet again?
> but these kinds of issues do impact adoption
Indeed. In 2020 I wouldn't bother adopting any kind of language which prefers the latter flow to the former one.
And it is more like "cargo build && off to lunch".
I am usually on Windows, and most commercial vendors nicely sell us their already compiled binaries. No need to hunt for anything.
Regarding Linux distros, if it isn't on the official repositories, Conan provides exactly the same experience as cargo, only faster because it supports binary libraries.
Don't use the operating system binaries unless you are packaging your application for use by that same repo. Instead, use Conan or you will find yourself in dependency hell trying to get users running it on Fedora, Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04, Debian, etc where they all use different versions of the dependencies you want.
The official repositories are for the sysadmin and for other packages in the official repositories.
There is nothing hellish about C++ if you are using the right tools, meaning a proper package manager: Nix/Guix, Conan, or Spack.
It is currently much more hellish to integrate Rust+Cargo behind another build system (behind pip/gem/npm, for instance) than it is with C++.
For comparison, all Rust projects use cargo.
The ergonomic value of that is something you simply cannot overstate.
Turbo Pascal 7, which was already a relatively complex language, compiled several thousand lines per minute on a single-core 4 MHz computer bound to 640 KB of RAM.
More modern examples with more complex type systems are languages like Delphi, Free Pascal, Ada/SPARK, D, F# via .NET AOT, and OCaml.
So yeah cargo is nice, but not if I have to build everything from scratch, Rust isn't a scripting language.
FWIW, Rust builds faster than Node applications, as my laptop can only handle so many IOPS.
And Rust compiles are in line with OCaml (probably faster by now due to the optimization work over the past year).
But when DFA is a required semantic feature of the language, it has to be done for debug builds, too, and so they're going to be slow.
Debug code generation, on the other hand, involves none of that and the compiler just blasts out code expression by expression. The time goes up linearly with the number of expressions.
DFA (and the O/B DFA) is done at the function level. It can be done globally, but I try to keep the compile times reasonable.
Brute force but maybe that is what is needed.
I decided to redo the DFA from scratch anytime the AST changed, and have had pretty reliable optimization as a result.
Besides the tone, my main gripe with the article and some discussions I've seen elsewhere is mixing implementation trade-offs with design trade-offs. For example, LLVM and not doing your own optimization passes can be important for time-to-market. The only reasonable alternative that I can think of without sacrificing time-to-market is between LLVM or a C backend. Delaying Rust would have made it irrelevant.
Now for some context for those not as familiar with Rust:
> Stack unwinding — stack unwinding after unrecoverable exceptions traverses the callstack backwards and runs cleanup code. It requires lots of compile-time book-keeping and code generation.
This is for asserts (panics) and can be toggled with a flag. It isn't inherent to the language though some older code uses it extensively (like rustc) because it predates the current language design (from what I've read).
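For reference, the toggle is a per-profile Cargo setting (shown here for release builds; it can also be passed directly as `-C panic=abort`):

```toml
# Cargo.toml: skip unwinding entirely; a panic aborts the process,
# so rustc emits no landing-pad/cleanup bookkeeping.
[profile.release]
panic = "abort"
```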
> Tests next to code — Rust encourages tests to reside in the same codebase as the code they are testing. With Rust's compilation model, this requires compiling and linking that code twice, which is expensive, particularly for large crates.
This is in a list of negative impacts of features, but people outside of Rust reading this might not catch why this is done. Tests inline with your code have access to your private interfaces. External tests are for integration testing and build against your library like anyone else would.
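A minimal sketch of the inline style (illustrative names): the test module lives in the same file as a private helper and can call it directly, and `cargo test` compiles the code again under `cfg(test)` on top of the normal build, which is where the doubled cost comes from.

```rust
// Private helper -- not visible outside this crate/module.
fn clamp_percent(x: i32) -> i32 {
    x.clamp(0, 100)
}

fn main() {
    println!("{}", clamp_percent(150)); // 100
}

// Compiled only when testing; has full access to private items above.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn clamps_both_ends() {
        assert_eq!(clamp_percent(-5), 0);
        assert_eq!(clamp_percent(150), 100);
    }
}
```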
Sometimes it's LLVM, sometimes it's Rust macros encouraging HUGE functions, which then leads to slow LLVM optimization passes. Sometimes it's heavy use of generics. Sometimes it's the linker. There are various ways to hit slow compile times.
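As a toy illustration of the macro point (hypothetical macro, for demonstration only): every repetition expands inline at the call site, so a macro invoked with thousands of arguments, or nested in other macros, quietly hands LLVM one enormous function body to optimize.

```rust
// Each $x is pasted into the caller's body; with many arguments the
// expanded expression, and thus the enclosing function, grows linearly.
macro_rules! unrolled_sum {
    ($($x:expr),*) => { 0 $(+ $x)* };
}

fn main() {
    let s = unrolled_sum!(1, 2, 3, 4, 5); // expands to 0 + 1 + 2 + 3 + 4 + 5
    println!("{}", s);
}
```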
My bad for not being more clear — I just wanted to point out that Rust doesn’t have anything like the full, accidentally Turing complete template metalanguage that C++ has.
That's irrelevant. Being Turing complete does not mean being slow. Having templates with duck typing does not imply being slow. That's completely irrelevant and pretty fanboy.
The main reason C++ templates are slow to compile is that they have to be header-based, meaning they are parsed again and again and again in every translation unit.
Which is by itself pretty insane, and when you realize that, you realize that C++ compilers are in fact pretty fast compared to the job they do.
Ideally C++ modules might solve that in the long term.
Right, the original Concepts designs in C++0x had type checking of template definitions. It had a huge negative impact on compilation time and was one reason why it was dropped.
Mind you, had it been accepted it is possible it might have been optimizable, and certainly retrofitting it didn't help, but yes, you are right, not checking is not going to slow things down.
Really, package each module in its own lib/DLL; there is no need to compile everything from scratch, other than self-flagellation.
A full build only becomes necessary when a new version of a low level module gets released into the library staging area.
Rust's generics are pretty complete/full/equivalent to C++ templates. The type system is already turing complete, and even if we assume that generics alone aren't turing complete on their own on stable (I suspect this is a bad assumption!), they soon will be thanks to features like const_if_match, const generics, etc. Even ignoring those features - check out the typenum crate for some of the nonsense that can be done with stable rust already.
I did get your fundamental point, but the counterpoint I'm trying to make here is that the differences are perhaps fewer than one might hope, and every single concern gpderetta raised about C++ templates can - at least theoretically - apply to Rust generics as well. While Rust generics might not be abused as much as C++ templates in practice - so far, anyways - that's mostly because Rust macros are way more powerful than C++'s, very much Turing complete, and are abused for complicated stuff instead.
I was reading about rust-analyzer recently, which is a new language server for Rust that is explicitly designed to address compilation latency in IDEs. This also happened with Java back in the day when IBM developed their own incremental Java compiler for use inside of Eclipse (technically this predates their IDE; I was using an early version in 1998). It gives that IDE an edge over things like IntelliJ in terms of compile latency, which is orders of magnitude slower in IntelliJ (measured in seconds instead of ms). IntelliJ does a lot of work to hide the issues through elaborate caching, lots of things happening asynchronously, etc. They even attempted to integrate the Eclipse compiler. But it's very noticeable if you are used to fast feedback on your code correctness.
Another important aspect that they are trying to address in the Rust Analyzer that the Eclipse Java compiler also addresses is handling partially correct code. If you are editing, the end state is of course correctly compiling code. But in between edits when it doesn't compile is exactly when you need your IDE to be helpful. Eclipse used to be really good at this and update in real time on basically every key-press. The red squiggly lines disappearing basically means "now your code is compiling fine". Intellij works a lot slower and worse, ends up actively lying quite often with both false positives and negatives being very common (the dreaded reset caches feature is a top level menu item for this reason).
So, it's an important problem. Rust is optimized for run time safety and performance. The same infrastructure that enables that should in principle also be able to enable a great developer experience when it comes to IDE friendly features.
This article is what I'm referring to above: https://www.infoq.com/news/2020/01/rust-analyser-ide-support... (interview with one of the Rust Analyzer devs)
Autocompletion in VSCode works, instantaneously, all the time. Errors are shown in real time. Compilation is fast. Error messages are human readable. Updating Go or dependencies doesn't break existing code.
>Per-compilation-unit code-generation — rustc generates machine code each time it compiles a crate, but it doesn't need to — with most Rust projects being statically linked, the machine code isn't needed until the final link step. There may be efficiencies to be achieved by completely separating analysis and code generation.
It was my first thought as well but I have no knowledge how Rust linking works so I don't know how reasonable caching module compilation units would be.
On the last major production codebase I did this for, linking a single artifact went from ~60 seconds (BFD) to ~10 (Gold). Multiply by nearly 100 artifacts (several dozen libraries, several dozen tools, per-library unit tests, benchmarks, applications, etc. all statically linking some common code), and some basic incremental "I touched a single .cpp file" builds went from 30+ minutes to maybe 5 minutes for a single platform x config combination.
I don't understand why so many absolutely want to inflate this number, as it doesn't mean anything about the product.
This inflation also leads to making part of the analysis of this article just wrong.
Running scc in TiKV's "src" folder:

    Lines   Blanks  Comments  Code
    78442   7259    5476      65707
By having a mix of interpreters, repls, JIT and AOT compilers, one can mix and match, using the fastest ones for development and the slow ones for the final release build.
To my knowledge D has neither of those so it's not really in the running here.
Still waiting for Multicore OCaml.
And it is safer. :)
In any case, the multicore runtime is getting closer: https://discuss.ocaml.org/t/multicore-ocaml-january-2020-upd...
Best IPC is no IPC, wouldn't you agree? In-process function calls can't be beaten.
Give me a mix of OCaml and Rust with the runtime of Erlang/Elixir and I ain't learning another programming language until the day I die!
Thank you for the link, I found it really interesting.
I'm just saying that in these same times many problems fall into the category of embarrassingly parallel and there's no reason to wait 4s for a result that can easily take 0.5s.
But you're also correct.
Anyway, the official road map states something along the lines of "Multicore: probably next release" (but that was also said for previous releases)
I really like OCaml. It's mind-bogglingly fast and well-made in basically almost every regard I can think of. The lack of proper parallelism nowadays however is a huge NOPE.
I know Jane Street and Inria have a lot of valid usages for it and don't care what the rest of us think, but it's very sad to have one of the highest quality languages and compilers be left to fringe usage by several organisations only. :(
On the other hand, something like Chromium (25M lines of code) will take about 8 hours, and bring your machine to its knees as it consumes ALL available resources (granted, last I did this I only had 8GB of RAM, and I was running my desktop at the time... including Chromium). I don't remember exactly how long Firefox takes to build, but I remember it was significantly less time (maybe 3 hours?).
So... it depends? On a lot of things?
(btw, LoC numbers were pulled from the first legitimate-looking result I could find on a quick search... take with a grain of salt... also, compilation times are a rough approximation based on my observations... take those with a truckload of salt)
Does the Chromium build use LTO (I'm pretty sure FF does)? That's also a huge resource sink and doesn't parallelize as well (a lot of the optimization is delayed to link time).
And you didn't see the problem!?
When Prof. Wirth was making the Oberon compiler he had a heuristic that any language feature which made the compiler slower at compiling itself was reworked or discarded.
With a lot of the other things that are a part of the design of the Rust language itself, I'm happy with how they are, even if that slows down compilation somewhat.
I didn't know this. Is Servo the best example of development/coding standard/features/implementation for Rust?
I believe the two main factors are that:
- many Rust users come from languages with a faster writing->running cycle (there is a surprising number of Python users starting to use Rust)
- Rust takes pride in its speed, so if something is slow it is seen as a failure that should be fixed (even if it has perfectly reasonable reasons to be slow)
C++ is horribly slow if you overuse templates and don't take advantage of binary dependencies.
If every module compiles to its own library (not just an object file), has proper translation units, and you have a fast linker, compiling the stuff you're currently working on is relatively fast.
If adopting Rust means buying new hardware, then they will just keep using their existing options regarding programming languages.
By the way, it is possible to connect a laptop to whatever monitors and keyboard you want.
I think I'd struggle to bring that around meeting rooms all day. I have a monitor and keyboard to plug in on my desk, of course...
I’m not sure why you’d need that explained.
This reads as a dismissal. That you can only be a serious developer if you have a permanent setup where you sit down, and don't move from it.
Many devs are not just sitting at a desk or in a cubicle writing code.
They may need to interact with clients, move between locations, even across cities. They may even be in the horrifying hot desk situation.
In all of those, working with a laptop makes sense.
A dual core laptop with 2GB of RAM is a very serious computer, it's a supercomputer compared to what I had 20 years ago for doing essentially the same tasks.
Slow software is not the machines fault.
How does a single laptop core compare to a single core of the fastest workstation money can buy? It's the same silicon with a touch higher power budget. Unless you time it, the difference is imperceptible.
A large Rust project should be split into multiple crates. It makes the code much cleaner and more flexible, and it compiles faster.
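A minimal sketch of what that split looks like in practice (hypothetical member names): a Cargo workspace, where each member is its own crate, so cargo can build independent crates in parallel and recompile only the ones whose source changed.

```toml
# Workspace root Cargo.toml: each member is a separate compilation unit.
[workspace]
members = ["core", "storage", "cli"]
```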
A stronger machine absolutely makes a difference. Single thread performance of a desktop compared to a laptop also does.
What desktop are you comparing to what laptop? The top end i9 has eight cores, and a max turbo just 100mhz shy of the top end desktop part. The turbo is less aggressive on the laptop, but there's just not that much difference between them for low thread counts.
I wasn't really intending to get in to an argument about whether rustc threads well, because although I've used Rust a fair bit I haven't built any large projects in it. If people have and it uses all their cores, I totally believe them.
What I was trying to do is point out that the correct response to "Rust doesn't thread well" is not "but have you run it on a workstation?". Laptop chips for the first time are extremely competitive for single threaded workloads, because the individual cores in large chips now have equivalent power budgets, you just get a bunch more of them.
"Actually, it does thread well" is totally fine as a response, and I'm not going to claim otherwise.
Compiling multiple crates at the same time is very common in my project.
There will definitely be workloads that benefit from the new cache hierarchy, but there are also those that suffer.
If you've measured it on a high end laptop for comparison then, of course, the proof is in the pudding and I'm not going to argue (or if you see >~10 threads in use that's proof enough).
I think the 5 GHz i9 with fast desktop memory (hardly a big investment for a company paying programmer salaries) would absolutely demolish the average laptop CPU with slower memory and bus speed. Yes, even in single thread; probably factor of 2+.
Moreover, even though parallelism is limited in this case, it surely will use more than one thread, and in other cases it's less limited and you still have this real issue to confront: why pay more for a slower system? Seriously, what are the overriding considerations?
The fastest laptop chips are extremely fast at executing lightly threaded programs. I'd own a workstation if it sped up my tools. But it won't.
Because it's something you can absolutely do to improve your computing life with a simple, rational calculation, which I will outline now for expensive (relative to US hardware prices) Germany.
PC dev box parts, prices sourced from geizhals.de:
* Ryzen 9 3900x, 490 euros: https://geizhals.de/amd-ryzen-9-3900x-100-100000023box-a2064...
* Top end X570 mobo, 200 euros: https://geizhals.de/gigabyte-x570-aorus-elite-a2078208.html
* 32 GB DDR4-3200, 144 euros: https://geizhals.de/g-skill-ripjaws-v-schwarz-dimm-kit-16gb-...
It's missing other stuff, but those can be quite cheap and don't need upgrading as often, and you maybe don't need the fanciest motherboard. (Going with Intel at this point is a complicated topic due to security issues and mitigation performance hits, to be balanced against having the most obscenely fast performance possible for a bit more money.)
Anyway, let's call it ~2k euros for a truly badass 12c24t 32GB memory dev box.
Compare this to, say, the common and much-loved Macbook Pro: that's 1.5k to 3.2k euros (just checked). The former for a super weak computer, the latter is 2.3 GHz 8c(16t?), 16 GB 2666 MHz memory. That's really weak computing power for the amount you're spending (yes, I understand you're not buying pure performance, but that's what's being compared here), and you don't even really "want to do better" with higher clocks etc because, speaking in purely physical terms, you need more volume and cooling to get the higher throughput. Laptops also start to throttle way sooner than computers with big coolers (that needn't cost much- I'm using a 15 euro cooler on my i7 8700k at 4.7 GHz which is passive without significant load).
So that's it really; for about the price of a weak Macbook Pro, you could have an ultra powerful dev or build box. You can get some really amazing screens for little money these days, and re-use it between upgrades, along with several other components; how much of your laptop do you usually re-use when upgrading?
I've now spent a truly excessive amount of time justifying my point of view, in the hope that I don't sound completely delusional. 1.5-2k euros once every few years for your developers to have peak dev performance is IMO an easy sell; if your laptop is a thin client you can have it additionally and it won't need upgrading for ages, because it doesn't need to do any heavy lifting.
> They're different markets.
False dichotomy, they can be (and usually are) additive, because you can rdesktop and use remote CI. What I'm questioning is the relevance of laptops as primary build computers.
Just look at the CPU usage on a >= 16 cores machine while building the rust compiler itself. The only time all cores are used, essentially, is while building LLVM.
I absolutely disagree. The mental overhead and time it takes to synchronize my work over multiple machines stands in no comparison to the few thousand dollars I'd save every few years by having desktop computers everywhere. If compilation is slow, use a distributed compiler infrastructure (if your language supports this, mostly writing C++ here).
If you are doing serious work, do it in the cloud. If you have an actually big codebase to compile, you need to scale that horizontally anyway and an expensive workstation under your desk will take hours anyway.
In some sense I think we agree, except the cloud is just someone else's powerful computer. Laptop as thin client via rdesktop is how I developed (also C++) for years, but only when I was at home away from my work computer. With the cloud you're always away from your work computer.
> If you have an actually big codebase to compile, you need to scale that horizontally anyway and an expensive workstation under your desk will take hours anyway.
Agreed, and that's further along the same vector: if you have performance issues with compilation, consider also (not only, before someone jumps on me) using bigger machine(s) for the job.
Anyway, I'm going to stop here. Feels like I found the most hated tech opinion ever... at least eventually there were some thought-out replies (and thank you for yours).
Speaking of mobility, a 9980HK in a laptop is basically the same as a 9900KS for a lightly-threaded workload like (apparently) rustc. You might be able to reduce compile time by increasing fan speed, though.