I think what people are trying to get at in this thread is that they'd be happy if their Go compiles took 0.5 seconds instead of 0.1 seconds, if it also meant that it lessened the need to manually employ the techniques in the OP. Of course, keeping such tradeoffs within reason is a difficult prospect.
I'd like to believe this is the case, but has that ever actually happened - that is, has a programming language ever seen that much of a speedup in its compiler? There are quite a few languages generally taken to have long compile times (C++, Haskell, Scala, etc.), but I don't know of a case where any of them has improved by as much as an order of magnitude.
As for Rust specifically, a benchmark I saw a while back estimated that modern rustc compilation is about 3x faster than it was as of the 1.0 release in 2015. Other "cheating" approaches to reducing build times for Rust include: incremental recompilation (kind of like a built-in ccache); parallel codegen units (the original rustc, ironically, was single-threaded); a compilation mode that stops after typechecking, invoked via `cargo check`, which avoids codegen and LLVM entirely and suffices for much of one's interactive development; and an alternative debug-build backend (Cranelift, still very much a work in progress) that is optimized for extreme compilation speed (like a browser JS engine) rather than generated-code quality (LLVM being largely designed for the inverse).
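For concreteness, most of those knobs are exposed through Cargo. A sketch of what turning them up might look like (the specific values here are illustrative, not a recommendation, and defaults vary by toolchain version):

```toml
# Cargo.toml - dev-profile settings touching the approaches above.
[profile.dev]
# Incremental recompilation (the "built-in ccache"); on by default for dev builds.
incremental = true
# More codegen units = more parallel LLVM work, at some cost to runtime performance.
codegen-units = 256
```

The typecheck-only mode is simply `cargo check`, and, if I recall correctly, the Cranelift backend can be tried on nightly by installing the `rustc-codegen-cranelift-preview` rustup component and building with `RUSTFLAGS="-Zcodegen-backend=cranelift"` (details may have changed since I last looked).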
Indeed, the first step to making a fast compiler is "design your language from the start such that it can be compiled quickly" (which can be seen in the design of Go). I can think of a few things in Rust that might have been designed slightly differently (in the name of compilation speed) if the authors had the compiler-building wisdom they do today. But there are still plenty of opportunities to improve here.
Also, taking 'its compiler' literally: Turbo Pascal was over ten times as fast as the earlier multi-pass Pascal compilers for the same hardware.