The counterexample to this in my daily life is working with an IDE that does a pretty good job of interactively invoking the compiler as I change code. However, my tests aren't invoked with each compile pass, and the work they do doesn't look like a long-running live service, or a giant Spark job, or whatever. They're mostly aimed at checking correctness, and spend at least as much time creating and checking values as invoking my real code. If my real program is going to use many threads, do a lot of I/O and waiting for other services to respond, or decompress and deserialize big chunks of data, it's hard to see how a very fast local test could ever be a close proxy for that performance.
The author makes another mistake which is widespread: the claim that fast-moving teams need languages with fast compilers. It's sort of a half-truth. The alternative I push is a combo of a REPL or fast compiler with an ultra-optimizing compiler. Most of the iteration happens with the fast one. Everything that seems roughly finished gets compiled in the background or on a build server with more optimization, followed by lots of testing, and then gets integrated into the new, compile-quick builds. Plenty of caching and parallelization in the build system, too.
See Section 2 in the pdf:
Indeed, one of the old stories about the Oberon compiler (I can't vouch for how strictly Wirth stuck to it) is that optimizations had to "pay for themselves": once recompiled with the optimization, the compiler needed to compile itself at least as fast as before the optimization was added.
At least one of his thesis students did write papers on applying other optimizations that had no hope of making it into the standard Oberon compiler for that reason. I believe it was either Michael Brandeis's PhD thesis or one of his other papers. But even though his optimization pass was "too costly", it was impressive how small it was, given that it also beat contemporary versions of gcc in terms of the resulting performance.
I love diving into old HN threads... Especially seeing my own comments at the top of the page. Less so having my slow pace on my personal compiler project highlighted in them (apparently a GC was the next step for my Ruby compiler back in 2017 - it's only now sitting in a working state in my home directory, not yet committed). It's definitely not as fast as a Wirth compiler - in large part due to the sheer volume of small objects created. As much as I love working with Ruby, as a fan of Wirth's work it also makes me want to bang my head through a wall...
Compiler Construction - The Art of Niklaus Wirth 
FFF97 - Oberon in the Real World. 
And of course it will mean the result is slower, but it represents a belief that the speed and simplicity of the compiler matter more for a whole lot of uses.
Thankfully nothing prevents us from having multiple compilers, or settings, so we can get both very fast compiles and heavy optimization. Indeed, several of Wirth's thesis students did work on optimization passes that plugged into the Oberon compiler.
It worked out great, but I’ve noticed that the compilation time of translation units that use the templates has been growing linearly with features added. It’s not a lot in the grand scheme of things, but it made me wish I could reason more about C++ compile times when I use its various features.
Or get a Python2 compiler to native code.
`-N` disables optimization and `-l` disables inlining.
I also found this page which lists some optimizations done by the compiler.
edit: You can see what the compiler emits as assembly code with `-gcflags '-S'` so you can compare the optimized vs unoptimized assembly.
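As a trivial, hedged sketch of how you might compare the two (the function name is invented): a call small enough for the optimizer to inline, so the optimized and unoptimized assembly dumps differ visibly:

```go
package main

// add is small enough that the default (optimizing) build inlines it at
// the call site; with -gcflags='-N -l' it stays a real function call.
func add(a, b int) int { return a + b }

func main() {
	println(add(1, 2))
}
```

Building it with `go build -gcflags='-S'` and again with `go build -gcflags='-N -l -S'` should make the difference easy to spot in the two assembly listings.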
is maybe a good starting point.
Isn't this a subtle discouragement from adding powerful features if there's no way to make them super fast? I want to automate everything I can. It's really hard to make a computer slower than doing it myself; I'm made of meat.
E.g., powerful features that would push you past the barrier may well only be placed behind it (e.g. behind -O2).
So then you have g++, which even with minimal optimizations doesn't sit behind the minimal barrier (instantaneous-feeling, whatever that is, 200ms or so), while Go does its best to stay there (perhaps the counterargument being that it doesn't offer much when you want to target barriers 2 and 3 instead of the minimal one).
Tweaking the code until it produces something which looks right (and/or passes the tests) as the primary means of achieving correct operation is guaranteed to produce poor quality code.
The only way to produce correct and reliable code is to do so by design and deliberation. Tests merely check a tiny part of the input vector space; they are a safety net against brain farts and gross errors. If you build code carefully, you spend most of the time thinking, and compilation is something which happens not too often.
Having a slow compiler teaches you to be careful; a rapid-fire hack-compile-test-fix-compile-test-fix cycle positively encourages sloppiness.
Another reason why compilation can be slow is simple bloat, indicative of poor architecture, poor design, or the use of inappropriate tools. If you need to write 200 KLOC and most of that code is variations on the same theme, you should have started by finding or designing a DSL that takes the repetitiveness out, or by doing some other form of meta-programming.
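As a rough sketch of that meta-programming idea (in Go, with invented names, fields, and template): a tiny generator that emits the repetitive accessors from a list, instead of hand-writing hundreds of near-identical functions:

```go
package main

import (
	"os"
	"text/template"
)

// One accessor per field name; the body is the part that would otherwise
// be copy-pasted over and over.
var accessor = template.Must(template.New("acc").Parse(`
// Get{{.}} returns the {{.}} column of the record.
func (r Record) Get{{.}}() string { return r.fields["{{.}}"] }
`))

func main() {
	// In practice this list would come from a schema or a small DSL file.
	fields := []string{"Name", "Email", "Address"}

	os.Stdout.WriteString("package store\n\ntype Record struct{ fields map[string]string }\n")
	for _, f := range fields {
		if err := accessor.Execute(os.Stdout, f); err != nil {
			panic(err)
		}
	}
}
```

The particular generator doesn't matter; the point is that the repetitive bulk becomes a short data/template pair rather than 200 KLOC the compiler has to chew through.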
Perhaps it should be a tool that can be run on the git repositories of existing projects and plot a compile-time graph to identify fishy commits...
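A minimal sketch of such a tool, assuming a Go project and using only git plus the go toolchain (error handling, graphing, and the commit range are placeholders):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	// Commits on the current branch, oldest first.
	out, err := exec.Command("git", "rev-list", "--reverse", "HEAD").Output()
	if err != nil {
		panic(err)
	}
	for _, commit := range strings.Fields(string(out)) {
		if err := run("git", "checkout", "--quiet", commit); err != nil {
			panic(err)
		}
		_ = run("go", "clean", "-cache") // keep the build cache from skewing timings
		start := time.Now()
		_ = run("go", "build", "./...") // a broken build still yields a data point
		fmt.Printf("%s\t%v\n", commit[:8], time.Since(start))
	}
	// Afterwards: `git checkout -` to return to where you started.
}
```

Feed the output to gnuplot or a spreadsheet and the fishy commits should stand out as steps in the curve.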
Sounds interesting, though.
In Rust, for a hello-world Rocket app (I'm just learning Rust), it's 0.17s for a debug build and 0.18s for production. I have no real frame of reference other than OMG it's so tiny (both the executable and its in-memory footprint). I'm also starting to work with ARM/Pi devices, so Rust is making a lot more sense for that space.
It would be amazing to have a compiler respond in 0.3s. I don't think I've ever experienced that. I've had scripting languages take longer to spin up. My current work test suite takes twenty minutes. I've had C++ programs take over ten minutes just in the linker.
The C++ demo would build in 5 minutes on my machine.
The [Object] Pascal demo would build in 5 seconds.
The C++ package got shipped back. Ain’t nobody got time for that.
I'd rather people spent time making compilers better than just faster.