
Fast Compilers for Fast Programs - pplonski86
https://crawshaw.io/blog/fast-compilers
======
abeppu
This person wants compilers to be interactively fast and believes the outcome
will be developers empowered to create fast programs. But I think the key
setup in the intro assumes that a) the compiler runs your tests and b) test
performance is a good proxy for real performance.

The counterexample to this in my daily life is working with an IDE that does a
pretty good job of interactively invoking the compiler as I change code.
However, my tests aren't invoked with each compile pass, and the work they do
doesn't look like a long running live service, or a giant spark job, or
whatever. They're mostly aimed at checking correctness, and spend at least as
much time creating and checking values as invoking my real code. If my real
program is going to use many threads, do a lot of io and waiting for other
services to respond, or decompress and deserialize big chunks of data, it's
hard to see how a very fast local test ever could be a close proxy for that
performance.

~~~
openfuture
If you want your tests to run at compile time, then what you want is static
typing.

~~~
bluGill
Static types are very useful, but they do not catch all problems.

~~~
mring33621
But they catch many more problems than dynamic typing.

~~~
fao_
That's a bit of a domain error. Nobody said that static types don't catch
problems, and nobody brought up dynamic typing (until you). The parent poster
tried to make a false equivalence between _proper tests_ and static typing.
The former is a category that includes the latter.

------
nickpsecurity
This author either doesn’t know about Niklaus Wirth or forgot to mention him.
Wirth’s main metric for assessing language complexity was how fast the
compiler compiles in general and compiles itself. He ditched anything that
slowed that down. The resulting compiled code could never be as fast as
C/C++'s. However, Oberon-2’s fast development pace with safe code was a major
inspiration for Go. If anything, Wirth’s legacy went mainstream when someone
finally did something non-academic with it.

The author makes another widespread mistake: that fast-moving teams need
languages with fast compilers. It's sort of a half-truth. The alternative I
push is a combo of a REPL or fast compiler with an ultra-optimizing compiler.
Most of the iteration happens with the fast one. Everything that seems more or
less finished gets compiled in the background or on a build server with more
optimization, followed by lots of testing, and then gets integrated into the
new, compile-quick builds. Plenty of caching and parallelization in the build
system, too.

See Section 2 in the pdf:

[https://news.ycombinator.com/item?id=15704806](https://news.ycombinator.com/item?id=15704806)

~~~
gnufx
Of course, there's no good reason why compiling the compiler should be
relevant to typical applications. I'm glad GCC doesn't drop optimizations for
fast numerical Fortran that surely won't speed up the compiler.

~~~
vidarh
It's not so relevant for compiler suites that handle many languages, but a
Wirth-style compiler is largely a bunch of branches and IO, so while it won't
be representative of everything, it is _quite_ representative.

And of course it means the result is slower, but it represents a belief that
the speed and simplicity of the compiler matter more for a whole lot of uses.

Thankfully nothing prevents us from having multiple compilers, or settings, so
we can get both very fast compiles _and_ optimizations. Indeed, several of
Wirth's thesis students did optimization work that plugged into the Oberon
compiler.

~~~
gnufx
I don't necessarily disagree, and I grew up with rather fast compilers on
rather fast interactive systems; they may even have been Wirth-style, but I
didn't study the source. It's just that it's not a useful metric for, say,
numerical code that doesn't look like a compiler. Also I suspect the
difference in hardware architecture in those days was relevant.

------
shereadsthenews
The focus on Go compiler speed is kinda weird to me. Other languages sometimes
offer flexible tradeoffs to developers. You can build and test C++ programs
fairly quickly if all the optimizations are disabled, or you can spend a long
time building a release binary with optimization, load-testing, profile-guided
recompilation, link-time optimization, and post-link optimization. But Go only
offers the first thing: a fast compiler with almost no optimizations.

~~~
bluGill
C++ with all optimizations disabled is still slow. g++ takes 2.8 seconds to
compile a simple hello world on my machine, or 0.14 seconds the second time,
once caches are warm. Python 2 can build and run hello world in 0.028 seconds,
or 0.011 once caches are warm. Of course, the C++ hello world runs in 0.002
seconds once built (caches don't seem to make a difference, but that might be
because it was still in the cache from building).

~~~
dman
What version of gcc? What OS? I am seeing ~0.4 seconds on a machine that is
3-4 years old. [PS: I think even 0.4 seconds is too slow]

~~~
bluGill
4.8.4. Note that only the cached numbers are repeatable. I'm running in a VM,
which is likely to be a factor. (IT... the machine exists to run the one VM.)

~~~
dman
I empathize very strongly with you. At a past gig I was in a similar
position.

------
erik_seaberg
> The stuttering command line is a subtle UX poke, hey, you just ruined a nice
> program.

Isn't this subtle discouragement from adding powerful features if there's no
way to make them super fast? I want to automate everything I can. It's really
hard to make a computer slower than doing it myself; I'm made of _meat_.

~~~
setr
I think he’s suggesting that the tradeoff is larger than compile time vs.
performance; there are certain compile-time barriers that should be accounted
for.

E.g., powerful features that would cross a barrier may well only be placed
behind it (e.g. -O2).

So then g++ with minimal optimizations doesn’t sit behind the minimal barrier
(instantaneous-feeling, whatever, 200ms or so), while Go does its best to
(perhaps the counterargument being that it doesn’t offer much when you want to
target barriers 2 and 3 instead of the minimal one).

------
averros
If compiler speed is your problem, then you're not doing development right.

Tweaking the code until it produces something which looks right (and/or passes
the tests) as the primary means of achieving correct operation is guaranteed
to produce poor quality code.

The only way to produce correct and reliable code is to do that by design and
deliberation. Tests merely check a tiny part of the input vector space; they
are safety against brain farts and gross errors. If you build code carefully,
you spend most of the time thinking and compilation is something which happens
not too often.

Having a slow compiler teaches you to be careful; a rapid-fire
hack-compile-test-fix-compile-test-fix cycle positively encourages sloppiness.

Another reason compilation can be slow is simple bloat, indicative of poor
architecture, poor design, or use of inappropriate tools. If you need to write
200 KLOC and most of that code is variations on the same theme, you should
have started by finding or designing a DSL that takes the repetitiveness out,
or by doing some other form of meta-programming.

------
sandreas
This is a really interesting point of view... I don't think there is a valid
way to always pick matching example projects, nor do I think a compiler should
collect statistical data over time (what if I lose that data?), but I like the
approach.

Perhaps it should be a tool that can be run on the git repositories of
existing projects to plot a compile-time graph and identify fishy commits...

Sounds interesting, though.
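The core of such a tool is small; a sketch (the function name is invented, and
second-granularity timing is crude but enough to spot a fishy commit):

```shell
# compile_time_graph: for the last N commits, check each one out, time the
# given build command, and print "<commit> <seconds>", ready for plotting.
compile_time_graph() {
  build_cmd="$1"
  n="${2:-20}"
  orig="$(git rev-parse --abbrev-ref HEAD)"   # branch to return to afterwards
  for rev in $(git rev-list --reverse "HEAD~$n..HEAD"); do
    git checkout -q "$rev"
    start="$(date +%s)"
    sh -c "$build_cmd" >/dev/null 2>&1        # the project's build step
    end="$(date +%s)"
    printf '%s %d\n' "$rev" "$((end - start))"
  done
  git checkout -q "$orig"
}
```

Usage would look like `compile_time_graph 'make -j8' 50 > times.txt`, with the
output fed to gnuplot or similar.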

~~~
tracker1
Man, I sometimes feel old seeing these kinds of complaints... and I do
appreciate them. When I start certain kinds of projects with an interactive
build/rebuild (i.e. node/webpack), the first pass is S-L-O-W, but interactive
changes are instant.

In Rust, for a hello-world Rocket app (just learning Rust), it's 0.17s for a
debug build, 0.18s for production. I have no real frame of reference other
than OMG it's so tiny (the executable, and running in memory). I'm also
starting to work with ARM/Pi devices, so Rust is making a lot more sense for
that space.

------
pjc50
No indication of how large these programs are ...

It would be amazing to have a compiler respond in 0.3s. I don't think I've
ever experienced that. I've had scripting languages take longer to spin up. My
current work test suite takes _twenty minutes_. I've had C++ programs take
over ten minutes just in the linker.

~~~
shakna
tcc [0] confines you to an older subset of C, and I don't believe it's still
under serious development, but it is fast. Fast enough you can use it like a
scripting language for most use cases.

[0] [https://bellard.org/tcc/](https://bellard.org/tcc/)

~~~
flukus
I find GCC compile times unnoticeable, and C perfectly usable as a scripting
language, certainly competitive with the startup time of many scripting
languages. By the time you're getting noticeable compile times you're well
into "this should be a proper project" territory, but everything from small
scripts to simple GUI programs compiles quickly. I wrote my own shebang
scripts, but there's a more full-featured implementation here:
[https://github.com/RhysU/c99sh](https://github.com/RhysU/c99sh)

------
walshemj
I am probably going to get downvoted to hell here, but have we lost sight of
making haste slowly?

I'd rather people spent time making compilers better than just faster.

