
Compiler Optimizations Should Pay for Themselves (1994) [pdf] - networked
https://pdfs.semanticscholar.org/8043/5e17ef603de14514dac7d9a9856a4bfc6863.pdf
======
nickcw
This sounds very similar to the criterion the Go team uses to decide whether
to add a compiler optimization, namely that the optimization should speed up
the compiler by at least as much as the extra computation it performs costs.

I don't think the Go team worries as much about object size, though.
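That break-even rule can be sketched as a simple comparison (the function name
and all numbers below are invented for illustration, not the Go team's actual
process): an optimization is worth adding only if the time it saves on the
benchmark workload -- the compiler compiling itself -- covers the time its
extra analysis costs.

```go
package main

import "fmt"

// paysForItself applies the rule of thumb from the thread: an optimization
// is admitted only if, on the self-compile benchmark, the compile time it
// saves is at least the time its extra pass costs. Purely illustrative.
func paysForItself(passCostSec, speedupSec float64) bool {
	return speedupSec >= passCostSec
}

func main() {
	// Hypothetical: a new heuristic adds 0.8s of analysis but makes the
	// compiler 1.2s faster at compiling itself -- it pays for itself.
	fmt.Println(paysForItself(0.8, 1.2)) // true
	// A pass that costs more than it saves fails the rule.
	fmt.Println(paysForItself(2.0, 0.5)) // false
}
```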

~~~
david-gpu
Imagine using that yardstick to guide which optimizations to implement in a
FORTRAN compiler -- it would be absurd! Most workloads don't have much in
common with a compiler.

~~~
tzar
Imagine advertising FORTRAN as a developer-friendly language with fast
compilation times! I'm sure the Go folks are using this metric because it's
compatible with their goals, which aren't everyone's.

~~~
weberc2
The Go team has a whole bunch of benchmarks besides compiler performance by
which they judge the language. Certainly maintaining good compiler performance
means giving up some low-value optimizations, but it doesn't mean compiler
performance is their only benchmark.

------
aseipp
Chez Scheme essentially follows this rule, and has since its inception, and it
is an incredibly impressive project IMO. It can compile itself in just a few
seconds and has a fairly simple architecture (despite having 30+ distinct IR
languages in the compilation pipeline, Scheme already being a relatively
high-level language, and a ~100k(?)-line codebase), and its generated code has
very good overall performance that "scales well". It tries not to be too good
in some areas at the expense of others, or bad/unwieldy in any. The goal is
more that "results are predictable and not fragile", but in practice that
means applying rules like this one. It's like putting a monetary budget on
feature creep, to me. Chez is a very pleasurable Scheme to work with.

Apparently the only time they really violated this rule was when
rearchitecting their compiler from using 5-6 distinct IR languages to over 30
of them using the "Nanopass" approach. So I assume the compile time simply
went from "a few seconds" to "a few more seconds", but that's probably OK to
do once over ~30 years, I guess. :)

I was told once (by one of its developers) that the Microsoft C# compiler team
followed a similar rule: they never made a release that regressed their
internal compiler benchmarks, i.e. they were only allowed to get faster.
Obviously the .NET runtime does a huge amount of heavy lifting in terms of a
C# program's performance, making the compiler a bit simpler, but it still means
things like optimizations must pay off in speed _and_ effectiveness. They even
rewrote the compiler from C++ to C# at one point and _still_ met this goal
somehow, apparently, and once you're at that point -- bootstrapping yourself
-- you begin the virtuous cycle of self-improvement. I'd be interested to know
if any C# developers have seen the Microsoft .NET compiler get slower --
everything I've heard is that it's very fast and nice to use.

------
throwaway613834
This only works for compilers written in the same languages as the ones they
compile...?

~~~
to3m
Better than that - it only really works for languages that are only used to
write their own compiler ;)

------
moomin
This is the “the compiler represents a typical workload” fallacy.

~~~
chrisseaton
Similarly, I think one reason that the programming-language community likes
functional programming more than most people is that a compiler is
fundamentally one big pure function.

~~~
nimish
Well, it turns out languages written for PL and compiler research make writing
programming languages and compilers easier...

------
CalChris
This article was written in 1994 and its advice seems appropriate. For 1994.
In 1993, the SPARCstation ZX had a maximum of 96 MB of memory. So code size
was important back then. My iPhone has 2GB of DRAM now. You can get a 4.19TB
EC2 instance [1].

Yeah, this might have been wise in 1994 but this is just spectacularly bad
advice now.

[1] [https://techcrunch.com/2017/09/14/aws-now-offers-a-virtual-m...](https://techcrunch.com/2017/09/14/aws-now-offers-a-virtual-machine-with-over-4tb-of-memory/)

