Fast Compilers for Fast Programs (crawshaw.io)
73 points by pplonski86 on May 13, 2019 | 45 comments



This person wants compilers to be interactively fast and believes the outcome will be developers empowered to create fast programs. But I think the key assumptions set up in the intro are that a) the compiler runs your tests and b) test performance is a good proxy for real performance.

The counterexample to this in my daily life is working with an IDE that does a pretty good job of interactively invoking the compiler as I change code. However, my tests aren't invoked with each compile pass, and the work they do doesn't look like a long-running live service, or a giant Spark job, or whatever. They're mostly aimed at checking correctness, and spend at least as much time creating and checking values as invoking my real code. If my real program is going to use many threads, do a lot of I/O and waiting for other services to respond, or decompress and deserialize big chunks of data, it's hard to see how a very fast local test could ever be a close proxy for that performance.


This. Integration and load tests do so much work that the compiler is lost in the noise. Running forty (hopefully) concurrent requests through a 200 OPS service does not tell me whether it's going to make prod flaky or more expensive.


If you want your tests to run at compile time, then what you want is static typing.


Static types are very useful, but they do not catch all problems.


But they catch many more problems than dynamic typing.


That's a bit of a domain error. Nobody said that static types don't catch problems, and nobody brought up dynamic typing (until you). The parent poster tried to make a false equivalence between proper tests and static typing. The former is a category that includes the latter.


This author either doesn’t know about Niklaus Wirth or forgot to mention him. Wirth’s main metric for assessing language complexity was how fast the compiler compiles in general and compiles itself. He ditched anything that slowed that down. The resulting compilers could never generate code as fast as C/C++. However, Oberon-2’s fast development pace with safe code was a major inspiration for Go. If anything, Wirth’s legacy went mainstream when someone finally did something non-academic with it.

The author makes another mistake, which is widespread: assuming that fast-moving teams need languages with fast compilers. Sort of a half-truth. The alternative I push is a combo of a REPL or fast compiler with an ultra-optimizing compiler. Most of the iteration happens with the fast one. Everything that seems kind of finished gets compiled in the background or on a build server with more optimization, followed by lots of testing. It then gets integrated into the new, compile-quick builds. Plenty of caching and parallelization in the build system, too.

See Section 2 in the pdf:

https://news.ycombinator.com/item?id=15704806


> Wirth’s main metric for assessing language complexity was how fast the compiler compiles in general and compiles itself. He ditched anything that slowed that down.

Indeed, one of the old stories about the Oberon compiler (I can't vouch for how strictly he stuck to it) is that optimizations had to "pay for themselves": once recompiled with the optimization, the compiler needed to compile itself at least as fast as it did before the optimization was added.

At least one of his thesis students did write papers on applying other optimizations that had no hope of making it into the standard Oberon compiler for that reason. I believe it was either Michael Brandeis' PhD thesis or one of his other papers. But even though his optimization pass was "too costly", it was impressive how small it was, given that it also beat contemporary versions of gcc in terms of resulting performance.

> https://news.ycombinator.com/item?id=15704806

I love diving into old HN threads... Especially seeing my own comments at the top of the page. Less so to have my slow pace on my personal compiler project highlighted in them (apparently a GC was the next step for my Ruby compiler back in 2017 - it's just now sitting in a working state in my home directory; not yet committed). It's definitely not as fast as a Wirth compiler - in large part due to the sheer volume of small objects created. As much as I love to work with Ruby, as a fan of Wirth's work it also makes me want to bang my head through a wall...


Niklaus Wirth fans will appreciate the (apparently out of print) book "The School of Niklaus Wirth: The Art of Simplicity", written by some of Wirth's students about his and their research work with languages, operating systems, and the Lilith computer.

https://amzn.com/1558607234


I own this wonderful book; some of the chapters are available as publicly accessible papers, like:

Compiler Construction - The Art of Niklaus Wirth [1]

FFF97 - Oberon in the Real World. [2]

[1] https://pdfs.semanticscholar.org/036f/c4effda4bbbe9f6a9ee762...

[2] https://www.researchgate.net/publication/221349845_FFF97_-_O...


Thanks! I no longer own this book, so these papers online are a treat. :)


Of course, there's no good reason why compiling the compiler should be relevant to typical applications. I'm glad GCC doesn't drop optimizations for fast numerical Fortran that surely won't speed up the compiler.


It's not so relevant for compiler suites that handle many languages, but for a Wirth-style compiler, the compiler is largely a bunch of branches and IO, so while it won't be representative of everything, it is quite representative.

And of course it will mean the result is slower, but it represents a belief that the speed and simplicity of the compiler matter more for a whole lot of uses.

Thankfully nothing prevents us from having multiple compilers, or settings, so we can get both very fast compiles and optimizations. Indeed, several of Wirth's thesis students did optimization work that plugged extra passes into the Oberon compiler.


I don't necessarily disagree, and I grew up with rather fast compilers on rather fast interactive systems; they may even have been Wirth-style, but I didn't study the source. It's just that it's not a useful metric for, say, numerical code that doesn't look like a compiler. Also I suspect the difference in hardware architecture in those days was relevant.


True. I didn't agree with his metric. Mine came from my experience with BASIC and Lisps. It was that the compile should happen in no more than a few seconds if iteration speed was the goal. My metric was about keeping the mind in a state of flow for maximum creative output. Otherwise, it could take all night if I wasn't staring at the screen waiting on it.


The focus on Go compiler speed is kinda weird to me. Other languages sometimes offer flexible tradeoffs to developers. You can build and test C++ programs fairly quickly if all the optimizations are disabled, or you can spend a long time building a release binary with optimization, load-testing, profile-guided recompilation, link-time optimization, and post-link optimization. But Go only offers the first thing: a fast compiler with almost no optimizations.
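For concreteness, that spectrum with a GCC-style toolchain looks roughly like the sketch below (illustrative only; "service.cpp" and "representative_load" are made-up names, and a real project would drive this from its build system):

    # quick edit-compile-test loop: no optimization, full debug info
    g++ -O0 -g -o svc_debug service.cpp

    # release pipeline: optimize, instrument, run a profile workload, rebuild with the profile
    g++ -O3 -flto -fprofile-generate -o svc_prof service.cpp
    ./svc_prof < representative_load
    g++ -O3 -flto -fprofile-use -o svc_release service.cpp

Go, as you say, only really gives you the first line of that.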


C++ with all optimizations disabled is still slow. g++ takes 2.8 seconds to compile a simple hello world on my machine - or .14 seconds the second time, after caches are warm... Python2 can build and run hello world in .028 seconds, or .011 once caches are warm. Of course, the C++ hello world runs in .002 seconds once built (caches don't seem to make a difference, but that might be because it was still in the cache from building).


What version of gcc? What OS? I am showing ~0.4 seconds on a machine that is 3-4 years old. [PS: I think even 0.4 seconds is too slow]


4.8.4. Note that only the cached numbers are repeatable. I'm running in a VM, which is likely to be a factor. (IT... the machine exists to run the one VM.)


I empathize very strongly with you. At a past gig I was in a similar position.


C++'s templates (code!) in headers can cause a lot of pain on the compilation front. A single #include of a heavyweight Boost header can add many seconds to your compile.


Recently at work I was able to use C++ templates to automate a lot of repeated boilerplate code I was responsible for. The reason was to minimize the amount of (error-prone!) rote work that had to be done as the project would continue to scale.

It worked out great, but I've noticed that the compilation time of translation units that use the templates has been growing linearly with features added. It's not a lot in the big scheme of things, but it made me wish I could reason more about C++ compile times when I use its various features.


You can make use of libraries and the extern templates introduced in C++11.
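For anyone who hasn't used them, here's a minimal sketch of the extern template idea (hypothetical names): the header declares that the common instantiations exist elsewhere, and exactly one .cpp file actually compiles them, so the other translation units that include the header skip that work.

    // formatter.h -- hypothetical template used by many translation units
    #pragma once
    #include <sstream>
    #include <string>

    template <typename T>
    std::string format_value(const T& value) {
        std::ostringstream out;
        out << value;
        return out.str();
    }

    // C++11 explicit instantiation declarations: suppress implicit
    // instantiation of these common cases in every includer.
    extern template std::string format_value<int>(const int&);
    extern template std::string format_value<double>(const double&);

    // formatter.cpp -- the one place those instantiations are compiled
    #include "formatter.h"

    template std::string format_value<int>(const int&);
    template std::string format_value<double>(const double&);

It only helps for instantiations you can enumerate up front, but it's one of the few levers C++ gives you for keeping template costs out of every translation unit.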


Well, that's easy: Python doesn't compile anything. The old adage 'the secret to writing fast code is writing code that does less' is true.


Sure it does; it's just not compiling ahead of time. Or rather, compiling + running is behind a single command.


And the compilation is depressingly literal, because optimization is so difficult with Python's level of dynamism.


Python2 is an interpreter. If you want to do a proper comparison you need to compare with a C++ interpreter.

Or get a Python2 compiler to native code.


My comparison is valid. Engineering is about tradeoffs. When considering Python vs C++, the cost of compiling is one factor, and the cost of the runtime is another. There are many other factors that you need to consider when deciding what language to use. The ability to compile Python to native code is one as well.


It is not, because it is not the same execution model that is being compared; it is apples vs pears.


The Go compiler contains lots of optimizations, especially considering its age.


Can you share some more details about what those optimizations are? Can you point us to a link?


You could try compiling with and without `-gcflags '-N -l'`

`-N` disables optimization and `-l` disables inlining.

I also found this[0] page which lists some optimizations done by the compiler.

[0] https://github.com/golang/go/wiki/CompilerOptimizations

edit: You can see what the compiler emits as assembly code with `-gcflags '-S'` so you can compare the optimized vs unoptimized assembly.


https://quasilyte.dev/blog/post/go_ssa_rules/ is maybe a good starting point.


Go is indeed slow; cf. the Benchmarks Game website.


> The stuttering command line is a subtle UX poke, hey, you just ruined a nice program.

Isn't this subtle discouragement from adding powerful features if there's no way to make them super fast? I want to automate everything I can. It's really hard to make a computer slower than doing it myself; I'm made of meat.


I think he’s suggesting that the tradeoff is larger than compile-time/performance; there are certain compile-time barriers that should be accounted for.

E.g. such powerful features that would cross the barrier may well only be placed after the barrier (e.g. behind -O2).

So then you have g++, which even with minimal optimizations doesn't sit behind the first barrier (instantaneous-feeling, whatever that means, 200ms or so), while Go does its best to stay there (perhaps the counterargument being that it doesn't offer much when you want to target barriers 2 and 3 instead of the minimal one).


If compiler speed is your problem, then you're not doing development right.

Tweaking the code until it produces something which looks right (and/or passes the tests) as the primary means of achieving correct operation is guaranteed to produce poor quality code.

The only way to produce correct and reliable code is to do it by design and deliberation. Tests merely check a tiny part of the input vector space; they are a safety net against brain farts and gross errors. If you build code carefully, you spend most of the time thinking, and compilation is something that happens not too often.

Having a slow compiler teaches you to be careful; a rapid-fire hack-compile-test-fix-compile-test-fix cycle positively encourages sloppiness.

Another reason why compilation can be slow is simple bloat, indicative of poor architecture, poor design, or use of inappropriate tools. If you need to write 200 KLOC and most of that code is variations on the same theme, you needed to start by finding or designing a DSL that takes the repetitiveness out, or by doing some other form of meta-programming.


This is a really interesting point of view... I don't think there is a valid way to always determine matching example projects, nor do I think a compiler should collect statistical data over time (what if I lose this data?), but I like the approach.

Perhaps it should be a tool that can be run on the git repositories of existing projects to plot a compile-time graph and identify fishy commits...

Sounds interesting, though.


Man, I sometimes feel old seeing these kinds of complaints... and I do appreciate it. When I start certain kinds of projects with an interactive build/rebuild, the first pass is S-L-O-W but interactive changes are instant (i.e. node/webpack).

In Rust, for a hello-world Rocket app (just learning Rust), it's 0.17s for a debug build, 0.18s for production. I have no real frame of reference other than OMG it's so tiny (executable and running in-memory). I'm also starting to work with ARM/Pi devices, so Rust is making a lot more sense for that space.


No indication of how large these programs are ...

It would be amazing to have a compiler respond in 0.3s. I don't think I've ever experienced that. I've had scripting languages take longer to spin up. My current work test suite takes twenty minutes. I've had C++ programs take over ten minutes just in the linker.


I remember back in the early 90s having a copy of both Borland packages, Pascal and C++, for Windows 3.x. They had similar demo programs. I compared one of them (forgot which).

The C++ demo would build in 5 minutes on my machine.

The [Object] Pascal demo would build in 5 seconds.

The C++ package got shipped back. Ain’t nobody got time for that.


We'll see if it holds up in the real world, but Jon Blow has been working to keep compiles of his game written in Jai (I think it's at like 80k lines of code? might have that number wrong) under half a second. Mind you, that's not an optimized build, though you can run it through LLVM at a slower pace.


tcc [0] confines you to an older subset of C, and I don't believe it's still under serious development, but it is fast. Fast enough you can use it like a scripting language for most use cases.

[0] https://bellard.org/tcc/


I find GCC compile times to be unnoticeable, and compiled C perfectly usable as a scripting language, certainly competitive with the startup time of many scripting languages. By the time you're getting to noticeable compile times you're well into "this should be a proper project" territory, but everything from small scripts to simple GUI programs compiles quickly. I wrote my own shebang scripts, but there's a more full-featured implementation here: https://github.com/RhysU/c99sh
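A minimal sketch of the compile-and-run pattern (made-up file name; shown here with g++, while the linked c99sh wraps essentially the same thing behind a shebang for C99):

    // hello.cc -- a throwaway "script" that happens to be compiled:
    //   g++ -O0 -o /tmp/hello hello.cc && /tmp/hello
    #include <cstdio>

    int main() {
        std::puts("hello from a compiled script");
        return 0;
    }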


I am probably going to get downvoted to hell here, but have we lost sight of making haste slowly?

I'd rather people spent time making compilers better, not just faster.



