A quick look at destination-driven code generation (bernsteinbear.com)
60 points by tekknolagi on Nov 11, 2023 | 10 comments



It is interesting to see that this approach has been used by Niklaus Wirth in most of his compilers, see "Compiler Construction - The Art of Niklaus Wirth" by Hanspeter Mössenböck [0]. This technique was also described by David R. Hanson in "Code Improvement via Lazy Evaluation", 1980 [1] and "Simple Code Optimizations", 1983 [2].

[0] https://github.com/lboasso/oberonc/blob/master/doc/Moe00b.pd...

[1] http://storage.webhop.net/documents/lazyevaluation.pdf

[2] http://storage.webhop.net/documents/simpleopt.pdf


Wirth’s book on Compiler Construction is available at https://people.inf.ethz.ch/wirth/CompilerConstruction/index....

The relevant section is 10.2. But its mechanism is the “inverse” of destination-driven code generation as I understand it. In Wirth’s Item approach the callee picks a location and tells the caller where to look, whereas in destination-driven code generation the caller tells the callee where to put the result.
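
A minimal sketch of that caller-chooses-the-destination protocol (illustrative Python over a toy tuple AST, not the article's actual code; register names and the `Codegen` class are made up for the example):

```python
# Destination-driven codegen, minimal form: the caller passes `dest` down,
# and each node's compile routine must leave its result there. Scratch
# registers (t0, t1, ...) are only allocated for right operands.
class Codegen:
    def __init__(self):
        self.code = []
        self.next_tmp = 0

    def fresh(self):
        tmp = f"t{self.next_tmp}"
        self.next_tmp += 1
        return tmp

    def compile(self, node, dest):
        op = node[0]
        if op == "const":
            self.code.append(f"mov {dest}, {node[1]}")
        elif op == "add":
            self.compile(node[1], dest)      # left result lands directly in dest
            tmp = self.fresh()
            self.compile(node[2], tmp)       # right result goes to a scratch reg
            self.code.append(f"add {dest}, {tmp}")
        else:
            raise ValueError(f"unknown node: {op}")

cg = Codegen()
cg.compile(("add", ("const", 1), ("add", ("const", 2), ("const", 3))), "rax")
print("\n".join(cg.code))
```

Note how no node ever has to "return" a location to its parent: the destination flows strictly top-down, which is the opposite of Wirth's Item records flowing bottom-up.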


You are right, but as you can see on page 11 (Figure 4, rightmost AST) of The Oberon System Family paper [0], Wirth's approach was extended to "hint" destinations as well.

[0] https://www.research-collection.ethz.ch/bitstream/handle/20....


Thanks for the pointer! I didn’t know that.


> Nothing too fancy. No control-flow. No function calls. No data structures.

If that's the problem statement, then you may as well perform register allocation: all you need is the Sethi–Ullman algorithm [0][1], and it's quite simple.

Register allocation only becomes complicated when control flow comes into the picture, especially loops. Even then, there are some more or less obvious ways to do it, if you don't want to do graph colouring after Chaitin et al. (remember, people had to write register allocators before Chaitin's publication in 1981). For instance, you can do what the very first FORTRAN compiler did: treat registers as a cache for memory, in which case Bélády's algorithm can be used for spilling. And with loops over arrays you can leverage the knowledge that you're looping over arrays quite nicely [2].

[0] https://en.wikipedia.org/wiki/Sethi%E2%80%93Ullman_algorithm

[1] https://dl.acm.org/doi/pdf/10.1145/321607.321620

[2] https://archive.computerhistory.org/resources/text/Fortran/1...
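
For reference, the Sethi–Ullman labeling itself fits in a few lines (a hedged sketch over binary expression trees only; the tuple encoding and `su_label` name are mine, not the paper's notation):

```python
# Sethi–Ullman labeling: the minimum number of registers needed to
# evaluate a binary expression tree without spilling. A leaf needs 1;
# an internal node needs max(l, r) if the children's needs differ
# (evaluate the needier side first), otherwise one more than either.
def su_label(node):
    if node[0] == "leaf":
        return 1
    l, r = su_label(node[1]), su_label(node[2])
    return max(l, r) if l != r else l + 1

leaf = ("leaf",)
balanced = ("op", ("op", leaf, leaf), ("op", leaf, leaf))
chain = ("op", ("op", ("op", leaf, leaf), leaf), leaf)
print(su_label(balanced), su_label(chain))  # → 3 2
```

The labeling also dictates evaluation order: emit code for the child with the larger label first, so its result occupies one register while the whole other subtree is computed.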


The syntax-tree-driven approach is considerably simpler than Sethi–Ullman, and faster. If you read the blog post, you can see it's based on a 1990 paper that references the 1970 paper that introduced the Sethi–Ullman algorithm. So both the blog author (hi Max) and the authors of the 1990 paper were well aware of it.


It would be great to have a simple Virgil x86-64 backend based only on the destination-driven code generation technique, and compare the performance of the generated code (and compilation speed) with the official optimizing x86-64 backend. Would the implementation effort be comparable to the JVM backend?

BTW I really like the design of Virgil described in "Harmonizing Classes, Functions, Tuples, and Type Parameters in Virgil III", bravo!


Thanks. Interesting thought. The current Virgil compiler translates from the AST directly to SSA quite early. Specialization, analysis, data representation choices, and lowering all happen on SSA. Translating directly to machine code from the AST and using destination-driven allocation could maybe work for a simple subset with easy lowering, which would give an indication of the compile speed improvements possible.


Hi Ben!


Reminds me of HOAS techniques.



