Used at the end of a function, where all locals go out of scope anyway, it actually does nothing.
I would appreciate some introductory notes or links to prerequisites, though. That might entice an audience with no previous experience in writing compilers.
The book is freely available here: http://people.inf.ethz.ch/wirth/CompilerConstruction/index.h...
For a more comprehensive example, you can have a look at my compiler for the full Oberon language targeting the JVM. It is based on the principles of that book: https://github.com/lboasso/oberonc
I'd keep an eye on http://www.craftinginterpreters.com/contents.html, although it hasn't gotten to compilation yet (I think).
At this point it's not well-documented, but the layers are numbered in order: l00, l01, l02... Each builds on top of the previous one.
EDIT: I just added a mini evaluator for debugging purposes and so people can see what the language is like.
The book is free as well: https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-4.html...
SICP being freely available in both book form and as lectures makes it one of the best resources out there.
(I just never found its section on compilers that helpful. Again, probably learning styles.)
I enjoyed this read because it was a very user-friendly overview. After reading, I knew exactly what major pieces did what and therefore what to search for next on my own terms.
I also recommended the same Wirth book as lboasso, and a couple of other books, down at the bottom of the page.
Crenshaw's tutorial fits this perfectly: https://compilers.iecc.com/crenshaw/
x86 version here: http://www.pp4s.co.uk/main/tu-trans-comp-jc-intro.html
Aren't these the same thing?
This particular project luckily turned out to be quick to get working. Also, I kept the scope very small, calling it a day right before getting big ideas.
Isn't JIT compilation the process of compiling some sort of intermediate code into machine code at runtime? For example, compiling JVM bytecode into machine code during execution, and using the cached machine code transparently on subsequent calls to that segment of code.
Not to detract from the article, or the interesting techniques and explanation, but I didn't see any compilation other than by gcc, which IMHO makes this AOT rather than JIT compilation.
What am I missing?
Perhaps I should have included a small example of compiling a linear list of abstract instructions into machine code. But that again would consist of compiling small templates in a coherent way, just like many actual compilers do.
Anyway, point taken, and maybe I'll expand the article or follow up on it.
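To make the "small templates" idea concrete, here is a toy sketch (my own illustration, not from the article): each abstract instruction maps to a pre-assembled x86-64 machine code template, and "compilation" is just selecting and concatenating templates, patching in operands. The two-instruction abstract set is hypothetical, and the emitted bytes are only printed, not executed:

```java
import java.util.*;

// Toy template-based code generator: each abstract instruction has a
// pre-assembled x86-64 machine code template; compiling a program is
// concatenating the templates with operands patched in.
public class Templates {
    // Hypothetical abstract instruction set, just for illustration.
    static byte[] template(String op, int imm) {
        switch (op) {
            case "LOAD_CONST": // mov eax, imm32  ->  b8 xx xx xx xx (little-endian)
                return new byte[] { (byte) 0xb8,
                        (byte) imm, (byte) (imm >> 8),
                        (byte) (imm >> 16), (byte) (imm >> 24) };
            case "RET":        // ret  ->  c3
                return new byte[] { (byte) 0xc3 };
            default:
                throw new IllegalArgumentException(op);
        }
    }

    public static void main(String[] args) {
        // Abstract program: return the constant 42.
        List<byte[]> code = List.of(template("LOAD_CONST", 42), template("RET", 0));
        StringBuilder hex = new StringBuilder();
        for (byte[] t : code)
            for (byte b : t) hex.append(String.format("%02x ", b & 0xff));
        System.out.println(hex.toString().trim()); // machine code for: mov eax, 42; ret
    }
}
```

A real JIT would copy those bytes into executable memory (mmap / VirtualAlloc) before jumping to them; the point here is only the structure: compilation as template selection plus concatenation.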
The article author seems to be using GCC not at runtime, but more as a generator for templates (at the machine code level) for the rest of the software to use. You don't really want to be assembling a template at runtime in this context.
.Net doesn't work that way, for instance. Unlike Java's HotSpot, .Net never interprets CIL instructions. They're always compiled down to native code for execution.
At least, that was true in 2009, according to Jon Skeet - https://stackoverflow.com/a/1255832/2307853
I wasn't aware of this. To be clear, it's not 100% accurate to say "never", because it depends on the .NET environment, but point taken.
I now understand that .NET (on desktop, anyway) compiles all intermediate code (CIL) to native code on every run. This is a way different kind of JIT than Java HotSpot, which executes intermediate code (bytecode) initially, profiles intermediate code execution, and only compiles the hot spots to native code after a while.
It seems like what "JIT" means has evolved since the time it meant "JVM HotSpot".
There's a very interesting snippet of an interview with Anders Hejlsberg at https://stackoverflow.com/a/1255828/4158187, which exposes his reasoning.
".Net" and "Common Language Runtime" refer specifically to Microsoft's implementation. The spec is called the "Common Language Infrastructure".
Wikipedia seems to use "Common Language Runtime" in both ways - the Mono article uses it in the generic sense, whereas the CLR's own article explicitly states that it refers to .Net's implementation.
.Net also has ngen.exe which isn't really JIT at all, it's traditional 'offline' ahead-of-time compilation. In a sense that's rather like the traditional Unix compile-and-install model. One sees the oxymoron "pre-JIT" used to describe this.
This is another difference from HotSpot, which historically never cached native code to disk, though other JVMs have always been able to do this (Excelsior JET and GCJ, for instance).
Apparently AOT may be coming to Java though - it may even already be here, I'm not quite sure.
https://www.infoq.com/news/2016/10/AOT-HotSpot-OpenJDK-9 , http://openjdk.java.net/jeps/295
I recall once reading that one of the reasons HotSpot didn't cache native code, was the issue of securing that cache. Not sure what they've done to address that, nor what .Net does; it seems a valid concern.
That's an interesting read re. Hejlsberg. Worth noting that Mono takes them up on that and does feature an interpreter.
Edit: I'm wrong about the terminology! https://news.ycombinator.com/item?id=15688290
Congratulations on the article.
Does anybody know the easiest way to compile to JVM bytecode?
I have a Scheme interpreter written in Kotlin, what is the best way to compile it, instead of interpreting? Where do I start?
Shen to JVM bytecode compiler in Java 6 https://github.com/otabat/shen-jvm
Shen to JVM bytecode compiler in Java 8 with indy https://github.com/hraberg/Shen.java
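For a feel of what compilers like these emit, here is a hand-rolled minimal class file: a hypothetical class `Answer` with one static method returning 42, written with plain `DataOutputStream` and loaded via `defineClass`. This is a sketch of the format only; in practice the easiest route is a bytecode library such as ASM, which writes this structure for you:

```java
import java.io.*;
import java.lang.reflect.Method;

public class MiniClassGen {
    // Builds the class file for: public class Answer { public static int answer() { return 42; } }
    static byte[] classFile() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(0xCAFEBABE);                 // magic
        out.writeShort(0); out.writeShort(52);    // minor, major version (Java 8)
        out.writeShort(8);                        // constant pool count = entries + 1
        out.writeByte(1); out.writeUTF("Answer");           // #1 Utf8
        out.writeByte(7); out.writeShort(1);                // #2 Class -> #1
        out.writeByte(1); out.writeUTF("java/lang/Object"); // #3 Utf8
        out.writeByte(7); out.writeShort(3);                // #4 Class -> #3
        out.writeByte(1); out.writeUTF("answer");           // #5 Utf8
        out.writeByte(1); out.writeUTF("()I");              // #6 Utf8
        out.writeByte(1); out.writeUTF("Code");             // #7 Utf8
        out.writeShort(0x0021);                   // ACC_PUBLIC | ACC_SUPER
        out.writeShort(2); out.writeShort(4);     // this_class = #2, super_class = #4
        out.writeShort(0); out.writeShort(0);     // no interfaces, no fields
        out.writeShort(1);                        // one method
        out.writeShort(0x0009);                   // ACC_PUBLIC | ACC_STATIC
        out.writeShort(5); out.writeShort(6);     // name "answer", descriptor "()I"
        out.writeShort(1);                        // one method attribute: Code
        out.writeShort(7); out.writeInt(15);      // attr name "Code", attr length
        out.writeShort(1); out.writeShort(0);     // max_stack, max_locals
        out.writeInt(3);                          // bytecode length
        out.writeByte(0x10); out.writeByte(42);   // bipush 42
        out.writeByte(0xAC);                      // ireturn
        out.writeShort(0); out.writeShort(0);     // no exception table, no sub-attributes
        out.writeShort(0);                        // no class attributes
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] b = classFile();
        Class<?> c = new ClassLoader() {
            Class<?> define(byte[] bytes) { return defineClass("Answer", bytes, 0, bytes.length); }
        }.define(b);
        Method m = c.getMethod("answer");
        System.out.println(m.invoke(null));       // 42
    }
}
```

The same structure (constant pool, then members, each with a Code attribute) is what ASM's ClassWriter produces; for a Scheme front end you would walk your AST and emit one such method body per procedure.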
*by "poking around" I mean I typed the code into a text editor for further study:
Or have a look at Kawa.