The compiler implements the "Operation offsets" optimization as described in the article. It is actually pretty straightforward. This function converts a map of offset => delta to x86-64 instructions:
https://github.com/brianquinlan/brainfuck-jit/blob/master/bf...
and the function following it generates that table.
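The idea behind operation offsets, roughly: instead of emitting a pointer move for every `<`/`>`, walk each loop-free run with a virtual offset and fold the `+`/`-` ops into a map of offset => delta, emitting one pointer adjustment at the end. The linked compiler emits x86-64; here is just a minimal sketch of the folding step in Python:

```python
def offset_deltas(ops):
    """Fold a loop-free run of BF ops into ({offset: delta}, net pointer
    move), so e.g. '>+>++<<-' needs only one pointer adjustment overall."""
    deltas = {}
    offset = 0
    for op in ops:
        if op == '>':
            offset += 1
        elif op == '<':
            offset -= 1
        elif op == '+':
            deltas[offset] = deltas.get(offset, 0) + 1
        elif op == '-':
            deltas[offset] = deltas.get(offset, 0) - 1
    return deltas, offset
```

A code generator can then turn each map entry into a single add-with-displacement instruction instead of a chain of pointer moves and increments.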
A silly Brainfuck interpreter I wrote one evening converts BF to JavaScript and runs eval() on it. That way I got all of the compiler optimizations and JIT capabilities of the JavaScript engines for free, for a trivial amount of effort. The resulting execution was faster than the various hand-rolled C interpreters I found online. The best part was that I spent probably less than an hour implementing and testing my "interpreter".
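The same trick works in any host language with an eval/exec facility. A minimal sketch in Python (the comment above used JavaScript; cell width, tape size, and the EOF convention here are arbitrary choices):

```python
import sys

# Per-instruction Python equivalents (8-bit wrapping cells assumed).
OPS = {
    '>': 'p += 1',
    '<': 'p -= 1',
    '+': 'tape[p] = (tape[p] + 1) % 256',
    '-': 'tape[p] = (tape[p] - 1) % 256',
    '.': 'out.append(chr(tape[p]))',
    ',': 'tape[p] = ord(sys.stdin.read(1) or "\\0")',
}

def bf_to_python(code):
    """Transpile BF to Python source; exec() then hands the heavy
    lifting (parsing, optimizing, eventually JITing) to the host VM."""
    lines = ['tape = [0] * 30000', 'p = 0', 'out = []']
    depth = 0
    for c in code:
        if c == '[':
            lines.append('    ' * depth + 'while tape[p]:')
            depth += 1
        elif c == ']':
            depth -= 1
        elif c in OPS:
            lines.append('    ' * depth + OPS[c])
    lines.append('result = "".join(out)')
    return '\n'.join(lines)

# Usage: 8 * 8 = 64, plus one -> cell value 65, i.e. 'A'.
env = {'sys': sys}
exec(bf_to_python('++++++++[>++++++++<-]>+.'), env)
```

All the loop structure maps directly onto the host language's `while`, so the host compiler sees ordinary code it already knows how to optimize.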
I have a Brainfuck-to-C compiler [1] that was (at its initial release, and possibly still is) "state of the art"; it can compile a "Hello, world" program into a single `puts` call.
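I don't know the internals of that compiler, but one way to get the same single-`puts` effect is partial evaluation: if a program reads no input and halts within a step budget, just run it at compile time and emit the captured output. A rough sketch (the step limit and the assumption that the output needs no C string escaping are mine):

```python
def fold_to_puts(code, step_limit=1_000_000):
    """Partially evaluate an input-free BF program at compile time.
    Returns C source with one puts() call if the program halts within
    step_limit, else None (caller falls back to normal compilation).
    Ignores that puts() appends a trailing newline."""
    if ',' in code:
        return None  # output may depend on input; can't fold
    jumps, stack = {}, []
    for i, c in enumerate(code):          # match brackets up front
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, p, pc, out = [0] * 30000, 0, 0, []
    for _ in range(step_limit):
        if pc >= len(code):
            return 'int main(void) { puts("%s"); }' % ''.join(out)
        c = code[pc]
        if c == '>': p += 1
        elif c == '<': p -= 1
        elif c == '+': tape[p] = (tape[p] + 1) % 256
        elif c == '-': tape[p] = (tape[p] - 1) % 256
        elif c == '.': out.append(chr(tape[p]))
        elif c == '[' and tape[p] == 0: pc = jumps[pc]
        elif c == ']' and tape[p] != 0: pc = jumps[pc]
        pc += 1
    return None  # didn't halt within budget
```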
ProTip(tm): JITing Brainfuck alone doesn't get you very far; for example, resetting a cell is an O(n) operation in BF, so a JIT by itself doesn't reduce it to O(1). Some kind of scalar evolution has to be implemented to remove that O(n) factor. More sophisticated liveness and range analysis might be required to go further (esotope-bfc didn't get there, anyway).
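For the cell-reset case specifically, a peephole pass over the instruction stream is enough; no full scalar evolution needed. A minimal sketch, where `Z` is a made-up IR opcode meaning "store zero to the current cell":

```python
def optimize_clears(ops):
    """Peephole pass: the clear loop '[-]' decrements until zero, which
    costs O(cell value) at run time; replace it with a single O(1) store.
    '[+]' clears too, assuming 8-bit wraparound cells."""
    return ops.replace('[-]', 'Z').replace('[+]', 'Z')
```

The interpreter or code generator then implements `Z` as one store (`tape[p] = 0`), turning the O(n) loop into a constant-time operation.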
What always strikes me (I also wrote a BF parser a while back) is the trade-off between how simple the language is to implement and how hard it is to write in. It seems obvious once you notice it: the fewer tokens/rules a language has, the easier it is to write a parser for it, but the harder it is to write a program in it.
The implications seem more subtle and more interesting. This simple phenomenon implies that application development time later can be saved by compiler development time now. If so, what is the optimal language/compiler? What is the best distribution of time? Is it perhaps so that there exists a language that is trivial to implement, yet also super powerful and expressive to program in (don't say lisp here, though it does almost satisfy the second constraint imo)?
>Is it perhaps so that there exists a language that is trivial to implement, yet also super powerful and expressive to program in (don't say lisp here, though it does almost satisfy the second constraint imo).
Lisp. Satisfies both constraints.
But seriously, Forth is another interesting language that strikes a good balance between ease of implementation vs ease of use and expressive power. But even more than Lisp, it takes some serious getting used to.
The language Io has a very simple grammar [1] and, libraries aside, is a pretty productive language.
I suspect that you are on to something, but the real guide isn't the grammar so much as the abstraction the language gives you, with a correlation between the level of abstraction and the complexity of the grammar.
It's natural that it's simple, since (and I suspect that here I'm about to get skewered) Brainfuck is basically a straightforward implementation of a Turing machine, just as Lisps are basically straightforward implementations of the lambda calculus.
Lest you think I'm completely talking out of my ass, I defer my defense to this random wiki page:
> P′′ is a primitive programming language created by
> Corrado Böhm in 1964 to describe a family of Turing
> machines.
> [...]
> Brainfuck (apart from its I/O instructions) is a
> simple informal variation of Böhm's P′′. Böhm gives
> explicit P′′ programs for each of a set of basic
> functions sufficient to compute any partial recursive
> function -- and these programs are constructed entirely
> from six P′′ words precisely equivalent to the
> respective Brainfuck commands +, -, <, >, [, and ].
Agreed, that's pretty much my argument for Brainfuck being a simple programming language. Though I phrased it more as an analogy than an equivalence.
Brainfuck is a simple language in the same way a Turing machine is a simple computer: conceptually simple, leaving the complexity in the laborious task of writing any program.
Yes, I wrote a virtual NAND gate programming tool a while back (think 2D Minecraft "redstone"-style programming). It was immensely fun, and playing with it taught me a ton about the gap between simple logic and actual computing (which is largely a matter of scale).
Implementations are available in the repository linked to from the article. Not quite "ready for production", but hopefully a useful starting point. https://github.com/matslina/bfoptimization
It is pretty trivial to write a BF-to-C compiler: each instruction has a clear C equivalent, and the parser is dead simple to write as well. A naive BF->C compiler is less than an hour's work.
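The per-instruction mapping really is a one-liner each. A sketch of such a naive compiler, here written in Python emitting C (tape size and cell type are arbitrary choices):

```python
# Each BF instruction has a direct C equivalent.
C_OPS = {
    '>': '++p;',
    '<': '--p;',
    '+': '++*p;',
    '-': '--*p;',
    '.': 'putchar(*p);',
    ',': '*p = getchar();',
    '[': 'while (*p) {',
    ']': '}',
}

def bf_to_c(code):
    """Naive BF->C compiler: translate instruction by instruction,
    ignore everything else (BF treats non-command bytes as comments)."""
    body = '\n'.join('    ' + C_OPS[c] for c in code if c in C_OPS)
    return ('#include <stdio.h>\n'
            'int main(void) {\n'
            '    static unsigned char tape[30000];\n'
            '    unsigned char *p = tape;\n'
            + body + '\n'
            '    return 0;\n'
            '}\n')
```

Pipe the output through a C compiler at -O2 and you get a lot of the classic BF optimizations (run-length folding, for one) for free.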
Since bff4lnr is an interpreter, this really becomes apples and oranges. But for fun, dbfi.b on bff4lnr (at -O3) vs all at -O0 and -O3: http://imgur.com/uEEgVr0
Conclusion: if you want speed then compile.