"MCNP is written in the style of Dr. Thomas N. K. Godfrey, the principal MCNP programmer from 1975-1989 ... All variables local to a routine are no more than two characters in length, and all COMMON variables are between three and six characters in length ... The principal characteristic of Tom Godfrey's style is its terseness. Everything is accomplished in as few lines of code as possible. Thus MCNP does more than some other codes that are more than ten times larger. It was Godfrey's philosophy that anyone can understand code at the highest level by making a flow chart and anyone can understand code at the lowest level (one FORTRAN line); it is the intermediate level that is most difficult. Consequently, by using a terse programming style, subroutines could fit within a few pages and be most easily understood. Tom Godfrey's style is clearly counter to modern computer science programming philosophies, but it has served MCNP well and is preserved to provide stylistic consistency throughout."
Thanks, now I know that the LLInt (the new interpreter) was written by Filip Pizlo. Note that you mention him in the article only in connection with the DFG JIT. I didn't try to investigate the source files or check-ins; I tried to understand only what you wrote, and that was obviously not clear enough.
I hope you see that I'm interested in the topic but am reading about it for the first time. I hope you can also imagine that there are more readers like me, and that they would benefit from a correct summary too.
Please do write what else I misunderstood; that is exactly why I wrote the short summary, to get feedback, not to claim that I understand more than you or any insider. It's short, some 20 times shorter than your article by word count, so I hope it wouldn't be hard for you to point out any other inconsistency. Summarizing helps if the result is relevant and clear: a few sentences instead of 1700 words. Without a summary, the most important points can be overlooked or misunderstood by anybody not "close to the sources."
Well, I guess the one thing I would correct is the "why". The LLInt doesn't produce assembler just to be fast, though it is faster than the old interpreter. The real reason it produces assembler is to control the stack representation, so that it works better with tiering, exceptions, and the optimizing compiler (DFG). Otherwise, interpreting was a losing proposition, because tiering up cost too much.
As you can see, the situation is a bit complicated. If I could have made the article shorter, I would have :)
Thanks, I now understand that the main benefit was a tightly controlled CPU stack, and that this is exactly what would be impossible to achieve with C++ code. We don't have to agree on whether that goal can be called "speed." I believe it can, since otherwise a "good old" interpreter would be enough, with no need for the "simple JIT," the DFG JIT, and control of the CPU stack.
And thanks for writing the article (and the other ones on the same subject) I've really learned a lot!
Just to make it even easier to confuse the two, the Forth inventor Chuck Moore is also involved in chip design these days (at GreenArrays, a company doing heavily multi-core embedded chips running a Forth-like instruction set).
I also wondered for a while (checking Wikipedia and work...). May he rest in peace, even if from that little article I didn't learn what advancements he was responsible for (which I'm now quite curious about).
Much love for Lisp machines, but stack machines do not offer any performance advantage over register machines, and they actually make optimization much harder. See Ungar's SOAR thesis for an early realization of this: he examines what microcode, register windows, etc. could have done for him, and how he was able to do just as well with plain registers.
Several RISC chips for Lisp machines were under development: Xerox, Symbolics (Sunstone), and UC Berkeley (SPUR) all had projects. The AI winter then killed them. The Lisp machines were later ported as emulators to the Alpha (Symbolics), SPARC (Interlisp), and other processors.
"Also, note that the Sunstone project did address many of the competitive concerns, especially the continual mention of Sun in this analysis. The Sunstone project included a chip design for a platform meant to run Unix and C, as well as Lisp. It was a safe C exploiting the tagged architecture, for example, to allow checking of array bounds. And the Sunstone project was being produced on-time. But to back up the analysis of Symbolics’ priorities, it was cancelled as we were getting the first chips back from LSI Logic."