It's fairly easy to make them non-brittle, because it's very easy to rig a testing framework to run both the hand-optimized "jet" and the pure code, and compare them.
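A minimal sketch of that differential-testing idea in Python, using decrement (the classic Nock example). The function names and harness are illustrative, not any actual Urbit code:

```python
import random

def pure_decrement(n):
    """The 'spec' version: count up until we find the predecessor,
    the way decrement is defined in pure Nock. Correct but O(n)."""
    i = 0
    while i + 1 != n:
        i += 1
    return i

def jet_decrement(n):
    """The hand-optimized 'jet': native arithmetic, O(1)."""
    return n - 1

# Rig the test framework to run both and compare on random inputs.
for _ in range(1000):
    n = random.randint(1, 1000)
    assert jet_decrement(n) == pure_decrement(n), n
```

If the jet ever disagrees with the pure code on any input, the harness flags it immediately, which is what makes the optimization non-brittle.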
Standardizing performance is a subtler and more interesting point. It's a fairly safe bet that anything in the kernel that needs to be jet-propelled is. Above that layer, who knows? Good old normative text may handle it.
But in general, the feeling should be like the difference between hardware GL and software GL. The developer doesn't see it and the user sees it only as fast versus slow.
What about when the pure code is too slow to run? Or when the pure code is wrong but the jet is right - so that the Nock spec is no longer sufficient to determine semantics?
Let me bury that criticism though by saying that this is the most beautiful piece of work I have seen in some time. If more people were willing to be a little crazy we might not be quite as tangled in spaghetti.
One, it takes a good bit of work beyond a merely correct Nock interpreter to boot Arvo. The Hoon type system, for instance, is enormously painful if not jet-propelled.
Two, the kind of bug you're describing is in fact the nastiest class of bug. The best way to get around it is to always write the pure code first. But this isn't possible in a variety of circumstances - and it sure wasn't possible when making Hoon first compile itself. So, as is inevitable in the real world, there is no substitute for not screwing up.
I think it's a fascinating tradeoff. A typical approach with JIT-compilation is to write relatively inefficient code and hope that the JIT compiler picks up on it and optimizes it for you. Their approach seems to instead be to say "if you generate particular sequences of instructions, we'll detect those and instead execute a more efficient version, every time".
IMO this is a really cool idea because the performance optimizations are quite a bit less brittle than the ones you get with a JIT-compiled virtual machine.
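A rough sketch of that dispatch model in Python, assuming (hypothetically) that formulas can be keyed by their exact shape. The registry names and `execute` function are made up for illustration:

```python
# The interpreter recognizes a known formula and runs a fast native
# version, falling back to slow pure interpretation otherwise.
SLOW = {}   # formula key -> pure implementation
JETS = {}   # formula key -> hand-optimized implementation

def register(formula, pure, jet=None):
    SLOW[formula] = pure
    if jet is not None:
        JETS[formula] = jet

def execute(formula, arg):
    # "If you generate this particular sequence of instructions,
    # we'll detect it and run the efficient version, every time."
    fn = JETS.get(formula, SLOW[formula])
    return fn(arg)

register(
    "dec",
    pure=lambda n: next(i for i in range(n) if i + 1 == n),
    jet=lambda n: n - 1,
)

print(execute("dec", 100))  # -> 99, via the jet
```

The key property is that the lookup is deterministic: the same formula always gets the same treatment, rather than depending on runtime profiling heuristics.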
> A typical approach with JIT-compilation is to write relatively inefficient code and hope that the JIT compiler picks up on it and optimizes it for you. Their approach seems to instead be to say "if you generate particular sequences of instructions, we'll detect those and instead execute a more efficient version, every time".
I don't see the "instead" there. Those seem to be two ways of saying the same thing.
Let's say that my VM of choice has a tracing JIT, and it will attempt to optimize traces up to 1,000 instructions long. Let's further say that in the current version of my code, the body of my hot loop is 980 instructions long. Then in a new version I add another few operations which push it up to 1,010 instructions. Suddenly the JIT stops trying to optimize that portion of my code and performance tanks.
Meanwhile, some other guy wrote his code on a VM that takes this sort of "jets" approach. It's probably not as fast or as versatile overall, but when he adds another few dozen instructions to a hot loop, he can do so secure in the knowledge that all the preexisting code will continue to execute just as it did before.