OK, yes, I get it, every major programming language can implement an emulator for DCPU-16 in a few dozen lines. Can someone hurry up and post the [consults the Trendy Language Calendar] Node implementation so we can call it a day on the DCPU-16 implementations?
(Incidentally, I'm not saying DCPU-16 is uninteresting, and if somebody's got something more interesting than a straight-up implementation I'm still all ears. But... Church-Turing, you know?)
I would be a lot more interested in high-level languages for DCPU-16.
Also, most DCPU-16 implementations I have seen so far are bytecode interpreters. I would love to see an actual JIT-translating emulator that generates x86 or, even better, LLVM IR. The performance difference should be considerable.
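For contrast, the interpreter side of that comparison is small enough to sketch. Below is a minimal decode-and-dispatch loop, assuming the DCPU-16 1.1 word layout (bbbbbbaaaaaaoooo, low 4 bits = opcode) and handling only SET/ADD with register and short-literal operands; a JIT would instead translate each basic block to native code once and jump between the translations, skipping this per-instruction decode entirely.

```python
# Minimal DCPU-16-style interpreter sketch (1.1 encoding assumed).
# Only SET (0x1) and ADD (0x2) with register / inline-literal operands
# are implemented; a real emulator decodes all operand forms, memory
# addressing modes, and the overflow register.

class CPU:
    def __init__(self, program):
        # 64K words of memory, program loaded at address 0
        self.mem = (list(program) + [0] * 0x10000)[:0x10000]
        self.reg = [0] * 8   # A, B, C, X, Y, Z, I, J
        self.pc = 0

    def value(self, v):
        if v < 0x08:            # 0x00-0x07: register
            return self.reg[v]
        if 0x20 <= v <= 0x3f:   # 0x20-0x3f: literal 0x00-0x1f
            return v - 0x20
        raise NotImplementedError(hex(v))

    def store(self, v, word):
        if v < 0x08:            # only register targets in this sketch
            self.reg[v] = word & 0xFFFF

    def step(self):
        word = self.mem[self.pc]
        self.pc += 1
        op = word & 0xF          # low 4 bits: basic opcode
        a = (word >> 4) & 0x3F   # 6-bit operand a
        b = (word >> 10) & 0x3F  # 6-bit operand b
        if op == 0x1:            # SET a, b
            self.store(a, self.value(b))
        elif op == 0x2:          # ADD a, b
            self.store(a, self.value(a) + self.value(b))
        else:
            raise NotImplementedError(hex(op))
```

For example, `SET A, 5` encodes as 0x9401 (b = 0x25, a = 0x00, op = 0x1) and `ADD A, 2` as 0x8802; running both leaves 7 in register A. The decode, the operand branches, and the opcode switch all run again for every instruction, which is exactly the overhead a JIT amortizes away.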
I was going to write one using DynASM (it would be very short, simple, and fast), but it was unclear to me whether DCPU-16 had any substantial programs written for it that would show off a speed difference.
I have been looking for an excuse to write an article about how to use DynASM though.
This is true - I actually misread your comment and thought you said instruction pointer instead of stack pointer... It seems to be a bug in the implementation (from my reading of the spec, this should not happen).
>In 1988, a brand new deep sleep cell was released, compatible with all popular 16 bit computers.
So I've assumed from the start that the D is for Deep, because the whole story is about this deep sleep stuff. But yeah, that's really just speculation. I don't think there's any official answer to it.