DCPU-16 implementation in F# (msdn.com)
51 points by balakk 1814 days ago | 21 comments



OK, yes, I get it, every major programming language can implement an emulator for DCPU-16 in a few dozen lines. Can someone hurry up and post the [consults the Trendy Language Calendar] Node implementation so we can call it a day on the DCPU-16 implementations?

(Incidentally, I'm not saying DCPU-16 is uninteresting, and if somebody's got something more interesting than a straight-up implementation I'm still all ears. But... Church-Turing, you know?)


I agree.

I would be a lot more interested in high-level languages targeting the DCPU-16.

Also, most DCPU-16 implementations I have seen so far are bytecode interpreters. I would love to see an actual JIT-translating emulator that generates x86 or, even better, LLVM code. The performance difference should be considerable.
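For contrast, here is a purely illustrative F# sketch of the interpreter style being described (the Cpu record, function names, and opcode comments are my own assumptions, following the low-four-bit opcode field in Notch's v1.1 spec):

    // Illustrative only: every instruction word is re-decoded each time it executes.
    type Cpu = { mutable pc: uint16; mem: uint16[] }  // registers, SP, O omitted

    let step (cpu: Cpu) =
        let word = int cpu.mem.[int cpu.pc]
        cpu.pc <- cpu.pc + 1us
        match word &&& 0xf with
        | 0x1 -> ()  // SET a, b  (operand decoding elided)
        | 0x2 -> ()  // ADD a, b
        | 0xd -> ()  // IFN a, b -- skip the next instruction when a = b
        | _ -> ()    // remaining opcodes elided

    let run (cpu: Cpu) cycles =
        for _ in 1 .. cycles do step cpu  // decode cost paid on every executed instruction

A JIT would instead translate each block of DCPU-16 code once, to native or LLVM-generated code, and skip that per-instruction decode cost on subsequent executions.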




I was going to write one using DynASM (it would be very short, simple, and fast), but it was unclear to me whether DCPU-16 had any substantial programs written for it where you could show off a speed difference.

I have been looking for an excuse to write an article about how to use DynASM though.


I'd like to see an implementation in OrgASM, but it's hard to type with just one hand.


JIT-compiling DCPU-16 bytecode using LLVM is something I plan to do later if I get time. I imagine someone else will already have done it by then, but if not, it's something I still plan to do.


Come on, isn't it nice to see a Rosetta Stone for something that isn't math-oriented?

I think one of Notch's goals with DCPU-16 was bringing back the feeling of programming for computers with very limited specs and little complexity. I think this is proof that his plan is working.


Apparently the Stack Overflow questions are already rolling in: http://stackoverflow.com/questions/tagged/dcpu-16

I imagine this must be slightly entertaining for Notch to witness.


It's great to see what idiomatic code looks like in all these different languages. F# shines here.


Please forgive my ignorance, but why are these DCPU-16 implementations in <your favorite language> [in fewer than X lines] popping up in recent days?


It's the in-game CPU for Notch's (the creator of Minecraft) new MMO:

http://0x10c.com/doc/dcpu-16.txt
http://0x10c.com/

Since the DCPU-16 spec was only announced recently, there is currently a surge of interest.


Most of these implementations change the stack pointer as a side effect of decoding a skipped instruction:

    IFN a, a
    SET PUSH, 1

Is this the intended behavior?


Yes, I think so, because branches skip full instructions, and instructions may be between 1 and 3 words long, but exactly how many is unknown until the skipped instruction is decoded.


The PC needs to be incremented past the "next words". But it seems odd that an unexecuted push instruction would still modify SP.


This is true - I actually misread your comment and thought you said instruction pointer instead of stack pointer... It seems to be a bug in the implementation (from my reading of the spec, this should not happen).
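One way to avoid it: when a conditional fails, compute the length of the instruction to be skipped without running the normal operand decoder (the one that mutates SP for PUSH/POP). A rough F# sketch, with my own function names, assuming the v1.1 encoding where operand codes 0x10-0x17, 0x1e, and 0x1f each consume one extra word:

    // Extra words consumed by one operand code (0x10-0x17 = [next word + register],
    // 0x1e = [next word], 0x1f = next-word literal), per the v1.1 spec.
    let operandWords (v: int) =
        if (v >= 0x10 && v <= 0x17) || v = 0x1e || v = 0x1f then 1 else 0

    // Length in words of the instruction encoded in `word`. Used only when skipping,
    // so SP and memory are never touched.
    let instructionWords (word: int) =
        if (word &&& 0xf) = 0 then
            // non-basic instruction: a single operand in the top six bits
            1 + operandWords ((word >>> 10) &&& 0x3f)
        else
            let a = operandWords ((word >>> 4) &&& 0x3f)
            let b = operandWords ((word >>> 10) &&& 0x3f)
            1 + a + b

The skip path then just advances PC by the result and never evaluates the PUSH operand at all.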


I believe you're right @MarkSweep. Thanks much for pointing that out! Fixed.


Who is going to do this in brainfuck?


Does anyone know which implementation of the DCPU-16 is the shortest?


What does the D in DCPU stand for? Notch's DCPU spec does not say.


This is in the storyline:

>In 1988, a brand new deep sleep cell was released, compatible with all popular 16 bit computers.

So I've assumed from the start that the D is for Deep, because the whole story is about this deep sleep stuff. But yeah, that's really just speculation. I don't think there's any official answer to it.



