LLVM backend for DCPU-16 (github.com)
95 points by codezero on Apr 8, 2012 | 46 comments



Just a suggestion: there's a lot of cool stuff going on with this project, and it would be a shame if interested people missed out on it. At the same time, posting every cool thing to HN could flood the front page for months :) So let's move discussion of new implementations, assemblers, and disassemblers (at least) to the DCPU subreddit? http://www.reddit.com/r/dcpu16


Thanks. I didn't know there was a subreddit for this.

Oh dear. DCPU emulation on DCPU? http://www.reddit.com/r/dcpu16/comments/rz6fd/dcpu_emulation...


I'm also working on a compiler targeting the DCPU-16. It doesn't compile C code, instead there's a simple C-like language supporting modules and pointers.

It's not really finished and can surely produce better code, but I wanted to share it anyway.

Here's the fib example from the LLVM backend's README: https://gist.github.com/2336867. My compiler currently manages to compile fib to 30 instructions (0x2E words) compared to 38 instructions (0x41 words) for the LLVM backend.
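For a sense of scale, a hand-written recursive fib in DCPU-16 assembly (a hypothetical sketch for comparison, not output from either compiler) fits in about 13 instructions:

    :fib              ; n in A, result returned in A
        IFG 2, A      ; if n < 2, fib(n) = n,
        SET PC, POP   ;   so just return
        SET PUSH, A   ; save n
        SUB A, 1
        JSR fib       ; A = fib(n-1)
        SET B, POP    ; B = n
        SET PUSH, A   ; save fib(n-1)
        SET A, B
        SUB A, 2
        JSR fib       ; A = fib(n-2)
        ADD A, POP    ; A = fib(n-1) + fib(n-2)
        SET PC, POP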

Here's the source: https://github.com/zr40/dcc


Too bad it doesn't follow the calling conventions proposed on #0x12-dev: https://github.com/0x10cStandardsCommittee/0x10c-Standards/b...


(dcpu16 llvm backend developer here)

That ABI is hard to implement in LLVM, mostly because SP-relative addressing like [4+SP] isn't available on the DCPU-16. So the DCPU-16 LLVM backend uses the C register as a frame pointer (to store local variables and other data) and SP as a stack pointer to store return addresses.

There are other "flaws" in that ABI which increase the cost of developing an LLVM backend.

I'm going to make the supported ABI closer to the #0x12-dev ABI in v0.0.5 and report the most annoying features of their ABI back to them.
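To illustrate the workaround (a hypothetical sketch, not the backend's actual output): since [n+SP] isn't addressable, the frame pointer copy in C is what makes indexed access to locals possible at all.

    :f  SET PUSH, C       ; save caller's frame pointer
        SUB SP, 3         ; reserve three words of locals
        SET C, SP         ; C now points at the frame
        SET [0+C], 1      ; [n+C] addressing works where [n+SP] does not
        ; ... function body, locals at [0+C]..[2+C] ...
        ADD SP, 3         ; free the locals
        SET C, POP        ; restore caller's frame pointer
        SET PC, POP       ; return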


I don't think it makes any sense to follow any community-agreed guidelines until Notch finalizes his design.

Do you plan to make a short description of what exactly it took to create the backend?

I thought this would be an excellent excuse to learn about LLVM, so I've been reading about its internals for the past few days. Even though I realized that starting from MSP430 this would be relatively simple for someone who knows LLVM, I'm apparently too late to the party :) Nonetheless, I'd be interested in some overview.


Great work :) I just spotted something in the compiled example and wonder if this:

    SET  J, [4+C] ; The Notch order
    ADD  J, 1 ; The Notch order
    SET  [4+C], J ; The Notch order
is not transformed to

    ADD  [4+C], 1
due to some assumptions in LLVM, or do you not handle literal values as arguments yet?


It's just because the focus right now is on generating valid assembly (there are tons of bugs so far) and I haven't paid any attention to optimization yet.

Thanks for reporting, tracked by issue https://github.com/krasin/llvm-dcpu16/issues/67


Dude, seriously, this is awesome. Good work. DCPU-16 is perfect for learning about all these cool aspects of computer science. It's so stripped down (lacking, for example, instruction pipelining) that it's easy to get into and see what's going on at a very basic level. I look forward to looking over the repo later today (after I've slept).


Why should it? A GitHub repo calling itself a standards committee is hardly, y'know, a standards committee.


Meh. It would not surprise me if Notch fails to consult this "standards committee" when he finally communicates how this CPU will interface with game components.


Maybe the current drafts are flawed (e.g. I don't see a point for a file system draft at the moment), and maybe the people aren't the best, but the point is that we need to communicate and collaborate. I know we all like to say "why should these guys decide?", but if we want to be able to use libraries from different assemblers we need some standards/drafts.

And it's not really a "committee". Anyone can join the discussion. If the "committee" fails to understand your point, you are free to follow your own standards. If lots of people are unsatisfied, someone will fork it.

Just remember to communicate before you write lots of code/libraries/tools that only work with your own standard.


> but the point is that we need to communicate and collaborate.

You are missing the point.

>> It would not surprise me if notch fails to consult this "standards committee" when he finally communicates how this CPU will interface with game components.

> I know we all like to say "why should these guys decide?"

Except that in this case these guys shouldn't delude themselves into thinking they're deciding. It's not their call: it's an in-game CPU, and the standards are going to be whatever the implementors find convenient. The odds of Notch finding this standard favorable are about the same as him favoring my standards.


> The odds of notch finding this standard favorable to my standards are about the same as him favoring my standards.

Huh? Notch doesn't have to give a shit about these standards. He's writing the emulator (and memory-mapped I/O); we still need conventions on top (calling external libraries, linking libraries, and so on).

> the standards are going to be what the implementors find convenient.

Yes, of course. All I'm saying is that instead of each assembler/compiler/linker using its own "most convenient convention" we can at least try to unify them. And someone needs to write down drafts.


There's also the "those who can, do, those who can't, form standards committees" angle. (Of course, this does not apply to real standards committees that are formalizing post-hoc standards. But folks who form a standards committee to tell everyone else how to do their freely offered work before a de-facto standard is even in place? Please.)


Standards based on real working implementations generally make better standards. I'm sure the github conventions will evolve to better reflect what works in the real world. If they don't everyone will just ignore them.


Excuse the ignorance, but what is the big deal about DCPU-16?


A lot of us initially entered the computer field to do something like program our spaceships' guns and shields, so this strikes an important chord in us. Also, the CPU is so simple that everything around it (the spec, various emulators, assemblers, and compilers) is quite accessible even to people who have little experience with this low-level stuff. In short, if the DCPU-16 and related technology don't fit the "fun hacking" label, then I don't know what does.


It's a toy CPU that's perfect for learning. It's about to be part of Notch's new game (he created Minecraft). So it's a great combination of fun + educational. Honestly, I'm surprised no one has made one of these CPUs in Minecraft yet. I can guarantee that it's possible.

In other words, this has a lot of people excited about a game that doesn't even exist yet. Oh, and given the popularity of Notch's games and the buzz for this one, I'm sure that a few people will manage to turn a tidy profit based on their fun. There are more than a few highly popular Minecraft websites, YouTube channels, etc. So there are a lot of things that appeal to hackers. It's a great toy system to play with, it's going to be an important part of an interesting game, etc.

Note that some people have already learned about electronics by building redstone circuits in Minecraft. It abstracts away everything but the logic, so it's a fun way to play with logic gates and whatnot to build something interesting. For example, I have a giant circuit hooked to a redstone clock to periodically flood the spawning pads of the gigantic mob grinder that covers my base camp and increase the number of items I get. It's enormous, but not particularly exceptional by Minecraft standards. It does help me produce a lot of TNT, though. I put a record player in the item collection room to pass the time.


Post a video please :-)


I'll do you one better and give you the whole world:

http://www.mediafire.com/?6ywsyy18qhening

Note that it's in the new format. I deleted the parts in the old format to make it smaller. The mob grinder is the huge thing in the sky. The items fall into a room in the upper level of the house.


Awesome! Can't wait to give it a go :)


Let me know what you think :) There's a lot of stuff in there and you may see that I'm a bit OCD about labeling things. I doubt most people have a sign marking the bedroom. But everything else in the house has a sign, so...

If you're wondering, that railway leads to the dungeon with the end portal. The dragon is dead and his egg warped into the end portal back before I could collect it. Sorry about that. I'm not quite clear on whether I could just delete the region files for the end and make it all respawn, but I might want to try that some time.

There's also a fairly complete map of the area centered on the house in a chest upstairs at the end of the hall. And there are a few random outposts in dungeons. If you ever get lost, there should be lots of markers pointing the way home.


FYI There is a bukkit plugin that allows you to duel the dragon as many times as you want.


Interesting. Might have to look that up sometime. Hope you've enjoyed my world, even though I know it doesn't hold a candle to some of the fancy creations out there.

But hey, I do have a collection of every single color of sheep, neatly sorted into pens :)


Yes, I don't get it either. As far as I can tell it's just that people love Notch (he's pretty cool) so anything he does is automatically cool as well.

It's not like there haven't been games with instruction sets before.

http://en.wikipedia.org/wiki/Core_War http://en.wikipedia.org/wiki/RobotWar


> It's not like there haven't been games with instruction sets before.

So what if it's not conceptually new? Why would people not be excited about a very good-looking entry to a fantastic genre?


It's a reasonable question. When you combine the concept of a very simple CPU + instruction set (which reminds many of us of the systems we used when first learning to hack) with a system based on a much-loved and much-imitated older game, you have a recipe for success among a certain class of geek.

I think it tickles some of the same interest points as Corewars[1], but a little less abstract and with a newly-galvanized and potentially larger community.

[1]: http://corewars.org


It is the basis of http://0x10c.com/, an upcoming game from Notch/Mojang of Minecraft fame.


I'm not sure that answers the specific question being asked here, which I read to be about HN, not the impetus.


For some people, it's "another CPU I can write a compiler for". For others, we love playing Minecraft and are excited for Notch's next game. Still other people are intrigued by the complex gameplay Notch has described and have confidence from the success of Minecraft.

And others will find it evidence of the further sub-redditing of HN.


Right now it's a toy, but there's a whole new group of programmers who have never touched any kind of low-level programming before. People are taking new-world ideas and applying them to what one might call old-world problems. Look at this real-time asm editor: http://dwilliamson.github.com/ There are going to be a lot of really cool toys that come out of this, and potentially those toys might inspire someone to make something that's more than a toy.


Next up: compile a JVM and run the game within the game???

Joking aside, what are some scripting languages with tiny interpreters that could feasibly compile eventually?


Forth, Lisp and BASIC immediately come to mind, and are period appropriate.


Turbo Pascal and C would also be quite cool, as they were heavily used in the 16-bit days.


"Binary distribution for Linux x64 is available. (v0.0.2, 170MB)"

170 MB to emulate and compile for a 128 KB virtual machine. Funny how things have bloated.


It’s a reasonable price for the convenient abstraction. What’s interesting is the size of the binaries and the compression ratio of the whole thing (0.2):

    zoul@naima:llvm-dcpu16 $ du -sh *
     286M	bin
      13M	include
     522M	lib
      76K	share
What’s in the binaries so that they are so huge and compress so well?


I'm tempted to point at the various graph fragments and other CPU descriptions which are partly auto-generated. It's a guess, though; I'd like to know the real answer too. Is it a high price to pay for a framework that lets people port compilers to a whole new architecture in a matter of days?

Also this binary may actually include all targets, not just dcpu16.


Guess: unstripped debug info? LLVM is C++, after all...


"Bloat" being "Every feature I'm not using right this instant."


This is so incredibly cool! But how did you implement I/O?


Certain memory areas correspond to the console. If you have a look at Notch's Hello World example, you can see him putting letters into those memory areas.

Whether this is how it will communicate with the rest of the ship, I don't know.
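For example, assuming the 0x8000-based video memory from Notch's early demos (one cell per character), a bare-bones program displaying "HI" is just:

    ; assumes the memory-mapped console at 0x8000 from Notch's early demos
    SET [0x8000], 0x48    ; 'H'
    SET [0x8001], 0x49    ; 'I'
    :halt SET PC, halt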


A few days ago he was thinking of a message queue... not sure how it would work though.

http://twitter.com/#!/notch/status/187448370107912192


Since he wants to simulate the CPUs all the time and there's no sleep/wait instruction that actually stops execution, tight wait loops shouldn't be an issue. Just wait for some memory-mapped [msg_no_ptr] to be >0.
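That is, something like this (msg_count being a hypothetical memory-mapped cell, since the actual interface hasn't been announced):

    :wait
        IFE [msg_count], 0   ; nothing queued yet?
        SET PC, wait         ; spin until a message arrives
        ; ... pop and handle the message ...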


This is so cool! Just finished my implementation and it runs the fib example flawlessly†!

https://github.com/tkahn6/dcpu16-haskell/

† Because mine is a pure state machine and Haskell is lazy, I had to introduce a HLT instruction instead of using `:crash SET PC, crash`. Nothing ever gets printed using the latter convention.


You can use a monad to thread the computation for strictness while retaining purity.



