Take Over The Galaxy with GitHub (DCPU16 support) (github.com)
255 points by rlm on Apr 10, 2012 | 86 comments

I'm having a hard time articulating why I think this is so fantastic. It's great to see people attack a not-so-serious problem with such gusto. I love the passion behind taking things apart just to see how things work.

Thanks, github. You made my day.

The value of just playing around is underestimated. A lot of discoveries have been made by people just playing around.

As someone without a lot of experience in low-level languages, I can't wait to play around with this. I think it will be a fun way to learn, and maybe even to try building a hardware implementation on an FPGA.

Hackaday.com will be running a competition for hardware implementations: http://hackaday.com/2012/04/08/getting-12-year-olds-to-learn...

The comments link to information about the Apollo Guidance Computer.



I haven't been thinking about an FPGA myself. Mostly because I don't know verilog/vhdl well enough.

I've been thinking about how hard it would be to take two 32k SRAMs and an AVR and do some fancy bank switching to handle it. I know I could probably manage it by using the on-chip support directly, but then I'd end up with some memory inaccessible due to addressing holes.

It seems like it should be possible to do, and then it would be easy to allow real IO and everything later when that's standardized.
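The bank-switching idea above can be sketched as a simple address decode. This is a toy model only, assuming two 32K-word parts covering the DCPU-16's 64K-word space with the high address bit as chip select; the real wiring of an AVR to the SRAMs would of course be more involved.

```python
# Toy model of the bank-select decode for a 64K-word DCPU-16 memory
# built from two hypothetical 32K-word SRAM parts.

def decode(addr):
    """Split a 16-bit word address into (chip_select, chip_offset)."""
    chip = (addr >> 15) & 1    # high address bit picks which SRAM is enabled
    offset = addr & 0x7FFF     # low 15 bits address a word within that part
    return chip, offset
```

With this decode, `0x0000-0x7FFF` lands in the first part and `0x8000-0xFFFF` in the second, with no addressing holes.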

On my list of things to get around to. I don't expect it to be too hard, as it's a fairly simple processor as specced. It'll be interesting to see what architectural techniques could be added to improve things while still remaining within spec.

Ok, after seeing the assemblers/VMs last week I wasn't expecting to see much new this week... then I saw this:


C compiler support for dcpu16!

In a way this almost saddens me, as by the time the game comes out it looks like the community will have JavaScript ported to the CPU and no one will actually have to program in assembler as per the original idea... ;)

Don't start worrying about the "purity" and "this was never intended". You don't specify a processor instruction set in the real world without expecting people to write higher level languages for it, so why expect people to write at such a low level in a GAME of all places? I am fairly certain everything is going exactly as planned. Notch has got this.

I second this. If anything, Notch must be feeling immensely satisfied if not somewhat overwhelmed by all the effort that the community has put into his project.

It must definitely be overwhelming. So much hype is building, and he doesn't even have a functional prototype yet. Greater men have buckled under that kind of pressure.

I love the game idea, and think Notch is a great game designer+developer, and sincerely hope that it will be what we think it will be. But at the same time, I'm not as optimistic as others might be.

Notch apparently loves amazingly slow VMs, so this is the next logical step after Java :D

I second this. Think about all the hype and excitement he is building simply by releasing a processor instruction set. Pure marketing genius =D.

Along similar lines, this morning I was thinking how wonderful it would be if whatever interfaces exist between the ship's computer and the rest of the 0x10c game world are rich enough to make security vulnerabilities in players' DCPU-16 code a real concern. Imagine disabling an opponent's ship by exploiting a buffer overflow in a custom communications protocol implementation, for instance...
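The overflow scenario can be modeled in a few lines. This is purely a toy illustration, not the actual 0x10c environment: a DCPU-16-style flat word memory where a hypothetical message-copy routine lacks a bounds check, so an oversized "packet" clobbers the word imagined to sit right after the buffer (here, a stored return address).

```python
# Toy buffer overflow in a flat 16-bit word memory. All names and the
# memory layout are invented for illustration.

MEM_SIZE = 32
BUF_START, BUF_LEN = 8, 4
RET_SLOT = BUF_START + BUF_LEN      # assumed layout: return address after buffer

def vulnerable_copy(mem, msg):
    for i, word in enumerate(msg):  # no check against BUF_LEN -- the classic bug
        mem[BUF_START + i] = word

mem = [0] * MEM_SIZE
mem[RET_SLOT] = 0x1234              # pretend this is a return address
vulnerable_copy(mem, [0xAAAA] * 6)  # 6 words into a 4-word buffer
# mem[RET_SLOT] is now attacker-controlled
```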

This. Also, if it gets one kid interested in asm/low-level stuff, it's worth it IMO.

We keep abstracting, but we can gain so much by going back to the hardware. Also, exploiting in general is fascinating to me, and I hope this pushes it more mainstream.

That very well may be part of the game: https://twitter.com/#!/notch/statuses/187474819980328962

It could run JavaScript, but this is a 16-bit processor we're talking about here, with minimal RAM. It's more likely we'll be using C and BASIC ;)

Yeah this seems like the exact scenario where hand-rolled assembly and perhaps some hand-optimized C will really shine. You don't see a lot of embedded processors running javascript, for example. If your ship can process data and respond 5% faster than an opponent's, all other things being equal, you will come out ahead.

If the current level of interest persists, by the time the game launches, I imagine that the vast majority of people will be downloading and running programs written by others. These will have been pored over and optimized to an extent that most of us would be unable to achieve by ourselves, and it will not pay to roll your own trivial implementation. I'd be curious to see what Notch can do to still encourage people to learn how the CPU works themselves. If the environment and game dynamics are rich enough, perhaps this will not really be a problem?

I think we're going to see some clever optimizing DSLs (a la FFTW) -- restricted languages for the kinds of embedded programs you write on such an architecture will be easier to optimize than general purpose languages. Particularly if they have a clear cost model.

The Haskell embedding is very likely to head in that direction.

(See e.g. in this style : http://www.fftw.org/faq/section4.html#whyfast or this style : http://www.cse.unsw.edu.au/~chak/papers/polymer.pdf -- code generation + DSL + constraint solver for instruction level timings).

At least, that's what I'd do.
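A minimal sketch of what cost-model-driven selection could look like, in the FFTW spirit: enumerate candidate DCPU-16 sequences for an operation and pick the cheapest by the spec's cycle costs (DIV is 3 cycles, SHR is 2 in the 1.1 spec). The helper names are invented; a real DSL would cover far more patterns.

```python
# Cycle costs per the DCPU-16 1.1 spec.
COST = {'SET': 1, 'ADD': 2, 'SUB': 2, 'MUL': 2, 'SHL': 2, 'SHR': 2, 'DIV': 3}

def cycles(seq):
    return sum(COST[op] for op, _ in seq)

def div_by_const(c):
    """Candidate instruction sequences for 'divide register by constant c'."""
    candidates = [[('DIV', c)]]                   # generic divide: 3 cycles
    if c > 0 and c & (c - 1) == 0:                # power of two: shift instead
        candidates.append([('SHR', c.bit_length() - 1)])
    return min(candidates, key=cycles)
```

`div_by_const(8)` picks `SHR x, 3` and saves a cycle over `DIV`; `div_by_const(7)` falls back to the generic divide.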

Low-level lisps à la GOOL/GOAL[0] will probably gain some traction as well, or so I hope.

[0] http://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp

Did they ever get released to the public?

No, but the design docs are still out there if you contact the former engineers. I would distribute them if I had permission.

I agree that 0x10c DSLs that control game play will be more successful than a C compiler. :)

The polymer article was very interesting, really cool stuff. However, I could not find any mention of using a constraint solver for instruction level timings. Is that in another article, or did I just miss it?

That's not in the polymer paper, but it's what FFTW is doing (and the ICC compiler that we relied on for the Monte Carlo work).

Thanks. I misread the content description as referring to the latter paper only.

> I'd be curious to see what Notch can do to still encourage people to learn how the CPU works themselves.

Have some ship customisation stuff tied to assembly programming. People love changing the colour of carpets or wearing hats or collecting and displaying fossils.

Then allow the community to create simple how to guides - "This program will do $THIS_THING; here's how it works; now try to change it to do something slightly different."

I guess Codecademy should have a DCPU section too.


Agreed that higher-level langs would be helpful for "newbies". I wonder if this was at all inspired by Schemaverse (google it, but TL;DR it's a space game written mostly with Postgres triggers where /you build the game client/ in whatever you want to use to interface with the pgsql :)

Yep, only thing I dislike is that the processor does not simulate the importance of cache. If the in-game processors allowed the selection of different cache-line sizes (both instruction cache and data cache) for different costs (paid via the in-game currency), it would add another level of depth to the game market. It would also mean getting into the nitty-gritty of optimization would be even more worth it.

Why currency and not cycles? The VM is designed to give each CPU ~100k cycles per second (billed per instruction).
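Per-instruction cycle billing against a per-second budget, as described above, could be modeled like this. The cost table follows the DCPU-16 1.1 spec; the scheduler itself is an invented sketch, not Notch's implementation.

```python
# Cycle costs per the DCPU-16 1.1 spec.
COST = {'SET': 1, 'ADD': 2, 'SUB': 2, 'MUL': 2, 'SHL': 2, 'SHR': 2, 'DIV': 3}

def run_slice(ops, budget=100_000):
    """Bill each instruction its cycle cost; stop when this slice's budget
    (~100k cycles per in-game second) is exhausted."""
    executed, remaining = 0, budget
    for op in ops:
        cost = COST[op]
        if cost > remaining:
            break                 # CPU stalls until the next slice
        remaining -= cost
        executed += 1
    return executed, remaining
```

Under a scheme like this, cheaper instruction mixes buy you more work per second, which is exactly why hand optimization would pay off.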

For the old grey-hairs around and reading who remember RSX-11M/M+, there was a tool known as TKB.

The task builder.

TKB allowed (much) larger applications to run in the address space of a 16-bit DEC PDP-11, using what were called overlays, and overlay trees.

With overlays, the call tree within an application was analyzed and arranged to allow various sections of code within a tree of subroutine calls to be paged out to backing storage.

If the underlying "processor" is fast enough and if you have enough swap space available, then you can stuff a whole lot of code into a 16-bit address space. Just not all loaded in physical memory at once.
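The mechanism above can be sketched as a toy overlay manager: only one code segment from the overlay tree is resident in the shared region at a time, and calling into a non-resident segment swaps it in from backing storage. Purely illustrative -- the real task builder resolved the overlay tree at link time, and the class and names here are invented.

```python
class OverlayManager:
    """Toy model of TKB-style overlays: one segment resident at a time."""

    def __init__(self, segments, region_words):
        self.segments = segments        # name -> segment size in words
        self.region_words = region_words
        self.resident = None
        self.loads = 0                  # how often we hit backing storage

    def call(self, name):
        if self.resident != name:
            assert self.segments[name] <= self.region_words, "segment too big"
            self.resident = name        # swap in, evicting the old segment
            self.loads += 1
        return self.resident
```

Repeated calls within one segment are free; bouncing between segments thrashes the backing store, which is where the debugging pain came from.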

And debugging it... Shudder...

4096 18-bit words are known to host a Lisp interpreter (http://www.softwarepreservation.org/projects/LISP/maclisp_fa... ), so 64K 16-bit words should be more than enough :)

How about FORTH?

Shhhh.... don't give away our secret weapon!

Turbo Pascal!

New challenge - clone the Turbo Pascal IDE and get it to run on the processor.

For reference, Things That Turbo Pascal is Smaller Than. I am still amazed that the compiler and IDE are only 39 KB.


Turbo Pascal 3.0 was my first contact with Pascal, and while it was great for the time, I doubt I can call it an IDE.

It was more like vi plus an integrated compiler than an IDE.

agreed, but at this rate I figured I'd suggest the moon

He who has the best systems will win, no matter the tools or platform. I love it.

I think it is great for teaching kids to program as well, or be excited about it. Notch is going to do what thousands of teachers can't do because he is using the power of gameplay to drive it.

It's a game. If you can make your code run faster, you'll probably get more magic pixels.

I for one plan on writing the first in-game magic-pixel-collecting DCPU16 botnet.

I want to see a networking protocol between ships so that fleets form in order to take advantage of multiple "cores."

So for example, you'd have the difference between a single-celled organism (standalone ship) versus a multi-celled organism (a fleet), with a fleet of ships delegating work to specific ships. So 10 ships run the "scout" programming in a perimeter, 5 act as resource gatherers, and a few others as transports within the protected space. Perhaps some act as brain cells which tell ships when to change roles.

All of this is happening even when no members of the fleet are actually playing.

This just boggles the mind with possibilities and I can't wait to start playing this game.

Furthermore, you have people trying to break into space protected by fleets by attacking networking protocols--in a game!

I had a dream the other night wherein there was a start-up that took custom code requests for 0x10c players on commission. Requests ranged from optimizing the ship's defense to autopilot and hyperspace jump controls.


For some definitions of viable. Perhaps taking the requests, and converting those into mechanical turk tasks would work. That'd allow the price to be low enough to be doable.

There's a market for minecraft servers, so anything's possible.

Due diligence is called for. Has this business model been successful in other games? Do you have some advantage over the throngs who will gladly do it for free / recognition?

While I can't think of a specific example, there are related precedents: consider the advent of the Mann Co. Store in TF2 where players purchase game items with real cash. There is definitely a model to be made off players with disposable income who want to be the top dog.

Your second question is a reservation that I have as well. We have already witnessed a huge influx of people coding up DCPU-16 software for free, but such programs are only related to the software engineering side of the spec rather than the actual gameplay. Obviously we know less about the latter since few details have been released, but in the competitive game I imagine it could be different. For example, to build a really awesome weapons system and then share it with other people seems a bit counter-intuitive. So there may be room yet for a business built on custom, clandestine code for a player's ship.

Gamers are already a demographic with money to spend. This could be seen as a worthwhile investment to some. If you are interested, email me and we can talk there.

I phrased it as a question because I don't know the answer either. I've heard that second life had a thriving economy of user-generated content and that some players "make a living" from the game; I've never seen hard numbers though.

I doubt the returns reach the proportions necessary to support a startup, especially with the low barriers for entry, competition from free alternatives, and piracy. Maybe it could support a single developer, though; more of a "lifestyle business."

Absolutely. Maybe you can get acquired before the game is actually released! ;)

Time to fill out a YC application!

small business, not startup

Considering that the gold farming business runs revenue in the billions, the moniker of startup could definitely apply to a service catering to MMO gamers.

HN is funny. Last week as each new dcpu emulator implementation popped up, they got fewer and fewer votes and more comments like "oh great, yet another dcpu post. let's call this Dcpu News for crying out loud!" Then github adds syntax highlighting and gets 150+ points. I'm very curious why that is...

The emulators got fewer votes because they were more of the same exact thing ("someone implemented DCPU-16"). This is news because a big name has taken notice of DCPU-16, going so far as to officially support it. It's a different flavor.

Ah, so dcpu is cool now because somebody cool says it's cool. Just seems like bandwagon/fanboyism to me. Call me a dcpu hipster :)

btw, awesome work w/ mappum on the js emulator stuff. you guys update w/ impressive speed.

It's not fanboyism, more like "stop showing me emulators, it's not interesting anymore". The subject of the DCPU itself is still pretty interesting.

Also, thanks!

My sentiments exactly. I don't believe that GitHub adding syntax highlighting for a language is "hacker news"; they're supposed to have syntax highlighting for most languages. Maybe on HN people are discussing the DCPU, but if you look at the comments on GitHub, they're all "awesome" and "+1". Really? For adding syntax highlighting for one more language?

Looking at all the amazing work done regarding the DCPU and the article yesterday on Instagram's technology stack made me realize just how far we've advanced. With the tools that we have available now, it is possible to do things in a few days that it took people years if not decades to achieve.

"it is now possible to do things in a few days that took people ... decades to achieve".

Let's not get too excited here. Name one thing that can be done in days that used to take decades.

The first Fortran compiler's creation took 18 person-years. Now there are C compilers which were written from scratch in person-weeks, like TCC.

But the first C compilers were written four decades ago by a couple of people in a matter of months. And they were not just writing the compilers but also designing the language at the same time.

"When Steve Johnson visited the University of Waterloo on sabbatical in 1972, he brought B with him. It became popular on the Honeywell machines there, and later spawned Eh and Zed (the Canadian answers to `what follows B?'). When Johnson returned to Bell Labs in 1973, he was disconcerted to find that the language whose seeds he brought to Canada had evolved back home; even his own yacc program had been rewritten in C, by Alan Snyder."

-- Dennis Ritchie, who wrote the first C compiler. http://plan9.bell-labs.com/who/dmr/chist.html

Well, but those 18 person-years did their part to enable those person-weeks. That is: I agree, technology has come far. But there's a second phenomenon involved: human knowledge. Technology steps happen in generations, often restarting from scratch. Knowledge steps, on the other hand, mostly come incrementally (well, with some losses here and there). We're standing on the shoulders of giants.

Calculate pi to 700 decimal places.
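For what it's worth, that example really is a few lines of Python today. This sketch uses Machin's formula (pi/4 = 4*arctan(1/5) - arctan(1/239)) with plain big-integer arithmetic and a handful of guard digits; it's one well-known approach among many.

```python
def arctan_inv(x, one):
    """arctan(1/x), scaled by 'one', via the alternating Taylor series."""
    power = one // x
    total = power
    x2 = x * x
    n, sign = 3, -1
    while power:
        power //= x2
        total += sign * (power // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(digits):
    one = 10 ** (digits + 10)                  # 10 guard digits
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[:digits + 1]                # "3" followed by the decimals
```

`pi_digits(700)` runs in well under a second on a modern machine.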

The implementation that calculates pi took the same amount of time to build 30 years ago as it does today.

The discussion is about how much effort it is for people to implement things, not computers to run them.

Copying the entire contents of a large library of books.

The Singularity is near.

Weird that they didn't include the language link[1] that shows all the repo statistics for the language, like most watched, most forked, newly created, etc.

[1]: https://github.com/languages/DCPU-16%20ASM

Notch tweeted the other day that he was thinking about DCPUs coming with some fan-made open source OS... as soon as someone writes one. This should be interesting.

I wonder if Notch is regretting releasing these details so soon. Now he's already going to be a slave to backwards compatibility and the game is about 0.1% complete.

Seeing that everyone is doing this just for fun, I doubt it. He could change everything tomorrow and I bet people would be excited to do it all over again.

I don't think everyone's doing it for fun, I'm sure there are savvy dudes out there that know that there's a lot of money to be had by being a first mover in the Notch ecosystem.

But even those that are doing it for fun -- you might be underestimating the amount of nerd rage that people are capable of when stuff they don't want to have happen happens. Just sayin'.

now let's just wait for the first O'reilly book on DCPU16 programming

no starch press most likely :D

While it's great that GitHub added support for this, how about x86-64?


The 64-bit register names are still not handled correctly. It does properly color the 8, 16, and 32-bit register names.

GitHub uses the open-source Pygments (http://pygments.org/) to highlight source code. If you can find the code for whatever ASM highlighter Pygments uses, you could probably fix it yourself. Though I tried searching for it and didn't find it after a while, so it would take some tracking down.

Now all we need is a DCPU16 to x86 translator and we have a whole new stack of dev tools ;-)

I'm a beginner programmer with a little bit of programming experience, and this idea of programming a game through assembly interests me, but I'm not familiar with assembly. How should I go about learning the DCPU16?
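One good way in is to write a tiny interpreter for the spec yourself. This sketch handles just SET and ADD with register and short-literal operands, following the DCPU-16 1.1 encoding as published (word layout `bbbbbbaaaaaaoooo`; values 0x00-0x07 are registers A..J, 0x20-0x3f are inline literals 0..31). It ignores the overflow register and everything else in the spec.

```python
REGS = 'ABCXYZIJ'  # register order per the spec

def step(regs, mem, pc):
    word = mem[pc]
    pc += 1
    o = word & 0xF            # opcode: low 4 bits
    a = (word >> 4) & 0x3F    # destination operand
    b = (word >> 10) & 0x3F   # source operand

    def value(v):
        if v < 0x08:
            return regs[v]    # register
        if 0x20 <= v <= 0x3F:
            return v - 0x20   # inline literal 0..31
        raise NotImplementedError("only registers and short literals here")

    if o == 0x1:              # SET a, b
        regs[a] = value(b)
    elif o == 0x2:            # ADD a, b (overflow register ignored)
        regs[a] = (regs[a] + value(b)) & 0xFFFF
    return pc

# SET A, 5  then  ADD A, 2, hand-assembled to 0x9401 and 0x8802
regs, mem, pc = [0] * 8, [0x9401, 0x8802], 0
while pc < len(mem):
    pc = step(regs, mem, pc)
# register A (regs[0]) now holds 7
```

Hand-assembling a few instructions like this, then checking them against one of the community emulators, is a quick way to internalize the encoding.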

So, who's going to be first with a hardware implementation of the DCPU16? :)

If you count Verilog, there are already at least a couple[1][2].

[1]: https://github.com/sybreon/dcpu16 [2]: https://github.com/filepang/dcpu16-verilog

I'm working on a multicycle implementation that I will be able to push to an FPGA. There's no way I'll be able to get the same cycle timing as the specs indicate, however. 3 cycles for a divide is very, very generous for such a simple CPU; I'll probably either end up implementing a shift-and-subtract algorithm (which will take more than 3 cycles), or using huge lookup tables (probably too big to fit in a single blockRAM as well...) to try to achieve it. On the other hand, SHR and SHL are trivial to do in hardware via a barrel shifter, but he assigned 2 cycles to them.

The [next word + register] instruction is also a bit annoying to deal with in the given time tables and a simple register file design, though I haven't thought about the design of that too much.
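The shift-and-subtract (restoring division) algorithm mentioned above, modeled here in Python for a 16-bit datapath: one quotient bit resolves per iteration, which is why a straightforward hardware version needs 16 steps rather than the spec's 3 cycles. The divide-by-zero behavior is an assumption mimicking the spec's "result becomes 0" rule.

```python
def shift_subtract_div(dividend, divisor, bits=16):
    """Restoring division: one quotient bit per iteration, MSB first."""
    if divisor == 0:
        return 0, 0                      # assumed: spec's divide-by-zero -> 0
    quotient, remainder = 0, 0
    for i in range(bits - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # shift in next bit
        if remainder >= divisor:         # trial subtract succeeds
            remainder -= divisor
            quotient |= 1 << i
    return quotient, remainder
```

In hardware this is one comparator, one subtractor, and two shift registers, clocked 16 times, which is why hitting 3 cycles without a big multiplier or lookup table is so hard.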

Would it be a good idea to run the instructions as fast as you can, adding some cycle accounting and using it to generate some external interrupt to help implement quotas? Though when it gets IO support you might have to run things entirely in lockstep again.

Notch said it'll run around 100khz in-game, so it should be a piece of cake for an FPGA to do it in even one in-game clock cycle.

hackaday.com currently has a contest running to answer that very question.

What does the D in DCPU stand for? Notch's DCPU16 spec does not say.

