I myself can't even manage to finish a simple 2d game I started months ago.
Kudos to the author.
I'm sure we all know SE major students who aim to get a job as a "project manager" right off the bat, not because they actually love/know anything about managing a project, but simply because it is a managerial position where you don't have to code.
Rasterization really doesn't require that much work, but you need to be able to do a world/view transform, clip triangles/quads, and do perspective-correct texturing. The most sophisticated bit of that is the clipping algorithm, and even that is fairly easy to implement.
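To give a sense of scale: clipping against a single plane is only a few lines. A rough sketch in C (the Vert struct and NEAR constant are illustrative assumptions, not from the project):

    /* Sutherland-Hodgman-style clip of a polygon against the near plane
       z = NEAR. Purely illustrative; the vertex layout is an assumption. */
    #define NEAR 0.1f

    typedef struct { float x, y, z, u, v; } Vert;

    static Vert lerp_vert(Vert a, Vert b, float t) {
        Vert r = {
            a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
            a.z + t * (b.z - a.z), a.u + t * (b.u - a.u),
            a.v + t * (b.v - a.v)
        };
        return r;
    }

    /* Writes the clipped polygon to 'out'; returns the new vertex count. */
    int clip_near(const Vert *in, int n, Vert *out) {
        int m = 0;
        for (int i = 0; i < n; i++) {
            Vert a = in[i], b = in[(i + 1) % n];
            int a_in = a.z >= NEAR, b_in = b.z >= NEAR;
            if (a_in) out[m++] = a;
            if (a_in != b_in) {  /* the edge crosses the plane */
                float t = (NEAR - a.z) / (b.z - a.z);
                out[m++] = lerp_vert(a, b, t);
            }
        }
        return m;
    }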
Ray tracing requires you to generate a ray per pixel (essentially the inverse of the world/view transform), determine ray/box intersections, and derive a texture coordinate from the intersection point on the box.
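The box test is the compact part; the standard "slab" method is roughly this (a sketch with my own names, assuming axis-aligned boxes):

    #include <math.h>

    /* Slab-method ray/AABB intersection: returns 1 and writes the entry
       distance to *t on a hit. Relies on IEEE infinities when a ray
       component is 0; rays lying exactly on a slab boundary need care. */
    int ray_box(const float ro[3], const float rd[3],
                const float bmin[3], const float bmax[3], float *t) {
        float tmin = 0.0f, tmax = INFINITY;
        for (int i = 0; i < 3; i++) {
            float inv = 1.0f / rd[i];
            float t0 = (bmin[i] - ro[i]) * inv;
            float t1 = (bmax[i] - ro[i]) * inv;
            if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
            if (t0 > tmin) tmin = t0;
            if (t1 < tmax) tmax = t1;
            if (tmin > tmax) return 0;  /* slab intervals don't overlap */
        }
        *t = tmin;
        return 1;
    }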
As someone who has done both, I would say the two procedures are at pretty much the same level of complexity (if you stay away from bilinear texture interpolation), but I admit that ray tracing feels easier to implement, as you avoid the clipping and perspective-correct texturing.
However, the Minecraft world is a uniform grid of "boxes", so it contains a lot of quads, leading to potentially huge amounts of overdraw, which quickly becomes infeasible for a software rasterizer. So if you wish to rasterize in software, you'll need to do a bit of additional work to avoid drawing lots of hidden box sides (skipping shared sides), and you'll never get overdraw down to 0 unless you use additional screen-space data structures.
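The first part is cheap, though: on a voxel grid you can simply skip any face whose neighbor is solid. A sketch (the world array and its dimensions are assumptions for illustration):

    /* 0 means air; anything else is a solid block. */
    enum { SX = 64, SY = 64, SZ = 64 };
    extern unsigned char world[SX * SY * SZ];

    static int solid(int x, int y, int z) {
        if (x < 0 || y < 0 || z < 0 || x >= SX || y >= SY || z >= SZ)
            return 0;  /* treat everything outside the grid as air */
        return world[(z * SY + y) * SX + x] != 0;
    }

    /* The +X face of cell (x,y,z) is worth drawing only if the cell is
       solid and its +X neighbor is air; shared sides are never emitted. */
    static int face_px_visible(int x, int y, int z) {
        return solid(x, y, z) && !solid(x + 1, y, z);
    }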
On the other hand, the author had to implement a raycasting algorithm on the uniform grid for the raytracer to be efficient. This is actually also a little bit painful.
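The usual approach is an Amanatides & Woo style DDA that steps from cell to cell along the ray. Roughly (my names, not the author's code; assumes unit-sized cells, a ray origin inside the grid, and a solid() predicate like the one above):

    #include <math.h>

    int solid(int x, int y, int z);  /* voxel lookup, as sketched above */

    /* Steps the ray cell by cell; writes the hit cell and returns 1,
       or returns 0 after a fixed iteration cap. */
    int raycast_grid(const float ro[3], const float rd[3], int hit[3]) {
        int cell[3], step[3];
        float t_max[3], t_delta[3];
        for (int i = 0; i < 3; i++) {
            cell[i]    = (int)floorf(ro[i]);
            step[i]    = rd[i] >= 0.0f ? 1 : -1;
            t_delta[i] = fabsf(1.0f / rd[i]);
            float next = step[i] > 0 ? cell[i] + 1.0f : (float)cell[i];
            t_max[i]   = (next - ro[i]) / rd[i];
        }
        for (int n = 0; n < 1024; n++) {
            if (solid(cell[0], cell[1], cell[2])) {
                hit[0] = cell[0]; hit[1] = cell[1]; hit[2] = cell[2];
                return 1;
            }
            /* advance along the axis whose next cell boundary is closest */
            int a = t_max[0] < t_max[1]
                        ? (t_max[0] < t_max[2] ? 0 : 2)
                        : (t_max[1] < t_max[2] ? 1 : 2);
            cell[a]  += step[a];
            t_max[a] += t_delta[a];
        }
        return 0;
    }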
So for that reason, the ray-tracer is definitely the right decision here.
On a related note, I tried to implement raycasting on a uniform 3D grid on a 486/66 MHz in 1996... Got around 2-5 FPS at 320x200. So it was completely infeasible back then.
Edit: The link error is probably just a stupid mistake on my part; I was trying to build on a 64-bit machine. It doesn't seem to have any problems on 32-bit.
It's indeed an impressive project, doubly so as it was initially only a university project. Instead of just making a game in assembly, he made the project his own by extending it -- bootable, raytraced, raytraced shadows, textures and so on! =]
Not to discount what he did, but he didn't write the game in Assembly from scratch. He wrote it in C, then compiled it to Assembly and used that output as a starting point for the hand-written Assembly code.
From the article:
>Starting in assembly right away would be a bit too insane, so I first wrote a reference implementation in C
As someone who started in Assembly right away, creating several games on the much simpler 6510 CPU, I can vouch for the fact that starting in Assembly would be a bit too insane, especially on modern CPUs.
> Then I began slowly porting everything to handwritten assembly.
How did he make the reference code? I'm reading that as: he wrote an implementation in C, then compiled it and cleaned up the compiler's Assembly output.
Maybe I'm wrong. How do you take his statement above?
That's impressive too!
And I usually prototype the C in Python first :D
The most common reason is using architecture-dependent instructions that the compiler doesn't generate well, or doesn't generate at all. Examples are SIMD (auto-vectorization is nice, but far from perfect) and DSPs that have specific multiply-and-accumulate instructions or flags that change the behavior of the accumulator register.
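Saturating arithmetic is a good small example: SSE2 has a single instruction for it, while the scalar C below has historically compiled to branchy, much longer code. A sketch (assumes x86 with SSE2 and n being a multiple of 16):

    #include <emmintrin.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Scalar reference: per-byte saturating add. */
    void sat_add_scalar(uint8_t *dst, const uint8_t *a,
                        const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i++) {
            unsigned s = a[i] + b[i];
            dst[i] = s > 255 ? 255 : (uint8_t)s;
        }
    }

    /* Hand-vectorized: PADDUSB does 16 saturating adds per instruction. */
    void sat_add_sse2(uint8_t *dst, const uint8_t *a,
                      const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i += 16) {
            __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
            __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
            _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epu8(va, vb));
        }
    }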
In a project I'm currently working on, inlining was still inferior to fully native ASM: LLVM generated unnecessary stack loads in the prologue, and those completely unneeded memory accesses carried something like a 4% speed penalty.
On a related note, I have a dream of one day finding a real example where undefined behavior can be used constructively, as a license for the compiler to optimize: the following post alludes to this idea, but the assembly dumps at the bottom show that the compiler is not taking advantage of the information encoded in the undefined behavior:
More seriously, an annotation language for expressing properties that are supposed to hold at various points of the program can be useful for transmitting information from the programmer to the compiler, enabling optimizations that would otherwise require difficult whole-program analysis. And these annotations can be used to analyze the program, too!
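A tiny sketch of the flavor I mean (mine, not from the post): asserting a value-range property via __builtin_unreachable, which GCC and Clang treat as an optimization hint:

    /* Promise the compiler that x is a non-negative multiple of 16.
       If the promise is false this is undefined behavior; if it holds,
       the compiler is free to turn x / 16 into a bare right shift. */
    int div16(int x) {
        if (x < 0 || (x & 15) != 0)
            __builtin_unreachable();
        return x / 16;
    }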
Though, it does seem to always store the old stack pointer in r7, even though it doesn't restore from r7, and even though my inline assembly block specifies r7 on the clobber list. That might be a bug, but it's a single 'add', so who cares.
So writing in C is fine for smaller projects, but those are easy enough in assembly anyway, especially on old architectures designed for human-written asm.
Back when I started programming (1986), Basic, C, Pascal and Forth were seen as prototyping languages, used before coding the real application in Z80 or 6502 Assembly.
You can tell where the devs with PhDs in compiler design are putting all of their effort :)
I think that also affects where the optimization effort goes, to a great extent: it's more likely to be invested in the kind of code people actually put in critical inner loops, running in places that might saturate large numbers of cores...