But even that is still under heavy development. There's apparently a lot of magic in how the N64 hardware works.
In modern architectures, there are programmable ALUs inside the fixed-function hardware, and the fixed functions themselves are very limited: usually just triangle setup and the color/depth buffers. The GPU input is just a list of indices, which the programmable processors inside transform.
The GameCube scene was excellent, however.
But I'd bet most titles had their own formats to support vertex deformation and animation. (Anybody know any details?)
It looks like it'd be pretty fun to program the N64 GPU, with its "high-quality, Silicon Graphics style pixels" :)
F3DEX2 ("Fast 3D, Extended, Version 2") is one of the well-documented ones and one of the ones used by most games. You can find a breakdown of the command stream here: https://wiki.cloudmodding.com/oot/F3DZEX
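To give a feel for that command stream: each F3DEX2 command is a 64-bit word pair whose top byte is the opcode. Here's a minimal sketch of a display-list walker (Python for brevity; the opcode constants come from community documentation and should be double-checked against the wiki above):

```python
import struct

# Common F3DEX2 opcodes per community docs; treat these values as
# assumptions to verify, not gospel.
G_VTX, G_TRI1, G_DL, G_ENDDL = 0x01, 0x05, 0xDE, 0xDF

def walk_display_list(data, offset):
    """Yield (opcode, word0, word1) for each 64-bit command until G_ENDDL."""
    while True:
        # Commands are big-endian pairs of 32-bit words.
        w0, w1 = struct.unpack_from(">II", data, offset)
        offset += 8
        if (w0 >> 24) == G_ENDDL:
            return
        yield w0 >> 24, w0, w1
```

A real walker would also follow G_DL branch/call commands into sub-lists; this just flattens one list.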
This viewer is actually an .obj model viewer, and has nothing to do with that. For something that's actually an F3DEX viewer online, I wrote https://magcius.github.io/model-viewer/#zelview/data/zelview...
Couldn't you technically just concatenate everything into 1 line and call it "X software in 1 line of code"?
Isn't it a better benchmark to have good code structure, even if the project is composed of more lines of code, or more files for that matter?
> Isn't it a better benchmark to have good code structure, even if the project is composed of more lines of code, or more files for that matter?
It isn't about benchmarks; it's about understanding a concept well enough to cut the junk out and provide clear, concise code that does what's expected.
You could also measure the compressed size of the code, since compression eliminates redundancies like long identifiers being repeated. But the compressor will also sort of “cyclomatically optimize” your code as well, effectively re-rolling any hand-written unrolled loops, and shortening repetitions of a(b(x)) into the same result as a_b(x). You might say compression shows the optimal lexical size of the code. :)
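A quick way to see the identifier effect (a sketch; the snippets are made up for illustration):

```python
import zlib

# The same loop written twice: once with long identifiers, once terse,
# each repeated 8 times to mimic identifiers recurring across a file.
verbose = b"for index in range(count): accumulator += values[index] * weights[index]\n" * 8
terse = b"for i in range(n): a += v[i] * w[i]\n" * 8

# Raw sizes differ by roughly 2x, but DEFLATE's back-references absorb
# the repeated long identifiers, so the compressed sizes sit much closer.
print(len(verbose), len(zlib.compress(verbose)))
print(len(terse), len(zlib.compress(terse)))
```

The gap between the two compressed sizes is far smaller than the gap between the raw sizes, which is the sense in which compression approximates the "lexical size" of the code.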
That said, it's the novelty and working within set restrictions that make it impressive.
Not the line count of the code by itself, but the fact that it works within the restrictions of an N64 software renderer (a novelty) at that line count (also a novelty).
It's not important at all; it's novel. We've had hacky, even gimped-for-the-sake-of-quick-hacks N64 emulation since the late '90s and early '00s (compare Project64 to, say, Dolphin), but we haven't had a 512-line object software renderer. A novelty for those who like novel programming exercises!
In my programs, I never took lines of code as a restriction, since for a compiled program the binary size tends to depend mostly on the compiler. I focus more on other restrictions, such as memory usage.
“Elegance is not optional.

There is no tension between writing a beautiful program and writing an efficient program. If your code is ugly, the chances are that you either don’t understand your problem or you don’t understand your programming language, and in neither case does your code stand much chance of being efficient. In order to ensure that your program is efficient, you need to know what it is doing, and if your code is ugly, you will find it hard to analyse.” -- Richard O’Keefe, The Craft of Prolog (MIT Press)
As a graphics programmer, I find the example code extremely readable. If you recognize the math, you'll recognize the rest.
The interesting part is tdraw(), which draws one triangle:
1. Find the bounding box of the triangle in view space.
2. Iterate over the pixels in that box.
3. Map each pixel into the triangle's barycentric coordinates (its weights relative to the vertices).
4. Check the z-buffer to see if the pixel is visible.
5. Take the dot product of the light direction and the triangle normal to get the shading.
6. Use the barycentric coordinates to interpolate where in the texture map to sample.
7. Blend the sampled texture color with the lighting.
8. Update the z-buffer and the pixel color.
A model is made of lots of triangles. Each triangle has vertices with additional attributes that map into the texture map.
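Those steps can be sketched as a minimal software rasterizer. This is my own illustration in Python, not the 512-line source: the function names, the flat-shading shortcut, and the flat-array framebuffer layout are all assumptions.

```python
import math

def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of triangle (a, b, p); the sign tells
    # which side of edge a->b the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def tdraw(fb, zbuf, w, h, tri, tex, tw, th, light):
    """Draw one triangle. tri = three (x, y, z, u, v) screen-space vertices."""
    (x0, y0, z0, u0, v0), (x1, y1, z1, u1, v1), (x2, y2, z2, u2, v2) = tri

    # 1. Bounding box of the triangle, clipped to the framebuffer.
    xmin, xmax = max(int(min(x0, x1, x2)), 0), min(int(max(x0, x1, x2)) + 1, w)
    ymin, ymax = max(int(min(y0, y1, y2)), 0), min(int(max(y0, y1, y2)) + 1, h)

    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return  # degenerate triangle

    # 5. Flat shading: dot(light direction, face normal), clamped to >= 0.
    ax, ay, az = x1 - x0, y1 - y0, z1 - z0
    bx, by, bz = x2 - x0, y2 - y0, z2 - z0
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    n = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    shade = max(0.0, (nx * light[0] + ny * light[1] + nz * light[2]) / n)

    # 2. Iterate over the pixels in the bounding box.
    for y in range(ymin, ymax):
        for x in range(xmin, xmax):
            # 3. Barycentric coordinates of the pixel center.
            w0 = edge(x1, y1, x2, y2, x + 0.5, y + 0.5)
            w1 = edge(x2, y2, x0, y0, x + 0.5, y + 0.5)
            w2 = edge(x0, y0, x1, y1, x + 0.5, y + 0.5)
            if (w0 < 0 or w1 < 0 or w2 < 0) and (w0 > 0 or w1 > 0 or w2 > 0):
                continue  # outside the triangle (handles both windings)
            b0, b1, b2 = w0 / area, w1 / area, w2 / area

            # 4. Depth test against the z-buffer.
            z = b0 * z0 + b1 * z1 + b2 * z2
            i = y * w + x
            if z >= zbuf[i]:
                continue

            # 6. Interpolate (u, v) and sample the texture.
            u = b0 * u0 + b1 * u1 + b2 * u2
            v = b0 * v0 + b1 * v1 + b2 * v2
            texel = tex[int(v * (th - 1)) * tw + int(u * (tw - 1))]

            # 7-8. Blend texture with lighting, write depth and color.
            zbuf[i] = z
            fb[i] = texel * shade
```

A real renderer interpolates per-vertex normals and colors too, and does perspective-correct interpolation, but the loop structure is the same.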