N64 object software renderer in 512 lines (github.com)
130 points by _cwolf 42 days ago | 57 comments



On the note of the N64: N64 emulation, especially the GPU portion, is still incredibly hard to get right and has been fraught with problems for a long time. Even on modern systems the performance can be less than desirable. One of the saving graces has been GlideN64: https://github.com/gonetz/GLideN64/releases

But even that is still under heavy development. Apparently there's a lot of magic in how the N64 hardware works.


Reading up on the N64 hardware a bit gives the impression that it was ahead of its time, more akin to the shader-based pipelines of today than the kind of hardware that was starting to show up on PCs at the time.

https://en.wikipedia.org/wiki/Reality_Coprocessor


RSP microcode switches are extremely slow -- they're designed to be programmed once when the game boots up, but some games switch back and forth between a 2D and 3D microcode during different modes. So they're not like shaders in that sense -- shaders are designed to be switched every frame. And keep in mind that the RSP doesn't handle blending. The RSP can kind of be compared to a vertex shader, and the RDP is the fixed-function unit that handles rasterization, interpolation, and blending. It's "programmable" in a glTexEnv sense, but you can't upload your own code.
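To make the "programmable in a glTexEnv sense" point concrete: the RDP's color combiner evaluates an equation of the form (A - B) * C + D per channel, where A, B, C, and D are selected from a fixed menu of sources rather than arbitrary code. A minimal sketch of that equation (illustrative names and scaling are my own, not the actual hardware registers):

```c
#include <stdint.h>

/* Sketch of the RDP color-combiner equation (A - B) * C + D, applied
 * per 8-bit channel. The real hardware selects A, B, C, D from a fixed
 * set of inputs (texel, shade, primitive color, etc.); this just shows
 * the arithmetic shape of the stage. */
static uint8_t combine(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    int r = ((a - b) * c) / 255 + d;   /* c acts as a 0..255 blend factor */
    if (r < 0)   r = 0;
    if (r > 255) r = 255;
    return (uint8_t)r;
}
```

With c = 255 the result is a + d (clamped); with c = 0 it is just d, which is how the combiner does lerps between sources.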


FWIW, a lot of other 'fixed function' GPUs were internally little custom vector processors too, they just weren't exposed to external developers.


This is the impression that most people end up with, but it's not really like shaders at all. My understanding is limited, but it seems the microcode serves a somewhat limited role in the graphics pipeline, something along the lines of decoding data and controlling hardware registers.


In most contexts, the RSP handles all T&L, making it analogous to modern vertex shaders.
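The core of that T&L work is the same transform a modern vertex shader performs: multiply each position by a combined matrix, then divide by w to project. A minimal sketch under that analogy (struct layout and names are my own):

```c
/* Sketch of the per-vertex transform a vertex shader (or the RSP's T&L
 * microcode) performs: position times a 4x4 matrix, then a perspective
 * divide by w. Row-major matrix layout is assumed here. */
typedef struct { float x, y, z, w; } Vec4;

static Vec4 transform(const float m[4][4], Vec4 v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    if (r.w != 0.0f) { r.x /= r.w; r.y /= r.w; r.z /= r.w; r.w = 1.0f; }
    return r;
}
```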


This was pretty standard at the time: everything had a programmable DMA controller capable of extensive data transforms, and often there was some vector ALU stuffed in. The difference from modern architectures is that it sat in front of fixed-function hardware. After you did whatever you wanted with the data, it still went into the fixed-function hardware and came out as pixels on the screen. You could have pre-computed the same data and fed it to the GPU for the same result.

In modern architectures, there are programmable ALUs inside the GPU itself and its fixed functions are very limited, usually just triangle setup and the color/depth buffers. The GPU input is just a list of indices, which is transformed by the programmable processors inside.


There's also a weirdly huge amount of drama in the N64 emulation space, so a lot of the more talented people have left over time. It's not that tremendously hard to build an N64 emulator, it was just a weird social space for the longest time and nobody really wanted to put up with it.


What kind of drama is there? Seems like a weird place for there to be drama about anything.


The emulation scenes are... odd. Not quite free software. Egos and entitlement on the part of the users a lot of the time.

The GameCube scene was excellent, however.


Because there's just one emulator, Dolphin. I'm unaware of any other major ones?


Exactly. The GameCube/Wii emulation scene doesn't have any of the drama the N64 scene has, so everyone in the scene is happy to work together on one single emulator instead of everyone working on their own projects.


You can broaden the scope to console hacking in general. Even if drama's your thing, you probably won't last more than a few years.


I wasn't very clear; yeah, exactly! I was talking more about the broader console hacking scene. It's a mess, heh.


Looks like it is just using OBJ files (nothing specific to N64 data formats at all).


From what I can gather N64 uses a binarized OpenGL display list format for models: http://ultra64.ca/files/documentation/online-manuals/man-v5-...

But I'd bet most titles had their own formats to support vertex deformation and animation. (Anybody know any details?)

It looks like it'd be pretty fun to program the N64 GPU, with its "high-quality, Silicon Graphics style pixels" :) https://level42.ca/projects/ultra64/Documentation/man/pro-ma...


Most titles use one of the standard Nintendo-provided microcodes for the RSP (Reality Signal Processor, the programmable part of the graphics chip).

F3DEX2 ("Fast 3D, Extended, Version 2") is one of the well-documented ones and one of the ones used by most games. You can find a breakdown of the command stream here: https://wiki.cloudmodding.com/oot/F3DZEX

This viewer is actually an .obj model viewer and has nothing to do with that. For something that actually is an F3DEX viewer online, I wrote https://magcius.github.io/model-viewer/#zelview/data/zelview...
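For a taste of what that command stream looks like: a display list is a sequence of 64-bit words whose top byte is the opcode. The opcode values below match the F3DEX2 documentation linked above, but this decoder is a simplified illustration of my own, not a full parser (it ignores command payloads and branches):

```c
#include <stdint.h>

/* Toy walker over an F3DEX2-style display list: 64-bit commands, opcode
 * in the top byte. Opcode values are from the F3DEX2 docs; everything
 * else here is a simplification. */
enum { G_VTX = 0x01, G_TRI1 = 0x05, G_ENDDL = 0xDF };

static int count_triangles(const uint64_t *dl)
{
    int tris = 0;
    for (;; dl++) {
        uint8_t op = (uint8_t)(*dl >> 56);
        if (op == G_ENDDL) break;       /* end of display list */
        if (op == G_TRI1)  tris++;      /* one triangle per G_TRI1 */
        /* G_VTX loads vertices; payload decoding omitted in this sketch */
    }
    return tris;
}
```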


Maybe "N64-like" would have been a more appropriate title. It implements an N64-like pipeline with software texture mapping and Gouraud shading.


It does say "The N64-like software renderer" on the GitHub page.


I think it was modified after being posted.


Yes it was, to clear things up a little.


I never understood the importance of lines of code for projects such as this one.

Couldn't you technically just concatenate everything into one line and call it "X software in 1 line of code"?

Isn't it a better benchmark to have better code structure, even if the project is composed of more lines of code, or more files for that matter?


They aren't just concatenating though. The code is clean and understandable with short comments explaining a few things.

> Isn't it a better benchmark to have better code structure even if the project is composed of more lines of code or more files for that matter?

It isn't about benchmarks, it's about understanding a concept enough to cut junk out and providing clear concise code that does as expected.


A better measure of “effort to write code” is the number of lexer tokens. So, identifiers count as one (regardless of length), operators count as one, each paren counts as one, and spaces count as zero. Code that has had more “blunt force” applied to it to get it working will end up with more lexer-token length. Code that has had most of its repetition factored away will have few (although, in turn, its cyclomatic complexity will go up, meaning that it might now be harder to read, however much easier it is to maintain once grokked.)

You could also measure the compressed size of the code, since compression eliminates redundancies like long identifiers being repeated. But they’ll also sort of “cyclomatically optimize” your code as well—effectively re-rolling any hand-written unrolled loops, and shortening repetitions of a(b(x)) into the same result as a_b(x). You might say compression shows the optimal lexical size of the code. :)
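A crude version of the lexer-token measure from the parent comment can be sketched in a few lines: an identifier or number counts as one token regardless of length, every other non-space character counts as one. (A real lexer would also group multi-character operators and string literals; this is only a sketch of the idea.)

```c
#include <ctype.h>

/* Crude token counter: runs of identifier characters count as one
 * token, whitespace counts as zero, and any other single character
 * counts as one. Multi-char operators like "==" would be counted as
 * two here, which a real lexer would not do. */
static int count_tokens(const char *s)
{
    int n = 0;
    while (*s) {
        if (isspace((unsigned char)*s)) { s++; continue; }
        n++;
        if (isalnum((unsigned char)*s) || *s == '_')
            while (isalnum((unsigned char)*s) || *s == '_') s++;
        else
            s++;
    }
    return n;
}
```

By this measure, `getHorizontalScreenSpacePixelPosition` and `x` both cost exactly one token, which is the point being made.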


Yes.

That said, it's novelty and working within set restrictions that makes it impressive.

Not the line count of the code, but the fact that it works within the set restrictions of making an n64 software renderer (novelty) with that line count of code. (also a novelty)

It's not important at all; it's novel. We've had hacky, and even gimped-for-quick-hacks, N64 emulation (compare Project64 to, say, Dolphin) since the late '90s and early '00s, but we haven't had a 512-line object software renderer. A novelty for those who like novel programming exercises!


I completely agree. You make a good point! It's definitely a novelty.

In my programs, I never took lines of code as a restriction, since for a compiled program the binary size tends to depend mostly on the compiler. I'm more focused on other restrictions, such as memory usage.


High praise. Thank you.


This is very well written code. There are not a lot of comments yet it's designed in a way that it is still quite readable. I believe this is due to good partitioning of concerns, which also leads to an economy of code. It goes without saying, perfect formatting is a must.

"Elegance is not optional.

"There is no tension between writing a beautiful program and writing an efficient program. If your code is ugly, the chances are that you either don't understand your problem or you don't understand your programming language, and in neither case does your code stand much chance of being efficient. In order to ensure that your program is efficient, you need to know what it is doing, and if your code is ugly, you will find it hard to analyse." -- Richard O'Keefe, The Craft of Prolog (MIT Press)


Some problem spaces have boundaries and exceptions. These can be somewhat ameliorated by better representations ... but sometimes (e.g. device driver APIs, or context switching code that works on multiple processor families, or (heaven forfend) business logic) ugly is ugly and you can't do much about it. It can't all be elegant global illumination solutions...


All code can be elegant if the language allows it.


This has nothing to do with the N64. It's just a basic software renderer and a .obj/.bmp loader. I made one of these when I was 12, after reading "Black Art of 3D Game Programming", which was aimed at DOS games.


As someone who knows nothing about graphics programming, I'd be interested in seeing a walkthrough or "literate" version of main.c.


There are good resources online for software rasterizers:

https://www.scratchapixel.com/lessons/3d-basic-rendering/ras...

https://fgiesen.wordpress.com/2013/02/08/triangle-rasterizat...

As a graphics programmer, I find the example code extremely readable. If you recognize the math, you'll recognize the rest.


Thanks for sharing these links.


Same here. It’d also be great if the variables were a bit easier to interpret.


I know a little bit of how it works...

The interesting part is in tdraw(), which draws one triangle. First find the bounding box of the triangle in screen space. Iterate over the pixels in that box. Map each pixel into the triangle's barycentric coordinates (its weights relative to the three vertices). Check the z-buffer to see if it's visible. Take the dot product of the light direction and the triangle normal to get the shading. Use the barycentric coordinates to interpolate where in the texture map to sample. Blend the texture sample with the lighting. Update the z-buffer and the pixel color.

A model is made of lots of triangles. Each triangle has vertices with additional attributes that map into the texture map.
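The per-pixel test above is usually done with edge functions: three signed areas whose signs tell you whether the pixel is inside the triangle (and which, once normalized, are the barycentric weights used for interpolation). A minimal sketch of that step (variable names are mine, not the ones in main.c):

```c
/* Signed area of the parallelogram spanned by (b - a) and (p - a).
 * Positive on one side of edge ab, negative on the other. */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* A point is inside the triangle when all three edge functions share a
 * sign (either winding order is accepted here). Dividing each weight by
 * their sum would give the barycentric coordinates for interpolation. */
static int inside(float x0, float y0, float x1, float y1,
                  float x2, float y2, float px, float py)
{
    float w0 = edge(x1, y1, x2, y2, px, py);
    float w1 = edge(x2, y2, x0, y0, px, py);
    float w2 = edge(x0, y0, x1, y1, px, py);
    return (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
           (w0 <= 0 && w1 <= 0 && w2 <= 0);
}
```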


I'm going to revisit the program with the above in mind. Thanks!


I'll see what I can do.


Is there any reason why most of these 3d engines use one-letter variable names?


Because 'x' is easier to type than "getHorizontalScreenSpacePixelPositionRelativeToBottomLeftCorner()" ?


While true, you end up reading code with your variables more often than you write your variables so I tend to optimize for readability over ease of typing.


In a setting with a lot of arithmetic, "x" is also vastly easier to read. See pretty much any mathematics or physics paper or textbook -- made for reading...


Thanks!


Why are they called "scrots"?


"Screencap" would be a less objectionable term.


I see what you did there


SCReen shOT


That's a really unfortunate abbreviation. That word is also common slang for male anatomy in the States.


'scrot' is the name of a popular linux screenshot utility. I have never heard of anyone using it in any other context. Well, until now anyway :P


That's scrote (long O), not scrot (short O).


I find some humour in it.


Never heard that before, thank you!


(deleted)


And this is the case


How many lines of WebGL would it be?


How many lines does it take you to build a pixel buffer in WebGL? SDL2 here is being used to handle input, open a window, and update a texture stretched over the full window using a software-rendered buffer of pixel data.


Probably about the same. Most of the code is defining data types and loading an OBJ file, and modern OpenGL requires quite a lot of boilerplate before anything gets drawn.



