A bit of shameless self-promotion, if I may: I've been working with No Starch Press to turn this into an actual book, and it's coming out within a couple of months! I'm going to update the website version soon, to reflect (heh) the much-improved quality of the text after months of editing. Stay tuned!
Have you ever written about your game development company and how you managed to get the contracts to produce the Ghost Whisperer and other games?
Ghost Whisperer, CSI:NY, and Murder She Wrote were done in partnership with Legacy, who are in Los Angeles and have the contacts. For CSI:NY it was really exciting that the actors of the show recorded the voiceovers, but we were nowhere near them :(
This line is very motivating to someone who works with data frequently.
This guy graphics! I remember my highest pleasure in programming was attained when I first drew a rasterized triangle. Of uniform color.
But when you've been futzing around with quaternions all day, and your camera finally stops acting like a hyperactive husky who hasn't had a walk in days...well, I imagine that's close to what it felt like when they got the internal combustion engine to stop being an external combustion engine.
This (generally-pretty-great) book, like many educational materials about computer graphics, makes a terminological choice that I believe to be plainly a mistake. It uses the word "rasterizer" to mean "renderer that works by applying transforms to polygons", as contrasted with a "raytracer" (more generally, a physically-based light transport model). (e.g. "Raytracing and Rasterization, which focus on the two main ways to make pretty pictures out of data").
Properly (from a descriptivist standpoint), a "rasterizer" is any algorithm that transforms vector data into raster data, where rasters are arrays of horizontal lines, each horizontal line composed of pixels. RAYTRACERS ARE RASTERIZERS. (I mean, not if you're using the raytracer to detect collisions or for some other purpose; only if you're using it to generate 2d images of any kind.)
Surely there must be some better terminology for the non-physically-based option? I tend to say "polygon-pusher", but this is my own neologism. Does anyone have any suggestions for a more accurate name for this type of renderer?
Saying "rasterizer" in the sense the book does is as common as saying "literally" to mean "figuratively", as common as writing "there" to mean "they're". So I know that probably there's no fixing it. But I'm still hoping.
A ray tracer, on the other hand, is (sometimes) looping over pixels in the outer loop, and then iterating over objects (“traversing the scene”) in the inner loop.
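To illustrate that loop structure, here's a toy sketch (all names invented, sphere intersection only, no shading): the outer loops walk pixels, and the inner loop traverses the scene.

```python
import math

class Sphere:
    def __init__(self, center, radius, color):
        self.center, self.radius, self.color = center, radius, color

    def intersect(self, origin, direction):
        # Solve |origin + t*direction - center|^2 = radius^2 for t.
        ox, oy, oz = (origin[i] - self.center[i] for i in range(3))
        dx, dy, dz = direction
        a = dx*dx + dy*dy + dz*dz
        b = 2 * (ox*dx + oy*dy + oz*dz)
        c = ox*ox + oy*oy + oz*oz - self.radius**2
        disc = b*b - 4*a*c
        if disc < 0:
            return None                          # ray misses the sphere
        t = (-b - math.sqrt(disc)) / (2*a)
        return t if t > 0 else None              # nearest hit in front of camera

def trace(width, height, scene):
    image = []
    for y in range(height):                      # outer loop: pixels
        row = []
        for x in range(width):
            # Toy pinhole camera: rays from the origin through a unit
            # viewport at z = 1 (a made-up convention for this sketch).
            direction = ((x + 0.5) / width - 0.5, 0.5 - (y + 0.5) / height, 1.0)
            color, nearest = (0, 0, 0), math.inf
            for obj in scene:                    # inner loop: traverse the scene
                t = obj.intersect((0.0, 0.0, 0.0), direction)
                if t is not None and t < nearest:
                    nearest, color = t, obj.color
            row.append(color)
        image.append(row)
    return image
```

Real ray tracers invert this with acceleration structures, of course, but the pixels-outside/objects-inside shape is the point.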
The existing names are also good descriptors in the sense that they describe the core of what must be computed. A rasterizer always rasterizes. A ray tracer always traces rays, but it does not always produce pixels: the result can end up in a spatial data structure, it can trace resolution-independent shadows and reflections, it can save results in a cache that gets rasterized later, it can end up in multiple pixels (splatting, pixel filtering), and so on. Yes, it's very common to render pixels with a ray tracer, but that's not required or inherently part of the algorithm, while rasterization doesn't exist without pixels.
You're technically correct (the best kind of correct!). Since raytracers draw pixels on the screen, and the screen is an array of horizontal lines, anything that draws pixels to the screen is a rasterizer.
On the other hand, "polygon-pushers" are rasterizers in a deeper sense: triangles are explicitly sliced into arrays of horizontal line segments, and this shapes how everything else works in them, e.g. the need to interpolate attribute values along the edges of each triangle so you can then interpolate them along each raster, etc. Raytracers do no such thing. So all renderers are rasterizers, but some are more rasterizer than others ;)
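To make that concrete, here's a toy sketch of the span-slicing I mean (names and conventions invented; one scalar attribute, no clipping or sub-pixel precision): interpolate along the triangle's edges, then along each horizontal span.

```python
def interpolate(y0, v0, y1, v1, y):
    # Linear interpolation of v between (y0, v0) and (y1, v1).
    if y1 == y0:
        return v0
    return v0 + (v1 - v0) * (y - y0) / (y1 - y0)

def raster_triangle(verts, width, height):
    # verts: three (x, y, attr) tuples in screen space.
    # Returns a height x width grid of interpolated attrs (None = uncovered).
    grid = [[None] * width for _ in range(height)]
    p0, p1, p2 = sorted(verts, key=lambda p: p[1])       # sort by y
    for y in range(int(p0[1]), int(p2[1]) + 1):
        if not (0 <= y < height):
            continue
        # The long edge p0-p2 bounds one side of the span; the short
        # edges p0-p1 and p1-p2 bound the other.
        xa = interpolate(p0[1], p0[0], p2[1], p2[0], y)
        va = interpolate(p0[1], p0[2], p2[1], p2[2], y)
        if y < p1[1]:
            xb = interpolate(p0[1], p0[0], p1[1], p1[0], y)
            vb = interpolate(p0[1], p0[2], p1[1], p1[2], y)
        else:
            xb = interpolate(p1[1], p1[0], p2[1], p2[0], y)
            vb = interpolate(p1[1], p1[2], p2[1], p2[2], y)
        if xa > xb:
            xa, xb, va, vb = xb, xa, vb, va
        for x in range(int(xa), int(xb) + 1):            # one raster span
            if 0 <= x < width:
                grid[y][x] = interpolate(xa, va, xb, vb, x)
    return grid
```

Notice how the raster structure dictates everything: edge walking, span order, per-scanline interpolation. Nothing in a ray tracer's inner loop looks like this.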
I also wonder what happens, for the sake of brainstorming, if you change the output to be something that is not a rectangular array of horizontal sets of pixels. In that case you probably wouldn't call a raytracer "a rasterizer" since there would be no raster data, but a polygon pusher would still fit the name, since it uses rasters internally?
BTW, I'll never accept "literally" used as "figuratively", "there" for "they're", or "could care less" unless you do care a little bit. So I don't think you're a jerk, and your comment prompted some interesting thinking!
I mean, sure, the polygon-pusher presumably does it post-transforms, i.e. in screenspace, and the raytracer does it in some other space (exactly what space is gonna depend on implementation of the camera and stuff I think), but I think it's pretty similar?
But as far as I know the depth-checking in a polygon pusher is totally about the raster, so in that sense, yeah, the (non-wireframe) polygon-pusher has a deeper relationship to rasters.
Worth noting that neither method is particularly physically based, and really just boils down to whether you approach the rasterization task "forwards" or "backwards". Which method ends up faster is essentially a function of how much geometry and how many pixels you have.
I admit, that's a rare example (though not unique; there's a reason wire-frames, as a display mode, came into existence). But I think it's absurd to say that the rasterizer is the heart of the polygon-transforming approach; the heart is the transformation matrix.
At the top level, you split polygon-transforming vs ray-tracing (or geometry-main-loop vs pixel-main-loop, or projecting vs searching-from-the-camera, or whatever). Then you split polygon-transforming by output style, raster display vs vector display?
But I don't think it's a branching taxonomy? Both can raster-display, both can vector-display, and both can be used for things other than display. I feel like it's just a 2x3 matrix, or maybe I'm missing some rows or columns.
I also agree with your other comment that, even though it's not strictly necessary to do any rasters at all (Battlezone), in 99.99% of cases polygon-pushers are more rasterizer-y than ray-tracers. Even if you're not texturing, you're doing depth occlusion checking (again, Battlezone and other wireframe renderers don't, but yeah, niche), and although there may be some way to do depth occlusion without rasters (i.e., without a z-buffer), I dunno what it is and I've never heard of anyone doing it any other way.
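For what it's worth, the raster-based depth occlusion I mean is just the z-buffer. A toy sketch (conventions invented; smaller z = nearer, per-pixel Python lists rather than a real framebuffer):

```python
def make_buffers(width, height):
    # Parallel color and depth rasters; depth starts at infinity
    # so the first fragment at any pixel always wins.
    color = [[(0, 0, 0)] * width for _ in range(height)]
    depth = [[float("inf")] * width for _ in range(height)]
    return color, depth

def write_fragment(color, depth, x, y, z, c):
    # Keep the fragment only if it is nearer than what's stored there.
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c
        return True
    return False
```

The depth buffer is itself a raster, which is kind of the whole point: occlusion in a polygon-pusher is resolved per pixel, per scanline.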
I dunno, when I've taught the material that your book covers (on rare occasion, in less-formal settings), I have definitely felt this need for a raytracer-vs-polygon-projecting taxonomy, and definitely felt a need to talk about rasterizing, but not felt a need to talk about rasterizing in the context of the taxonomy.
What did you read prior to undertaking the project? How did you know what you needed to build? Have you been working in graphics awhile?
I would totally read any blog post or postmortem writeup you have if you decide to publish your results.
I ended up doing a lot more math than I expected to understand the fundamentals of projection, basically teaching myself linear algebra, but that was pretty fun in itself.
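The core of projection turns out to be just similar triangles: a point (x, y, z) in camera space lands at (x·d/z, y·d/z) on a viewing plane at distance d. A minimal sketch (the convention here, z > 0 in front of the camera, is an assumption of mine):

```python
def project(point, d=1.0):
    # point: (x, y, z) in camera space, z > 0 in front of the camera.
    # Similar triangles: the projected coordinates shrink with distance.
    x, y, z = point
    return (x * d / z, y * d / z)
```

Everything else (viewport-to-canvas mapping, clipping) is bookkeeping around that one divide.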
My goal was to build a minimum-viable software rasterizer for embedded situations (STM32-F7). I still haven't gotten around to getting lighting to work efficiently, or to leverage SIMD, but I intend to whenever I get back to the project.
The source is here if you're curious: https://git.sr.ht/~zjm/Moon3D. There's also some screenshots/progress updates on my Twitter: https://twitter.com/zackmichener
I took it well over a decade ago, but the current material is largely similar. Going in, you were required to know C and C++, plus the math prerequisites (calculus, linear algebra). Other university classes might skip the rasterizer building, but I think it's a useful foundation.
We started with basic OpenGL to render models. Then we replaced the OpenGL with our own rasterizer, adding features as we went. It was a tough class (10-week schedule), but very rewarding. Probably the worst part was debugging rendering issues. You could see the problem, but it was hard to locate where your math went wrong.
I particularly enjoyed the concise explanation and code example in the Perspective Projection section. Often web-based explanations are lacking. Not the case here!