The Quake 1 and 2 model formats were very similar; the interpolation was a rendering feature. After the Quake 1 engine source was released in late 1999, interpolation was quickly added by fans (though it wasn't as easy as it sounds: IIRC the original Quake 1 source, unlike QuakeWorld, didn't track entities across frames on the client side, so that had to be added first).
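Per vertex, the interpolation itself is just a lerp between two decoded keyframes. A minimal sketch, assuming both frames are already in floats (the struct and function names here are mine, not the engine's):

    typedef struct { float x, y, z; } vec3;

    /* Blend two keyframes; frac is how far the renderer is
       between them, in [0, 1]. */
    void lerp_frame(const vec3 *prev, const vec3 *next,
                    int num_verts, float frac, vec3 *out)
    {
        for (int i = 0; i < num_verts; i++) {
            out[i].x = prev[i].x + frac * (next[i].x - prev[i].x);
            out[i].y = prev[i].y + frac * (next[i].y - prev[i].y);
            out[i].z = prev[i].z + frac * (next[i].z - prev[i].z);
        }
    }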

The main difference between the two model formats was how they encoded vertex coordinates. Both stored the X, Y, Z coords as one byte each. But MDL (Quake 1's format) had a uniform scale/offset for transforming these into the final coordinate space, whereas in MD2 each animation frame had its own scale and offset. This seems like an upgrade, but combined with interpolation it could also produce a pretty ugly "vertex swimming" (jiggling) effect when you tried to portray subtle movements, like the idle anims for the player's weapons.
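Roughly, the decode is the same in both formats; the only question is where the scale/translate pair comes from (one pair per model in MDL, one per frame in MD2). A sketch with made-up names:

    typedef struct { unsigned char v[3]; } packed_vert;

    /* Reconstruct a float vertex from one byte per axis.  MDL passes
       the same scale/translate for every frame; MD2 passes each
       frame's own. */
    void decode_vertex(const packed_vert *pv, const float scale[3],
                       const float translate[3], float out[3])
    {
        for (int i = 0; i < 3; i++)
            out[i] = pv->v[i] * scale[i] + translate[i];
    }

The swimming falls out of the per-frame quantization: the same resting vertex can round to different bytes under each frame's scale, and interpolation then sweeps it through the gap.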

One of the many things I admired about Quake is that there was a pretty uniform scale of detail to everything. Nothing had noticeably higher polygon detail, texture resolution, or animation rate than anything else in the world, and everything looked very solid and consistent because of that. Quantized vertex coords were one of those tricks that seem restrictive but didn't hurt them with the game they designed.



While we're talking about clever quantizing, we should mention the vertex normal encoding. In MD2 (IIRC; not sure about MDL) each vertex normal was stored as a byte which indexed into a pre-established array of unit vectors, more or less uniformly distributed around a sphere. It was a creative way to get good-enough per-frame normals in a tiny amount of space without forcing the engine to do painfully slow per-frame normal generation (with the floating-point division and square root that entails).
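For illustration, the decode side is just an indexed load. The released Quake 2 source ships the table as anorms.h with 162 entries; only the first few are shown here:

    /* First few entries of the 162-entry precomputed normal table
       (anorms.h in the released Quake 2 source). */
    static const float anorms[][3] = {
        { -0.525731f, 0.000000f, 0.850651f },
        { -0.442863f, 0.238856f, 0.864188f },
        { -0.295242f, 0.000000f, 0.955423f },
        /* ... 159 more ... */
    };

    /* One byte per vertex per frame buys a good-enough unit normal
       with no runtime normalization at all. */
    static inline const float *decode_normal(unsigned char index)
    {
        return anorms[index];
    }

The expensive part, picking the closest table entry for each normal, happens once in the model tools at export time, not in the engine.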



