(Has this been posted to HN before? Good stuff in there.)
* "more or less" because the business structure is more complicated than that and there was still an Apogee Software alongside 3D Realms, but for all practical purposes, what was Apogee Games became 3D Realms and then died due to the Duke Nukem Forever project.
Whilst I have been so-so on id games since Quake 3 (Actually Quake 3 was excellent but it was just so overshadowed by the jaw-droppingly awesome Unreal Tournament) I have huge amounts of respect for Carmack as a programmer and also for the way id generally do business.
I remember scanning through the source code for the original Quake some years ago (not really reading it in depth), and I couldn't help but assume my download was borked and I was somehow missing 90% of it: the core of the engine is tiny. I get the same reaction when I look at the Linux kernel.
I've written business systems that are orders of magnitude larger in terms of code than probably all the id games put together, but have nowhere near the awesomeness and variety in terms of functionality.
I wonder whether this is just how he programs from the get-go, or whether he writes something much larger and clunkier in the beginning but is just ruthlessly disciplined about rewriting things to make them smaller and more efficient.
Carmack, as dictator of id code (and thus id design) had the ability to say "we are not adding the following feature to our game because I don't like the compromises it will force in our technology and, thus, in other features we want to add down the road." It's basically as simple as that.
In almost all other organizations, someone holding the pot of money will trump the programming overlord at that point and say, "I don't care, XML is the buzzword this second, and all objects in the game need to be serialized to XML as web services - that's what the market is clamoring for right now" (or whatever the buzzword du jour is). And over time those political technical decisions add up to so much cruft that adding any new thing becomes a technical, jenga-style nightmare.
Not so with id's engines. I actually worked on beta versions of both Quake 2 and Doom 3 code bases while those games were in development, and one striking thing about getting updates from Carmack was his ruthless willingness to eliminate working, finished features / code if he didn't think the complexity or dependencies they introduced were worth whatever feature they added in the overall technical context.
Quake is what you get when a genius programmer who has mastery of his knowledge domains leads. Institutionally, there aren't too many other circumstances where similarly talented and sure-footed and, well, fundamentally, _right_ programmers get to specify requirements like that. I think the comparison to the Linux kernel is apt.
I suppose I am probably jaded because I work developing bespoke corporate software with relatively few users compared to Quake. Therefore each user has huge individual sway over the functionality/design even if they don't really get the nuances of building usable software. For example, they will request additions of buttons in nonsensical places because it helps their personal workflow, but then somebody else will come along, use it, and complain that there are too many buttons and it's too complicated: "can we please make this more like the iPad?"
I wonder if there are any pieces of bespoke corporate software that aren't in fact giant piles of bloated dogshit?
All the books on this subject recommend doing thorough requirements analysis, but that doesn't really work when you have to deal with people who can't really articulate what they want and actually have no idea what might suddenly become important in the future. Plus you have to deal with the corporate politics of people whose different roles and places in the hierarchy mean they want and need diametrically opposed things. People will also often have good ideas, but they won't tell you them until it's too late because they are worried about contradicting their boss.
It would certainly be refreshing to work with a Carmack but I doubt that I would have the focus or brainpower required to keep up!
I don't know C, so I can't read, let alone appreciate, the source he's released to judge for myself.
Thanks for sharing.
I think id does it mostly because Carmack personally pushes for it. However it arguably makes some sense for them, relative to other game companies, because they've been particularly identified with the "genius hacks to make stuff work" aura, and every source release produces a new wave of discoveries/press about crazy things id did in the guts of one of their engines, which solidifies that reputation. Plus it's probably good for Carmack getting historical credit to put the techniques out in the open once they're no longer most-recent-gen.
wow, that sounds interesting! any links?
If you have not read his work, you are in for a treat. He has a wonderful writing style that is both engaging and illuminating.
There are a number of chapters in the book that specifically discuss technology and development related to Quake.
 PDF: http://www.gamedev.net/page/resources/_/technical/graphics-p...
Chapter 64 - Quake's Visible-Surface Determination
Chapter 66 - Quake's Hidden-Surface Removal
Chapter 67 - Sorted Spans in Action
Chapter 68 - Quake's Lighting Model
Chapter 69 - Surface Caching and Quake's Triangle Models
Chapter 70 - Quake: A Post-Mortem and a Glimpse into the Future
Carmack's square-root implementation runs four times faster than the assembly-language instruction.
On my computer it is indeed four times faster, but the marvellous method is in turn about four times slower than SSE's rsqrtps instruction, which also has a much lower maximum error.
To take the square root of an array of 1024 floats:
x87 FPU (fsqrt): CPU cycles used: 48160, error: 0.000000
SSE (rsqrtps): CPU cycles used: 2970, error: 0.000392
Marvellous Carmack method: CPU cycles used: 11330, error: 0.002186
Remember, Quake was released in 1996, SSE came out in 1999.
A vectorized version of the marvelous method would be a fairer comparison. If your data is not very vectorizable and you don't need high precision, it's still fairly marvelous.
The SSE version is 16x faster than the x87 FPU version and almost 4x faster than the marvelous method. Further, the marvelous method is a loop and is much more resource-hungry than an instruction that goes away for 5-10 cycles (reciprocal throughput is now 1 cycle) but leaves you with most of your execution resources to do other useful work.
Vectorizing the marvelous method is very likely to be painful, as it treats a value as both a float and an int. While the xmm registers are dual-purpose, operating on one successively as an int and then a float incurs some extra latency, and the mix of bit shifts and float operations requires these cross-domain transfers...
Would you mind posting your results here? I'm quite interested if this has the same pattern everywhere.
Input array size is 1024
CPU cycles used: 54514, error: 0.000000
CPU cycles used: 900, error: inf
CPU cycles used: 21514, error: inf
I'm not sure how to interpret the errors. I had to make a few changes myself for this to compile on my machine.
I don't know if it's a conscious policy or not, but it's interesting nonetheless.
They usually dual-license the old engines: one version under the GPL and another under a commercial (albeit relatively cheap compared to the newer engines) license.
The GPL'd engine can only be used to create games that are also GPL-licensed (so stuff like Alien Arena, I think).
I suppose this could allow someone to start developing using the GPL'd engine and then switch to the commercial engine later once they had something ready to release.
I like to think it is more to do with carmack being curious as to what people will do with his creations.
As the game resources are what the players mostly associate with the game, you can still make and sell your own game based on a GPL'd engine without the GPL really cutting into your profits.
Also because they started the good habit a long time ago and it has definitely shown its benefits. (AFAIK, the iPhone port of the DooM engine depends on the open-sourced branch of the DooM engine.)
Also, compared to UE3, ID Tech 4 was not used in many games.
It's actually slightly different from that: "if you're going to make a game with id Tech 5 then it needs to be published by Bethesda"
They've since moved on and their most recent game, RAGE, uses the fifth version of their engine, not the fourth.
They're basically open sourcing the engine that they're no longer hyping/selling, while they continue to use their modern engine for business.
How can they? I mean, I guess other companies just let their engines die instead of open sourcing them.
I wonder how much C++ there is involved in the code.
The Doom3 engine is mostly C++. If I recall correctly, the renderer still looked more like a blend of C with some C++ slowly drifting in (which makes sense, given how good of a C programmer Carmack is), but the rest of the game code is very heavily C++, for some definition of C++, along with a scripting layer.
Having worked extensively with the Quake 2 engine previously, I really didn't like the Doom 3 engine, myself. It was radically less... pragmatic. Earlier id engines, to me, were masterpieces of concision. They weren't amazing feats of black box abstraction by any measure, and they implicitly assumed you needed a lot of domain knowledge to understand what they were doing, but they did exactly what they needed to do with a minimum of fuss. Features were included because they were used, generally with the smallest and simplest code and tool footprint possible.
Not so with Doom 3. A huge amount of code appears to have been written because someone prolific thought it would be lots of fun to write, many features end up going unused or getting in the way, and ultimately (because it's more typically software-engineered in style), every feature requires about 30 times as much code to be written properly and has several extra unused layers of abstraction crufting things up. The game code (in C++) in particular bears an unfortunate resemblance to Windows MFC-style C++, with a giant 12-layer-deep inheritance graph and everything inheriting from a few enormous base objects.
All of that to my eyes, anyway. But then, I really cut my teeth on Quake 1 and 2, and found that their concision and brevity really helped them stay out of the way when adding new, innovative features.
The funny thing about the code for Doom 3 is that, looking through it all, you'd get the impression that Doom 3 was a radically more complicated game than what you ultimately experience when you play it.
The best way I could put it is this: Quake 1 and 2 and 3 were well made games that ended up being enormously fruitful for white box reuse. You couldn't do much with them without understanding fair bits of their internals, but they didn't actively resist being understood. Doom3 was built to be used in a more black box fashion (like the Unreal Engine), with all that entails from an architecture perspective. I'm sure others might disagree with me from a code ideology perspective, but it is hard to ignore how fruitful the Quake engines were for licensing, and how much Doom 3 sank like a stone.
The code bases make really fascinating studies in contrast about the consequences of different software engineering values, at the very least.
TL; DR: Quake 1 - 3 remind me of linux code, Doom 3 reminds me of MFC.
What you write about D3 (i.e. idTech4) makes me a bit sad. How did that come about? Less influence from Carmack? (Because he still seems like the pragmatic guy to me.) Too often, when C++ is involved, I see much too complicated code. Though I still hold the opinion that you can end up with simpler code using C++. (Well, that goes into the usual C vs. C++ ranting discussion.)
Do you have any knowledge about idTech5 (or maybe even idTech6) which you can share?
I don't have any knowledge of idTech5, although I would guess it goes further down the lineage of Doom3. id still has to find solutions to the problem of programmers working on teams, after all - no amount of Carmack genius is going to magically fix that all too banal (and inevitable) organizational challenge.
I had my fill of the industry by the mid-2000s and have been working on indie/experimental/violent educational games since, so that's roughly where my big iron engine knowledge fades...
Do you disagree?
Those run on Linux and that is awesome.
Me too! I just heard about the dark mod a while back and tried to get it working. I didn't want to go buy Doom3, because I don't like the game and I couldn't be sure whether or not TDM would work. So I tried with a pirated version and it didn't, so I'm stuck waiting for the open source release of D3 engine (and the subsequent "porting" of TDM).
(For some reason, PC games for me have a terrible track record of not working. I often try a pirated version of the game and if it works and I like it, I might buy it. If the pirated version doesn't work, I won't bother because experience tells me the legal one isn't probably better and I hate begging for refunds from Steam support).
It's fairly certain that id's lawyers will approve the release, as they have done with all their previous game engines. The only problem might be if D3 uses some third-party libraries.
While being at it, it shouldn't be too hard to use geometry shaders for the shadow volume extraction in the D3 engine. Hmm, maybe I should do it :)
Previously id was an independently owned company. Now they are owned by ZeniMax, a larger entity.