At this point one of the most experienced programmers in the team, one who had survived many years of development in the "good old days," decided to take matters into his own hands. He called me into his office, and we set out upon what I imagined would be another exhausting session of freeing up memory.
Instead, he brought up a source file and pointed to this line:
static char buffer[1024 * 1024 * 2];
"See this?" he said. And then deleted it with a single keystroke. Done!
He probably saw the horror in my eyes, so he explained to me that he had put aside those two megabytes of memory early in the development cycle. He knew from experience that it was always impossible to cut content down to memory budgets, and that many projects had come close to failing because of it. So now, as a regular practice, he always put aside a nice block of memory to free up when it's really needed.
Yep, this is a common and very smart thing to do. Similar tricks are to sneak a 1-3ms delay into the main game loop so you have about 10% of your framerate in reserve, and, if you're not running off DVD, to add some sleep() calls when a file is opened so you're not fooled by the virtually non-existent seek times of the devkit's HDD.
Heh, fortunately that isn't much of a concern. These are all things that a good programmer would find within 30 minutes of sitting down with a profiler to eke out a few more fps or free up memory.
The intention is not to hide these from other engineers, but to keep a bit of performance in the bag for when needed at the end of the project. When you absolutely must hit 60fps, or fit everything into 32mb of RAM, it can be invaluable.
[Meta-note - I find it a bit sad that a post gets 100 points of upvote for reposting part of the article with nothing but "Ok, this is just golden." at the top.]
I don't get this though. At that point, the entire team had busted their butts and had actually met their goal. This guy basically took credit for the work the entire team had done in meeting the memory constraint. He just reserved some bogus memory so that at the last minute he could say, "Hey look! We've magically met the requirement! I'm a genius!"
Well, when I read 'dirty coding tricks' I'm thinking 'ugly hack' not 'lousy cheater'. This wasn't some 11th hour 'patch the symptoms but not fix the underlying problem' story.
I think the idea is to fool the team into 1) more quickly realizing there is a problem and 2) being less likely to come close to the requirement but fail. It is kind of like someone stealthily setting your clock forward 2 minutes because, in their experience, you have problems with punctuality.
Right - I've found that I compensate for 5 minutes. 1-2 minutes, on the other hand, is little enough extra margin that I don't, but just enough to save my arse reasonably often.
My business partner moves the time on his bedside clock around by 15 or 20 minutes semi-randomly every couple weeks to keep himself off balance - that didn't work for me at all but it's been doing fine for him for years now.
I always figured that operating systems kept a few hundred MBs of HDD space secret to prevent catastrophes when the stupid user downloads too many "films" and hits nominal zero.
But what would you do when this final few MB runs out?
I believe it wouldn't be handled specially; it would follow a similar pattern to RAM, i.e. run out and you get a hard OOM error, game over.
The problem with linking to the print version is you don't get to see the comments. My favorite story of all was in the comments and comes from Ken Demarest who worked on Wing Commander:
Back on Wing Commander 1 we were getting an exception from our EMM386 memory manager when we exited the game. We'd clear the screen and a single line would print out, something like "EMM386 Memory manager error. Blah blah blah." We had to ship ASAP. So I hex edited the error in the memory manager itself to read "Thank you for playing Wing Commander."
Here's my favorite war story, from a GameCube launch title.
QA discovered that if you rapidly toggled back and forth between two menus in the front end you might get a hang, though it could take 10 minutes.
I was eventually able to reproduce it in the debugger, and it was bizarre: a local variable was getting corrupted with strange values, but there was no memory corruption visible at all; only the register was incorrect, even though the variable was assigned just once.
No matter what test cases I tried to cut the code down into, it didn't crash. Inspecting the disassembly everything seemed fine.
Finally, I noticed something strange. The suspect variable was in r19, which, thanks to the allocation strategy, was very unusual: that register was most often unused. That gave me enough of a clue to create a cut-down test case that included all the function arguments and local variables without the code, to make sure I got the same register allocation.
I was able to write a single main.c that set the variable once and sat in an infinite loop waiting until it changed. Bingo! _Something_ crept in and changed that value, and it sure wasn't our code. There wasn't much of an OS on the Gamecube, but there must have been some interrupt that wasn't cleaning up its changes to r19 like it should.
I sent off the minimal code to a Nintendo developer support engineer, and monkeyed with our code to avoid having so many local variables in that one function, which fixed our problem. The Nintendo engineer fessed up that it must be their issue, but then the report vanished into the Nintendo bunker, never to be seen again...
Despite being a nightmare I'm glad I escaped, there were some sheer moments of coding joy in the game industry.
This is the type of thing that makes me love and hate emulator development. It's amazing how much code can actually depend on behavior like this, and how often bugs in an emulator are cases where things aren't implemented in the same "broken" way as the original hardware/firmware.
It does make it interesting and challenging, though.
I used to work at a major video game studio where one guy in the building moved from production to production as a "reverse concept-artist".
Let me explain: some modelers who design characters for video games sometimes skip the concept-art phase (paper-and-pencil drawings) and move directly to 3D CAD. But those rough sketches are really good for gaming magazines or online articles that show the progression of the artistic direction.
So that guy's job was to render the final 3D model of the character in wireframe mode and then retouch it in Photoshop to make it look like an original paper drawing. The result was quite convincing and appeared in several magazines as the initial artistic sketches for the game characters.
The "Velcro" bug seems uncannily familiar. We had extremely similar issues on a game I worked on. Our lead programmer was going down the "dirty hacks" route, which worked in some cases but in others was producing new bugs. In one instance, I think it was a hovership colliding with a tower, there was a back-and-forth between him and QA for about a week, where he'd patch against a specific symptom, and QA would (literally) find a new angle of attack where it would break yet again. The deadline was looming, so one day, he came up to me and asked, "you studied physics at university, right? Can you have a look at this?" So I did. It took me a weekend, but I eventually tracked it down to a horrendously broken box-to-capsule collision detection and response, which I rewrote. (capsule = sphere-capped cylinder) Sometimes, the less dirty fix does make it into the game. :)
I found the Damp falling through the geometry holes kind of interesting. My current company does fluid-flow simulation, and we have large meshes coming in from CAD with precisely this sort of problem. Our solution was to create an algorithm to remove the holes. It probably would have been hard to brainstorm the idea of fixing the geometry algorithmically, but I bet it would not take too long to write up something. Even if the algorithm is very inefficient, you could let it run over the weekend. A throwaway app is probably a small price to pay for clean geometry.
And here I was thinking that only newbies like me shipped such horrendous code... :P
At university there was a team (not related to me, but these guys are the perfect example :P) that made an FPS Flash game...
For some bizarre reason, instead of checking whether you were colliding with a wall and disallowing movement into it, the programmer did the inverse: he checked if there was a wall and allowed you to move parallel to it...
This sparked a bizarre bug: at crossings you could not actually cross, only turn into the passage on your left or right.
The deadline was closing in, and they had no idea how to fix it...
Then the team writer fixed the issue! He told the artist to draw an animation of hands touching the walls, and then he wrote into the story that the protagonist was blind and needed to touch the walls to know where he was going.
And the pictures were more than just about framerate. It was a reminder that we were all a team and that we were all working very hard. It humanized the problem and really made everyone aware that we had to help each other out to ship the game.
Can someone explain what is so horrible about packing an integer into a void* pointer? A void pointer takes 4-8 bytes, while an integer takes no more than 4 bytes. As long as the program doesn't pass the value from a little-endian to a big-endian system, it should be all fine, aside from the semantic meaning?
The primary problem from my point of view is that something, somewhere, deep in your system in an unexamined code path, will actually try to dereference your not-NULL pointer. Something you didn't QA and only happens in the field, or after a certain date, or since IIRC we're talking about wireless controllers, only happens when the local Wifi is on channel 11, or... the perversity of the real world can not be underestimated.
In this case, issues of pointer size are irrelevant since I think the target was a specific console, but of course that's still "dirty".
It's not guaranteed by the language that pointers are as big as or bigger than integers. Some architecture somewhere might die horribly on this. comp.lang.c FAQ on this: http://c-faq.com/ptrs/int2ptr.html
You're right -- the general rule is that ints and ptrs shouldn't be interchanged.
However -- In this case, it's likely the number of integers required (controller ids, which I guess might be 1 to 4?) would fit in almost any ptr you can think of.
Conversions aren't guaranteed, but if you copy/pack it yourself there shouldn't be a problem -- e.g. do a memcpy of one byte.
More likely the author was pointing at the fact that this value was later 'free'd' by the event loop. Which is fairly nasty.
The general rule is that you shouldn't go free-ing memory you didn't allocate.
I had one colleague who would insist on trying to free a FILE* after he had closed the file.
Nice stories. I don't understand how exactly the solution for 'All Signs Point to "No-No"' works. He says he packs the controller id into the pointer slot, all right, but doesn't he also say that the framework will free() the pointer later?
Perhaps their allocator only allocated and free'd memory in 4-byte aligned blocks, and would mask out the lower bits to make pointers align to those blocks before freeing them?
If you had an allocator like this, you could use the addresses 0 through 3, and presumably the deallocator would mask these all down to zero, and then do a check to make sure it doesn't free 0, and end up doing nothing.
I was wondering about that as well. Maybe there's an invalid memory region on this platform which free() knows about and will silently ignore (for example the first N bytes of the address space), and the controller IDs happen to be in that range? Seems unlikely, but I can't think of anything else to explain it...
He mentioned a "multi heap" system. If he was using an allocator that knew which address ranges he'd allocated from, it's not unlikely it'd work. Alternatively if he used an allocator that'd do suitable sanity checks - free() will need to be able to find the length of the block, for example, and if the length doesn't make sense...
It turns out that the event system would take it upon itself to free() the event's void pointer after processing the event.
You could set the pointer's value to the controller's ID and then set it to null while you handle the event, before you pass it down the chain. Or the OS might be OK with (and do nothing about) an attempt to free memory that's in the middle of an allocated block.
If so, his original approach of putting the controller id in a region allocated for the purpose, and putting a pointer to that region in the event, would have worked just as well, and he wouldn't have needed to do what he did.
Identity crisis doesn't sound like a hack - more like, well, a solution (I can't see how changing the resource indexing was even considered as an approach :D)