
> but that's no excuse for just pissing away cycles doing pointless stuff.

Yes there is - maintainable code, and programmer time and effort. What are you worried about, a large power bill because your CPU is doing what it's supposed to be doing?

On an unrelated note, this kind of attitude is the first thing I test for in a programmer interview, and it's my strongest criterion for eliminating candidates. I made the mistake of letting one through once - his first task was a relatively straightforward modification to a large C program. A week later I was a bit worried he hadn't reported back that he was done, so I went to check on him, and it turned out he was busy changing every line of code to adjust formatting and variable names, not to mention making these kinds of pointless micro-optimizations. And he hadn't checked in even once - he was saving the checkin itself for another multi-day effort. Sigh. I tried the "premature optimization is the root of all evil" line on him to get my point across (and to see if he'd heard of it), and when I saw his eyes flare up in anger I knew he had to go. Sad, really, because he was otherwise quite bright.

Now I basically put C++/game programmer applications in a "special pile" to be considered as a last resort. I just don't need this kind of arrogance and cowboy mentality wrecking the place. It's like sending a bull into a china shop.

If performance is a requirement, it's a requirement, and you need to bear it in mind. And working in games, it usually is. Virtually every project has problems with performance, and dealing with the issues properly at the end of a project can be very hard. By that point, the code is usually insufficiently malleable to be safely transformed in the necessary fashion, and there's a very good chance that you'll introduce new bugs anyway (or cause problems by fixing existing ones).

So, armed with a few simple rules of thumb about what is particularly expensive (let's say: memory accesses, branching, indirect jumps, square root/trig/pow/etc., integer division), and a bit of experience about which parts tend to cause problems that can be rather intrusive to fix (and object culling is one of these), one might reasonably put in a bit of forethought and try to structure things in a way that means they're relatively efficient from the start. Better that than just producing something that's likely to be a bottleneck, but written in a way that means it is never going to be efficient, whatever you do to it, without a ground-up rewrite. And that's the sort of approach the slide deck appears to be advocating.
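To make that concrete, here's a minimal sketch (my illustration, not the slide deck's code; all names are made up) of what "relatively efficient from the start" can mean for culling: keep the bounding volumes in one flat array so the visibility pass is a linear sweep over contiguous memory, with no virtual dispatch or pointer chasing per object, and with the per-plane test folded into an AND rather than an early-out branch:

    // Hypothetical sketch: a structure-of-arrays culling pass in C++.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Sphere { float x, y, z, radius; };
    struct Plane  { float nx, ny, nz, d; };  // unit normal + distance

    // Marks a sphere visible unless it lies fully behind some plane.
    void cullSpheres(const std::vector<Sphere>& spheres,
                     const Plane planes[6],
                     std::vector<std::uint8_t>& visible)
    {
        visible.assign(spheres.size(), 0);
        for (std::size_t i = 0; i < spheres.size(); ++i) {
            const Sphere& s = spheres[i];
            std::uint8_t in = 1;
            for (int p = 0; p < 6; ++p) {
                float dist = planes[p].nx * s.x + planes[p].ny * s.y
                           + planes[p].nz * s.z + planes[p].d;
                // Fold the test into an AND instead of an
                // early-out branch the predictor can miss.
                in &= static_cast<std::uint8_t>(dist >= -s.radius);
            }
            visible[i] = in;
        }
    }

Nothing clever is going on there; it's just choosing a data layout the hardware likes before the code around it calcifies into something you can't safely restructure.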

Seems uncontroversial, to my mind. Some (and I'd agree) might even just call this planning ahead. Seems that when you plan ahead by drawing up diagrams of classes and objects, because you've been burned by people just diving in and coming up with a big ball of spaghetti, that's good planning ahead. But when you plan ahead by trying to ensure that the code executes efficiently, because you've been burned by people coming up with slothful code that wastes half its execution time and requires redoing because of that, that's premature optimisation, and a massive waste of time.

As with any time you make plans for the future, sometimes you get it wrong. Ars longa vita brevis, and all that.

> What are you worried about, a large power bill because your CPU is doing what it's supposed to be doing?

Video games are real-time. There's a hard limit on the number of cycles you can make use of per frame.
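To put rough numbers on that (assuming, purely for scale, a single 3 GHz core and a 60 fps target):

    3,000,000,000 cycles/s / 60 frames/s = 50,000,000 cycles/frame

and rendering, physics, animation, AI and audio all carve their slices out of that same fixed budget.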

11 cycles spent mispredicting a branch is 11 cycles you can't use for better-looking IK or smarter pathfinding.
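As an illustration (mine, not the parent's), the usual fix for an unpredictable branch in a hot loop is to turn control flow into data flow, e.g. writing a clamp so the compiler can use conditional moves instead of a jump:

    // Hypothetical example. With unpredictable input, the branchy
    // form can eat a misprediction on many iterations; the ternary
    // form typically compiles to conditional moves (nothing to
    // mispredict). Modern compilers often emit branch-free code for
    // both, so check the generated assembly and measure.
    int clampBranchy(int v, int lo, int hi) {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int clampBranchless(int v, int lo, int hi) {
        int t = (v < lo) ? lo : v;  // usually a cmov
        return (t > hi) ? hi : t;   // usually a cmov
    }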
