It depends on the app. For a game engine (and I don't really mean Stockfish, I mean a game loop with input, network, world scripts, rendering etc.) you have a target frame rate which gives a fixed time budget for everything to happen, and you might divvy that up between different bits of code, where it makes sense to budget at a fine-grained level.
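As a rough sketch of what a per-frame budget can look like in code (the subsystem names and the 1/4/10 ms split are made up for illustration, not taken from any particular engine):

    #include <chrono>
    #include <cstdio>

    // Times one named section of the frame and reports if it blows its budget.
    static double time_section(const char* name, double budget_ms, void (*work)()) {
        using clock = std::chrono::steady_clock;
        auto start = clock::now();
        work();
        double ms = std::chrono::duration<double, std::milli>(clock::now() - start).count();
        if (ms > budget_ms)
            std::printf("%s over budget: %.2f ms (budget %.2f ms)\n", name, ms, budget_ms);
        return ms;
    }

    // Placeholder subsystem steps; a real engine would call into its own systems here.
    static void poll_input()   { /* ... */ }
    static void step_physics() { /* ... */ }
    static void render_frame() { /* ... */ }

    int main() {
        const double frame_budget_ms = 1000.0 / 60.0;  // 60 FPS target, ~16.6 ms per frame

        // One frame: the per-system split below is purely illustrative.
        double total = 0.0;
        total += time_section("input",   1.0,  poll_input);
        total += time_section("physics", 4.0,  step_physics);
        total += time_section("render",  10.0, render_frame);

        if (total > frame_budget_ms)
            std::printf("frame over budget: %.2f ms\n", total);
        return 0;
    }

The point is just that a fixed frame budget gives you something concrete to divide up and check per system; how finely you split it depends on the engine.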

For something with a core loop, and all the value is in the core loop, sure, you can perf test the core loop.

But many applications are big, with lots of entry points, and data being worked with can come in different shapes and sizes. Sometimes it can have different dimensions that stress different bits of code. And it may be difficult to plan the breakdown of a global budget in this context.

Part of the catch-22 of optimization is that you need to profile before you optimize, otherwise you're likely to optimize the wrong thing. If you put your benchmarks at the unit level, you run the risk of having individual bits that are fast, but the whole doesn't run as fast as you'd hoped.




But I stand by my claim. Game engines often ship with some kind of benchmark suite. For example, the "Total War" games often have a benchmark where the camera moves across a pre-set map and a battle takes place.

Now, true, there can still be UI issues which need to be optimized, but my point is that if you want to test the speed of your rendering engine, animation framework, unit-pathing, or whatnot, it's best to build a benchmark where you can repeatedly execute and document the critical code path.

A single benchmark that can be run almost entirely automatically (without user intervention), together with a record of how its results change over time, is key to improving the code.
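As a rough illustration of that kind of harness (the workload, run count, and file name are all hypothetical placeholders, not from any real engine):

    #include <chrono>
    #include <cstdio>

    // Hypothetical critical path under test; stands in for e.g. a pathfinding
    // or animation update pass that the real engine would expose.
    static void run_critical_path() {
        volatile long sink = 0;
        for (long i = 0; i < 5000000; ++i) sink += i;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        const int runs = 10;

        // Run the same scripted workload several times and keep the best time,
        // so the number is stable enough to compare across commits.
        double best_ms = 1e9;
        for (int i = 0; i < runs; ++i) {
            auto start = clock::now();
            run_critical_path();
            double ms = std::chrono::duration<double, std::milli>(clock::now() - start).count();
            if (ms < best_ms) best_ms = ms;
        }

        // Append to a results file so progress on the benchmark is documented
        // run after run (the file name is just an example).
        if (std::FILE* f = std::fopen("benchmark_history.csv", "a")) {
            std::fprintf(f, "critical_path,%.3f\n", best_ms);
            std::fclose(f);
        }
        std::printf("critical_path: %.3f ms (best of %d)\n", best_ms, runs);
        return 0;
    }

Keeping the best-of-N time and appending it to a history file is one simple way to make the number reproducible enough to track commit over commit.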



