Even for stuff that's fairly testable, like navigation, collision, etc., you have the issue that there are a million edge/degenerate cases, and it's very hard to know the game code or content never hits one.
So the best you can usually do is test library code.
Not that this has anything to do with AK, which sounds like plain old sloppiness.
I'm not a game developer, but I am a realtime system developer. We could hem and haw about all the different hypothetical ways all the implementation details can suck, but locking and/or hardcoding for certain framerates is a perfectly fine decision. Doubly fine when those frames aren't graphical.
Easy but contrived example: if I hard-code a purpose-specific audio pathway to operate only at 44100 Hz, that can be a perfectly acceptable design decision given the purpose. How deeply the code assumes that rate can be indicative of code quality, but if the assumption is hard to excise from performance-critical areas, well, that happens.
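To make that concrete, here's a minimal sketch of what a deliberately rate-locked path can look like (Java; all names are hypothetical, not from any real codebase):

    // A purpose-specific audio path that is hard-coded, by design, to one rate.
    public final class VoiceChatPath {
        // Buffer sizes, filter coefficients, and timing math all assume this.
        public static final int SAMPLE_RATE_HZ = 44_100;
        private static final int FRAME_SAMPLES = SAMPLE_RATE_HZ / 100; // 10 ms frames

        private final float[] frame = new float[FRAME_SAMPLES];

        // Processes exactly one 10 ms frame; anything else is rejected up front.
        public void process(float[] samples) {
            if (samples.length != FRAME_SAMPLES) {
                throw new IllegalArgumentException(
                    "path assumes " + SAMPLE_RATE_HZ + " Hz, 10 ms frames");
            }
            // ... filtering/mixing tuned for exactly this rate would go here ...
            System.arraycopy(samples, 0, frame, 0, FRAME_SAMPLES);
        }
    }

The assumption is explicit and enforced at the boundary; whether it also leaks into every inner loop is the code-quality question.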
Here is the part where I have to qualify I'm not saying more than I've said. I'm not defending the use of 30fps cap here, or saying that this game's code is any good overall. The gaming customer has higher realtime expectations than 30fps, especially on PC. And although I haven't played this game, the market is showing it is a bad product beyond that.
I would wager that if your fix is to artificially cap performance of your software, then something in your code base sucks majorly. Call me crazy.
In a realtime system, consistent performance is a more important goal than maximum performance. For games that want high-end graphics, the goals are to have maximum performance on data of highly variable complexity, push the limits of what can be done, but rarely (if ever) whiff on the real-time deadline. I won't disagree that most game codebases have pockets of major suck, or that a locked rate can be a bandaid for sucking, but it is not indicative of such.
(30Hz is somewhat common as rendering these days tends to involve a lot of fixed-cost full-screen passes for lighting and postprocessing. So by halving your frame rate you get over double the effective rendering time, which you can spend on having more stuff, higher res, etc.)
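(To put illustrative numbers on that: at 60 Hz the frame budget is ~16.7 ms; if fixed-cost passes eat, say, 6 ms, you have ~10.7 ms left for scene-dependent work. At 30 Hz the budget is ~33.3 ms, leaving ~27.3 ms - over 2.5x the scene budget, even though the frame time itself only doubled. The 6 ms figure is made up, but the arithmetic is the point.)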
Even for 30Hz games, during development it's very common for the game not to run reliably at even 30Hz, until towards the end, at which point stuff is taken out, and assets and code optimised to make it all work. So it's not a cap that artificially limits performance so much as a floor that the developers are striving to hit :)
(I have no idea what the problem with Batman is specifically.)
I can't say with any confidence what people tend to do for PC - I don't play PC games, and I last worked on a PC game in 2006 - but one tactic I've heard is used is to interpolate object positions and properties when it comes to rendering. So you run the logic at a fixed rate (to avoid the problems I alluded to in my previous post), and save both the current object states and the previous object states. Then, when it comes time to render, you need to render the game state as it was at some point between the most recent game state you have and the one before it - so you can treat your latest state and the previous state as keyframes, and just interpolate between them.
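As a sketch (Java, illustrative names - not any particular engine's API), the pattern looks something like this:

    // Fixed-rate logic with interpolated rendering ("fix your timestep").
    public final class GameLoop {
        private static final double DT = 1.0 / 60.0; // fixed logic step, seconds

        private double prevX, currX; // one interpolated property, for brevity
        private double accumulator;
        private long lastNanos = System.nanoTime();

        public void frame() {
            long now = System.nanoTime();
            accumulator += (now - lastNanos) / 1e9;
            lastNanos = now;

            // Run the logic at a fixed rate, however fast we happen to render.
            while (accumulator >= DT) {
                prevX = currX;       // previous state is one "keyframe"
                currX = step(currX); // latest state is the other
                accumulator -= DT;
            }

            // Render somewhere between the last two states.
            double alpha = accumulator / DT; // 0..1
            render(prevX + (currX - prevX) * alpha);
        }

        private double step(double x) { return x + 1.0 * DT; } // placeholder logic
        private void render(double x) { /* draw at interpolated position x */ }
    }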
I imagine this can be a bit tricky to retrofit, though, so no doubt many ports from console to PC just decide to use the same frame rate as used on the console. But I'm guessing a bit here. Unreal and the like might just do it all for you these days anyway. Maybe the Batman programmers just forgot to tick the right checkbox ;)
Now, for a fast-twitch action game, processing input at 30 FPS is unacceptable; input lag sucks giant donkey balls. But I would expect that a AAA development studio knows that running input-handling, networking, physics, AI, etc in lock-step with the rendering loop is not the smartest idea. These things should run on their own schedules, independent of the graphical framerate.
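As a sketch of what "own schedules" can mean even in a single-threaded loop (Java; illustrative rates and names, and real engines often use separate threads instead):

    // Subsystems stepped at their own fixed rates, independent of render rate.
    public final class Scheduler {
        private static final double INPUT_DT = 1.0 / 120.0;  // poll input at 120 Hz
        private static final double PHYSICS_DT = 1.0 / 60.0; // step physics at 60 Hz

        private double inputAcc, physicsAcc;
        private long lastNanos = System.nanoTime();

        public void frame() { // called once per rendered frame, whatever the rate
            long now = System.nanoTime();
            double elapsed = (now - lastNanos) / 1e9;
            lastNanos = now;

            inputAcc += elapsed;
            while (inputAcc >= INPUT_DT) { pollInput(); inputAcc -= INPUT_DT; }

            physicsAcc += elapsed;
            while (physicsAcc >= PHYSICS_DT) { stepPhysics(PHYSICS_DT); physicsAcc -= PHYSICS_DT; }

            render(); // the graphical framerate is whatever it happens to be
        }

        private void pollInput() { /* read devices, queue events */ }
        private void stepPhysics(double dt) { /* advance the simulation */ }
        private void render() { /* draw the current state */ }
    }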
This isn't a movie, it is active, not passive. "Better than traditional films" is completely irrelevant.
Take a game where you can cap the framerate, and bind a key to switch between capped at 30 and capped at 60. If you're playing the game you'll immediately feel the transition, and 30fps will feel relatively unplayable.
Yes, of course a frame lock sucks, but it's not the end of the world and many people wouldn't even notice if not for everyone complaining about it.
Yes, at that time (10-14 years ago), 30 fps was considered playable. But that is just "playable", and even then it wasn't regarded as a great framerate for most of that period.
For the last decade, anything less than that has been seen as bad. In fact, a few years after Half-Life's release (when it was still popular but hardware had matured), 100 fps was the goal, to match the 100 Hz that most monitors back then could do. 60 fps only entered the discussion with the switch to TFT/flat panels and their lower refresh rates.
Half-Life also exhibited strange behavior if the 100 fps frame cap was lifted by turning on developer mode. If I recall correctly (forgive me, as this was almost a decade ago), weapon mechanics or movement speed changed.
The point of a unit test is to test the interface in isolation, not the implementation - e.g. when testing a hash map, do you really assert that the elements come out in a specific order, or do you test for their presence?
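For example (Java, run with assertions enabled via java -ea; the names are mine):

    import java.util.HashMap;
    import java.util.Map;

    public class HashMapContractExample {
        public static void main(String[] args) {
            Map<String, Integer> scores = new HashMap<>();
            scores.put("alice", 3);
            scores.put("bob", 5);

            // Good: assert presence and values - properties the interface promises.
            assert scores.containsKey("alice");
            assert scores.get("bob") == 5;

            // Bad: asserting iteration order would test the implementation.
            // HashMap makes no ordering guarantee, so this would be brittle
            // even if it happened to pass today:
            // assert scores.keySet().iterator().next().equals("alice");
        }
    }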
> If you have an unit test on private members, then you can't change the implementation safely without breaking tests.
This makes absolutely no sense.
You change the private implementation, you change the unit test. It's that simple.
Then you keep your API the same so that external users don't break.
Funnily enough, it seems to come from the same people who like to whine about "missing tests and lack of coverage". They seem to like nitpicking and idolizing tests instead of shipping.
But this is what I seem to be getting out of it: you are given a black box, with inputs and outputs. There is also a spec (it could be in your head for all I know) that defines that for certain inputs, certain outputs are expected. This spec also tries to cover quite a lot of distinct cases. Each such representative case of input and output is a unit test. (If your spec was really in your head, your unit tests kind of become it - or at least I like to think about it that way: a Unit Spec :)).
The tricky part is when this black box internally works with other black boxes. Unit testing is all about testing the black box in isolation from the others, so you need to isolate them away. Currently I'm using DI (Dependency Injection) with guice/gin/dagger to achieve that.
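For example, a sketch of constructor injection wired with Guice (the types here are made up, and it assumes Guice on the classpath):

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;

    interface Clock { long now(); }

    class SystemClock implements Clock {
        public long now() { return System.currentTimeMillis(); }
    }

    class SessionTimer {
        private final Clock clock; // injected, so tests can stub it out
        @Inject SessionTimer(Clock clock) { this.clock = clock; }
        boolean expired(long startedAtMs, long ttlMs) {
            return clock.now() - startedAtMs > ttlMs;
        }
    }

    public class DiExample {
        public static void main(String[] args) {
            // Production wiring: the real clock.
            Injector injector = Guice.createInjector(new AbstractModule() {
                @Override protected void configure() {
                    bind(Clock.class).to(SystemClock.class);
                }
            });
            SessionTimer timer = injector.getInstance(SessionTimer.class);

            // In a unit test you skip the injector and pass a stub directly:
            SessionTimer testTimer = new SessionTimer(() -> 10_000L);
            assert testTimer.expired(0L, 5_000L);    // 10 s elapsed > 5 s TTL
            assert !testTimer.expired(8_000L, 5_000L);
        }
    }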
Thanks for all the comments - it seems I have some gaps to fill in what I know.
The metatesting reason for this confusion is that in the example you used, the scope of unit testing is the same as the scope of integration testing. If a class has 0 dependencies, a unit test is also an integration test, because it tests all the dependencies of the class, of which there are none.
So, going back to your original point, your claim is true for full integration tests, i.e. tests that do not stub any dependency. If you don't stub your network connection to the memcached server, the same test - albeit with a different setup - can be used for a local hash table implementation.
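For example, one contract test reused across both setups (Java; MemcachedCache is hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    interface Cache {
        void put(String key, String value);
        String get(String key); // null if absent
    }

    class LocalCache implements Cache {
        private final Map<String, String> map = new HashMap<>();
        public void put(String key, String value) { map.put(key, value); }
        public String get(String key) { return map.get(key); }
    }

    public class CacheContractTest {
        static void checkContract(Cache cache) {
            cache.put("k", "v");
            assert "v".equals(cache.get("k"));
            assert cache.get("missing") == null;
        }

        public static void main(String[] args) {
            checkContract(new LocalCache());
            // checkContract(new MemcachedCache("localhost:11211"));
            // Same assertions, different setup - and now it's an integration
            // test, because the network path is no longer stubbed.
        }
    }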
In short, you are testing the interface of that private method, not the implementation.
It's also a matter of practicality - I simply don't have time to write tests for each and every method.
That isn't the purpose of all tests.
A complex public API should consist of smaller private parts. When you change those smaller parts of code, you would like to know if you break something and specifically what you broke. Testing of a small, isolated chunk of code is the 'unit' in the term 'unit tests'.
Unit tests on actual units of code allow you to more quickly isolate failures.
If the public test fails, how do you know which specific part of the implementation behind it was responsible for the failure?
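A toy example of the idea (Java; the parser and its pieces are made up):

    // One public entry point built from small, separately testable units.
    public class ConfigParser {
        static String stripComments(String line) {
            int i = line.indexOf('#');
            return (i >= 0 ? line.substring(0, i) : line).trim();
        }

        static String[] splitKeyValue(String line) {
            int eq = line.indexOf('=');
            if (eq < 0) throw new IllegalArgumentException("no '=' in: " + line);
            return new String[] { line.substring(0, eq).trim(),
                                  line.substring(eq + 1).trim() };
        }

        public static void main(String[] args) {
            assert stripComments("a=1 # note").equals("a=1");
            assert splitKeyValue("a=1")[1].equals("1");
            // If a test of the full public parse failed while both of the
            // above passed, the bug is in how the pieces are composed,
            // not in either piece - that's the localization you get.
        }
    }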
- Private methods are just an implementation detail with respect to the public methods. (You really care about whether the public methods reply correctly.)
- Public methods are just an impdet (implementation detail) with respect to the module API. (You really care about whether the module's public surface replies correctly.)
- The module's API is JAID (just an implementation detail) with respect to the user/game API. (You really just care about whether the gamer gets the right responses to controller inputs.)
- The user/game API is JAID wrt the code/gamer interface. (You really just care about whether the gamer likes what you've written.)
- The code/gamer interface is JAID wrt the company/customer interface. (You really just care whether people still give you money.)
- The company/customer interface is JAID wrt the investor/company interface. (You really just care whether the venture throws off more money than you put in.)
Nevertheless, you'd still like to catch the failures as soon and as narrowly as possible!
Beyond that I think the key is to test the things you don't expect to change. The user always needs to be able to log in. But I expect developers will rearrange the private method boundaries.
To counter that, I'd say you should have a test case to cover that the leap year handling is working as expected. If you aren't testing that because it wasn't in the spec, then why would you have the code at all?
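Something like this (Java; java.time's Year.isLeap shown, but the point is that the edge cases live in a test):

    import java.time.Year;

    public class LeapYearTest {
        public static void main(String[] args) {
            assert Year.isLeap(2016);  // divisible by 4: leap year
            assert !Year.isLeap(1900); // divisible by 100: not a leap year
            assert Year.isLeap(2000);  // divisible by 400: leap year again
            assert !Year.isLeap(2015); // ordinary year
            System.out.println("leap-year cases pass");
        }
    }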