Steam could be a major force in turning around the recent slew of absolutely craptastic, completely broken game releases.

QA/QC should not happen at the customer level.




"We don't need unit tests because we have integration tests. We don't need integration tests because we have QA acceptance tests. We don't need QA acceptance tests because end users will report any bugs to us. We don't listen to user feedback because declining sales will tip us off to the problems. We don't pay attention to sales numbers because creditors seizing our assets will alert us of the shortcoming."


I know you're joking, but it's pretty much infeasible to test the kinds of bugs typical in games using automated testing. So much is content driven, and the state space is far too large.

Even for stuff that's fairly testable, like navigation, collision, etc., you have the issue of there being a million edge/degenerate cases, and it's very hard to know that the game code or content never hits one of them.

So the best you can usually do is test library code.

Not that this has anything to do with AK, which sounds like plain old sloppiness.


Ok, but in this case they chose to put in a 30FPS cap. Ignoring that performance is well below even that, they should have known it was unacceptable.


Why is that unacceptable? From the reports I've read, re-enabling the 30fps cap removes many of the performance problems. Some of the game logic may be hard-coded to expect 33ms frame times, especially if the console versions are capped at 30 and the PC version is a port of them. They made a technical decision to cap it at 30; there's nothing wrong with that. If you don't agree with it, then don't buy it.


I'm a developer, but not a game developer. Making the decision to hardcode game logic to only accept 33ms frame time sounds like a pretty dumb choice. I would wager that if your fix is to artificially cap performance of your software, then something in your code base sucks majorly. Call me crazy.


> I'm a developer, but not a game developer. Making the decision to hardcode game logic to only accept 33ms frame time sounds like a pretty dumb choice.

I'm not a game developer, but I am a realtime system developer. We could hem and haw about all the different hypothetical ways the implementation details can suck, but locking and/or hardcoding for certain framerates is a perfectly fine decision. Doubly fine when those frames aren't graphical.

Easy but contrived example: If I hard-code a purpose-specific audio pathway to only operate at 44100 Hz, that can be a perfectly acceptable design decision given the purpose. How deeply the code assumes that rate can be indicative of code quality, but if the assumption is hard to excise from the code in performance-critical areas, well, that happens.
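For instance, a minimal sketch of what baking in such an assumption might look like (the names and block size are invented for illustration):

    #include <cstddef>
    #include <vector>

    // Purpose-specific audio path that assumes a fixed 44100 Hz sample rate.
    constexpr int kSampleRate = 44100;

    void processBlock(std::vector<float>& samples)
    {
        // One block is exactly 10 ms of audio at the assumed rate; filter
        // coefficients, delay lengths and latency math can all bake this in.
        const std::size_t blockSize = kSampleRate / 100;
        for (std::size_t i = 0; i + blockSize <= samples.size(); i += blockSize) {
            // ... per-block DSP that relies on the rate never changing ...
        }
    }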

Here is the part where I have to qualify I'm not saying more than I've said. I'm not defending the use of 30fps cap here, or saying that this game's code is any good overall. The gaming customer has higher realtime expectations than 30fps, especially on PC. And although I haven't played this game, the market is showing it is a bad product beyond that.

> I would wager that if your fix is to artificially cap performance of your software, then something in your code base sucks majorly. Call me crazy.

In a realtime system, consistent performance is a more important goal than maximum performance. For games that want high-end graphics, the goals are to get maximum performance on data of highly variable complexity, push the limits of what can be done, and rarely (if ever) whiff on the realtime deadline. I won't disagree that most game codebases have pockets of major suck, or that a locked rate can be a bandaid for sucking, but it is not in itself indicative of such.


Very interesting, you are correct. Thanks for this reply. My experience has always been in the world of 'performance is king', so it's easy for me to lose sight of the idea that systems exist in which such a limit is beneficial.


I used to be a game developer and every game I've worked on, pretty much, has had a fixed frame rate, both for rendering and for game updates. (The two rates don't have to be the same.) A fixed rendering rate tends to make the game better to play (though of course this is a bit subjective), and a fixed game update rate avoids nasty timing-dependent bugs (e.g., due to parameters that work fine until you have overly long or short timesteps). Both have to cater for the commonly-encountered worst cases rather than the best ones.
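To make the fixed update rate concrete, here's a minimal sketch of that kind of loop (the hook functions are hypothetical stand-ins, not any particular engine's API): game logic always receives the same timestep, so the overly long or short cases simply never occur.

    // Hypothetical stand-ins for the engine's real hooks.
    void waitForVSync()            { /* block until the display is ready for a new frame */ }
    void updateGame(double /*dt*/) { /* advance physics, AI, animation by one fixed step */ }
    void renderFrame()             { /* draw the current game state */ }

    int main()
    {
        const double dt = 1.0 / 30.0;   // every update sees the same 33ms timestep
        for (;;) {
            waitForVSync();             // paces the loop at the chosen fixed rate
            updateGame(dt);             // game logic never sees a variable timestep
            renderFrame();              // one render per update in this simple layout
        }
    }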

(30Hz is somewhat common as rendering these days tends to involve a lot of fixed-cost full-screen passes for lighting and postprocessing. So by halving your frame rate you get over double the effective rendering time, which you can spend on having more stuff, higher res, etc.)

Even for 30Hz games, during development it's very common for the game not to run reliably at even 30Hz, until towards the end, at which point stuff is taken out, and assets and code optimised to make it all work. So it's not a cap that artificially limits performance so much as a floor that the developers are striving to hit :)

(I have no idea what the problem with Batman is specifically.)


Yeah, I'm thinking now that I was totally out of my element and a little foolish with my original comment. Were you developing console games or PC games? I'm guessing console as my impression is that fixed frame rates are much more common in the console world.


I worked nearly entirely on console games. Generally the frame rate would be decided early on (either after prototyping, or because you've been given a pile of old code to start from and that runs at a given rate), and would stay fixed. The games I worked on were split near enough 50/50 between 30Hz and 60Hz, with one game update per render, and usually some provision for handling one-off frame drops due to rendering taking too long. (That's not supposed to happen, but it can, particularly if you're trying to hit 60Hz.)

I can't say with any confidence what people tend to do for PC - I don't play PC games, and I last worked on a PC game in 2006 - but supposedly one tactic I've heard is used is to interpolate object positions and properties when it comes to rendering. So you run the logic at a fixed rate (to avoid the problems I alluded to in my previous post), and save both current object states and previous object states. Then, when it comes to render, you'll be needing to render the game state as it was at some point between the most recent game state you have and the one before that - so you can imagine your latest state as a keyframe, and the previous state as a keyframe, and just interpolate between them.
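A rough sketch of that interpolation scheme, assuming a fixed 30Hz logic rate (the State fields and the hook functions are made up for illustration):

    #include <chrono>

    struct State { float x = 0, y = 0; };   // whatever the renderer needs per object

    // Hypothetical hooks standing in for real game code.
    State stepGame(const State& s, double /*dt*/) { return s; }   // fixed-rate logic
    void  draw(const State& /*s*/)                {}              // render a blended state

    int main()
    {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 30.0;        // game logic runs at a fixed 30Hz
        double accumulator = 0.0;
        State previousState, currentState;
        auto previousTime = clock::now();

        for (;;) {
            const auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previousTime).count();
            previousTime = now;

            // Advance the simulation in fixed steps, keeping the last two states.
            while (accumulator >= dt) {
                previousState = currentState;
                currentState  = stepGame(currentState, dt);
                accumulator  -= dt;
            }

            // Render a state interpolated between the two most recent updates,
            // so drawing can run at any rate without touching the game logic.
            const float alpha = static_cast<float>(accumulator / dt);
            State blended;
            blended.x = previousState.x + (currentState.x - previousState.x) * alpha;
            blended.y = previousState.y + (currentState.y - previousState.y) * alpha;
            draw(blended);
        }
    }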

I imagine this can be a bit tricky to retrofit, though, so no doubt many ports from console to PC just decide to use the same frame rate as used on the console. But I'm guessing a bit here. Unreal and the like might just do it all for you these days anyway. Maybe the Batman programmers just forgot to tick the right checkbox ;)


Well, it's not like running at anything above 60 FPS makes much sense when the refresh rate of the vast majority of displays is capped at 60 Hz. You can slam geometry through the graphics card above that rate, but some of those frames won't make it to the screen. 30 FPS is not great, but it can be better than having wildly variable framerates. Consistent 30 FPS is better than traditional films, which mostly ran at 24 FPS.

Now, for a fast-twitch action game, processing input at 30 FPS is unacceptable; input lag sucks giant donkey balls. But I would expect that a AAA development studio knows that running input-handling, networking, physics, AI, etc in lock-step with the rendering loop is not the smartest idea. These things should run on their own schedules, independent of the graphical framerate.


30fps is horrid and basically unplayable for a first person PC game.

This isn't a movie, it is active, not passive. "Better than traditional films" is completely irrelevant.

Take a game where you can cap the framerate and bind a key to switch between capped at 30 and capped at 60; if you're playing the game you'll immediately feel the transition, and 30fps will feel relatively unplayable.


This isn't a first-person PC game, and until quite recently 30fps was considered a playable framerate. It's only in the last couple of years that "60 has become the new 30".

Yes, of course a frame lock sucks, but it's not the end of the world and many people wouldn't even notice if not for everyone complaining about it.


That "last couple of years" claim is complete nonsense; I remember wanting ~40fps for Half-Life over a decade ago.

Yes, at that time (10-14 years ago), 30 fps was considered playable. But that is just "playable", and even then it wasn't recognised as a great framerate for most of that period.

For the last decade, less than that has been seen as bad. In fact, a few years after Half-Life's release (when it was still popular but hardware had matured), 100fps was the goal, to match the 100Hz that most monitors back then could do. 60fps only even entered the discussion with the switch to TFT/flat panels and their lower refresh rates.


> In fact, a few years after Half-Life's release (when it was still popular but hardware had matured), 100fps was the goal.

Half-Life also exhibited strange behavior if the 100 fps frame cap was lifted by turning on developer mode. If I recall correctly (forgive me, as this was almost a decade ago), weapon mechanics or movement speed changed.


There is so much wrong with this post.


When the office staff go home for the day, their rubbish hodgepodge of PC varieties and specs should spend their evenings playing the game with a game bot.


Even if automated testing is impossible, that doesn't account for QA and other manual testing. Obviously the bugs aren't too hard to find for most of the people playing it.


I don't know if you're quoting or referencing something, but that's fantastic.


Thanks! Just something I dreamed up one day when someone said "you shouldn't have unit tests for private methods because unit tests of public methods will catch them" and I extrapolated from there.


If you have a unit test on private members, then you can't change the implementation without breaking tests.

The point of the unit test is to test the interface in isolation, not the implementation - e.g. when testing a hash map, do you really expect the elements to be in a specific order, or do you just test for their presence?
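A tiny illustration of that point with plain asserts (standing in for whatever test framework you use):

    #include <cassert>
    #include <string>
    #include <unordered_map>

    int main()
    {
        std::unordered_map<std::string, int> ages;
        ages["alice"] = 30;
        ages["bob"]   = 25;

        // Test the interface: the elements are present with the right values.
        assert(ages.size() == 2);
        assert(ages.at("alice") == 30);
        assert(ages.at("bob") == 25);

        // Deliberately NOT tested: iteration order, bucket count, load factor -
        // those are implementation details that are free to change.
        return 0;
    }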


Really?

> If you have a unit test on private members, then you can't change the implementation without breaking tests.

This makes absolutely no sense.

You change the private implementation, you change the unit test. It's that simple.

Then you keep your API the same so that external users don't break.

Funnily enough, this attitude seems to come from the same people who like to whine about "missing tests and lack of coverage". They seem to like to nitpick and idolize tests instead of shipping.


Depends. If we take the example of a library, then there are some tests which should never break (except when the major version increases). But it is entirely sensible to check whether you broke something internally during development.


I've been a game developer for 15+ years (mainly C++) and never used unit tests - just good old plain asserts, sometimes ad-hoc code that creates/simulates errors or slowdowns, QA people testing your game/tools, and some form of automated tests (run this level, expect this to happen). Then I changed jobs, started writing in Java (+ GoogleWebKit and Javascript), and was exposed to unit, integration and end-to-end testing. Do I know it properly? Hell no. I'm still confused.

But this is what I seem to be getting out of it: you are given a black box, with inputs and outputs. There is also a spec (it could be in your head for all I know) that defines that for certain inputs, certain outputs are expected. This spec also tries to cover quite a lot of distinct cases. Each such representative case of input and output is a unit test. (If your spec was really in your head, your unit tests kind of become it - or I like to think about it this way: a Unit Spec :)).

The tricky part is when this black box internally works with other black boxes. Unit testing is all about testing the black box in isolation from the others, so you need to isolate them away. Currently what I'm using to achieve that is DI (Dependency Injection) with guice/gin/dagger.
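In C++ terms (rather than guice/gin/dagger), the same idea boils down to constructor injection through an interface so a test can substitute a fake; a minimal, hypothetical sketch:

    #include <cassert>
    #include <string>

    // The other black box, hidden behind an interface.
    struct Storage {
        virtual ~Storage() = default;
        virtual std::string load(const std::string& key) = 0;
    };

    // The black box under test only knows the interface, not a concrete backend.
    class Greeter {
    public:
        explicit Greeter(Storage& storage) : storage_(storage) {}
        std::string greet(const std::string& userId) {
            return "Hello, " + storage_.load(userId) + "!";
        }
    private:
        Storage& storage_;
    };

    // A hand-rolled fake used only by the unit test.
    struct FakeStorage : Storage {
        std::string load(const std::string&) override { return "Alice"; }
    };

    int main()
    {
        FakeStorage fake;
        Greeter greeter(fake);        // the dependency is injected here
        assert(greeter.greet("any-id") == "Hello, Alice!");
        return 0;
    }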

Thanks for all the comments - it seems I have to fill in the gaps in what I know.


This is true for totally isolated unit tests but false for tests that require stubbing. You can theoretically use the same tests for a map implemented with one version of a hash table or the other. But once you start testing classes with dependencies, your tests inevitably touch implementation details. For example, unit testing a hash map that is backed by a memcached server would require some form of stubbing.

The metatesting reason for this confusion is that in the example you used, the scope of unit testing is the same as the scope of integration testing. If a class has 0 dependencies, a unit test is also an integration test, because it tests all the dependencies of the class, of which there are none.

So going back to your original point, your claim is true for full integration tests, i.e. tests that do not stub any dependency. If you do not stub your network connection to the memcached server, the same test - albeit with a different setup - can be used for a local hash table implementation.


The method should still do the same thing, regardless of whether it is private or public. If you purposefully create a new method and get rid of the old one - whether that's done by creating and deleting or by modifying - then the unit test should be changed as well.

In short, you are testing the interface of that private method, not the implementation.


By that logic you could test a single line and say "I'm testing the interface of that line, not the implementation".


You could if your line had a syntactical construct that presented an interface, like say, a name and a parameter list.


You really shouldn't. The public methods are the interface to the surrounding code, and that is what you want to make sure works. How you implement it, with private methods or 3rd-party libs, is up to you. If a bug in a private method makes it past your unit tests of the public methods, there was an edge case you didn't test for.

It's also a matter of practicality - I simply don't have time to write tests for each and every method.


You have some tests to ensure the behaviour of your public methods doesn't change.

That isn't the purpose of all tests.

A complex public API should consist of smaller private parts. When you change those smaller parts of the code, you would like to know whether you broke something, and specifically what you broke. Testing a small, isolated chunk of code is the 'unit' in the term 'unit tests'.

Unit tests on actual units of code allow you to more quickly isolate failures.


I test all public methods. Anything private is part of the implementation of those methods, and rarely would I want to test that. If I make a breaking change in a private method and the public unit test does not catch it, it's an edge case I didn't test for, or the public method is too broad.


The public implementation consists of smaller private parts.

If the public test fails, how do you know what specific part of the public implementation was responsible for the change?


I don't, and that is not why I'm testing either. I'm testing so that the contract (i.e. the public methods) other developers rely on does not change. That said, I can see in the build history which check-in caused the test to fail, so I would still know where to look.


Well yeah that's the view being taken to its logical conclusion here:

- Private methods are just an implementation detail with respect to the public methods. (You really care about whether the public methods reply correctly.)

- Public methods are just an impdet with respect to the module API. (You really care about whether the module's public surface replies correctly.)

- The module's API is JAID with respect to the user/game API. (You really just care about whether the gamer gets the right responses to controller inputs.)

- The user/game API is JAID wrt the code/gamer interface. (You really just care about whether the gamer likes what you've written.)

- The code/gamer interface is JAID wrt the company/customer interface. (You really just care whether people still give you money.)

- The company/customer interface is JAID wrt the investor/company interface. (You really just care whether the venture throws off more money than you put in.)

Nevertheless, you'd still like to catch the failures as soon and as narrowly as possible!


Absolutes and exaggerated conclusions are no good. As long as a test ensures we do not reintroduce old bugs (regressions) and helps me refactor, to the point where the time spent writing and maintaining the test is less than the time it saves me, then it's golden. I've just never had a need to test a private method - I do have to deliver a working piece of code, and unit and integration tests help me know I have met the spec.


Tests that help you fix failures are great - just be sure they're helping more than they're hurting.

Beyond that I think the key is to test the things you don't expect to change. The user always needs to be able to log in. But I expect developers will rearrange the private method boundaries.


To go further: define an API, write tests against that API, and then do a pass of dead-code analysis on the resulting library-plus-test-suite. Any private functions left uncalled by your public API can just be removed!


Let's just hope that function doesn't end up being the one that's invoked to adjust for leap years.


Funny, but good point.

To counter that, I'd say you should have a test case covering that the leap year handling works as expected. If you aren't testing that because it wasn't in the spec, then why would you have the code at all?
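And the test that keeps such a function alive is tiny; a hypothetical sketch:

    #include <cassert>

    // Gregorian rule: divisible by 4, except centuries, unless divisible by 400.
    bool isLeapYear(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    int main()
    {
        assert(isLeapYear(2016));
        assert(!isLeapYear(2015));
        assert(!isLeapYear(1900));   // a century year, not a leap year
        assert(isLeapYear(2000));    // divisible by 400, so it is
        return 0;
    }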


My approach when I find myself wanting to test private methods is often to extract a new class, for which the methods under question form the public interface (working with Java here).
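Sketched in C++ rather than Java (the class and data are invented): a helper that used to be a private method of a bigger class becomes the public interface of a small collaborator that can be tested directly.

    #include <cassert>
    #include <cctype>
    #include <string>

    // Formerly a private normalize() helper buried inside a larger class;
    // extracted, it is now the public interface of its own small unit.
    class SkuNormalizer {
    public:
        std::string normalize(const std::string& raw) const {
            std::string out;
            for (char c : raw) {
                if (c == ' ' || c == '-') continue;   // strip separators
                out += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
            }
            return out;
        }
    };

    int main()
    {
        SkuNormalizer n;
        assert(n.normalize("ab-12 cd") == "AB12CD");   // now testable directly
        return 0;
    }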


Reminded me a lot of "for want of a nail..." (Look it up)


I wish I could fit that quote on a regular coffee mug!


> Steam could be a major force in turning around the recent slew of absolutely craptastic, completely broken game releases.

This is an interesting comment because the primary suspected reason for the implementation of Steam Refunds was Steam Greenlight, which allows anyone to upload a game to Steam and, after community vetting, have it released on the store proper.

Of course, many of those games were either poorly tested or used assets which the developers claimed as original but were not. Steam Refunds fixes this in most cases.

Jim Sterling has a good showcase of these games: https://www.youtube.com/playlist?list=PLlRceUcRZcK17mlpIEPsV...


The absolutely dreadful launch of Assassin's Creed: Unity probably did not help. Consumer confidence was dropping fast; having a refund option encourages trying new things and takes some of the risk out of AAA pre-orders.


> Steam could be a major force in turning around the recent slew of absolutely craptastic, completely broken game releases.

> QA/QC should not happen at the customer level.

With the exception of games running under Linux https://news.ycombinator.com/item?id=9757382 :).



