What went right:
1) Great team/culture
2) Everyone put forth the effort (scheduled crunch)
3) Some amazing tech, art, or design work, etc.

What went wrong:
1) Didn't anticipate x/y/z
2) Burnout (remember that crunch?)
3) Bad/lazy scheduling and/or capacity planning
Kudos to the people who did this at MS, but you're looking at tainted data. Most game developers know what really went wrong and don't publicly broadcast it for fear of losing future contracts or publisher trust. In the end the postmortem becomes a sounding board for 'shoutouts' and cheerleading instead of that raw, unbiased feedback.
I've been to all-hands post-mortems that were like that, and it can get really, really ugly.
Said director of engineering literally watched us blow milestone after milestone, yet somehow the knowledge that we were going to miss our commits (and not by a little) never propagated upward to the CEO. Either that, or the CEO didn't want to listen; it could be any combination of the two.
Either way, the engineers were pissed.
Incredibly poor milestone management and a failure to triage somehow never made it into our fucking learnings document.
> We find that we were able to identify both best practices and pitfalls in game development using the information present in the postmortems. Such information on the development of all kinds of software would be highly useful too. Therefore we urge the research community to provide a forum where postmortems on general software development can be presented, and practitioners to report their retrospective thoughts in a postmortem.
> Finally, based on our analysis of the data we collected, we make a few recommendations to game developers. First, be sure to practice good risk management techniques. This will help avoid some of the adverse effects of obstacles that you may encounter during development. Second, prescribe to an iterative development process, and utilize prototypes as a method of proving features and concepts before committing them to your design. Third, don't be overly ambitious in your design. Be reasonable, and take into account your schedule and budget before adding something to your design. Building off of that, don't be overly optimistic with your scheduling. If you make an estimate that initially feels optimistic to you, don't give that estimate to your stakeholders. Revisit and reassess your design to form a better estimation.
The best piece of advice I can give is to never, ever give an estimate to stakeholders until you've worked on the estimate first. And don't finish an estimate until you've actually built some small proofs of concept.
Your estimates will still be wrong, but they'll be much more accurate than some off-the-cuff number that your stakeholders will be building their various plans around.
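To make "work on the estimate first" concrete, here is a minimal sketch of one common technique, a three-point (PERT-style) estimate; the numbers and function name are illustrative, not anything from the thread or the paper:

```python
# Three-point (PERT) estimate: combine optimistic, most-likely, and
# pessimistic guesses instead of quoting the optimistic number alone.
def pert_estimate(optimistic, likely, pessimistic):
    expected = (optimistic + 4 * likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6  # rough uncertainty band
    return expected, spread

# Example: a task you'd off-the-cuff call "2 weeks".
expected, spread = pert_estimate(2, 3, 8)
print(f"expect ~{expected:.1f} weeks, +/- {spread:.1f}")
```

The point is the shape of the exercise, not the formula: forcing yourself to name a pessimistic case before speaking is what keeps the off-the-cuff number from becoming the commitment.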
It all sounds straightforward, but the hardest discipline for me as a developer is to keep from saying things in stakeholder meetings that make my stakeholders happy in the moment but really need more thought and effort. It's something I struggle with in every such meeting, with varying degrees of success.
"I don't know, but here's $how_I'll_find_out and here's $when_I'll_have_an_answer."
This is especially true in startup situations where you are learning technologies that may themselves not be complete, with people who are learning their own new things, to achieve a result which is only hypothetically possible. It drove my CFO nuts, but rather than commit to a schedule for a big deliverable I would walk backwards from the end point and say, "These are the stops between where we are and where we are going. We measure our progress by getting to each stop, but like a subway map there isn't a known amount of time between stops, only what the stops are." And he would come back with "well, we only have money to get to this date, will it be done by then?" and then we would talk about the uncertainty between each of the milestones. At some point you can reach a common understanding of what the unknowns are and how discovering their values will inform the difficulty of the next step.
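One way to make that per-stop uncertainty concrete for a CFO conversation (my sketch, not the commenter's actual method; the stop names and week ranges are invented) is a small Monte Carlo over the remaining milestones:

```python
import random

# Each "stop" gets an (optimistic, pessimistic) duration range in weeks.
# We sample a duration for every stop and sum them, many times, to get a
# spread of plausible finish times rather than a single committed date.
stops = {
    "prototype": (2, 6),
    "alpha":     (4, 10),
    "beta":      (3, 8),
    "ship":      (1, 4),
}

def simulate(stops, trials=10_000, seed=42):
    random.seed(seed)
    totals = sorted(
        sum(random.uniform(lo, hi) for lo, hi in stops.values())
        for _ in range(trials)
    )
    # Report a median and a pessimistic 90th percentile.
    return totals[trials // 2], totals[int(trials * 0.9)]

median, p90 = simulate(stops)
print(f"median ~{median:.0f} weeks, 90% chance of finishing under ~{p90:.0f}")
```

Handing over a median plus a 90th percentile frames the "will it be done by this date?" question as a probability, which is the common-understanding conversation described above.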
That said, I've met managers who just say "Oh we will be done by %x date." and then basically worked the problem the same way I have.
If you're agile you can break down the intermediate stops as sprints but estimating the backlog is still the killer step.
The conclusion does not accurately reflect that... which seems to be more of a "recommendation" than a takeaway or a real summary.
This argument doesn't necessarily disprove the notion. It's plausible to suggest that whatever impact they had was mitigated to some degree.
Games are still all closed source. Game engineers prefer to sell their components for a pittance (for example, on the Unity Asset Store) instead of collaborating on GitHub. They really, really hate writing tests. There's a deep reliance on manual testing. They still have a "ship" culture ("who cares about the code, as long as we ship by the deadline?"), disregarding the fact that games run live for years now. Multiple managers actively fought me on doing code reviews (I was new-ish, I wanted my code to be reviewed). I saw and worked on games that had no codified version control branching strategy. No coding standards. Multiple issue tracking systems. A pathetic grip on sharing code among projects. Four implementations of a state machine in one game. It goes on.
I'd like to think it was just my employer's problem, but from talking to people who have been in games for a long time, it's endemic to the industry.
I'm now out of the industry :)
I think the idea that the game industry is "behind" other fields is kind of comical, given that games are some of the most complex software in the world, and big game teams have only a few hundred people on them, and meanwhile something relatively trivial like Twitter has 4000 people. It's true that game teams don't do a lot of Agile or TDD or whatever the next buzzword is, but that is because those things are mostly superstition and obviously don't work when you start attacking hard problems.
So if you are someone a few years out of school who learned TDD it is easy to say "games are behind, they don't do all the new stuff!!" while being unaware that almost all the new stuff is bogus cargo-cultism anyway.
I do agree that the game industry engages in unhealthy levels of crunch that are to its long-term detriment, but this is mostly an orthogonal issue to software engineering practices.
(I loved the Witness, btw.)
Unless you're in the habit of delivering RC-quality milestones (which is afaik unheard of in AAA development), the marginal value of a milestone is negative right up until the last one.
The argument in this thread is that the games industry is somehow behind the times and not using the best practices found at places such as Twitter. The question is: how do you know your practices are the best, and not the other way around?
Game studios go out of business if they don't deliver quality software on time and on budget. I know it first hand, since I have been through several studio closures. The practices game studios use are tested through natural selection: if they fail at delivering software, they are out of business. To make things more interesting, there is also competition: a good game decimates sales of the worse games released around the same time. And if sales go below the projected ROI, it's game over. So it's not enough to be good enough to survive; you need to be better than the competition.
Trendy web companies don't sell software. They sell services, and the quality of the software they use is secondary. E.g., if I wrote a Facebook clone that was 100 times faster, used 10 times less memory, and needed 1/1000th of Facebook's staff, it would not threaten Facebook. People would not close their accounts and move to my network just because I have better software. A web company is fine as long as its website runs semi-reliably.
So how come the battle-tested practices of the games industry are so bad compared to the practices of the industry, which mostly sells advertisement? What are the criteria you use to compare?
I'm not siding with this notion that Twitter (or webdev in general) is ahead, mind you, but it's fair to say that neither websites nor games are about selling software. They're more about entertainment.
there is more to twitter than the web app you type a dozen words into.
i don't know much about them (including whether 4000 is accurate), but presumably there's a lot to work on regarding international near-realtime messaging infrastructure, high availability, machine learning, sentiment analysis, smart advertising systems, yadda yadda. 4k does seem like a lot, but i doubt their suits are interested in needlessly hiring people to sit around picking their noses.
I've spent time in the industry, and it's behind in practices, wages, and quality of life. There may be some smart, motivated people, but the industry as a whole has not grown well.
Testing is not merely about future-proofing (although that's a really nice benefit); it's about proving that your code actually works now. By writing tests and verifying that they pass, you can be sure the code actually does what you think it does. Furthermore, you can ensure that when other parts of the code change, your code still does what you think it does, which might matter tomorrow, next week, or in ten years.
\* Note: testing does not prove that your code is correct, only that it satisfies the conditions of the test. As with all engineering, testing is only as effective as the person implementing it.
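As a trivial illustration of that "proves it works now, guards it later" point (a generic made-up example, not from any real codebase):

```python
# clamp() is the code under test; the assertions pin down its current
# behavior. If a later refactor elsewhere changes what clamp() returns,
# the test fails immediately instead of surfacing as a bug in ten years.
def clamp(value, low, high):
    return max(low, min(value, high))

def test_clamp():
    assert clamp(5, 0, 10) == 5      # in range: unchanged
    assert clamp(-3, 0, 10) == 0     # below range: clamped up
    assert clamp(99, 0, 10) == 10    # above range: clamped down

test_clamp()
print("all clamp tests passed")
```

And per the note above, these tests only cover the three conditions written down; they say nothing about, say, the behavior when `low > high`.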
So to me the far more interesting and helpful question to answer is: are there more efficient ways to keep people from making the same mistakes over and over, other than repeating what everyone already knows, over and over?
Identify the forces that drive people away from best practices most often and give us ideas and tools to tackle them.
1. Won't sugar coat feedback,
2. Has shipped games,
3. Has a healthy sense of humility, and
4. Is genuinely concerned about your success.
In the long run, it's like most things. It takes practice.
To the criticism that the game industry is behind the rest of the software industry in software engineering practices, I do believe the rest of the software industry can eat something phallic. :)
I'd like to see them pull off what we have to with about 10X the competition, juggling massive game assets, a fraction of the budget, finance, and exit options, needing to meet a tough frame rate budget (even more exacting now with VR), all issues GPU-ish (when was the last time you wrote a shader, Mr. Website Developer?), and having people with a huge range of technical skills requiring direct access to the source/asset repo.
> about 10X the competition
Steam added 1500 games in the first 7 months of 2015. Crunchbase had over 8000 investments in 2015. So I don't think that's true, even accounting for a Christmas peak.
> juggling massive game assets
Lots of data goes into games, lots of data comes out of websites. The difference, to my eyes, is that the web industry has multiple, principled, well-understood frameworks to deal with the data problem, and the games industry has... what? Maybe the state of the art has moved on in the last 5 years, but when I left it was all "well, it's just a DAG, how hard can it be to write a tool to recursively build it?"
> a fraction of the budget, finance, and exit options
Being willing to work for less and with fewer resources is not a sign of good engineering practice. I don't know if I'd say it's the opposite but it's at best irrelevant.
> a tough frame rate budget
Not very different to needing to meet tough latency/error rate SLAs, except instead of failing TRC (or getting your exec producer to stare down MSFT/Sony), you'll just go broke. Oh, and you don't know what the SLA is, ever, so it's a continuous process of tradeoff.
> all issues GPU-ish (when was the last time you wrote a shader, Mr. Website Developer?)
Ignoring that some "website developers" (many of whom are not "Mr" - something else that the games industry does not do well at) do write shaders in the form of GPGPU optimization of ML tasks, obviously tasks differ. Let's not pretend that banging out something in Cg is PhD-level shit. It's not harder than writing a Spark job, for sure.
> having people with a huge range of technical skills requiring direct access to the source/asset repo.
You have Mr. Blow upthread sneering at Twitter for employing 4000 people to build something "trivial", but you think that each of those people has identical programming chops in every bit of the code they touch? If there's a single thing that argues against your idea, it's this: the rest of the industry has become pretty good at modularizing areas of the business and auditing changes to them with code review, build/asset/deployment tagging, etc. At the last games company I worked at, someone stayed at the office until 5am before going on holiday for a week and sent an email that effectively said "I've rewritten quite a bit of the engine, you'll probably need to fix the build because I didn't test anything, see you in a fortnight". People gasp when gmail/twitter/facebook is down for half an hour, but I've worked in places where there _wasn't even a canonical build_ that could be broken.
Almost everything you implement in Cg is "PhD-level shit". Implementing techniques from SIGGRAPH papers in your game engine is usually non-trivial. Game engines and games are some of the most complex systems in software engineering; the complexity can't be compared to web apps at all.
It also, often, doesn't.
In both cases, there is value in publishing.
Doesn't that simply mean that this research confirms other people's experience/gut feeling?
The takeaways I remember from that: a good producer matters, a team of at least three people does better than one or two, and experience shipping stuff matters.