Tough times on the road to Starcraft (2012) (codeofhonor.com)
212 points by davikr on Aug 13, 2022 | 112 comments



> Since we were only “two months” from shipping, making changes to the engine for the better was regularly passed over for band-aiding existing but sub-optimal solutions, which led to many months of suffering

I'm having a version of this problem right now. One of our products is not going to be sold to new customers at some point soon (but will continue to have plenty of existing customers), so it receives a small-ish budget for maintenance.

A lot of people seem to believe that this means any problems should be patched over with duct tape at the cheapest possible up-front expense, even if it comes at the cost of increased complexity.

I think of it the other way around: we are going to be forced to maintain whatever complexity we add to this with a very small budget, so at this point we shouldn't make any change that increases complexity. Ideally, we'd only make changes that decrease complexity! Even at higher up-front cost.

But it's hard to get people to understand the logic of that. I might not be phrasing it well.

It's a little like how business people and developers assume that most of the budget of an actively developed product goes to new development and only a small fraction is needed for maintenance, when in practice it seems to me to be the other way around.


Time to have a conversation with your manager to understand his point of view. I manage teams that are shipping new products, and I have very different answers to these kinds of questions depending on the business needs:

- Sometimes we are fully convinced that the new feature/product is going to be used and generate revenue and that we will have to maintain it for months after ship date. In this case it's often worth it to do things "right" as a bit more time early can save a ton of time afterwards. This is typically the case for incremental updates of an existing product with an established client base.

- Sometimes we have no idea whether the product/feature will succeed. Doing things "right" is far less critical than just shipping, as there is a non-negligible chance that you are unknowingly wasting time on a useless component. Doing a useless thing "right" has no value; spending more time on it is pure waste.

You often see "horror stories" written by engineers about products that were rushed too fast to prod, became a hit, and had to be significantly re-engineered. Business-wise, these actually are success stories, as the business was able to create a successful product, sell it, then improve it, even if things were a mess behind the scenes.

You also sometimes see engineers being proud of winning a fight with management to improve a product before shipping, and then seeing that product become a massive success. Take care: there is significant survivorship bias here. You almost never see the same engineer writing about the product he perfectly engineered, fought to push the release date for, and that got 0 users. He'll consider that the lack of business success has nothing to do with him. These things happen all the time. You should consider that the default state of any significantly innovative new product or idea is to generate a total of 0 revenue, and act accordingly by helping the business validate the idea as early and as fast as possible. And sometimes that means taking on technical debt.

A great engineer will accept that estimating the success of a brand-new product/idea/feature is extremely hard, and will adapt his approach to the situation: accepting debt in the process if needed, going to prod "too quick" if needed, etc. The same engineer can also afford to be ruthless in asking for time to do things right once the business aspects have been validated, because he has shown his ability to compromise and to understand the context in which he operates.


There are a few things you left out:

1. Doing things right often results in releasing faster. But wildly unrealistic schedules sometimes cause teams to reject the right thing in favor of the "quick" thing you pay for ten times over before release.

2. The word "debt" implies that it can be paid off. In many cases, removing the complexity introduced is simply intractable.

3. Complexity has a huge cost throughout the entire stack -- testing, building, fixing bugs, etc. A shortcut may just be a shortcut, but if it introduces additional complexity, that's qualitatively different and you need to be careful.


You're absolutely right! In this case we're talking about an old feature with known proven value that has stopped working in a new deployment context, and will take some reworking to get going.

That said, I'm not married to the idea of getting it working at all. I'm perfectly fine with scrapping it entirely. That's a business decision I currently don't have the data to judge on.

All I'm saying is that this is something where, if we are going to build it, we know we are going to have to support it for a long time into the future. So if we fix it, it should be fixed at low added complexity. But declaring it dead is also an alternative, of course.


Thank you for that point of view. Thinking as a customer here, the number of times I didn't buy a product because it was bad or in a state I would call incomplete is considerable. Shipping something incomplete, broken, or not up to the task can kill a product well before it's given a good chance. Only a few times have I given a product a chance because I saw some hope in it.


This was one of the larger of many reasons I left my previous company, which was a holding company that owned several products with similar audiences and feature sets. They bought a new product at the beginning of the pandemic that was deemed the primary replacement for the rest.

Basically spent ~3 years as the sole developer on an application in "maintenance mode" that kept getting more customers since it supported a bunch of features the intended replacement didn't. That increased client base wanted new features in the maintenance product that leadership wouldn't turn down. The product still maintained a turnaround time an order of magnitude shorter than the primary replacement's.

A few days before I tendered my resignation, the company had laid off about half the development team for the primary replacement (even though they had contracts promising work that was planned to take 12+ months with the full dev team), and declared a different product as the "primary replacement" for the other ones they held. Don't know how that will work out for them, don't really care either.

New place has a standing policy that basically says if we can prove a maintenance system is taking more than a few hours a month, they'll authorize repairing/replacing/removing said system. It's incredible how much of a difference it makes to employee morale not to need to constantly context-switch to spend a few hours fixing some broken system over and over again.


"There is no time to get in the car; hurry up and push it!"


During my Software Engineering degree it was hammered into us that 20% of the cost of a software project comes before launch day, and the other 80% is maintenance.

In my experience, I'd go 10/90.


I feel like that rule conflates two different things: initial launch and maintenance. New development (creation of new features etc) can happen past launch, and maintenance (fixing bugs in "finished" features) can happen before launch.


What school was that if I may ask?


Swinburne University in Melbourne. The degree is accredited by the Australian Institute of Engineers. (At the time, the only Soft. Eng. degree that was)


> While Blizzard’s early games had been far more successful than expected, that just raised expectations for future growth.

Why does this always seem to be the case? If someone sets a world record in sprinting, we don't expect them to exceed it every year. Yet in business, if you pull off a miracle at the 11th hour at your job, that becomes your new baseline next time reviews come around.


We do expect humans to keep getting faster and faster for some reason, so we expect nutritionists and coaches to continually improve and move their baseline.


Sprinting is clearly limited by physics and biology, but for mind work it's unclear where the limits to experience and learning are. Is there such a thing as the 10x programmer? Or is it even 100x? These things are all highly debatable.


I had no idea StarCraft still saved the entire game state: what a gigantic waste! I wonder if the game engine was deterministic or not (if it wasn't there was no choice for save files: you had to save the full game state if you wanted to implement a replay or continue functionality).

Now Warcraft III, which came out only three years later, had a fully deterministic game engine and, for replay files, only saved players' inputs. Hence Warcraft III save files, even for entire games, were tiny. And there were several websites where you could download and replay save files from other people, including from famous matchups. Fun times: I'd exchange my best games with my brother, as email attachments IIRC. We'd then watch each other's games and make comments.

IIRC Microsoft's first Age of Empires, which came out in 1997, already used a deterministic game engine, so there was already an AAA title that used that technique when StarCraft came out in 1998.


> I had no idea StarCraft still saved the entire game state: what a gigantic waste!

> Fun times: I'd exchange my best games with my brother, as email attachments IIRC. We'd then watch each other's games and make comments.

There is a major drawback to save files which rely on the deterministic nature of the game: what happens when the determinism changes? I don't mean due to bugs; I mean intentional modifications to the game's behaviour through patches. Either these changes will break old replays and saves, or the game needs to carry around multiple simulation versions.

You bring up Age of Empires. This is a small problem in the current AoE2 competitive scene. Yes, AoE2 has a competitive scene – the game has been re-released twice in the past decade, and still enjoys the attention of competitive play, including sponsored tournaments (e.g., the Red Bull Wololo series). Microsoft are attempting to give the game the same treatment as other modern, competitive titles, which means frequent balance patches, and occasional injections of new content (DLC). Unfortunately, each one of these updates breaks old save games and replays, which, as you point out, rely on the deterministic nature of the engine. It means that any casting/commentating of games must happen within relatively short order of them being played, and that the only reliable distribution format for old games is... screen recordings. I'd much rather have a less space-efficient save format (or at least the option to convert deterministic replays into a fully self-sufficient archive format) than leave native recordings of great games by the wayside in favour of videos.


Yep, I recall being bitten by that.

StarCraft actually let you save replays which were just player inputs - which begs the question why save games couldn’t do this, but I digress - and I clearly remember attempting to watch a replay after a major game update and seeing it fall apart after a few minutes.

The replay was of a 40 minute game. It ran fine for the first few minutes, then I’m not sure what went wrong but suddenly all the characters stopped moving (except the automated mining drones) and the game just stayed like that for the next 30+ minutes.


> Which begs the question why save games couldn’t do this

Replays weren't added until 1.08 [1], so it may have changed sometime post-launch, but I'd have thought it's because it means you have to run the sim to get to the e.g. 30-minute mark where the save is, and even on modern machines running Remastered that takes quite a while. A more constrained machine from the time wouldn't be able to do the 16x-speed fast forward used today.

[1] https://liquipedia.net/starcraft/Patch_1.08


I think StarCraft II would launch an old replay with an old executable. If you look around, there is a directory with exes from a bunch of different versions of the game. I don't know if they still do this, since it seems quite unwieldy.


The engine has to be pretty tiny for an old game like that, right? They could probably ship a copy of all the historical versions along with it, just for replays...


The current AoE2 competitive scene is awesome.


I remember StarCraft replays would often diverge from what actually happened part way through, and from the point of divergence would decay into increasing nonsense. So they were capturing events like user input, and when that wasn't done perfectly it failed spectacularly.


Deterministic engines are very hard to pull off. I looked into it for my own engine, and decided not to.

The problem is floating point calculations on different processors. Maybe in the 90's that wasn't such an issue, but good luck now. Plus, if you don't need a physics engine, maybe you can stay in integer space.

That would explain small deviations at first that grow bigger over time.

For example a game like Braid also chose not to use a deterministic engine for that reason.


The Factorio developers claim that they didn't have many issues with floats, beyond the trig functions (which they implemented in software).

> Originally we were quite afraid of Floating point operations discrepancies across different computers. But surprisingly, this hasn't been a big problem so far (knocking on the table). We got away with implementing our own trigonometric functions.

https://www.factorio.com/blog/post/fff-52


AFAIK the basic functions (add/sub/mul/div) work fine, but the trig functions don't [0]. It's not a processor issue; it's a libc (or whatever) issue. Of course you need the trig functions in almost any engine, but at that point you can write them yourself.

[0]: https://news.ycombinator.com/item?id=28318425
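To make "write them yourself" concrete, here is a minimal sketch of a sine built only from add/sub/mul/div and floor, the operations IEEE-754 does pin down exactly, using Bhaskara I's approximation (max error roughly 0.002). This is purely illustrative; it is not Factorio's or StarCraft's code, and det_sin is a made-up name.

    #include <cmath>   // only for std::floor, which is also exactly specified

    // Illustrative only: a sine that never calls libm's sin(), built from
    // exactly-rounded operations so compliant machines agree bit for bit.
    double det_sin(double x) {
        const double pi = 3.141592653589793;
        // crude range reduction into [0, 2*pi)
        x -= std::floor(x / (2.0 * pi)) * (2.0 * pi);
        double sign = 1.0;
        if (x > pi) { x -= pi; sign = -1.0; }   // sin(x + pi) == -sin(x)
        double t = x * (pi - x);                // Bhaskara I's approximation
        return sign * (16.0 * t) / (5.0 * pi * pi - 4.0 * t);
    }

Even then, compiler settings (x87 extended precision, fused multiply-add contraction) can reintroduce divergence, which is part of why fixed point keeps coming up in this thread.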


It's an IEEE-754 problem... Only the above operations and sqrt require full accuracy. Just about every other floating point operation is allowed to be less accurate while remaining fully compliant, and the exact answers can differ widely. Even on, say, x86, using SSE vs. the FPU vs. libc (which often will not use the above versions, in order to be more accurate) will give different answers.


> The problem is floating point calculations on different processors. Maybe in the 90's that wasn't such an issue

Haha, I'd be surprised if they used floats at all in a 90's game. Fixed point was popular.
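For anyone who hasn't run into it, here is a minimal sketch of what 90s-style fixed-point math looks like, using a generic 16.16 format; this is illustrative only, not StarCraft's actual representation.

    #include <cstdint>

    // 16.16 fixed point: `raw` stores value * 65536, so all arithmetic is
    // plain (and deterministic) integer arithmetic.
    struct Fixed {
        int32_t raw;
        static Fixed fromInt(int32_t v) { return {v * 65536}; }
        int32_t toInt() const           { return raw >> 16; }
        Fixed operator+(Fixed o) const  { return {raw + o.raw}; }
        Fixed operator-(Fixed o) const  { return {raw - o.raw}; }
        Fixed operator*(Fixed o) const {
            // widen to 64 bits so the intermediate product can't overflow
            return {(int32_t)(((int64_t)raw * o.raw) >> 16)};
        }
        Fixed operator/(Fixed o) const {
            return {(int32_t)(((int64_t)raw << 16) / o.raw)};
        }
    };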


And it's not gone. I can't seem to find the reference anymore, but I'd swear I recently saw a talk from a Supercell fellow describing how they use fixed point today for their online games.

Edit: this is it https://www.youtube.com/watch?v=HrUF-LFTs-A

The whole talk is very relevant to anyone interested in this topic.


In the 2000's I wrote games for MS PocketPC. It had emulated floating point, which was painfully slow for games. Most games could do with just integers. When I made some physics-based games, I implemented fixed point, and it went more or less fine.

A 3D engine in fixed point (which I never fully finished) gave me a lot of headaches. So glad all those devices now have proper floating point and GPUs.


Quake was the first big game to use floating point for everything and require an FPU, and it was released in 1996. I would guess that by the late 90's plenty of games used floats.


StarCraft uses fixed point for its decimal math as a solution (or perhaps originally for performance)


At least with the current version, there is a workaround for that, which works for at least any non-EUD map: Watch the replay without speeding the playback up beyond 1x.

I usually don’t bother since I don’t care about the whole replay and don’t want to wait. But it’s handy when you want to learn the strategy of somebody better than you that you just played against.

Civilization 7.9.9D was my favorite custom back in the day :)


Yup! I once saw a replay where the match had a different outcome from the actual game, due to delay on the human player's inputs and none on the AI player's inputs. It was crazy.


It's fully deterministic, multiplayer works by only sending user input around (at a reduced tick rate) and the unit movements get calculated locally from that.
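A toy, self-contained illustration of that lockstep idea (deliberately simplified; none of this is Blizzard's actual netcode): each peer contributes only its own commands for a turn, every peer applies the identical ordered command list to its own copy of the sim, and because the sim is deterministic the copies never diverge.

    #include <cassert>
    #include <cstdint>
    #include <vector>

    struct Command { int player; int dx; };   // "move this player's unit by dx"

    struct Sim {                              // the deterministic game state
        int32_t unitX[2] = {0, 0};
        void step(const std::vector<Command>& cmds) {
            for (const Command& c : cmds) unitX[c.player] += c.dx;
        }
    };

    int main() {
        Sim clientA, clientB;                 // each peer runs the full sim
        for (uint32_t turn = 0; turn < 100; ++turn) {
            // In a real game these commands arrive over the network; only
            // inputs are exchanged, never unit positions.
            std::vector<Command> cmds = { {0, (int)(turn % 3)}, {1, 1} };
            clientA.step(cmds);
            clientB.step(cmds);
        }
        assert(clientA.unitX[0] == clientB.unitX[0]);   // states never diverge
        assert(clientA.unitX[1] == clientB.unitX[1]);
        return 0;
    }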


> It's fully deterministic, multiplayer works by only sending user input around (at a reduced tick rate) and the unit movements get calculated locally from that.

That's interesting. I wonder why the other poster says the replay functionality would turn into nonsense at some point.

Now if the game is deterministic and multiplayer works by only sending user inputs around, why wasn't the save/continue feature implemented that way?

In any case I love to read about how these games worked!


> Now if the game is deterministic and multiplayer works by only sending user inputs around, why wasn't the save/continue feature implemented that way?

Your only interpreter for the data is the entire game engine. By design it's going to use most of the power of the target hardware, so it cannot run faster. I.e., a 30-minute load time if you had played 30 minutes.


Running the simulation is not going to use most of the power of your machine, in any remotely sane design:

- a huge part is going to go to rendering, which you don't need to do if you only care about the end state

- in many games, the simulation runs at a much lower frame rate than the game, so perhaps you see the game animated at a silky smooth 60 fps but internally the sim runs at a fraction of that. Sim rates of 5 or 10 fps are common.

- some sim architectures may allow you to run the sim at a variable frame rate and still generate deterministic results. So (ideally) you tell the sim to run a single step of 30 mins and you get the end result. In practice, of course, variable steps will be limited to much less, but Supercell mentioned reducing their server sim CPU costs by 80-90% with this kind of trick.

All added up with reasonable guesses for a well optimized case: 30 mins of sim run at 1 sim tick per second of game time, where sim ticks take say 2ms, could take 3.6 seconds to sim, not 30 minutes.
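Under those assumptions, "loading" a replay-style save is just pumping the recorded inputs through a headless sim loop with no rendering in between; a sketch (names and numbers are illustrative, not StarCraft's):

    #include <cstdint>
    #include <vector>

    struct Input { int32_t dx; };

    struct Sim {
        int64_t position = 0;
        void step(const Input& in) { position += in.dx; }   // one deterministic tick
    };

    // Loading = replaying inputs as fast as the CPU allows, frames skipped.
    Sim fastForward(const std::vector<Input>& recorded) {
        Sim sim;
        for (const Input& in : recorded)
            sim.step(in);              // thousands of ticks per second headless
        return sim;
    }

    int main() {
        // 30 minutes of game time at 1 sim tick per second = 1800 ticks.
        std::vector<Input> replay(1800, Input{1});
        Sim restored = fastForward(replay);
        return restored.position == 1800 ? 0 : 1;
    }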


Multiplayer requires that all users are on the same version. SC2 replays also require this; SC1 replays may not have. Determinism would only have applied if the version used for playback was the same as the one used for recording.


I recall a Starcraft multiplayer game diverging with a dorm buddy back in college. Thought I was winning; turned out we were playing entirely different games!


The replays can possibly malfunction if you speed the playback up faster than 1x. Not sure about the technical reasons, but the devs never fixed it. Many players still don’t know about the workaround to use 1x though.


With replay-based saving, if you want to load a game that has 30 minutes' worth of progress, you have to replay 30 minutes' worth of gameplay! What a gigantic waste!

In other words, as usual it's all about tradeoffs.


This is exactly how rejoining an in-progress Heroes of the Storm (another Blizzard title that was built on top of the StarCraft II engine) match worked. You would 'fast-forward' through and it could often take 5+ minutes on a slower machine.

The ideal system would be similar to what we have with video codecs. Every so often we write out the full data needed for the existing state, and from then onwards it's just diffs until the next keyframe is reached.
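A sketch of that keyframe-plus-diffs idea (illustrative, not any shipping engine's format): snapshot the full state every K ticks, keep only per-tick inputs in between, and loading then replays at most K ticks from the nearest snapshot instead of the whole game.

    #include <cstdint>
    #include <map>
    #include <vector>

    struct Input { int32_t dx; };
    struct State { int64_t position = 0; };

    struct Recording {
        static constexpr uint32_t kKeyframeInterval = 600;  // e.g. every 600 ticks
        std::map<uint32_t, State> keyframes;                 // tick -> full snapshot
        std::vector<Input> inputs;                           // one entry per tick

        void record(uint32_t tick, const State& s, const Input& in) {
            if (tick % kKeyframeInterval == 0) keyframes[tick] = s;
            inputs.push_back(in);
        }

        State load(uint32_t targetTick) const {
            auto it = keyframes.upper_bound(targetTick);     // last keyframe <= target
            --it;
            State s = it->second;
            for (uint32_t t = it->first; t < targetTick; ++t)
                s.position += inputs[t].dx;                  // replay only the tail
            return s;
        }
    };

    int main() {
        Recording rec;
        State s;
        for (uint32_t t = 0; t < 1800; ++t) {   // record a 30-minute game at 1 tick/s
            Input in{1};
            rec.record(t, s, in);
            s.position += in.dx;                // the live sim advances too
        }
        State restored = rec.load(1700);        // replays at most 600 ticks, not 1700
        return restored.position == 1700 ? 0 : 1;
    }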


Probably two reasons. You're only "two months away from the launch", so there's no time to re-architect the existing, working save mechanism. Why change it when you have an infinite amount of other work to be done? It's just a straightforward prioritization problem.

Also, it's fairly tricky to make deterministic replay work. SC definitely does that for network play, but the risk is arguably higher for saving. In StarCraft there were lots of synchronization issues even in the very last minutes of its beta due to unexpected non-determinism, probably because the game engine was not designed with that in mind. IIRC, the game would simply shut down the session in such a case. But if it happens from a save file, you don't have a way to detect that, so the save is effectively corrupted with no hope of recovery. In the case of WC3, the engine was written with lessons learned the hard way, so it should be capable of handling this problem more elegantly.


I looked at StarCraft replay file sizes; less than half a megabyte. So maybe not that bad?


Meta: Something funny about how HN will automatically strip "Why" from headlines (garbling legit titles) and here the mod edit to a "proper" title got set to something with "why". Something to consider!

Every game programming story like this makes me feel like game programmers are the most undervalued programmers out there. Loads of them have very little experience, are asked to build extremely complex systems on fast moving ground, and get paid peanuts to do it.

I cannot imagine a system harder to put together than a game (maybe like... accounting software with a bunch of extra business rules?). Just so much stuff that can go wrong.


> maybe like... accounting software with a bunch of extra business rules?

Can confirm. This is pretty nasty too. It's really tradeoffs all around.

Personally, I enjoy working on custom/DIY game engine code to unwind from all the banking crap during the week. I have come to terms with the reality that my side projects will probably never be finished in any meaningful way, but at least I feel like I have total autonomy over some complex thing.

Eventually, I may move back into game dev full time. For me it was never about the money. It is about doing something no one else is doing and having fun along the way.

I think financial independence (not necessarily wealth) is a big part of not letting game dev consume your soul.



(2012), with a couple of significant discussions years ago.

https://news.ycombinator.com/item?id=4491216

https://news.ycombinator.com/item?id=8938647


It's amazing how many of the pathfinding behaviors are critical to the balance and function of StarCraft. From the carrier's drone leashing behavior to the way Dragoons can get stuck and how workers float over each other but can collide at critical moments, the pathfinding code and the control it gives players are central to StarCraft's high skill ceiling, while StarCraft II just has deathballs.

I can't wait for the promised followup



Calling StarCraft II “just deathballs” is a complete misrepresentation and utterly false. Maybe Protoss in HotS/WoL (i.e., 5-11 years ago), but not anything now. Also, drones are the Zerg’s worker unit; the things that fly out of carriers are called “interceptors”. But I agree pathing is very important in Brood War.


A current SC2 pro game vs a pro game from WoL might as well be completely different games. The skill and meta development over that time is immense. It's like watching the NBA in the 1960's vs. today.


By this analogy is the skill level in sc1 like watching NBA in 2080?


Yes, but with a 90+-year-old LeBron and Steph Curry still dominating because all the young players decided to play hockey instead of basketball.


Is it still deathballs at the highest levels? I haven't kept up with the Starcraft II scene.


It was like that some years ago, but current SC2 has wide variability. E.g., the TSL8 final (https://m.youtube.com/watch?v=k0cWokKNBUQ) played this year was the opposite of deathballs, with rushed games decided by small errors.

At the top level, players have to keep an eye on the other player at all times, because a lot of strategies have a very specific response or you die.


Does anyone here know the conclusion to Alpha Star? What was its winrate against actual pros that it encountered on the ladder after the APM abuse was fixed following the exhibition match with TLO? Was it ever able to beat Serral?


AFAIK, it beat Serral, but he said the setup wasn't optimal (neither his keyboard nor his mouse). The games are on the Artosis channel (https://m.youtube.com/watch?v=nbiVbd_CEIA), and the comments are beginner-friendly.


They didn't run AlphaStar on the ladder long enough to really converge on an MMR (they also ran a bunch of different accounts in parallel and combined their results), but they projected ~6.3k Protoss, ~6k Terran and ~5.8k Zerg.

With those MMRs, you'd... probably not expect it to beat Serral (7.5k MMR, lmao...)


Team Liquid's YouTube channel just dropped a bunch of SC2 TSL videos; having not watched or played since WoL, it's been pretty interesting. Half the game is brand new, half is still the same. It seems like the games go faster and things are more sensitive to early mistakes/advantages than before, but there are still some good comebacks in there. It didn't feel all that death-bally; rather, it feels like that's the last step in winning when you've already won. Take it with a grain of salt though, I'm not watching particularly critically.


Hasn't been the case for a decade in GSL. I have been following the scene since the first public beta. I actually had the chance to play a very early alpha back in 2007 at gamescom (called the Games Convention back then). Most of the units didn't even have sounds, and the ones that did used the SC1/BW files.


Depends what you mean by deathballs.

SC2 generally does not encourage significantly splitting up your main army in mid-late game. So ya, main armies are still normally moving around as a single blob/body.

I think every race still has enough tools to punish a-move, stacked/bunched up ground armies (lurker, baneling, widow mines, tank/liberator, psistorm, disruptor) that even if a ground army moves as a clump, there's significant pressure to quickly micro and split on contact.

There was a recent-ish period where there was a lot of dissatisfaction about late-game protoss air basically being a death ball, especially against zerg. I think that has somewhat dissipated - they released a patch that disincentivized getting to that state, but I suspect that dealing with late-game protoss air is still annoying as heck for zerg.


The deathball was mostly a protoss thing. Since LotV, late-game protoss armies have disruptors, which are pretty micro-intensive and aren't as deathball-prone as the colossus…

Anyway, even if SC2 has its issues and in many regards can be considered inferior to Brood War from an esports perspective, saying “SC2 is just death ball” only reveals the speaker's ignorance of the game.


How do you not realize, by the 10th or 11th time you are writing the insert/remove for a doubly linked list, that you should be making a reusable function to do the job?


Given all the optimizations mentioned, I'd guess they used intrusive lists, which aren't straightforward to abstract. You'd have to use C++ templates or mess with the pre-processor to instantiate a copy of the functions for each type. Hence from the article:

> Among its many features, storm contained an excellent implementation of doubly-linked lists using templates in C++.
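For anyone curious what that looks like in practice, here is a rough sketch of an intrusive, templated doubly-linked list in the spirit of what the article describes (not storm's actual interface): the link node lives inside the object itself, a pointer-to-member template parameter picks which link field to use, and insert/remove never allocate.

    template <typename T>
    struct Link {
        T* prev = nullptr;
        T* next = nullptr;
    };

    template <typename T, Link<T> T::*LinkField>
    struct IntrusiveList {
        T* head = nullptr;

        void pushFront(T* item) {
            Link<T>& ln = item->*LinkField;
            ln.prev = nullptr;
            ln.next = head;
            if (head) (head->*LinkField).prev = item;
            head = item;
        }

        void remove(T* item) {
            Link<T>& ln = item->*LinkField;
            if (ln.prev) (ln.prev->*LinkField).next = ln.next;
            else head = ln.next;
            if (ln.next) (ln.next->*LinkField).prev = ln.prev;
            ln.prev = ln.next = nullptr;
        }
    };

    // A unit can sit on several lists at once via separate link fields.
    struct Unit {
        Link<Unit> allUnits;
        Link<Unit> selection;
    };

    int main() {
        IntrusiveList<Unit, &Unit::allUnits> units;
        Unit marine, zealot;
        units.pushFront(&marine);
        units.pushFront(&zealot);
        units.remove(&marine);
        return units.head == &zealot ? 0 : 1;
    }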


I have seen junior developers do far worse things and consider it their job.

It's easy to think that you were (in a sense) hired to type out that code over and over to realise the vision of the designer. If nobody tells you otherwise, you'll become more and more convinced of it every time you type the code out.


I think Visual Studio at the time could inline functions - VS 6.0 from 1998 definitely could: http://www.cs.cmu.edu/~rbd/doc/optcode.htm. Especially if explicitly marked 'inline', it would happen even at lower optimization settings.

I wonder if devs at the time were skeptical of relying on automatic optimizations, when a few years earlier doing it themselves was the only option. I could definitely see myself falling into that mindset.


I mean, adding to a doubly linked list is just four lines of code and deleting is just two, right? A bit more if it’s possible for the list to become completely empty.
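For reference, the inline version being described is roughly this (a sketch, assuming a minimal Node struct; the end-of-list checks are the "bit more" mentioned above):

    struct Node { Node* prev; Node* next; };

    // insert n after pos: four pointer assignments
    void insertAfter(Node* pos, Node* n) {
        n->prev = pos;
        n->next = pos->next;
        if (pos->next) pos->next->prev = n;
        pos->next = n;
    }

    // unlink n: two assignments, plus the null checks at the ends
    void unlink(Node* n) {
        if (n->prev) n->prev->next = n->next;
        if (n->next) n->next->prev = n->prev;
    }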


I think the issue was more that they would copy-paste those lines from somewhere else and then forget to change one of the expressions.


It's difficult to know without the context, but inserting/removing from a linked list is so simple it's probably shorter than a function call, and due to the limitations of how parameter passing works in C and many other languages, the code may even be cognitively simpler without the function call.

In such a context, the only cost of not creating a function is that you make the code harder to refactor... hardly a concern for a game developer.


> while sacrificing personal health and family life.

Game dev never changes


>> Game dev, game dev never changes

There, I corrected it for You :)


Yeah, I ran into the class hierarchy problem in my early gamedev attempts. When I encountered this article some years ago I thought "cool, real actual professional game developers ran into the same problems I did!"

My first solution involved something I called "agents", which were really just delegates. Agents could be chained, allowing a given behavior to be written just once and combined with other behaviors. These days I am fully on board the ECS train in my game work. Though it presents complexities of its own, ECS really helps manage the combinatorial explosion of state and behavior a game object may have.
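A bare-bones sketch of the ECS shape being described (illustrative, not any particular library): an entity is just an id, components are plain data keyed by id, and systems iterate over whichever entities happen to have the components they need, so behavior is added by composition instead of by deepening a class hierarchy.

    #include <cstdint>
    #include <unordered_map>

    using Entity = uint32_t;

    struct Position { float x = 0, y = 0; };
    struct Velocity { float dx = 0, dy = 0; };

    struct World {
        std::unordered_map<Entity, Position> positions;
        std::unordered_map<Entity, Velocity> velocities;
    };

    // A "system" is just a function over the component tables.
    void movementSystem(World& w, float dt) {
        for (auto& [id, vel] : w.velocities) {
            auto it = w.positions.find(id);
            if (it == w.positions.end()) continue;   // needs both components
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }

    int main() {
        World w;
        Entity marine = 1;
        w.positions[marine]  = {0, 0};
        w.velocities[marine] = {1, 0};
        movementSystem(w, 0.5f);   // only entities with a Velocity move
        return 0;
    }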



Then: "we needed to make something that kicked ass."

Now: "we need to outspend on marketing and advertising and out-execute on in-game monetization."


Game development has an interesting relationship with software engineering.

Clearly, engineering is important. But games that are too well-engineered tend not to be fun. A lot of what makes a game fun are the exceptions to fundamental assumptions/rules/constraints.


The graphics of the released product are leagues ahead of the original 1996 version.

I still love the game and would like to let my kids play it one day.


First two parts written in September 2012. That third part is only "2 minutes away" and has been for 20 years.


I'd read this before from a previous HN post. Devoured the article again; this one is a gem.


> But the programming team continually worked towards shipping in only two months for the next fourteen months!

Every time I hear these stories from game developers, I just shake my head and think “where the hell is the project manager?” It’s as if the whole game industry just doesn’t believe in rigorous, disciplined project management, and instead always relies on crunch time, 48 hour days and an endless supply of naive youth to burn out. Nobody benefits when you keep saying “we’re two months from shipping” for 14 months.


Let's be honest: most project managers cause more problems than they solve.

They do stuff like downplaying all the problems and shipping a broken game, or pulling developers into redundant meetings to waste their time.

A software engineer who is proficient at memory management, multithreading, pathfinding, networking, graphics, linear algebra, computer science, data structures and algorithms, and all the other stuff involved in programming a game doesn't want to listen to a person who not only doesn't understand any of that, but feels entitled to an opinion on how tasks involving it should be triaged and executed, and who is sometimes insufferable, obsessed with hierarchy and domination to compensate for not understanding things.

Want your project to be fast? Get out of the way. Let engineers collaborate and figure it out. And most importantly, stop trying to see software projects through the perspective of oversimplification.

Good project managers exist, but most are not good. Hear it from Steve Jobs if you need to: https://www.youtube.com/watch?v=fj0hpsJvrko


That is not what project managers do.

Project managers set priorities, communicate requirements to developers, and communicate expected timelines, problems, needed changes, and other developer-surfaced issues back up the chain.

If engineers (like me) are left alone to work on it, either one of them becomes the de facto project manager, or you probably end up with a well-engineered product and wholly unsatisfied customers.


If one engineer becomes the de facto project manager, it's because the team lets them, having realized it's in their best collective interest. And that's not a bad thing.


That's a great thing, presuming that engineer actually enjoys the task. I know many people who pick up the task out of necessity, but dislike it and burn out if forced to do too much of it.


What you described is pretty much this

https://youtu.be/m4OvQIGDg4I


Trust me, that's not exclusive to game shops. Even an open source project (albeit one run by a for-profit organisation) that I've been involved in for the last year and a half has been similar, even though anyone could surely see 6 months ago that it wasn't shipping any time soon. They're still saying 2 months away (I reckon 4 minimum now).


It is a general phenomenon, nothing the game industry has exclusive rights to.

Stubbornness and cognitive dissonance are my explanations. I witnessed quite a few projects whose deadlines got postponed by more than 6 months. Now comes the exciting part: these managers had a track record of postponing, or more neutrally, of experiencing delays. Yet instead of planning for less crunch time in their next projects, they stuck to their previous behavior instead of trying to learn.

There you go: learned, self-inflicted pain.


I think it goes high up - all the way up to the company's relationship with its investors (venture capital or investing customers both).

Investors and customers stick around because the company promises that it's just around the corner. Then that turns out to be unrealistic and the delays start racking up. (This is at least what I have seen, when deadlines come all the way from top management and everyone below tries to squeeze their plan into those deadlines.)


Starcraft was released in 1998. The game development industry was still pretty new and nowhere near the size it is today. Also, modern project management strategies weren't really used much back then and even if they were, they probably wouldn't be found in a gamedev studio where headcount was really tight.


Is there actually solid data that modern project management strategies work better than the ones from 20-odd years ago? The best-run project I think I ever worked on was from the late 90s.


PM hasn't changed much besides trending towards smaller batches; the difference is that more business folks are aware there are strategies at all.


That "smaller batches" change has been pretty extreme though. My current company is on the bandwagon of being able to release new features every day, even if we have to jump through hoops and severely compromise quality to make it happen. I appreciate being able to avoid the disadvantages of 6-12 month long release cycles, but surely there's some sweet spot between daily and yearly (which almost certainly varies depending on the product and company developing it).


After thirty years, trends tend to add up!


Do you mean “release early, release often”?

https://en.m.wikipedia.org/wiki/Release_early,_release_often


Release early/often is fine as a philosophy, but there need to be better guidelines around what early/often actually makes sense, rather than assuming that just because company X releases product Y multiple times a day, it must be something to aim for.


Modern project management is mostly predicated on fallacious dogmas and pseudoscience.


On the other hand, early software development more closely resembled the established hardware development of the time, with much higher quality requirements.

So I would expect otherwise. Maybe 1998 was the tipping point?


> Nobody benefits

Somebody benefits alright, just not the developers who are on the death march.


StarCraft was developed 1995-1998. That's something to keep in mind. This is not today.


This is fascinating, but this guy's writing style is so boastful. You just get this crazy insight into the ego of a successful person.


Yeah, what an ass. Oh wait, that's me!

I did call it my self-aggrandizing blog at one point. In retrospect I think I mostly wrote it to vent about the pain and awfulness of crunch culture. I'll seek a therapist in future, and hope you'll forgive my trespasses.


For what it's worth, I hope you do end up writing more posts in the future. It's easy to overlook or forget the pains and lows of projects in hindsight, especially for particularly successful ones like the ones you've worked on. Your blog posts have been some of my favorites because they don't gloss over the sacrifices required for those successes, a reality that Blizzard's historically secretive culture tended to hide. Your work more than speaks for itself, and you've definitely earned the right to be boastful of it.

I was actually excited to see your blog domain pop up on HN today, only to find out it was to an old post. You have a ton of valuable insight and knowledge into games and project management and I hope that commenters like the one you replied to don't put you off from sharing that with the world.


It didn't come across as overly boastful to me. It was very interesting, and also on a related note thanks for helping make those sweet games.

Warcraft/Diablo/Starcraft blew my mind as a kid and were a huge part of my childhood, they played a major role in getting me into computers.


A unique voice is what makes the post an interesting read.

Speaking about your own achievements does not lessen my achievements.

Thank you for the post; I found it enjoyable.


Through StarCraft, you were responsible for bringing countless hours of enjoyment to me and my friends, not to mention millions of others around the world.

I don’t mind if you’re a little boastful.

Thanks for all the fun!


Thanks for your work on Starcraft. It's given me a ton of fun over the years and I'm actually playing a tournament game in Remastered this weekend.

I actually really wish we had our hands on some of the earliest builds, even the "orcs in space" ones. I've been playing this game for so long that I'm curious how exactly the final version of it was beaten into shape over time, and how different it feels to play over its development period.


"Humility is not thinking less of yourself, it is thinking of yourself less" — Rick Warren

You can be a central and large contributor on a project and know it. If you lord it over other people with that knowledge and position and flex to stroke your ego, that's pride and arrogance.


Thank you for your work on StarCraft. The level of depth to the game and my endless fascination into how it worked was my inspiration for becoming a software engineer. Your blog is definitely a good reminder that it was built by humans too.


What's funny is that -- having worked with him during the time this article is about -- Pat was one of the nicest and most humble developers on the Blizzard staff, despite being "#3" at the company. I don't mind his tone re-reading this now decade-old blog.


I'd love to hear which lines the OP felt were boastful. I read all three articles and only got the sense of "damn, I wish I had been lucky enough to work with him."


netcoyote co-created one of the most successful games ever. I think it's a bit justified.




