Many games are held together by duct tape (polygon.com)
120 points by misotaur 6 days ago | 154 comments

Watching speed runners play through games is a great way to visualise not only this aspect (that games are flimsy at best) but the same practice in any software: if you don't code defensively, a simple mis-input often breaks the game.

A common pattern in many speed runs is finding some glitch through a door, and then the game logic kicks in and says "you are past this door, so you must have got the key".

If you are curious, watch five or so minutes of this speedrun: https://youtu.be/0Ct8n1CClUM?t=3072 First, the player jumps across the game world, reaching a race he shouldn't be able to get to yet. Second, he glitches into the "solid" wall, which is just a thin wall around the racetrack.

During the Hotline: Miami run this year the runner mentioned that on certain GPUs the game would crash unavoidably after 25 minutes due to a memory leak, and that if you wanted to increase the FPS you could plug in more mice.

None of that stopped it from being a great game, though.


> and that if you wanted to increase the FPS you could plug in more mice

I am having trouble imagining what sort of event handling bs this might be taking advantage of...

I am too vanilla a coder at this point in my life.

I'm not a game dev, but rough guess: the framerate is artificially limited to some 'sane' range by the equivalent of a short sleep() which is cancelled by an input event. More devices, more events, more chances to prompt the next frame early?

Yep, almost certainly what's happening.

It's very common to constrain your central "game loop" for several reasons. A game like Hotline presumably has some basic loop like "process inputs, process events, update AI plans, process AI actions, redraw screen". When the logical tasks are easy (e.g. most of the AIs are dead and you're standing still), that's essentially just a busy-wait that redraws your screen as fast as it can. Constantly maxing system resources is obnoxious, and it can be jarring when things slow back down. 40FPS might look just fine, but you can still 'feel' the change when you ramp up and down from 60FPS, so it's nicer to just cap the whole thing at 40FPS. Mouse inputs probably don't adjust the cap itself, but to avoid laggy responses they're usually interrupts which might refresh the screen.
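To make that hypothesis concrete, here's a minimal sketch in Python (the actual engine and its timings are unknown, and the per-frame work is stubbed out) of a frame-capped loop whose sleep is cut short by input events:

```python
import time
import threading

def run_frames(n_frames, frame_cap_hz=40, input_event=None):
    """Stub game loop: do the frame's work, then sleep off the rest of
    the frame budget -- unless an input event cuts the sleep short."""
    input_event = input_event or threading.Event()
    frame_budget = 1.0 / frame_cap_hz
    start = time.monotonic()
    for _ in range(n_frames):
        frame_start = time.monotonic()
        # ...process inputs, update world, redraw (stubbed out here)...
        remaining = frame_budget - (time.monotonic() - frame_start)
        if remaining > 0:
            # wait() returns as soon as the event is set, i.e. as soon
            # as any device reports input -- and, as guessed above, the
            # remaining sleep is never resumed afterwards.
            input_event.wait(timeout=remaining)
            input_event.clear()
    return time.monotonic() - start
```

Every device generating events is another chance to cut a sleep short, so a pile of mice nudges the average frame time down.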

A fun aside: some games fill out the time until the next redraw with more "thinking time" instead of sleep(), which leads to bizarre behaviors like an AI that gets smarter when you turn down the graphics settings.
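A hedged sketch of that pattern, using iterative deepening with a toy "search" whose costs are invented:

```python
import time

def search(depth):
    """Stand-in for a fixed-depth game-tree search (cost is invented)."""
    time.sleep(0.001 * depth)  # pretend deeper searches cost more
    return depth

def plan_with_budget(frame_time_left):
    """Iterative deepening: keep re-searching one ply deeper until the
    frame's leftover time runs out, instead of sleeping it away."""
    deadline = time.monotonic() + frame_time_left
    best_depth = 0
    # Only start another search if its estimated cost fits the budget.
    while time.monotonic() + 0.001 * (best_depth + 1) <= deadline:
        best_depth = search(best_depth + 1)
    return best_depth
```

Turning the graphics down frees frame time, so plan_with_budget gets a bigger budget and returns a deeper (smarter) plan -- exactly the "AI gets smarter on low settings" effect.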

Yes, a typical artifact of not resuming the remaining sleep after the event handling.

Does not work with pushing random keys only because those are buffered separately.

Not gonna lie, that is hilariously ingenious if it's the case.

I've been in game dev for 10 years and I'm as stumped as you are.

Rendering is likely blocked on mouse interrupts.

For anyone interested in speed runs, you should check out Games Done Quick - https://gamesdonequick.com/. And it's all done for a great cause too.

I recently watched the Fallout Anthology run, and some of the things the speedrunning community has discovered are amazing.

One of the best GDQ runs I've ever seen was one of the FFVII speed runs. This is a game that normally takes something like 50 hours to complete start to finish. The game's RNG can be manipulated by keeping the number of steps taken by the player absolutely exact. This is a "speed run" they play for hours without any kind of tools or cheats, and the entire time they keep the exact number of steps taken through different levels perfectly on track, so they can beat the game without spending dozens of hours training their characters.
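FFVII's actual RNG internals are more involved than this, but a toy step-advanced generator shows why walking an exact number of steps pins down every subsequent roll (constants and seed below are arbitrary):

```python
def lcg(seed):
    """Toy linear congruential generator, advanced once per player step.
    (Constants are the classic C-library ones; FFVII's tables differ.)"""
    while True:
        seed = (seed * 1103515245 + 12345) & 0x7FFFFFFF
        yield seed

# If encounter and drop rolls consume one value per step, then after
# exactly N steps from a known seed, every roll is predetermined:
rng = lcg(42)
rolls_after_10_steps = [next(rng) for _ in range(10)]
```

Two runners who take exactly the same steps from the same seed see exactly the same rolls, which is what makes step-perfect routing pay off.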

Thanks to Breath of the Wild, I've recently enjoyed watching a few speedruns and it's amazing to see the number of glitches in BoTW. Check this out https://www.youtube.com/watch?v=JEtHpCfi_DE

I also don't understand how anyone can accept speedrunning through glitches as a world record, but I guess if everyone does the same run with similar or the same glitches then it's fair? Seems like it just becomes a race to know every glitch in the game.

EDIT: I just want to say thank you for the replies that explain the mechanics and categories of speedrunning.

A lot of the glitches in Breath of the Wild are just systems of the game interacting in weird ways. There are sometimes "glitchless" categories, but sometimes it's very difficult to figure out what is and is not a glitch. Calling something a "glitch" or not is a surprisingly subjective thing. To one player it may be a legitimate use of the bullet-time mechanic to turn frozen enemies into physics cannons while shield-surfing on them. To another it's a glitch. A lot of what gets allowed for a run comes down to how much time using that strategy saves; lowest time wins. Any% in Breath of the Wild means "kill Ganon ASAP".

This can get super heated. There was a controversial "glitchless" run of Mirror's Edge at GDQ a few years back...

Sometimes people make a "no major glitches" or "no OOB" category, where specific glitches are disallowed, but others are allowed.

If we're thinking of the same Mirror's Edge run, that was meant to be a joke and to bring to light the ridiculousness of a "glitchless" run as a lot of things can be interpreted as a glitch or not.

Interesting! Thanks for this; this happened just as I was getting into speed running, and I, like many others apparently, didn’t realize that, but I did some googling over lunch and it appears this was the case! Thank you for the extra context.

They break it into categories. 100%, Any%, glitched, glitchless, etc. To set a record with glitches doesn’t put you against people that run without them.

Yeah, to give some examples, [1] is a world-record ("WR") Hollow Knight Any% run with "No Major Glitches" ("NMG") completed in 33:07 just a few weeks ago. [2] is an Any% run that takes full advantage of glitches, and is over 10 minutes faster as a result (20:21 from November last year). [3] is an example of doing a complete 112% (i.e. getting everything that counts towards the regular completion percentage plus the DLC), and consequently takes over 3 hours. (That particular run was 3:23:34 from last June.) I mostly understand what's going on in the first -- a few minor bugs get used (like falling faster when the menu is open) and it takes advantage of things like changing the language to Chinese to skip text faster as well as quitting to the main menu to warp back to the last bench, but primarily the run depends on exploiting mechanics that are intentionally part of the regular gameplay but which allow the player to sequence break (like dying deliberately by the charm vendor to spawn your shade so that you can bounce off the shade while fighting it to get into the Resting Grounds quickly) and knowing exactly how to move to get through areas as fast as possible. For [2], I'm honestly not sure what is even going on for a lot of it. I think the speedrunner gets outside the level geometry by exploiting a bug with how loading works?

[1] https://www.youtube.com/watch?v=FFZy2gtwpI4

[2] https://www.youtube.com/watch?v=SAw-_uYhAlU

[3] https://www.youtube.com/watch?v=MCOmg5kpCM8

Playing the meta is part of the game. If you're not loading in other code or using 3rd party assistance, it's fair.

Understanding the buy strategy in CSGO doesn't make you a "cheater"; it means you understand how to stretch the game to its limits.

Mike Vrabel taking penalties to burn the clock might seem unfair, but he was just leveraging the rules as written to improve his chances of winning.

> Understanding the buy strategy in CSGO doesn't make you a "cheater"; it means you understand how to stretch the game to its limits.

the economy in cs is a core mechanic of the game. I don't know of any buy strategies that are even close to considered "exploits".

a better example would be illegal boosts. in the old days, there were many OP positions you could get into by boosting on top of other players (or even throwables in midair). each competitive league would have its own lists of illegal boost spots for each map. boosting on throwables was almost always illegal. you could also do stuff like defuse through walls, which was always illegal.

bunnyhopping was something of a grey area. some leagues allowed it, while others didn't. it was eventually patched out of the default game settings behind an svar.

allowing literally anything that didn't involve loading external code would have severely broken cs:source. there were even ways of replacing assets that would effectively give you wallhacks without injecting any code. these decisions always come down to "how much does this deviate from the balance designed by the developer?" and most importantly "is the game still fun with this behavior?"

All good points, but additionally: sometimes it's good to wait and see where a glitch takes a game, rather than banning it straight away because it 'breaks' the version of the game that everyone has been playing so far.

A classic example is the Quake series, where the movement mechanics that gave the games lasting appeal were all originally accidental. (And Carmack even briefly patched strafe-jumping out of Q3, before quickly reverting -- but only because his fix caused other problems: https://www.rockpapershotgun.com/2014/09/02/quake-3-john-car...)

David Sirlin also writes well about this sort of thing (with more of a focus on 'cheap' use of intended mechanics), though his tone could be offputting to people who think hyper-competitive videogaming is a bit silly: http://www.sirlin.net/articles/playing-to-win

I totally agree. "skiing" in tribes is another classic example. fun mechanics take priority over everything. after all, it's a game!

as an aside, strafe jumping is pretty similar to bunnyhopping in cs, probably because the source/goldsrc engine had its roots in quake code. I brought it up as a grey area. it's a fun mechanic, but timing is really important on cs maps, and it definitely breaks the balance on some of them. personally I think it was a mistake for valve to patch it out of the vanilla game (leagues, 3rd party servers, and matchmaking could have just disabled it through the svar), but that's all history now.

my main point is that the mere fact that games take place in virtual environments doesn't make whatever the client/server permits the ultimate law of the land. just like in real life, the rules of the game are the intersection of what's possible and what's fun.

> Mike Vrabel taking penalties to burn the clock might seem unfair

Certainly not unfair to do it against Bill Belichick, who did almost the exact same thing earlier in the season--first time I'd ever seen a delay of game or false start penalty declined.

For some extra context here, the "100%/any%" thing comes from Metroid, which would give you a percentage score at the end of the game. So "any%" came to mean "literally anything goes, get to the end of the game." This culture has carried over to games that don't explicitly tell you how much of the game you've completed.

Sorry for the downvotes. For the speedrunning community, somebody pointing out that using glitches during a speedrun is cheating is a bit like making an obvious joke about somebody's last name. Not the first time they've heard it, and won't be the last :)

I only pointed it out because I wanted to understand the reasons, especially when world records are being set. However, I accept your apology and reasoning for downvoting me, and I would never point it out during someone's speedrun since I would just want to enjoy their achievements.

Ex-Game dev here: shipped many titles on console (AAA and some zzz titles).

When it comes time to ship a game, the crunch, pressure, and overall stress are through the roof. Almost anything goes at that point: duct tape, hot glue, bobby pins, and toad spit. And if you disagree with management over these 'not so best practices' (in order to ship faster...), they will find someone else to do it and you could be out of a job.

That being said, I would not trade my time in that industry for anything else. It's basically the BUD/S of programming (of course I'm biased).

Good times!

I used to work in a large FB/Mobile game studio (one of the largest) and was on a team that launched a certain FB flash-based game. It was a project that was on "crunch" from beginning to launch (about 6-9 months). Literally 3 days before launch we found a severe flaw in our client-server communication layer that didn't do what we needed (this was almost a decade ago; I no longer remember what exactly the issue was). I spent an entire day heads-down rewriting the whole client-server communication architecture. I've been in non-game software dev jobs since that company, and can't imagine something like this happening in any other company/industry, even in aggressive, scrappy startups. Duct tape and hot glue are pretty much the norm in the game industry. Like you said, I also wouldn't trade that experience for anything else, but I am also never going back to it.

If anyone wants a taste of what it's like to start with a clean code base that sharply turns spaghetti when the deadline gets close, give Ludum Dare a try sometime! By the end of the 48h you're forced to choose between growing the project sustainably vs. implementing as many core features as you can before the clock runs out. Write-only programming!

it usually makes sense to start hacking and slashing towards shipping anyway, since 95% of games have a shelf life and aren't maintained or extended much beyond the original release. By the time the next game starts production, there are probably major engine and platform upgrades to contend with and a different, non-overlapping set of features to support, so why bother with excessively clean, well-architected code that won't be reused? (Of course, this is somewhat of a self-fulfilling prophecy: the state of the code base influences business decisions about what will be reused and what the next game will be.)

What does BUDs stand for?


Basic Underwater Demolition/SEAL (BUD/S) Training

I enjoy playing games, but have long felt conflicted because of the studio practices so many AAA shops have (i.e. long hours and lack of job security). I have quite a bit of respect for studios like Klei (https://www.polygon.com/features/2013/5/29/4362838/the-birth...) because of this. I wish more studios would spearhead cultural change like this, instead of waiting until they are forced by employees.

What do you do now? I like clean code but also understand the need to produce results. I get pleasure from creating clean, readable, commented, and maintainable code. However, I understand my boss mostly doesn't care; he just wants a working product that can improve the efficiency of our business.

Example: Product which ran part of our business for the last 15 years, is the worst code I have seen in my entire life. I have seen better code written in a state university. However, it did its job and generated a lot of money in the process.

I think delivering glued/hacked code happens in many places.

My favorite insight on clean code is that the metaphor of "technical debt" is incredibly deep. It's not just "we skipped X hours of cleanup" - you can meaningfully discuss the purpose and size of the loan, interest rate, payment plan, and more.

From that viewpoint, there are basically three happy cases for paying tech debt. You can treat it like a paid-off credit card, releasing code today and then paying off the principal tomorrow before you incur any interest. You can use it like a home loan, accepting that you can afford long-term interest more easily than committing all the time upfront. Or you can use it almost literally as a business loan, releasing something ugly which makes the money you'll need for services/salaries/etc to pay off the debt.

In practice, most companies accrue tech debt like credit card debt or back taxes. Either they pay a fortune in interest (i.e. maintenance, dev ramp-up time, and slowed development) so they can't afford to work down the principal, or they ignore the entire problem until it grows way more expensive.

The upside is that there's not much long-term cost to declaring tech bankruptcy. If something can limp along until it's replaced, all the work you've been putting off can be skipped completely. Hence the godawful state of videogame code: people aren't necessarily worse about hacking things together, but they're usually coding with a clear ship date in mind, so they don't worry so much about feature requests, training new devs, or anything else long-term.

(Sometimes this blows up, like when a studio wants to make a sequel to a hit game and discovers everything they've built is unusable. And post-release patching has raised the lifespan of game code a lot from the days when you pressed a master CD and walked away. But all of that is speculative: if you don't ship, or don't sell well, the code is dead anyway.)

I run a small MMORPG and I'm considering open-sourcing the whole thing but the code is a complete mess. The source code of VVVVVV is a work of art in comparison.

What holds me back: 1. I'm ashamed to reveal the monstrosity; 2. I'm afraid the code is too messy for anyone to be able to make contributions; 3. it will make it much easier for hackers to find exploits.

It's a shame because I think the game would be really cool as an open source project.

Most games' source code is a real mess. And that's ok. Don't be ashamed.

When I find an open sourced game engine that has been patched and improved by the community, ported to OpenBSD, made run smoother on my hardware, etcetra., I don't think "what a shame the code is so messy." I think "I'm so glad they released the source code and allowed this to happen!"

It's very sad to see so many games that I'd potentially like but then I run into little issues that nobody is ever going to fix without the source (or a complete remake, as in the case of e.g. OpenMW, but these projects are not very common and few of them ever mature).

Game developers often think of gamers as very entitled people, but the flip side is that gamers are usually left on their own with their problems and aren't given the tools to fix them, while the developer is too big or too busy (or both) to care.

Messy code that produces a valuable product is better than beautiful code that does nothing.

Or worse, the pursuit of beautiful, well-tested, aesthetically-pleasing code that is never finished and never shipped.

That's true. Except when interviewers see your profile and suddenly care a lot about the hacky code you wrote at 3am on a Saturday for fun. sigh...

Imagine how many billion-dollar ideas are just sitting on people's hard drives.

True. It's a balance that we as developers must always seek; some people say that premature optimization is evil. Premature code refactoring can be just as damaging.

Definitely open source it! The world needs more open source games!

I was working on something similar (small mmo) a few years back and would have killed for at least one example that had been shared. Part of this was to see how they did gongs, part of it (more importantly to me) was to know that there were other crazy people out there doing the same thing!!

> Part of this was to see how they did gongs

What is a "gong" in this case? An actual (in-game) gong? :)

Could you take pride in rewriting the code? Maybe as an open source adventure with others?

Going through the shame of showing it off is more valuable on its own merits than leaving it as is, regardless of the code quality or social outcomes.

I believe that as code "writers", we tend to have a bias that code should be well-written. And I don't think it is that important.

If you are working in a global company with a distributed workforce, well-written code is mandatory because it is a way to communicate and to maintain the code in the long term. Even more so if this is a critical code that could kill.

But if you started as a lone developer on a hobby game project, the achievement is not the code quality but the product as a whole that you managed to ship and get users or customers for.

So I don't think there is anything inherently shameful in writing bad code; it highly depends on the context.

The shame is a personal thing. Being in a community of people that value high quality implementations or having high personal standards for work is a good driver for code shame. It's a choice to go through that shame. Code is a neutral medium.

Well, there should be a bit of shame in writing code that's buggy or not working at all...

Of course if you avoid that while still writing a mess, it's mostly fine.

If I could upvote this x100, I would.

I'm curious: Are you running it as full-time, or a side project? How much time does it take for maintenance in a month? Can you share a link?

Side project but will actually go almost full-time indie developer now in February and continue my regular job as a part-time consultant. Really stoked about it!

It doesn't require much maintenance and I'm running it on the lowest tier Digital Ocean droplet so almost free. In December I didn't touch it at all while having a break between patches.

Here you go: https://canvaslegacy.com/

A player has painted the whole starting mine yellow, so it looks like a mess in the newbie area, but I will address that in the next patch. :) Also, the mobile version is severely lacking, so desktop is highly recommended (I actually just wrapped up the new mobile client but haven't released it yet).

Thanks, will check it out!

Perhaps consider opening it up in a limited fashion to start? See if you can find some vetted contributors first, and then work towards eventually open sourcing the whole thing once you feel it's in a better place and it's been (sort of) peer-reviewed for bugs etc..

There are plenty of people looking to get into the games industry, and working on a shipped/existing project can be much less daunting than building your own thing. It might take a bit of time but I'm sure you could find some collaborators, maybe through game jams or meetups?

Also, everyone writes messy code. I wouldn't stress too much about that part.

p.s. Congrats on shipping it! :-)

Everyone knows this; you shouldn't be ashamed.

No one will care (that it's messy). This is a common reason people don't open source software, and it doesn't matter.

The fact you have it out there and shipped is what matters.

Anyone working in the industry knows that the fact that software works at all is flabbergasting.

As a counterpoint, the Quake -> Quake 2 -> Quake 3 original open source releases are remarkably clean. Quake 2 in particular is very minimalist and the game was developed in something like 10 months.

Granted, it's not really fair to compare anyone to the id Software team of the time. But the requirements of the Quake series, basically a BSP renderer that is easy to modify and extend, probably force the code to be extensible and clean.

Also, Carmack had a philosophy of rewriting things instead of just adding code.

The fast inverse square root algorithm made famous by Quake 3's lighting code is the definition of elegant code


It's EFFICIENT, but it's the opposite of elegant. If you looked at the code you would have no idea what it is doing or meant to do.

Elegant code should be simple, readable, and apply some leverage to make a difficult problem more clear.
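For reference, here is the widely circulated Quake III routine transliterated into Python (bit-level float reinterpretation via struct), which makes the "no idea what it's doing" point vividly:

```python
import struct

def fast_inv_sqrt(x):
    """The Quake III 0x5f3759df trick: approximate 1/sqrt(x)."""
    # Reinterpret the float's bits as a 32-bit integer...
    i = struct.unpack('>i', struct.pack('>f', x))[0]
    # ...do "magic" integer arithmetic on the exponent/mantissa...
    i = 0x5f3759df - (i >> 1)
    # ...and reinterpret back as a float.
    y = struct.unpack('>f', struct.pack('>i', i))[0]
    # One Newton-Raphson iteration tightens the estimate.
    return y * (1.5 - 0.5 * x * y * y)
```

After the single Newton iteration, fast_inv_sqrt(4.0) lands within a fraction of a percent of the true 0.5.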

It was a very different time with serious limits on graphics, memory and disk space.

Going back even further some of the early 8bit games were engineering marvels where the only hacks were to squeeze out every last bit of memory.

Engines (Doom, Quake) need to be much cleaner and more consistent than (one-off) games.

Mind you, both reused many ideas from previous ones: Doom reused Wolfenstein 3D, and Quake reused pieces from Doom (the WAD format, for example).

The engines made for the games were indeed intended to be one offs.

Reuse of design and formats does not imply reuse of code.

This is very different from new "universal" engines like Unity, Unreal Engine, Ogre, Frostbite, Dawn Engine or Unigine. Those were designed to be separate from games.

Sometimes the best code comes from making something that's meant to be temporary, replaced or duplicated instead of reused.

True; I guess id had this internal engine strategy, or it grew out of the culture there naturally? I read the Doom book but don’t remember if that was explained. I have the Doom black book but have not read it yet.

I'd suggest Michael Abrash's Graphics Programming Black Book, which I recently read. He wrote it while working for id, and there are lots of little bits of gold in there. The PDF is free online. Note: it's very big.

I did read that too long ago; will reread: thanks!

A friend of mine in gaming, though, had an insight I found illuminating. The difference with games is that there's often an element of "one and done": unlike much of web development, starting from scratch for the next generation is more of a given. There is less need for long-term support because games are expected to meet their twilight much sooner. With web development, however, it is a regular risk assessment, and there must be a true meeting of minds to decide that a rewrite is worth it.

The product I’ve been building for 15 years evolved from a 32-bit desktop app (with dependencies from the early 90s) into a heavier 64-bit desktop app 10 years (hundreds of man-years) later, and further into cloud services that weren’t even a term when development started.

When you build something that will have new requirements in ten years, implemented by different developers, then you can motivate spending time on code hygiene.

For a game anything that won’t be in the next game you can duct tape. The parts that will go into hundreds of games over a decade is the “engine” and I suppose in that you’d find the same sort of discipline and hygiene as in any other long term code base.

> The product I’ve been building for 15 years evolved from a 32-bit desktop app (with dependencies from the early 90s) into a heavier 64-bit desktop app 10 years (hundreds of man-years) later, and further into cloud services that weren’t even a term when development started.

My employer, as well. We have three teams whose main code bases were started around 2001, when the company was founded. One of these teams has code that started as Java applets and is now .NET Core. We have another ASP.NET system that they are currently migrating today. The other two teams, building Windows desktop applications, are now invested in web technologies, like browser engines and WebRTC, on our A/V side.

I cut my teeth in the embedded industry, and we were supporting at least one product I worked on there whose code had started in the early 90s.

It's incredible to think about. There was a nice thread yesterday that touched on this topic, as well. [0]

[0]: https://news.ycombinator.com/item?id=22042186

> There is less need for long-term support because games are expected to meet their twilight much sooner.

I think this is true for the majority of games.

But I wonder if it remains true for the most successful.

e.g. GTA V released in 2013. Apparently as of 2018 it saw $6B in revenue. Though, surely a lot of this has to do with the new content they introduce to incentivize microtransactions.

Fortnite was a paid early-access game in July 2017, but had a free-to-play battle-royale mode by September 2017. Minecraft came out of beta in 2011.

I can agree with "just ship it, quality of code is not a concern" for an initial release. I wonder how much bad code affects big and successful products, though.

e.g. Ubisoft's Ghost Recon Breakpoint was very buggy, even though it looked like an iteration on the Ghost Recon Wildlands game just two years earlier.

Games with online multiplayer are frequently closer to SaaS than fire-and-forget. This change started happening publicly in the mid-2000s, with companies like NCSoft describing games this way.

I miss the 'bigger and better each release', fire-and-forget model of the 90s, but it's long gone.

I'm not sure about the others but Minecraft's code was notoriously bad for years, but it didn't seem to hold it back.

Honestly I think the bugs _helped_ Minecraft—bugs in the terrain generation made for interesting scenery that was popular to share, and bugs in the multiplayer code helped build the community as people shared how to get things like minecart tracks to work.

A lot of games are receiving years or even decades of updates now though, such as minecraft, terraria, factorio, stardew valley, ...

For better or worse: it's nice to get new features in games you love, but it was also great when you knew a game was "it"; when you finished, it was done, and there were no mandatory updates.

Minecraft was on the cusp of that change, and it's pretty telling how much of a hellscape the code was for years.

I mean I still buy and play games that were made 20+ years ago. Shame they have so many bugs we can't really fix.

I don't buy websites that were made 20 years ago. Almost every commercial site has seen multiple rewrites in that timespan.

For many old games, it's not even certain whether the source code even exists any longer. I'm always impressed with the sometimes herculean efforts that fan communities will achieve to patch and fix old abandonware games.

IMHO, this is a failure of the copyright model. All information required to build software should be required to be escrowed at release.

The thing is, when a game is shipped, it won't be updated for more than 5 years usually (I think 5 years is pretty long for games actually).

But I often have to work on website code made 10 years ago. The fact that it's still used does not matter much. The interesting difference here is how long the code will be maintained.

I think code quality is important, and big studios make a buck on having clean and workable code they keep improving over time.

That being said, and especially on indie games like VVVVVV, hammering in a cool little detail gives the final product much more value than having clean code.

That's why big engines with mature coding patterns sometimes are not the way to go to make a cool little thing. Making novel mechanics (e.g. time reversal in Braid, or the super tight controls of Super Meat Boy) with an existing engine, while possible, would probably be a PITA.

The "hackability" of the engine you use can allow you to be more creative, even if a bit messy to maintain.

Back in the days of flash, I'd probably use a simple display list engine (flixel comes to mind) to do the heavy lifting, and hack away with those basic building blocks.

Nowadays, I was surprised by the relative hackability you get with godot[0], while still being able to tap into what you can call a mature engine. If you like to make small games, you owe it to yourself to check it out.

[0]: https://godotengine.org/

Writing hacky code can be liberating though. I really enjoyed hacking together things in the pico-8 last year. Globals? Fine. Neat function names? Nah. Variable names? 'a' will do.

Wow, a 3,440-line switch statement for processing game state!


An explicit state machine is better than an implicit one. And every program is one or the other.

4 KLOC for a big chunk of game logic is rather frugal.

The main problems, plainly, are not enough names (for states, flags, and triggers) and not having split the text off into some central repository (which would make translation easier).

An orthodox version of the state machine would execute code while changing states rather than within any given state.

The big switch statement is a good design with mediocre implementation. (Compare with Sierra's AGI LOGIC code: https://wiki.scummvm.org/index.php?title=AGI/Specifications/... )

The giant switch statement doesn't bother me that much. It's essentially just a big table at that point.

What would drive me nuts is the lack of symbols for each case. At least there's a comment for many of them.

And it works! I learned a lot about avoiding this kind of magic from this: https://gameprogrammingpatterns.com/state.html

For me - and I mostly do frontend apps now - I always advise going for the state pattern instead of complex if/thens, thanks to the aforementioned article.
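A toy Python version of that state pattern, loosely following the article's ducking example (class and event names invented): each state object owns its own transitions, so the nested if/thens disappear.

```python
class State:
    def handle(self, event: str) -> "State":
        return self  # unknown events leave the state unchanged

class Standing(State):
    def handle(self, event):
        return Ducking() if event == "press_down" else self

class Ducking(State):
    def handle(self, event):
        return Standing() if event == "release_down" else self

# The caller just forwards events; no branching on a status flag.
state = Standing().handle("press_down")  # now a Ducking instance
```

The caller never inspects a mode variable; it asks the current state to produce the next one.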

Does Redux help with things like this?

If you have enough discipline, sure. Finite state machine tools like XState are more strict about this.

I'm not at all accustomed to game design or game code. How is game state normally called upon and saved?

When you are making a game, priority number one is always that you are indeed making a game, and not how.

(substitute "game" for anything you want to do)

I love elegant code, but in the end it's never _more_ important than the game itself.

Sure this goes if you're a one man team. But the more people share hacky code it quickly can turn into a nightmare to work with.

This is very true. I created an open source framework to help developers build scalable real-time systems and I tried to focus on multiplayer games at one point; I thought it would be a good idea to ride the wave of web-based multiplayer .io games by targeting those developers but I ended up realizing that they're better off just using raw WebSockets. My framework could only help in terms of producing the initial prototype.

The profit margins on ad-sponsored web-based multiplayer games are paper thin. The developers don't care about code structure or cleanliness at all. If someone can make fugly code that performs 10% better, that can make the difference between a profit and a loss, or it can mean doubling earnings from the game.

This is very different from most software businesses where even a solution that performs 10x worse is still acceptable if it makes for cleaner code that is easier to maintain and can handle changing requirements better.

Micro-optimizations and clean code are mutually exclusive IMO. For example, people may be tempted to send JSON objects with nice descriptive properties like 'direction', 'keyCode', etc., but in the end it's faster to just send a raw string or binary packet without any property names: pass raw integers directly in a certain order and have the receiver decode the message assuming that order. This is extremely inflexible at the protocol level (i.e. you can't easily add new properties later or make some of them optional without breaking the current protocol), but it performs really well.
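As a rough illustration of that trade-off (field names invented), Python's json and struct modules show it in a few lines:

```python
import json
import struct

# Hypothetical movement packet: player id, key code, direction.
msg = {"playerId": 7, "keyCode": 38, "direction": 1}

verbose = json.dumps(msg).encode()      # 46 bytes, self-describing
packed = struct.pack("!HBb", 7, 38, 1)  # 4 bytes, fixed field order

# Adding an optional field later is trivial for the JSON version;
# for the packed version it breaks every existing decoder, because
# the receiver must unpack with the exact same format string:
player_id, key_code, direction = struct.unpack("!HBb", packed)
```

An order of magnitude less bandwidth, at the price of freezing the field order into every client.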

Games, websites, and everything else - both Apple [1] and Dropbox [2] got passwords wrong in the last decade. I am in awe of the NASA programmers who have virtually no bugs [3].

[1] https://www.theguardian.com/technology/2017/nov/29/macos-hig...

[2] https://www.cnet.com/news/dropbox-confirms-security-glitch-n...

[3] https://www.fastcompany.com/28121/they-write-right-stuff

I think you may not realize how many bugs NASA programmers make. For instance, see the fairly simple mistake made on the Deep Impact spacecraft:


There are no doubt more that don't cause the loss of an entire spacecraft.

Every time I see "Deep Impact" in the context of NASA I end up thinking of the Mars Climate Orbiter, where, due to a unit-conversion issue between metric and imperial units, they turned it into the most expensive lawn dart ever thrown at another planet.

I'm sure there are oodles of creators out there who hold back their source code because they don't want to deal with being shamed. We should be using these opportunities to learn and grow together, not as a sounding board to feel good about how much better we are than someone else.

Someone posted this talk by Jonathan Blow on a story a week or two ago and it seems worth sharing here:


I've never thought about it, but it makes a lot of sense. Just thinking about the few times I had to write some hacky code to get something to work, and given the complexity of video games, it is kind of expected.

This is what made the 1990s-era games so full of non-critical bugs AKA "glitches".

The ceiling of complexity was raised waaaay over that of the previous generation of tech, but the techniques used by developers didn't advance nearly as far.

So you have a lot of "if this value == this number, load this scene" (OoT), or "if this value goes below zero, wrap it around to the highest possible value of the range" (Civ, Nuclear Gandhi).
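The Gandhi story (which Sid Meier has since disputed, for what it's worth) describes an unsigned-byte underflow; the mechanic itself is easy to sketch:

```python
# Aggression stored in an unsigned 8-bit value: subtracting past zero
# wraps around to the top of the range instead of clamping at zero.
def lower_aggression(aggression: int, amount: int) -> int:
    return (aggression - amount) % 256  # emulate uint8 wraparound

lower_aggression(10, 2)  # 8, as intended
lower_aggression(1, 2)   # 255: the pacifist becomes maximally hostile
```

One missing bounds check, and the friendliest leader in the game becomes the most warlike.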

Now, bugs in big games tend to be critical crashes, because the tooling and techniques have standardized and caught up to the complexity ceiling.

It's interesting to see how duct-taping a game inside out can result in absolute success once you ship something valuable. It can be really thrilling, and you can get away with it especially if you're working alone, which for me is the best kind of work. It's another story entirely if you're on a team, coding together a product that must be maintained and improved for years. I work on both sides, and I can say that I only have real fun when I'm coding alone, though learning to work in a team is a great skill as well. I'm especially careful with code architecture and readability, but sometimes when you're in the flow, the zone, the code just flows as well.

So is almost every other piece of software. I would like to see an analysis of commercial closed-source software: how many of them use good practices, have clean code, etc.? Not counting the ones that just tick the "we use TDD" box on the CIO's monthly questionnaire. Maybe I just meet the wrong companies to work with or integrate with, but in my experience it is not very far from 0% (though it is >0, luckily). I met one company last week who do everything right, but only for projects over 500k, where over 10% goes into those practices; otherwise there is simply no money for the overhead...

The code sample they have there that is "messy" doesn't look that bad to me. The reality is that almost all the code that runs lots of important things is complete spaghetti.

> “Games aren’t just an ordinary piece of software, they are a complex beast that require many different disciplines to successfully ship, and often on timelines that require sacrifices to be made,” said game developer ...

Is this any different from any other type of software? What is this "ordinary" software that doesn't require different disciplines, or have timelines?

It's not the most complex or hardest kind of software, but it does have special concerns.

Dates are huge; you have to have a good E3 demo, and you (often) have to ship in time for Christmas. These dates cannot slip.

There's also a lot more art / interfacing with artists / art tooling and collaboration than a typical software project.

The real differentiation of games from most other software is just how little correctness actually matters. It arguably doesn't matter at all, as long as it doesn't produce feel-bad moments, lost items, or lost progress.

If the AIs of today make a heap of code (if you say it's data, I say data is code is data) that is really spaghetti code but happens to seemingly achieve a goal, maybe game developers are already a bit like that ;)

VVVVVV's code is still better than the source code of Descent - holy hell, was that codebase a mess in areas, and virtually no comments in the complex parts of the 3D engine...

change games to software


I wish people wouldn’t mention programming and computer science in the same essay, unless it’s something very technical such as discussing synchronization algorithms.

Most programming isn’t science. Most programming isn’t even engineering. Most programming is contracting or DIY tinkering.

We use ridiculous metaphors like constructing an airport, when most programmers are tradespeople working on renovations.

Have you ever inspected the work of a tract house builder? It’s awful and usually plays fast and loose with building codes.

When an indie teaches themselves to code, we get the same result as a homeowner teaching themselves to renovate. That’s not a bad thing, but nobody swapping ordinary outlets for outlets with USB-C ports is thinking about Maxwell’s equations.

> Have you ever inspected the work of a tract house builder? It’s awful and usually plays fast and loose with building codes.

This is a great metaphor. I work in construction management, and the quality difference between non-union commercial and union commercial firms (let alone a non-union resi contractor) is large enough that a lot of large commercial buildings have union contractor only exclusions (which is good for me, and better for the union tradespeople that perform the work)

Programming is far more trade-like than most programmers would like to admit..


Actually most programmers I know are vastly intellectually overqualified for their jobs. We are talking like science PhDs.

It's not "tradespeople working on renovations" it's more like rocket surgeons making mud huts.

That's probably selection bias due to the circles you're part of. Most of the people I know have at least a master's degree but we can agree that I can't extrapolate based on that data.

I'm sure only a small minority of the official ~25,000,000 programmers in the world are science PhDs and rocket surgeons (I expect the unofficial number of programmers is even bigger). I have my doubts that those dime-a-dozen mobile apps or websites are all made by scientists.

Sounds like they were misled as to what their career prerequisites were.

Or we're just churning out too many PhDs.

Specifically, the job requirements are incorrect, which big companies have figured out; they are hiring for cheap in China and India.

Is that really what's happening? The way I see it, the old guys, without even degrees, worked on higher-quality code than their younger counterparts with increasing levels of education.

I think the issue is probably touched on by something like "Bullshit Jobs" by David Graeber.

I would not call that "overqualified," just "overeducated."

It turns out the main requirements of the job are not education.

The field is called computer "science" and maybe that creates expectations.

But from experience I know that almost all engineering professions put on their pants one leg at a time.

There is certainly a large difference between theoretical and applied physicists, and computer science lacks this distinction; but engineering is often about cutting corners where nobody takes a closer look.

If a colleague of mine calculates some optics, he puts some numbers in a special software and waits a few hours.

There is a lot of mundane work until there is a problem where you can apply computer science. Programming beyond the very basics isn't explicitly taught as part of the curriculum. I think it is just a matter of experience and of having understood the fundamentals at least at one point in your life.

And being surprised by the lacking code quality in "professional" environments is certainly a regular experience.

That aside, nearly any modern game is a fairly complex construct.

I know some self-taught programmers that certainly eclipsed the "hobby" phase.

As someone that's gotten their degrees in Electrical & Computer Engineering I can whole-heartedly agree that this extends to even most engineering degrees; it's why we have EIT & PE grades, and the PE needs to sign off on anything that the company could get sued over.

The PE is essentially the safety valve that has both the education (usually also a minimum of a master's degree) and the experience (at least 5 years in industry) to say "this design isn't hot garbage." And even with that safety valve in place we get unmitigated disasters like the Tacoma Narrows Bridge or the Challenger explosion - which coincidentally are case studies in almost every engineering ethics class and at least one of our design classes at the undergraduate level. And that's just covering the big disasters; there are hundreds of thousands of projects out there that are also held together with duct tape and baling twine that passed a PE's stamp of approval.

Working for the DoD I saw all sorts of hacks from the engineering department to get custom hardware working to replace 60-year-old COTS products that have been out of production for 30 years but are necessary for things like missile and torpedo guidance.

I wish the following quote was etched in stone, then the stone was loaded into a Trebuchet, and then said Trebuchet was used to lob the stone into the offices of hiring managers worldwide:

“Computer Science is no more about computers, than Astronomy is about telescopes.”

(Dijkstra didn’t originate this. A longer and more interesting version was said by Hal Abelson, and a shorter version by Mike Fellows, who said it was a rallying cry of the period. https://en.wikiquote.org/wiki/Computer_science)

If we stop thinking that computer science is about computers, perhaps we’ll stop thinking a computer science degree is a requirement for clerking or practising a trade in the field, and universities will stop trying to provide a trade education with a gown, mortarboard, and “science” degree.

I don't see how that's a nit, the article is basically making exactly that point.

To be fair, everybody in web development knows the same thing is true for the internet.



I can't tell you how many times I have to reteach myself this lesson. I'm not advocating for writing sloppy/bad code on purpose but I fall into the trap way too often of trying to make my code a work of art or overly-clever. No one cares, don't spend twice the time to try to abstract something within an inch of its life. Write working code and move on. Once you've done something 4, 5, 10, 15 times then you can look for an abstraction.

Really, I think I use refactoring, or "stressing over the best way", as a way to procrastinate and tell myself I'm doing something useful, or dare I say noble. Of course, in practice it's often neither.

Agreed, I think it's important to bear in mind the "time value of code".

Some code may very well be "set and forget" - you write it once and no one ever looks at it again. If a project is allocated 100 days, why invest even one of those days refactoring and polishing code that really doesn't need it?

The difficulty, of course, is knowing which code is 'set and forget' and which isn't. I usually avoid refactoring until I hit the same "how does this work again?" wall at least three times.

> I'm not advocating for writing sloppy/bad code on purpose

Well, if the “ugly” code works the same as the “pretty” code, but the ugly code can be developed faster, then you should definitely prefer the ugly code: in fact, if this is the case, the ugly code should be considered state-of-the-art because it can be written faster.

Since we all accept that the pretty code takes longer, it had better pay for itself in some way. The theory is that the pretty code is easier to maintain over time: it takes longer up front, but when it comes time to make a change, since the code is so maintainable, the change is easier to make.

Although that makes intuitive sense, I can’t say that that’s been my experience; in 30 years of software development, I’ve never come across code that’s particularly easier to make changes to than any other. I’ve tried to take maintainability into account when I’m writing code myself, and I can’t even think of a time when my foresight saved me time and effort.

It doesn’t help much that none of us agree on what “good quality” code looks like: everybody seems to call all code “bad”. I’d like to see software development advance a bit in terms of professionalism, to where we at least agree on the principles of high-quality code, and the principles are objectively defensible in terms of the costs and benefits of following them.

Sounds more like too little care rather than too much. Of course it also depends on how much maintenance is going to occur on that code.

I think in production code one should already look for an abstraction if something is done twice. The goal is not to create a work of art or an abstraction that stands the test of time. It is just to remove some duplication, distribute code a little better over functions, give things names that are somewhat better and so on. If one does a bit of that every day one is never going to be in a horrible mess.

One problem is that people sometimes think that refactoring is all or nothing. You either refactor nothing or you go all the way to adhere exactly to design patterns. Neither of these two extremes produces very good code. The middle way is where the good code is.

I used to think that if something is done twice, it should be abstracted.

Now, I'm a bit more careful. There's a pretty famous quote by Sandi Metz: "Duplication is far cheaper than the wrong abstraction."


Make sure that you're picking the right way to think about the task at hand, rather than blindly following DRY.

There are times when even a single instance of a code call would be made clearer with abstraction, and there are times where having the same piece of code duplicated multiple times (or duplicated with one piece changed) is far clearer than trying to abstract it.
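A hypothetical sketch of that contrast (all names invented): the "wrong abstraction" usually looks like one function accumulating flags, while the duplicated-looking version keeps each case independently changeable.

```python
# The wrong abstraction: one function sprouting a parameter and a
# branch for every caller that was never really the same case.
def render_price(amount, is_sale=False, is_member=False):
    label = "SALE " if is_sale else ""
    if is_member:
        amount *= 0.9
    return f"{label}${amount:.2f}"

# The "duplicated" alternative: each caller's case stands alone and
# can change without auditing every flag combination.
def render_sale_price(amount):
    return f"SALE ${amount:.2f}"

def render_member_price(amount):
    return f"${amount * 0.9:.2f}"
```

The flag version looks DRYer, but every new requirement multiplies the combinations that have to keep working.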

This Reddit Comment also has an interesting take: https://www.reddit.com/r/programming/comments/5txp5t/duplica...

> The main purpose of abstractions is not to remove or reduce duplication, and not even to make code "reusable"; it is to make semantic patterns and assumptions explicit and, if possible, first-class.

The further comments provide more discussion.


I agree with the rest of your comment that refactoring and code-cleanup should be done in pieces and that, as with everything, striking the right balance is key.

If there is any article that I absolutely hate, it is 'Duplication is far cheaper than the wrong abstraction'. A somewhat minimal abstraction has very little chance of being wrong. And if it is wrong, is it really that difficult to know? It has a chance of needing some improvement, but what code does not? If one goes the full monty and introduces three design patterns every time two lines are duplicated, one will certainly end up with, perhaps, not so much the wrong abstraction as an overly convoluted one. This article is the excuse for programmers everywhere not to fix their messes. It is 100% the opposite of what programmers need to hear.

> It is 100% opposite to what programmers need to hear.

Maybe you know very different programmers than I have known, but if I had to compare which has been a bigger source of problems - missed abstractions, or abstractions that make things more difficult for no benefit - it's definitely the bad abstractions, almost every time.

Well, I worked for quite some time in a code base that was basically one big ball of missed abstractions. Maybe I was traumatized a bit by that. I have also seen wrong abstractions, but not because there should not have been an abstraction. If people had actually written abstractions that removed duplication, instead of the ones they half-understood from the design pattern book, they would have come up with better ones.

Duplication is in fact the best way to find the correct abstraction. The article makes this sound suspect, but it really is not. The whole literature of design patterns is basically unneeded: if one just follows the path of deduplication, one will discover all of them one by one, and one will have applied the right one in the right place.

Maybe there is some value in noticing that one still has to keep thinking and keep an eye on the likely future, and not do this blindly, but that is no more than a small footnote to the main message, which is that you will find all the right abstractions by removing duplication. It is quite fitting that in TDD the refactor step is often described as the removal of duplication.

You have not worked on a bad enough code base.

Duplication is a problem, as the different implementations inevitably drift and grow repeated bugs, but simple deduplication results in the rather well-known problem of ravioli code(): thousands of low-cohesion functions, which end up unreadable, thus bug-prone and slow to develop.

Use of the right patterns, or rather paradigms, reduces the amount of code in general, thus reducing duplication.

Wrong patterns are hard to actually change especially on change averse projects. The more widely used the wrong design is, the harder it is to change as the hacks on it multiply. Even worse if the wrong patterns (not code) are duplicated.

These require in-depth rewrites, to which bosses are usually allergic, and which are very hard to pull off on bigger teams too. Incredibly hard to coordinate.

-- () https://wiki.c2.com/?RavioliCode - can happen in functional and structural code too.

'You have not worked on bad enough code base.'

You know, it is a bit presumptuous to read a few sentences that someone wrote and then conclude a lot about what they know or do not know.

And I actually do know what ravioli code is. I think ravioli code is mostly a good thing. Also, the people on the c2.com page are not uniformly negative about it. Not all code should be ravioli code, but in a project with complex requirements there should be quite a bit of it. It is true that ravioli code is not easy to understand if you are a newcomer to a project, but really, if the context is 'a project with complex requirements', why would anyone think it is easy to get into, no matter how it is written?

Another thing is that ravioli code absolutely needs automated tests.

'This style maximizes maintainability by old hands at the expense of comprehensibility by newcomers.' Maintainability is exactly what I want maximized. It sounds a bit bad if there are no developers that are there for a long time. But in that case you are cooked anyway, I would say.

The main problem with the style is the lack of overarching structure and cohesion.

Preferable state is high internal cohesion and low coupling. Most code is opposite. Ravioli is when you trade high coupling without introducing cohesion, which is structure. Typical state after dumb refactor rather than rework.

Simple extraction of functions does not get you anything (if done well within a module, maybe increases cohesion), while making them reusable modules trades reduced cohesion for increased coupling, which is bad.

If you deduplicate too much you suddenly cannot change anything simply... As every place that got deduped is now coupled to one implementation. Once you need some special care, you get to replicate it again oftentimes.

Only true primitives are really worth it to not replicate and parts of code that won't change. (Ask the crystal ball again.)

See if deduplication gets you any of the useful high-level designs, like MVC, event-driven, reactive, or message-based. Without an overarching design, you end up with a mass of locally useful ones that together are incompatible, thus requiring lots of different, unique glue code.

The big mistake people make is to equate design patterns with code patterns, which is what the silly GoF book did a lot. For example, "fire-and-forget background parallel tasks" is a design pattern. Reactor (executing Strategies to deal with Events) is one, while Singleton or Context are not; Event also is not. Generally, anything that doesn't actually structure anything is a code pattern.

It is quite possible to get an MVC by deduplicating things. Imagine we have two tables that display some data by the same means, e.g., on a web page. This will be about the same code twice; for instance, both will iterate over rows and columns. Removing that duplication will give you an M and a V. Then you may also have to react to some events in about the same way. Remove that duplication and you have a C.
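As a toy sketch of that deduplication (data and names invented): the shared iteration becomes the V, and the plain data lists are the M.

```python
# Two tables that would otherwise duplicate the same render loop.
users = [("alice", 3), ("bob", 5)]           # one M: plain data
scores = [("level 1", 90), ("level 2", 72)]  # another M

# Deduplicating the row/column loop yields the V:
def render_table(rows):
    return "\n".join(f"{name}: {value}" for name, value in rows)

render_table(users)   # "alice: 3\nbob: 5"
render_table(scores)  # "level 1: 90\nlevel 2: 72"
```

Doing the same with the shared event handling would, as described, leave you with a C.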

Agree, though I'd add

s/many games/the world/g

And yet every time one starts a new project, we tell ourselves "this time I will do it better!"

I just started a new big project. Wish me luck. It replaces something held together with duct tape and string concatenation in 5000 line files.

To be fair, you probably do a bit better this time around. Most likely not on most counts, but you probably internalized a few mistakes from last time and improved on those aspects. And come next project, you'll improve on a few more.

You'll do great!

Web development is very different, I feel. Web devs iterate on their product a lot more. A game is shipped; there is nothing after it's shipped besides minor patches (if we ignore multiplayer online games).

Web developers constantly need to fulfill new requirements so having a good level of code quality is important.

You might think so, but it really depends on the culture and philosophy of the company.

"Let's rewrite this old messy app in 'modern' technology" - sure it seems cleaner at first, but when you actually reach parity with the old app, you are likely about as messy.

I tend to think that a lot of the messiness in business apps is more due to the complexity of the business rules than the structure of the application.

That's not true, I know of several projects that were very well written with cutting edge frameworks, elegant abstractions, full test suites and balanced 40 hour weeks. Of course none of them ever went live lol.

It has a light side, a dark side and it binds the whole universe together.

“Perfect is the enemy of the good” applies to computer programs just as much as anything else.

I recently found myself trying to describe modern software techniques to a layman, who is a carpenter.

"It's like this - you're hired to build a house. First, you have to go get someone motivated to harvest the raw materials for you - designs, logic, etc. - which will then be turned into the 'raw wood' that holds up the walls and keeps the roof on. Then, when that person is busy getting the materials cut, you start building the tools you know you're going to need to get the walls up and strung together. You don't have these tools yet, because you left all the previous ones you've worked with at a previous construction site. The reason for this is that you are going to use the tools to put the walls up, sure - but then you're going to glue all the tools in place to make sure the walls stay up. That glue is the most powerful stuff in the universe, but it will fail catastrophically if you don't put the tools at just the right angle in the glueball .."

Basically, you glue all the new tools together, cover them in wood, and leave them in place so that the thing doesn't fall over ...

For the sake of anyone who uses your software, I hope you're better at writing code than coming up with metaphors.

Exactly. That's software development for you. :P

I legitimately don't understand this metaphor at all

I think it's like,

1) Get a product owner to describe requirements (not sure why they are gathering raw mats instead of producing blueprints)

2) He tries to steal code from previous jobs to bring to new jobs, because he pretends that he "owns" it.

3) He can't steal code from previous jobs because he left them in the walls.

4) Javascript is glue.

Javascript: just, no. Never.

.. dunno where you got that Javascript snark from, nor did I even mention 'raw mats', but it seems you got the picture because you immediately added your own invented mess to the scene. Congratulations, you must be a software developer.

But why is he gluing the tools?

They need to be there to keep the walls from falling over. :P

I'm not sure about 'modern software techniques' but to me, you seem to be describing Minecraft. :D

Take bit of this, a bit of that, string them together to make a tool that will do one thing. Put that tool between two other parts, and glue it in place.

Eventually, you get the house built and - if you're good - it looks pretty good and it keeps the kids warm.

But don't, ever, look between the walls. You won't be happy with all the thorny and spiky bits that are glued in place. And yeah, there are quite a lot of hammers and drills and saws literally glued in place - don't touch a single one. They belong there.
