A common pattern in many speed runs is finding some glitch through a door, and then the game logic kicks in and says "you are past this door, so you must have got the key".
If you are curious, watch five or so minutes of this speedrun: https://youtu.be/0Ct8n1CClUM?t=3072 First, the player jumps through the game world, reaching a race he shouldn't be able to get to yet. Second, he glitches into the "solid" wall, which is just a thin wall around the racetrack.
None of that precluded it from being a great game, though.
I am having trouble imagining what sort of event handling bs this might be taking advantage of...
I am too vanilla a coder at this point in my life.
It's very common to constrain your central "game loop" for several reasons. A game like Hotline presumably has some basic loop like "process inputs, process events, update AI plans, process AI actions, redraw screen". When the logical tasks are easy (e.g. most of the AIs are dead and you're standing still), that's essentially just a busy-wait that redraws your screen as fast as it can. Constantly maxing system resources is obnoxious, and it can be jarring when things slow back down. 40FPS might look just fine, but you can still 'feel' the change when you ramp up and down from 60FPS, so it's nicer to just cap the whole thing at 40FPS. Mouse inputs probably don't adjust the cap itself, but to avoid laggy responses they're usually interrupts which might refresh the screen.
A fun aside: some games fill out the time until the next redraw with more "thinking time" instead of sleep(), which leads to bizarre behaviors like an AI that gets smarter when you turn down the graphics settings.
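The capped loop described above can be sketched in a few lines. This is a hypothetical, simplified illustration (the function names for the per-frame work are placeholders, and the 40 FPS cap is taken from the comment):

```python
import time

TARGET_FPS = 40
FRAME_TIME = 1.0 / TARGET_FPS  # 25 ms budget per frame

def game_loop(frames=3):
    """Toy capped loop: do the frame's work, then sleep off the leftover budget."""
    for _ in range(frames):
        start = time.monotonic()
        # process_inputs(); process_events(); update_ai(); redraw()
        # (stubbed out here -- in a real game this is the expensive part)
        elapsed = time.monotonic() - start
        if elapsed < FRAME_TIME:
            # Sleeping off the remainder, instead of spinning, is what
            # keeps CPU usage down between frames.
            time.sleep(FRAME_TIME - elapsed)

game_loop()
```

The "thinking time" variant from the aside would replace that `sleep()` with more AI work until the frame budget runs out.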
Does not work with pushing random keys only because those are buffered separately.
I recently watched the fallout anthology run and some of the things the speed run community have discovered are amazing.
I also don't understand how anyone can accept speedrunning through glitches as a world record, but I guess if everyone does the same run with similar or the same glitches then it's fair? Seems like it just becomes a race to know every glitch in the game.
EDIT: I just want to say thank you to the replies that explain the mechanics and categories of speedrunning.
Sometimes people make a "no major glitches" or "no OOB" category, where specific glitches are disallowed, but others are allowed.
Understanding the buy strategy in CSGO doesn't make you a "cheater", it means you understand how to stretch the game to its limits.
Mike Vrabel taking penalties to burn the clock might seem unfair, but he was just leveraging the rules as written to improve his chances of winning.
the economy in cs is a core mechanic of the game. I don't know of any buy strategies that are even close to considered "exploits".
a better example would be illegal boosts. in the old days, there were many OP positions you could get into by boosting on top of other players (or even throwables in midair). each competitive league would have its own lists of illegal boost spots for each map. boosting on throwables was almost always illegal. you could also do stuff like defuse through walls, which was always illegal.
bunnyhopping was something of a grey area. some leagues allowed it, while others didn't. it was eventually patched out of the default game settings behind an svar.
allowing literally anything that didn't involve loading external code would have severely broken cs:source. there were even ways of replacing assets that would effectively give you wallhacks without injecting any code. these decisions always come down to "how much does this deviate from the balance designed by the developer?" and most importantly "is the game still fun with this behavior?"
A classic example is the Quake series, where the movement mechanics that gave the games lasting appeal were all originally accidental. (And Carmack even briefly patched strafe-jumping out of Q3, before quickly reverting -- but only because his fix caused other problems: https://www.rockpapershotgun.com/2014/09/02/quake-3-john-car...)
David Sirlin also writes well about this sort of thing (with more of a focus on 'cheap' use of intended mechanics), though his tone could be offputting to people who think hyper-competitive videogaming is a bit silly: http://www.sirlin.net/articles/playing-to-win
as an aside, strafe jumping is pretty similar to bunnyhopping in cs, probably because the source/goldsrc engine had its roots in quake code. I brought it up as a grey area. it's a fun mechanic, but timing is really important on cs maps, and it definitely breaks the balance on some of them. personally I think it was a mistake for valve to patch it out of the vanilla game (leagues, 3rd party servers, and matchmaking could have just disabled it through the svar), but that's all history now.
my main point is that the mere fact that games take place in virtual environments doesn't make whatever the client/server permits the ultimate law of the land. just like in real life, the rules of the game are the intersection of what's possible and what's fun.
Certainly not unfair to do it against Bill Belichick, who did almost the exact same thing earlier in the season--first time I'd ever seen a delay of game or false start penalty declined.
When it comes time to ship a game, the crunch, pressure, and overall stress are through the roof. Almost anything goes at that point: duct tape, hot glue, bobby pins, and toad spit. And if you disagree with management over these 'not so best practices' (adopted in order to ship faster...), they will find someone else to do it and you could be out of a job.
That being said I would not trade my time in that industry for anything else. It's basically the BUDs of programming (of course I'm biased)
Basic Underwater Demolition/SEAL (BUD/S) Training
Example: the product that ran part of our business for the last 15 years is the worst code I have seen in my entire life. I have seen better code written at a state university. However, it did its job and generated a lot of money in the process.
I think delivering glued/hacked code happens in many places.
From that viewpoint, there are basically three happy cases for paying tech debt. You can treat it like a paid-off credit card, releasing code today and then paying off the principal tomorrow before you incur any interest. You can use it like a home loan, accepting that you can afford long-term interest more easily than committing all the time upfront. Or you can use it almost literally as a business loan, releasing something ugly which makes the money you'll need for services/salaries/etc to pay off the debt.
In practice, most companies accrue tech debt like credit card debt or back taxes. Either they pay a fortune in interest (i.e. maintenance, dev ramp-up time, and slowed development) so they can't afford to work down the principal, or they ignore the entire problem until it grows way more expensive.
The upside is that there's not much longterm cost to declare tech bankruptcy. If something can limp along until it's replaced, all the work you've been putting off can be skipped completely. Hence the godawful state of videogame code: people aren't necessarily worse about hacking things together, but they're usually coding with a clear ship date in mind, so they don't worry so much about feature requests, training new devs, or anything else long-term.
(Sometimes this blows up, like when a studio wants to make a sequel to a hit game and discovers everything they've built is unusable. And post-release patching has raised the lifespan of game code a lot from the days when you pressed a master CD and walked away. But all of that is speculative: if you don't ship, or don't sell well, the code is dead anyway.)
What holds me back:
1. I'm ashamed to reveal the monstrosity.
2. I'm afraid the code is too messy for anyone to be able to make contributions.
3. It will make it much easier for hackers to find exploits.
It's a shame because I think the game would be really cool as an open source project.
When I find an open sourced game engine that has been patched and improved by the community, ported to OpenBSD, made run smoother on my hardware, etcetra., I don't think "what a shame the code is so messy." I think "I'm so glad they released the source code and allowed this to happen!"
It's very sad to see so many games that I'd potentially like but then I run into little issues that nobody is ever going to fix without the source (or a complete remake, as in the case of e.g. OpenMW, but these projects are not very common and few of them ever mature).
Game developers often think of gamers as very entitled people, but the flip side is that gamers are usually left on their own with their problems and aren't given the tools to fix them, while the developer is too big or too busy (or both) to care.
What is a "gong" in this case? An actual (in-game) gong? :)
Going through the shame of showing it off is more valuable on its own merits than leaving it as is, regardless of the code quality or social outcomes.
If you are working in a global company with a distributed workforce, well-written code is mandatory because it is a way to communicate and to maintain the code in the long term. Even more so if this is a critical code that could kill.
But if you started as a lone developer for a game hobby project, the achievement is not about the code quality, but more about the product as a whole you managed to ship and have users or customers.
So I don't think there is anything inherently shameful in writing bad code; it highly depends on the context.
Of course if you avoid that while still writing a mess, it's mostly fine.
It doesn't require much maintenance, and I'm running it on the lowest tier Digital Ocean droplet, so it's almost free. In December I didn't touch it at all while taking a break between patches.
Here you go: https://canvaslegacy.com/
A player has painted the whole starting mine yellow, so it looks like a mess in the newbie area, but I will address that in the next patch. :) Also, the mobile version is severely lacking, so desktop is highly recommended (I actually just wrapped up the new mobile client but haven't released it yet).
There are plenty of people looking to get into the games industry, and working on a shipped/existing project can be much less daunting than building your own thing. It might take a bit of time but I'm sure you could find some collaborators, maybe through game jams or meetups?
Also, everyone writes messy code. I wouldn't stress too much about that part.
p.s. Congrats on shipping it! :-)
Granted, it's not really fair to compare anyone to the id Software team of the time. But the requirements of the Quake series, basically a BSP renderer that is easy to modify and extend, probably force it to be extensible and clean.
Elegant code should be simple, readable, and apply some leverage to make a difficult problem more clear.
Going back even further some of the early 8bit games were engineering marvels where the only hacks were to squeeze out every last bit of memory.
The engines made for the games were indeed intended to be one offs.
Reuse of design and formats does not imply reuse of code.
This is very different from new "universal" engines like Unity, Unreal Engine, Ogre, Frostbite, Dawn Engine or Unigine. Those were designed to be separate from games.
When you build something that will have new requirements in ten years, implemented by different developers, then you can justify spending time on code hygiene.
For a game anything that won’t be in the next game you can duct tape. The parts that will go into hundreds of games over a decade is the “engine” and I suppose in that you’d find the same sort of discipline and hygiene as in any other long term code base.
My employer, as well. We have three teams whose main code bases were started ~2001, when the company was founded. One of these teams has code that started as Java applets and is now .NET Core. We have another ASP.NET system that they are currently migrating today. The other two teams, building Windows desktop applications, are now invested in web technologies, like browser engines and WebRTC on our A/V side.
I cut my teeth in the embedded industry, and we were supporting at least one product that I recall working on there that had code that had started in the early 90s.
It's incredible to think about. There was a nice thread yesterday that touched on this topic, as well. 
I think this is true for the majority of games.
But I wonder if it remains true for the most successful.
e.g. GTA V released in 2013. Apparently as of 2018 it saw $6B in revenue. Though, surely a lot of this has to do with the new content they introduce to incentivize microtransactions.
Fortnite was a paid early-access game in July 2017, but had a free-to-play battle-royale mode by September 2017.
Minecraft came out of beta in 2011.
I can agree with "just ship it, quality of code is not a concern" for an initial release. I wonder how much bad code affects big and successful products, though.
e.g. Ubisoft's Ghost Recon Breakpoint was very buggy, even though it looked like an iteration on the Ghost Recon Wildlands game just two years earlier.
I miss the 'be bigger and better each release', fire-and-forget approach of the '90s, but it's long gone.
For better or worse, it's nice to get new features in games you love, but those were also great times: you knew when a game was "it". When you finished, it was done, and there were no mandatory updates.
I don't buy websites that were made 20 years ago. Almost every commercial site has seen multiple rewrites in that timespan.
But I often have to work on website code made 10 years ago.
The fact that it's still used doesn't matter much. The interesting difference here is how long the code will be maintained.
That being said, and especially on indie games like VVVVVV, hammering in a cool little detail gives the final product much more value than having clean code.
That's why big engines with mature coding patterns sometimes are not the way to go to make a cool little thing. Making novel mechanics (e.g. time reversal in Braid, or the super tight controls of Super Meat Boy) with an existing engine, while possible, would probably be a PITA.
The "hackability" of the engine you use can allow you to be more creative, even if a bit messy to maintain.
Back in the days of flash, I'd probably use a simple display list engine (flixel comes to mind) to do the heavy lifting, and hack away with those basic building blocks.
Nowadays, I was surprised by the relative hackability you get with godot, while still being able to tap into what you can call a mature engine. If you like to make small games, you owe it to yourself to check it out.
4 KLOC for a big chunk of the game logic is rather frugal.
The plainest problems are not enough names (for states, flags, and triggers) and not having split the text off into some central repository (which would make translation easier).
An orthodox version of the state machine would have code execute while you're changing states, rather than within any given state.
The big switch statement is a good design with mediocre implementation. (Compare with Sierra's AGI LOGIC code: https://wiki.scummvm.org/index.php?title=AGI/Specifications/... )
What would drive me nuts is the lack of symbols for each case. At least there's a comment for many of them.
For me - and I mostly do frontend apps now - I always advise going for the state pattern instead of complex if/thens. Thanks to the aforementioned article.
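The contrast between the big switch and a table-driven state pattern can be sketched like this (hypothetical Python with made-up states; a full state pattern would use one class per state, but a transition table shows the same idea in miniature):

```python
# Big-switch style: every state's logic lives in one conditional ladder.
def handle_switch(state, event):
    if state == "IDLE":
        if event == "coin":
            return "READY"
    elif state == "READY":
        if event == "start":
            return "RUNNING"
    return state  # unknown event: stay put

# Table-driven style: each transition is a first-class entry,
# so adding a state doesn't grow a single giant function.
TRANSITIONS = {
    ("IDLE", "coin"): "READY",
    ("READY", "start"): "RUNNING",
}

def handle_table(state, event):
    # Unknown (state, event) pairs fall through to "stay in this state".
    return TRANSITIONS.get((state, event), state)
```

Both behave identically here; the difference shows up as the number of states and events grows.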
(substitute "game" for anything you want to do)
I love elegant code, but in the end it's never _more_ important than the game itself.
The profit margins on ad-sponsored web-based multiplayer games are paper thin. They don't care about code structure or cleanliness at all. If someone can write fugly code that performs 10% better, that can make the difference between a profit and a loss, or it can mean doubling the earnings from the game.
This is very different from most software businesses where even a solution that performs 10x worse is still acceptable if it makes for cleaner code that is easier to maintain and can handle changing requirements better.
Micro-optimizations and clean code are mutually exclusive IMO. For example, people may be tempted to send JSON objects with nice descriptive properties like 'direction', 'keyCode', etc... But in the end it's faster to just send a raw string or binary packet without any property names: pass raw integers directly in a fixed order, and the receiver decodes the message assuming that order. This is extremely inflexible at the protocol level (you can't easily add new properties later or make some of them optional without breaking the current protocol), but it performs really well.
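A small sketch of the trade-off (hypothetical Python; the field names and the uint8/uint8/uint16 layout are invented for illustration):

```python
import json
import struct

# JSON encoding: self-describing, but the field names travel in every packet.
msg = {"keyCode": 87, "direction": 2, "seq": 1042}
json_bytes = json.dumps(msg).encode()

# Binary encoding: fixed field order, no names -- the receiver must already
# know the layout. "!BBH" = network byte order, uint8, uint8, uint16.
packed = struct.pack("!BBH", 87, 2, 1042)

print(len(json_bytes))  # dozens of bytes
print(len(packed))      # 4 bytes

# Decoding relies entirely on the agreed order:
key_code, direction, seq = struct.unpack("!BBH", packed)
```

Adding an optional field to the JSON version is trivial; for the binary version it means bumping a protocol version and updating every decoder.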
There are no doubt more, that don't cause the loss of an entire spacecraft.
The ceiling of complexity was raised waaaay over that of the previous generation of tech, but the techniques used by developers didn't advance nearly as far.
So you have a lot of "if this value == this number, load this scene" (OoT), or "if this value is greater than this limit, cycle it back to the lowest possible value of the range" (Civ, Nuclear Gandhi).
Now, bugs in big games tend to be critical crashes, because the tooling and techniques have standardized and caught up to the complexity ceiling.
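The wraparound behavior in those older counters can be sketched like this (hypothetical Python; the 8-bit width is an assumption for illustration, and this only models the arithmetic, not any particular game's actual code):

```python
def decrement_u8(value, amount):
    """Unsigned 8-bit subtraction: values below zero wrap around to the top
    of the range, the way a fixed-width game counter would."""
    return (value - amount) % 256

# A stat of 1 reduced by 2 doesn't go to -1; it wraps to the maximum:
print(decrement_u8(1, 2))  # 255
```

The same mechanism in the other direction (incrementing past the maximum and cycling back to the minimum) is what the comment above describes.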
Is this any different from any other type of software? What is this "ordinary" software that doesn't require different disciplines, or have timelines?
Dates are huge; you have to have a good E3 demo, and you (often) have to ship in time for Christmas. These dates cannot slip.
There's also a lot more art / interfacing with artists / art tooling and collaboration than a typical software project.
I wish people wouldn’t mention programming and computer science in the same essay, unless it’s something very technical such as discussing synchronization algorithms.
Most programming isn’t science. Most programming isn’t even engineering. Most programming is contracting or DIY tinkering.
We use ridiculous metaphors like constructing an airport, when most programmers are tradespeople working on renovations.
Have you ever inspected the work of a tract house builder? It’s awful and usually plays fast and loose with building codes.
When an indie teaches themselves to code, we get the same result as a homeowner teaching themselves to renovate. That’s not a bad thing, but nobody swapping ordinary outlets for outlets with USB-C ports is thinking about Maxwell’s equations.
This is a great metaphor. I work in construction management, and the quality difference between non-union commercial and union commercial firms (let alone a non-union resi contractor) is large enough that a lot of large commercial buildings have union contractor only exclusions (which is good for me, and better for the union tradespeople that perform the work)
Programming is far more trade-like than most programmers would like to admit.
Actually most programmers I know are vastly intellectually overqualified for their jobs. We are talking like science PhDs.
It's not "tradespeople working on renovations" it's more like rocket surgeons making mud huts.
I'm sure only a small minority of the official ~25,000,000 programmers in the world are science PhDs and rocket surgeons (I expect the unofficial number of programmers is even bigger). I have my doubts that those dime-a-dozen mobile apps or websites are all made by scientists.
Or we're just churning out too many PhDs.
I think the issue is probably touched on by something like "Bullshit Jobs" by David Graeber.
It turns out the main requirements of the job are not education.
But from experience I know that almost all engineering professions put on their pants one leg at a time.
There is certainly a large difference between theoretical and applied physicists and computer science lacks this discrimination, but engineering is often about cutting corners where nobody takes a closer look.
If a colleague of mine calculates some optics, he puts some numbers in a special software and waits a few hours.
There is a lot of mundane work until there is a problem where you can apply computer science. Programming beside the very basics isn't explicitly taught as part of the curriculum. I think it is just a matter of experience and having understood the fundamentals at least at one point in your life.
And being surprised by the lacking code quality in "professional" environments is certainly a regular experience.
That aside, nearly any modern game is a fairly complex construct.
I know some self-taught programmers that certainly eclipsed the "hobby" phase.
The PE is essentially the safety valve that has both the education (usually also needs a minimum of a Masters Degree) and the experience (at least 5 years in industry) to say "this design isn't hot garbage." And even with that safety valve in place we get unmitigated disasters like the Tacoma Narrows Bridge or the Challenger explosion - which coincidentally are case studies in almost every engineering ethics class and at least one of our design classes at the undergraduate level. And that's just covering the big disasters, there's hundreds of thousands of projects out there that are also held together with duct-tape and bailing twine that passed a PE's stamp of approval.
Working for the DoD, I saw all sorts of hacks from the engineering department to get custom hardware working to replace 60-year-old COTS products that have been out of production for 30 years but are necessary for things like missile and torpedo guidance.
“Computer Science is no more about computers, than Astronomy is about telescopes.”
(Dijkstra didn’t originate this. A longer and more interesting version of this was said by Hal Abelson, and a shorter version by Mike Fellows, who said it was a rallying cry of the period. https://en.wikiquote.org/wiki/Computer_science)
If we stop thinking that computer science is about computers, perhaps we’ll stop thinking a computer science degree is a requirement for clerking or practising a trade in the field, and universities will stop trying to provide a trade education with a gown, a mortarboard, and a “science” degree.
I can't tell you how many times I have to reteach myself this lesson. I'm not advocating for writing sloppy/bad code on purpose but I fall into the trap way too often of trying to make my code a work of art or overly-clever. No one cares, don't spend twice the time to try to abstract something within an inch of its life. Write working code and move on. Once you've done something 4, 5, 10, 15 times then you can look for an abstraction.
Really, I think I use refactoring, or "stressing over the best way", as a way to procrastinate and tell myself I'm doing something useful, or dare I say noble. In practice, of course, it's often neither.
Some code may very well be "set and forget" - you write it once and no one ever looks at it again. If a project is allocated 100 days, why invest even one of those refactoring and polishing code that really doesn't need it?
The difficulty, of course, is knowing which code is "set and forget" and which isn't. I usually avoid refactoring until I hit the same "how does this work again?" wall at least three times.
Well, if the “ugly” code works the same as the “pretty” code, but the ugly code can be developed faster, then you should definitely prefer the ugly code: in fact, if this is the case, the ugly code should be considered state-of-the-art because it can be written faster. Since we all accept that the pretty code takes longer, it had better pay for itself in some way.

The theory is that the pretty code is easier to maintain over time: it takes longer up front, but when it comes time to make a change, since the code is so maintainable, the change is easier to make. Although that makes intuitive sense, I can’t say that that’s been my experience; in 30 years of software development, I’ve never come across code that’s particularly easier to make changes to than any other. I’ve tried to take maintainability into account when writing code myself, and I can’t even think of a time when I had to make a change and found that my foresight saved me time and effort.
It doesn’t help much that none of us agree on what “good quality” code looks like: everybody seems to call all code “bad”. I’d like to see software development advance a bit in terms of professionalism, to where we at least agree on the principles of high-quality software code, and the principles are objectively defensible in terms of the costs/benefits of following them.
Now, I'm a bit more careful. There's a pretty famous quote by Sandi Metz: "Duplication is far cheaper than the wrong abstraction."
Make sure that you're picking the right way to think about the task at hand, rather than blindly following DRY.
There are times when even a single instance of a code call would be made clearer with abstraction, and there are times where having the same piece of code duplicated multiple times (or duplicated with one piece changed) is far clearer than trying to abstract it.
This Reddit Comment also has an interesting take:
> The main purpose of abstractions is not to remove or reduce duplication, and not even to make code "reusable"; it is to make semantic patterns and assumptions explicit and, if possible, first-class.
The further comments provide more discussion.
I agree with the rest of your comment that refactoring and code-cleanup should be done in pieces and that, as with everything, striking the right balance is key.
Maybe you know very different programmers than I have known, but if I had to compare what has been a bigger source of problems—missed abstractions, or abstractions that make things more difficult for no benefit, it's definitely bad abstractions almost every time.
Duplication is a problem, as the different implementations inevitably drift and accumulate repeated bugs, but naive deduplication results in the well-known problem of RavioliCode(): thousands of low-cohesion functions, which end up unreadable and thus bug-prone and slow to develop.
Using the right patterns, or rather paradigms, reduces the amount of code in general, thus reducing duplication.
Wrong patterns are hard to actually change especially on change averse projects. The more widely used the wrong design is, the harder it is to change as the hacks on it multiply.
Even worse if the wrong patterns (not code) are duplicated.
These require in-depth rewrites, to which bosses are usually allergic, and which are very hard to pull off on bigger teams too; they are incredibly hard to coordinate.
() https://wiki.c2.com/?RavioliCode - can happen in functional and structural code too.
You know, it is a bit pretentious to read a few sentences that someone wrote and then conclude a lot about what they know or do not know.
And I actually do know what ravioli code is. I think ravioli code is mostly a good thing. Also the people on the c2.com page are not uniformly negative about it. Not all code should be ravioli code but in a project with complex requirements there should be quite a bit of ravioli code. It is true that ravioli code is not easy to understand if you are a newcomer to a project but really if the context is 'a project with complex requirements' why would anyone think that it is easy to get into, no matter how it is written?
Another thing is that ravioli code absolutely needs automated tests.
'This style maximizes maintainability by old hands at the expense of comprehensibility by newcomers.' Maintainability is exactly what I want maximized. It sounds a bit bad if there are no developers that are there for a long time. But in that case you are cooked anyway, I would say.
Preferable state is high internal cohesion and low coupling.
Most code is opposite.
Ravioli is when you end up with high coupling without introducing cohesion, which is structure. It's the typical state after a dumb refactor rather than a rework.
Simple extraction of functions does not get you anything (done well within a module, it maybe increases cohesion), while turning them into reusable modules trades reduced cohesion for increased coupling, which is bad.
If you deduplicate too much, you suddenly cannot change anything simply, as every place that got deduped is now coupled to one implementation. Once one call site needs special care, you often end up replicating the code again anyway.
Only true primitives, and parts of the code that won't change, are really worth not replicating. (Ask the crystal ball again.)
See whether deduplication gets you any of the useful high-level designs, like MVC, event-driven, reactive, or message-based. Without an overarching design, you end up with a mass of locally useful ones that are mutually incompatible, thus requiring lots of different, unique glue code.
The big mistake people make is to equate design patterns with code patterns, which is what the silly GoF book did a lot. For example, "fire-and-forget background parallel tasks" is a design pattern. Reactor (which executes Strategies to deal with Events) is one, while Singleton or Context are not. Event also is not. Etc. Generally, anything that doesn't actually structure anything is a code pattern.
s/many games/the world/g
I just started a new big project. Wish me luck. It replaces something held together with duct tape and string concatenation in 5000 line files.
You'll do great!
Web developers constantly need to fulfill new requirements so having a good level of code quality is important.
"Let's rewrite this old messy app in 'modern' technology" - sure it seems cleaner at first, but when you actually reach parity with the old app, you are likely about as messy.
I tend to think that a lot of the messiness in business apps is more due to the complexity of the business rules than the structure of the application.
"Its like this - you're hired to build a house. First, you have to go get someone motivated to harvest the raw materials for you - designs, logic, etc. - which will then be turned into the 'raw wood' that holds up the walls and keeps the roof on. Then, when that person is busy getting the materials cut, you start building the tools you know you're going to need, to get the walls up and strung together. You don't have these tools yet, because you left all the previous ones you've worked with at a previous construction site. The reason for this is that you are going to use the tools to put the walls up, sure - but then you're going to glue all the tools in place to make sure the walls stay up. That glue is the most powerful stuff in the universe, but it will fail catastrophically if you don't put the tools at just the right angle in the glueball .."
Basically, you glue all the new tools together, cover them in wood, and leave them in place so that the thing doesn't fall over ...
1) Get a product owner to describe requirements (not sure why they are gathering raw materials instead of producing blueprints).
2) He tries to steal code from previous jobs to bring to new jobs, because he pretends that he "owns" it.
3) He can't steal code from previous jobs because he left them in the walls.
Eventually, you get the house built and - if you're good - it looks pretty good and it keeps the kids warm.
But don't, ever, look between the walls. You won't be happy with all the thorny and spiky bits that are glued in place. And yeah, there are quite a lot of hammers and drills and saws literally glued in place - don't touch a single one. They belong there.