> We have all of our available code backed up on our SVN server. But the programmer that wrote the lobby code back in 2014 wrote some of the code directly to the lobby server. So we don't have these, and since this programmer isn't here anymore, and probably wouldn't remember what he wrote even if he were, the entire thing doesn't work unless we rewrite. (src: )
> Unfortunately, it really is due to incompetence on our part. We really didn't know that our server programmer was writing some of the code directly to the server until we went looking for it after the server deleted our code. We thought we'd just pop the backed-up code directly onto the new server and that'd be the end of it. (src: )
After skimming some of the Steam reviews, the cynic in me thinks they didn't want to bother supporting the game anymore after the one that followed (My Time at Portia) took off, so they made up a story that would get them more sympathy than an abrupt announcement that the MP portion would be discontinued. I mean, there's no direct evidence of this, but you'd think that in 5 years... hmm...
Looking at a Unity forum thread from 2015 that the uLink dev responded in, it seems the writing was already on the wall, which should have triggered a reassessment of their game's use of that component. A reply from 2016 also mentions that uLink support ended and it was released as open source. (https://forum.unity.com/threads/is-ulink-dead-update-devresp...)
That's as far as I care to look into it, but something feels off.
The problem here was that there seemed to be no backup and restore strategy at all.
That said, though: if you're a tiny group like this and don't have the time to invest in a gold plated storage design, "Just Back Up All The Servers" is a pretty reasonable choice to make.
Keep in mind that once servers are treated as cattle, individual server backups are typically no longer needed. Had that been the case here, the developer likely wouldn't be in this situation.
In case of things like this. Also, it's usually quicker to recover a server from an image than to provision from source in the case of, say, a hardware failure.
What would you do when you had 100 servers?
Something more scalable. There's no reason why you should pick an approach to solving a problem and stick with it forever. If the situation changes you do something more appropriate. That doesn't mean the small scale approach is wrong when the scale is small.
Yes, if you have a deployment strategy where you can reproducibly deploy from your source code repository, then you don't need to back up the servers. But until you've tested that that works, and I mean really tested it, you need server backups.
In that role I would typically back up the deploy directory (/opt/foo), /etc/, and parts of /var/ every day to Glacier.
Of course, this is all pre-Docker and pre-k8s, but this is 2013 we're talking about.
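Something along these lines would do the trick as a nightly cron job; this is just a sketch, with placeholder paths and bucket name, assuming boto3 with AWS credentials already configured:

```python
import tarfile
from datetime import date

import boto3  # assumes AWS credentials are configured in the environment

# Placeholder paths and bucket, not anyone's actual setup.
PATHS = ["/opt/foo", "/etc", "/var/lib/foo"]
BUCKET = "example-nightly-backups"

def nightly_backup():
    archive = f"/tmp/backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for path in PATHS:
            tar.add(path)  # recursively adds the directory
    # Upload straight into the GLACIER storage class so old archives stay cheap.
    boto3.client("s3").upload_file(
        archive, BUCKET, archive.split("/")[-1],
        ExtraArgs={"StorageClass": "GLACIER"},
    )

if __name__ == "__main__":
    nightly_backup()
```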
Because rule number one of the programmers' club is: back up everything, everywhere, all the time; if you don't remember doing it, do it again; and if you're sure you have enough backups, do it again anyway. I even back up my backups every day (on pendrives, on CDs, on ancient scrolls, etc.)
There's a certain elegance and assurance you get from this that has been lost with the times, akin to how monolithic server software with all functionality natively available in the code has gone away in favor of microservices. Now you have message queues, k/v stores, caches, and search engines as microservices that are tacked on to the core services, rarely fully understood by the engineering team, and containing more functionality than the codebase ever really utilizes. It ends up being more complicated to manage in a lot of ways. I think the emergence of microservices is one of the driving forces behind selective state backups, because you can never back up the entire state at once; everything is too spread out. You're not going to back up the running state of the k8s node, or whatever.
As one writer said early in the unraveling of Agile, it's like you were told that if you eat your vegetables you can have dessert, and now we just want dessert but don't want to eat our vegetables.
While working on other things I discovered modifications written directly to cached versions of the Django instance itself, which were not committed to any source control. He simply edited the instance of Django in the disk image, which was then replicated to our various deployments.
The worst part was the mods this guy made were perfectly capable of running within our project, from code in source control. Why he took the approach he did was a complete mystery to us.
However, the support staff still had root logins to all our servers, so they followed the letter of the law by not having their patches go through the code review / source control / gradual rollout process...
And, "Do you have a QA team," he said, expecting the answer "no".
God, the many times I told them so, months before the difficult situations arrived, only to be swiftly ignored by the business because some devs didn't want to bother and wanted to get on to shinier, fancier, more interesting problems.
Nor regularly rolling the servers, which would both test this procedure and put people off mutating the server by hand.
In 2002 or so I pushed the sync button on a Sony Palm Pilot (yes, they existed). First button push: pulled down the whole CRM database, stripping out non-supported fields.
Second push: pushed the database back to the CRM, 'null'-ing all unsupported fields.
After it became clear what had happened, we went to IT to discuss how to bring the data back via the CRM backup we paid a lot of money for (monthly; this was before cheap cloud storage).
... One month later, 30 people started refilling the CRM by copy-and-pasting from their mails, address books, ...
tl;dr: always test your backups.
Personally, I'm getting really sick of the "software doesn't matter, do whatever you want, ship it" attitude in the industry. It creates a lot of needless toil because people do not want to learn how to do things properly.
Your software is your product. The primary cost of software is NOT the initial development: It is the ongoing maintenance. When you cut corners, you are taking out a loan against your future sanity and time. Higher quality code bases and better practices lead to stability, ease of extension, and more money.
A while ago I was working for a start-up using Google Cloud SQL with Postgres.
I was relying on the built-in database backup functionality, sleeping peacefully at night knowing I effectively delegated that task to a higher power.
Then, one day, I deleted the dev database. Whoops! Hmm... I cannot find the backups...
So I go onto the Google Cloud Slack:
Me: "Hey, where are these backups stored?"
Them: "The backups are deleted with the database."
I felt a bit like Wile E Coyote running in mid-air after running straight off a cliff.
Needless to say, I dropped all current projects and took a week to develop a backup, restore, and testing solution for our product. WHEW. Terrifying stuff!
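The core of that is just a logical dump shipped to storage whose lifecycle is independent of the database. A minimal sketch, assuming pg_dump on the path and a separate GCS bucket (all the names are placeholders):

```python
import subprocess
from datetime import date

from google.cloud import storage  # pip install google-cloud-storage

# Placeholder connection string and bucket; point these at your own setup.
DB_URL = "postgresql://user:password@127.0.0.1:5432/app"
BUCKET = "example-offsite-db-backups"

def dump_and_upload():
    dump_path = f"/tmp/app-{date.today().isoformat()}.dump"
    # Custom-format dump so it can be restored selectively with pg_restore.
    subprocess.run(
        ["pg_dump", "--format=custom", f"--file={dump_path}", DB_URL],
        check=True,
    )
    # The bucket lives outside Cloud SQL, so deleting the instance
    # can't take the backups down with it.
    bucket = storage.Client().bucket(BUCKET)
    bucket.blob(dump_path.split("/")[-1]).upload_from_filename(dump_path)

if __name__ == "__main__":
    dump_and_upload()
```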
Request the backup copy of a semi-randomly chosen file. It's been enlightening.
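That spot check is easy to automate, too. A rough sketch, assuming you keep a manifest of backed-up files and have some way to pull a single file back out (the restore_file callable here is a stand-in, not a real API):

```python
import hashlib
import random
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def spot_check(manifest: list[Path], restore_file) -> None:
    """Pick one backed-up file at random and compare it to the restored copy.

    `restore_file(path) -> Path` stands in for however your backup tool
    retrieves a single file; it is hypothetical.
    """
    original = random.choice(manifest)
    restored = restore_file(original)
    if sha256(original) != sha256(restored):
        raise RuntimeError(f"Backup of {original} does not match the live file")
    print(f"OK: {original} restores cleanly")
```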
Crazy still that they’re tied like that.
I want to agree with this attitude, I really do. But with how crowded the indie game market is right now, I understand the urge to move fast and loose. Games already take years to develop and are incredibly expensive, and hiring additional developers is incredibly expensive. Considering the tiny chance your game will be successful and make its money back, why would you spend tons of time frontloading work to avoid future maintenance? In all likelihood, nobody will play your game and it won't matter! And in the off chance it does get popular, you can go back and clean things up later... definitely... right?
I suspect this is a case of inexperience as opposed to someone casting off established wisdom (you should be able to rebuild your project, be constantly testing to ensure you can, and enable automated backups for all servers).
Read the Stardew Valley chapter in "Blood, Sweat, and Pixels".
Turned out great, but it was harrowing for a while.
The developer says that they were using uLink for multiplayer which was a Unity plugin/framework for building services natively in Unity. It sounds like this was a binary only dependency with no source. Perhaps they didn't store the binaries in source control (or anywhere else) and would have to re-implement the networking with some other framework. For a small team and title with a small player base, it would be hard to justify the cost.
Lesson learned: always have full source for any critical dependencies, or a contract with the maintainer for long-term support. Also, make sure you can rebuild your deployment target from scratch and have your continuous integration process test that early and often.
I wanted to update the game with all new high res graphics for new devices, and support tablets too.
However, I started working on new functionality in the game, something that kind-of affected the core of every other thing that was happening in the game.
I never quite got that perfect, but I kept updating, making backups, etc. Still, no version control.
Then it turned out that the latest update I released about 4 weeks earlier had an issue with a wide range of devices that someone finally brought to my attention. This was a new Samsung device that everyone was getting.
Customers started writing in about the game not working on their new device.
I had a backup of the code before I released the last update (which had a ton of changes and new features), and then a backup AFTER the release but also AFTER I put in a lot of the broken core functionality.
Basically, I was looking at an 8-week setback to redo all the changes I made before the last release from a backup. That would take about 4 weeks, then I'd have to release, and start again on the changes I'd been spending 4 weeks on.
The game was doing well, but I definitely wasn't 2 months ahead in income, and this was a side project.
I ultimately couldn't do anything about it and had to just press on with the new changes that did go out, but in a less than stellar way about 4 weeks after that.
In the meantime, I had to make a ton of refunds for people with certain devices, and then block that device from the dev console and put notes in the description not to buy the game if you are using a device like X.
Doing this basically brought my sales to a halt.
I ended up putting out one more update with the small changes I had that weren't perfect; people didn't love the change. I did make it so people with the new devices could download and buy again, but my rank and sales had tumbled miserably anyway. The game sold a few copies here and there for a few more months, and then within a year I stopped development completely and stopped selling the game.
So..... version control and proper backups are VERY important, especially if you have customers. This game was my ticket to being an independent game developer making a living selling games, and then it crashed because I couldn't work on an older version.
But that depends on how you use your VCS of choice, and that depends on how you organize your work. When rushing big updates you could end up with basically the VCS'd equivalent of the two releases you had, one old and one broken. Organizing work is hard under some circumstances (e.g. demand grows faster than your ability to both control your code precisely and stay on schedule, given time constraints), and it may not even be possible to avoid that problem completely by VCS means.
Moreover, it wasn't just a bug, right? It seems that it broke new devices by using something that it shouldn't have used at all, and even with logged, well-separated commits it could be non-trivial to roll back and merge. Because otherwise it would be almost trivial to forward-fix the latest release as well.
Of course, that is my blind guess from your description, but I think I got “exactly there” too many times even with proper vcs at hand. Sometimes picking between two distant branches/revisions is as hard as it is with bare diff -ru a/ b/, and involves expert-level vcs skills.
Many other distribution channels provide multiple binaries or versions and let the customer choose which to install.
Apple and Google are very simple and primarily aimed at non-engineer consumers, so it makes sense that they wouldn't allow people to select a version. It doesn't mean that this doesn't suck.
> zede05 [developer] 26 Jun @ 3:58am @The sap is rising! We have all of our available code backed up on our SVN server. But the programmer that wrote the lobby code back in 2014 wrote some of the code directly to the lobby server. So we don't have these, and since this programmer isn't here anymore, and probably wouldn't remember what he wrote even if he were, the entire thing doesn't work unless we rewrite.
Update: corrected to CCU from MAU
The drop off comes very quickly!
1st most played game has 694,472 CCU (Counter-Strike: Global Offensive)
25th most played game has 19,744 CCU (Terraria)
50th most played game has 13,172 CCU (Counter-Strike)
100th most played game has 4,367 CCU (Sims 3)
500th most played game has 529 CCU (Octopath Traveller)
1000th most played game has 167 CCU (Shadowrun Hong Kong)
10000th most played game has 1 CCU
Yep, power laws at work. Look at Twitch viewer numbers for another real time example. The distribution is always concentrated at the top, followed by a long tail. Also the case with the best selling apps, books, movies, etc.
Heck, I think Roblox would have them all beaten.
Disclaimer: I work at Riot but don't have any internal numbers to share.
If you use cloud solutions like AWS, instance images are reliable backup units without having to test them. Managed resources like AWS databases (RDS, Dynamo, ...) have built-in backups that just work.
You only need to proof-test your backups if you are using untested or 100% handcrafted solutions, and in those cases you either know really well what you are doing or you should be using battle-tested solutions.
I don't trust myself to have perfectly configured RDS backups; to not have missed something critical (S3 bucket); to not have any number of other things outside the DB itself that may come into play should the worst happen.
Running through a "restore from backup" exercise every so often helps suss out where the missing pieces are.
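For the RDS piece specifically, the drill can be as small as restoring the latest automated snapshot into a throwaway instance and running the test suite against it. A sketch with boto3; the instance identifiers are placeholders:

```python
import boto3  # assumes AWS credentials are configured

rds = boto3.client("rds")

def restore_latest_snapshot(source_db: str, scratch_db: str) -> None:
    # Find the most recent automated snapshot of the production instance.
    snapshots = rds.describe_db_snapshots(
        DBInstanceIdentifier=source_db, SnapshotType="automated"
    )["DBSnapshots"]
    latest = max(snapshots, key=lambda s: s["SnapshotCreateTime"])
    # Spin up a throwaway instance from it; point the test suite at this,
    # then delete it once the drill is done.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=scratch_db,
        DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    )

restore_latest_snapshot("prod-db", "restore-drill-db")
```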
"Backups are a tax you pay for the luxury of restore"
"How Google Backs Up the Internet" http://www.youtube.com/watch?v=eNliOm9NtCM Detailed notes: http://highscalability.com/blog/2014/2/3/how-google-backs-up...
My guess is they could fix the problems, but it would cost an engineer a month? or a couple months? Considering the game was on the decline, it's hard to justify spending any effort to fix the issues.
Now the weird part. It was a F2P game - I don't know how much stuff people were buying in the game, but there's an implied contract that when you purchase a digital good, the digital good will continue to work for some reasonable amount of time. They are breaking that agreement by not restoring their servers and I am not sure what that means legally - there's potential for class action lawsuits, maybe?
Repositories get deleted by mistake. Company accounts get closed. VCS servers break down or catch fire or get wiped maliciously or by mistake. Disk encryption keys get lost.
Offline backups on remote locations exist for a reason.
It's almost inconceivable to me how it is possible that a company, no matter how small, does all these things so utterly wrong that shit like this is possible.
Even as a child, when playing around with Delphi and Turbo Pascal, I made my backups dutifully onto an external drive. I did not use version control because I had no clue what that was (and, arguably, anything is better than SVN), but at least I had backups of everything.
Version control and backups are basically a form of insurance. Sometimes even a complete backup doesn't help. Even losing a week on a fast changing code base can set you back a month. The path you take forward has value as much as the destination.
Insurance is completely useless right up until you need to claim. Then it is the most important thing ever and suddenly it is absolutely critical to getting back on track.
Lessons and reminders for almost anyone doing almost anything creative. History forms your memory.
Because backing up is good but totally irrelevant if you don't occasionally use one of those backups to restore back to a good state.
One quick and easy way to test that is to have new developers apply the deploy/restore system to their own machine using a recent backup. (Obviously scrubbed of sensitive info.)
You get a developer online faster, another set of eyes on the process, and validation that it works. It's not perfect but pretty good.
But I understand that when I don't, I'm taking the risk that this will disappear with no recovery. Most of the time that's okay; some scripts have low reusability, and sometimes I need an excuse to refactor.
To run a business without it is inviting failure. Once you involve someone else the conditions to recreate get hazy.
It sounds like only one piece wasn't in version control. That might be why that developer is no longer there.
It's not about their purpose in itself, rather, it's about a structured process and discipline.
All in all, source control (and even PRs/issues) costs virtually zero, so while there are arguments against unit testing/documentation etc., there's hardly an argument against it.
Besides, no source control, no bisect :-)
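For anyone who hasn't leaned on it: `git bisect run` binary-searches your history for the breaking commit, given any command that exits non-zero on a bad revision. A toy check script along those lines (the module and call are made up for the example):

```python
# check.py - exits 0 on a good revision, 1 on a bad one.
# Used as: git bisect start <bad> <good>; git bisect run python check.py
import sys

try:
    from mygame.lobby import connect  # hypothetical module under test
    connect(host="localhost", dry_run=True)  # hypothetical smoke test
except Exception as exc:
    print(f"bad revision: {exc}")
    sys.exit(1)

sys.exit(0)
```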
I definitely agree that this is more on the obsessive side of being structured - but on the other hand, the cost is virtually zero (I open and close PRs via commandline).
In those conditions, I'd say that issues and PRs are both complementary and necessary parts of a certain perspective/workflow. When I open an issue, it's a request for action. PRs are the other side of the coin: they represent the action that satisfies the request (as a matter of fact, "Closes #..." is crucial for me).
Opening an issue and closing it without a PR is, for me... asymmetric (also practically: assigning a PR to an issue gives you the reference in the interface). Ultimately it's a matter of fully embracing a workflow.
Ideologies aside though, PRs nowadays are not just an isolated concept - typically, they carry various hooks (CI, code analyzers etc.) that do help. So, in a way, somebody is meditating on my code, even if it's just a machine ;-)
As for rebuilding the multiplayer: it sounds like they had about 30 concurrent players and minimal ongoing sales. That's a dead game and at that point, "cut bait" is not unreasonable; they're not exactly a giant developer with money to splash around. "Give players free keys to the new game and move on" is about as fair a path forward as they can make, and is doing pretty right by those players.
I didn't submit this story here to get the cheap-games-earn-a-lifetime-commitment crowd riled up. (And make no mistake: $25 is a cheap game. So's $60, for that matter.) I posted it because it's a great object lesson to back up your stuff. If I can be honest, I'm mildly regretting doing so after reading your post.
This kind of ambiguous relationship being pervasive in the industry muddles traditional notions of when the studio's commitment ends, which opens the door for misunderstanding and abuse.
And even with Early Access, you're still paying for it as it is at the time of purchase--you're buying for the now or for the hope of the future, you are not buying for any future commitments of any sort. If this isn't well-understood already, it had best become understood, because financial exigencies don't really leave a lot of room for "but I bought it in Early Access!".
The finances of games are brutal and punishing and the race to the bottom that consumers have happily encouraged has resulted in those consumers' investment not being valued nearly as much. This was foreseeable and is inevitable. The best way to be reasonably assured that the games you like continue to be worked on, maintained, and managed is 1) pay a lot more for them up front, or 2) play games with ongoing subscription systems. As-is? It is economically non-viable to throw money after 30 concurrent users and nobody with a pocket calculator could fault them for it.
Does a VCS without history rewriting exist?
Or one with CQRS-style filtering and management?
You would still be able to rewrite stuff until it goes to the server, but once it's on the server, you wouldn't be able to rewrite and push to that server.
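Git can already be run that way on the server side: a bare repo with receive.denyNonFastForwards (and receive.denyDeletes) set will refuse force-pushes, so history is effectively append-only once it reaches the server. A small sketch of flipping those switches, with a placeholder repo path:

```python
import subprocess

REPO = "/srv/git/project.git"  # placeholder path to the server-side bare repo

def make_history_append_only(repo: str) -> None:
    # Refuse non-fast-forward pushes, i.e. rewritten history...
    subprocess.run(
        ["git", "-C", repo, "config", "receive.denyNonFastForwards", "true"],
        check=True,
    )
    # ...and refuse deleting branches or tags outright.
    subprocess.run(
        ["git", "-C", repo, "config", "receive.denyDeletes", "true"],
        check=True,
    )

make_history_append_only(REPO)
```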