Game developer loses multiplayer service code (steamcommunity.com)
178 points by eropple on Sept 8, 2019 | 95 comments



The developer added more details in the discussion thread:

> We have all of our available code backed up on our SVN server. But the programmer that wrote the lobby code back in 2014 wrote some of the code directly to the lobby server. So we don't have these, and since this programmer isn't here anymore, and probably wouldn't remember what he wrote even if he were, the entire thing doesn't work unless we rewrite. (src: [1])

> Unfortunately, it really is due to incompetence on our part. We really didn't know that our server programmer was writing some of the code directly to the server until we went looking for it after the server deleted our code. We thought we'd just pop the backed-up code directly onto the new server and that'd be the end of it. (src: [2])

[1] https://steamcommunity.com/app/237870/eventcomments/16330403...

[2] https://steamcommunity.com/app/237870/eventcomments/16330403...


The missing code in question is from 2014? No one in the last 5 years suggested keeping a yearly snapshot, or making a backup prior to each major release? That's not an "Oopsy, I'm new to this field" error.

After skimming some of the Steam reviews, the cynic in me thinks they didn't want to bother supporting the game anymore after the one that followed (My Time at Portia) took off, so they made up a story that would get them more sympathy than an abrupt announcement that the MP portion would be discontinued. I mean, there's no direct evidence of this, but you'd think that in 5 years... hmm...

Looking at a Unity forum thread from 2015 that the uLink dev responded in, it seems the writing was already on the wall, which should have triggered a reassessment of their game's use of that component. A reply from 2016 also mentions that uLink support had ended and that it was released as open source. (https://forum.unity.com/threads/is-ulink-dead-update-devresp...)

That's as far as I care to look into it, but something feels off.


That really sounds like they haven't been backing up their server, and that they think of source control as the backup. Both of these things are pretty bad.


This is common. Most places use their revision control system as backup for their source code and take backups of their database. This makes sense because the source code holds no state. Why would you back up the server? What would you do when you had 100 servers? You’d take backups of the running disk state of all of those? To what end?


Right, it's not a problem with technique. There are lots of storage management paradigms whose backup and restore strategies don't involve filesystem copies of the running servers.

The problem here was that there seemed to be no backup and restore strategy at all.

That said, though: if you're a tiny group like this and don't have the time to invest in a gold plated storage design, "Just Back Up All The Servers" is a pretty reasonable choice to make.


If the servers are still treated as pets, then they need to be backed up. Outside of code there would be configuration.

Keep in mind that once the servers are treated as cattle then individual server backups are typically no longer needed. The developer in this case would likely not be in this situation then.


> Why would you back up the server?

In case of things like this. Also it's usually quicker to recover a server using an image than it would be to provision from source in the case of, say, a hardware failure.

> What would you do when you had 100 servers?

Something more scalable. There's no reason why you should pick an approach to solving a problem and stick with it forever. If the situation changes you do something more appropriate. That doesn't mean the small scale approach is wrong when the scale is small.


I mean, the better way of doing this is just keep your application code stateless and make frequent backups of your data stores. If the developers knew they had server code that wasn't checked into SVN, your solution makes more sense, but they just had no idea. Even in corporate, if I need to set up a new box for a service, I don't backup an image from another box and deploy it to the new one - I just pull my Docker image and run that.
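
Something along these lines, where the image name is purely illustrative; it only works because the box itself holds no unique state:

  # pull the prebuilt image and run it; nothing on this host needs backing up
  docker pull registry.example.com/lobby-service:1.4.2
  docker run -d --name lobby --restart unless-stopped \
      registry.example.com/lobby-service:1.4.2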


Yes, but part of keeping your application code stateless is that you need to actually do things to keep it that way. If you haven't done those things, it's easier to set up Tarsnap in a cron job than it is to make sure your deployment process is stateless.
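
A minimal sketch of that stopgap, assuming Tarsnap is already keyed on the machine (the paths are illustrative):

  # root crontab entry: nightly archive of code, config, and app data
  # (% must be escaped as \% inside crontab lines)
  30 3 * * * tarsnap -c -f "lobby-$(date +\%Y-\%m-\%d)" /opt/lobby /etc /var/lib/lobby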


If you have a service you haven’t built in 5 years, then I’d say you have a lot more important problems than server backups.


Well, I think we have an answer to "what end" now.

Yes, if you have a deployment strategy where you can reproducibly deploy from your source code repository, then you don't need to back up the servers. But until you've tested that that works, and I mean really tested it, you need server backups.


Precisely to this end :)

In that role I would typically back up the deploy directory (/opt/foo), /etc/, and parts of /var/ every day to Glacier.

Of course this is all pre-Docker and pre-k8s, but this was 2013 we're talking about.
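
That kind of nightly job doesn't have to be much more than this sketch (vault name and paths are illustrative, and it assumes a configured AWS CLI rather than whatever client we used back then):

  #!/bin/sh
  # tar the deploy dir and config, ship the archive to a Glacier vault
  day=$(date +%Y-%m-%d)
  tar czf "/tmp/foo-$day.tar.gz" /opt/foo /etc /var/lib/foo
  aws glacier upload-archive --account-id - --vault-name foo-backups \
      --archive-description "foo-$day" --body "/tmp/foo-$day.tar.gz"
  rm -f "/tmp/foo-$day.tar.gz"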


> Why would you back up the server?

Because rule number one of the programmers' club is: back up everything, everywhere, all the time; if you don't remember doing it, do it again; and if you're sure you have enough backups, do it again anyway. I even back up my backups every day (on pen drives, on CDs, on ancient scrolls, etc.)


In the age of treating servers (or containers) like cattle instead of pets, the "Back up everything" mantra has fallen by the wayside. In order to get away with selective backups you have to know exactly where long-term state is stored and you need to have the infrastructure in place to manage re-provisioning everything and restoring snapshots. It's not something you can tack on later. Iterate, test, integrate, document, audit, review. It ends up being much more complicated than periodic wholesale snapshots on a server.

There's a certain elegance and assurance you get from this that has been lost with the times, akin to how monolithic server software with all functionality natively available in the code has given way to microservices. Now you have message queues, k/v stores, caches, and search engines as microservices tacked onto the core services, rarely fully understood by the engineering team and containing more functionality than the codebase ever really utilizes. It ends up being more complicated to manage in a lot of ways. I think the emergence of microservices is one of the driving forces behind selective state backups, because you can never back up the entire state at once; everything is too spread out. You're not going to back up the running state of the k8s node, or whatever.


It's more "make everything restorable" rather than "back up everything."


This is actually a pretty reasonable way to run things In The Cloud™. If your servers are transient and provisioned on demand, then backing them up is largely meaningless. Source control is the backup.


But you're supposed to prove it's the backup. It's part of the deal.

As one writer said early in the unraveling of Agile, it's like you were told that if you eat your vegetables you can have dessert, and now we just want dessert but don't want to eat our vegetables.


Shit like this happens more often than one might think. At my last job we had one magic disk image that worked, and anything else we tried to hand-roll didn't. (The app itself was written in an ancient version of Django.)

While working on other things I discovered modifications written directly to cached versions of the Django instance itself, which were not committed to any source control. Whoever made them had simply edited the instance of Django in the disk image, which was then replicated to our various deployments.

The worst part was that the mods this guy made were perfectly capable of running within our project, from code in source control. Why he took the approach he did was a complete mystery to us.


My last job had something similar -- our internal tools were a buggy mess, to the point where customer support spent upwards of 75% of their time fighting the software instead of supporting customers - but the CEO decided that the dev team would not fix any bugs that weren't DIRECTLY affecting end-users, nor would the dev team "waste time" reviewing patches from frustrated tech-literate support staff fixing internal bugs for themselves.

However, the support staff still had root logins to all our servers, so they followed the letter of the law by not having their patches go through the code review / source control / gradual rollout process...


My next question is "why didn't the QA team find this?"

And, "Do you have a QA team," he said, expecting the answer "no".


My next comment as a QA: "I told you so"

God, the many times I had told them so, months before the difficult situations arrived, only to be swiftly ignored by the business because some devs didn't want to bother and wanted to get to shinier, fancier, more interesting problems.


> That really sounds like they haven't been backing up their server

Nor regularly rebuilding it, which would both test that procedure and put people off mutating the server.


the old "can we use the back-up to really bring everything back up?" question.

in 2002 or so i pushed the sync button an a sony palm pilot (yes, they existed). first button push: whole crm database, strip out non supported fields.

second push: push database back to crm, 'null' all unsuported fields.

after it became clear what happened, to the IT and discuss how to bring the data back via crm backup which we paid alot of monye for (monthly, this was before cheap cloud storage).

.... 1 month later 30 people started refilling copy&paste the crm from their mails, adress books, ...

tl;dr: always test your back ups.


Man, that is really unfortunate. My heart goes out to these developers. I've worked with a few of these "do whatever works" cowfolk before. It is exhausting. And, as we see here, can ruin a company.

Personally, I'm getting really sick of the "software doesn't matter, do whatever you want, ship it" attitude in the industry. It creates a lot of needless toil because people do not want to learn how to do things properly.

Your software is your product. The primary cost of software is NOT the initial development: It is the ongoing maintenance. When you cut corners, you are taking out a loan against your future sanity and time. Higher quality code bases and better practices lead to stability, ease of extension, and more money.

___

A while ago I was working for a start-up using Google Cloud SQL with Postgres.

I was relying on the built-in database backup functionality, sleeping peacefully at night knowing I effectively delegated that task to a higher power.

Then, one day, I deleted the dev database. Whoops! Hmm... I cannot find the backups...

So I go onto the google cloud slack:

Me: "Hey, where are these backups stored?

Them: "The backups are deleted with the database."

I felt a bit like Wile E Coyote running in mid-air after running straight off a cliff.

Needless to say, I dropped all current projects and took a week to develop a backup, restore, and testing solution for our product. WHEW. Terrifying stuff!


Things I always do a few weeks after starting a new job...

Request the backup copy of a semi-randomly chosen file. It's been enlightening.


Related: organizations that would rather pay cryptolocker ransoms than restore from backups.


A backup that is deleted when the original data is deleted is not a backup; it's a replicating clone. A backup can be restored even if the original data is deleted. Either they're misrepresenting it or they don't understand what a backup is.


I think they deleted the database, not the content, and that’s what triggered the backups to be deleted.

Crazy still that they’re tied like that.


> Your software is your product. The primary cost of software is NOT the initial development: It is the ongoing maintenance. When you cut corners, you are taking out a loan against your future sanity and time. Higher quality code bases and better practices lead to stability, ease of extension, and more money.

I want to agree with this attitude, I really do. But with how crowded the indie game market is right now, I understand the urge to move fast and loose. Games already take years to develop and are incredibly expensive, and hiring additional developers is incredibly expensive. Considering the tiny chance your game will be successful and make its money back, why would you spend tons of time frontloading work to avoid future maintenance? In all likelihood, nobody will play your game and it won't matter! And in the off chance it does get popular, you can go back and clean things up later... definitely... right?


I would argue it is not that much harder to do things correctly than to do things fast and loose. It just requires reading a few books and learning a tool like Ansible. It is extremely boring, but the time pays for itself.

I suspect this is a case of inexperience rather than someone casting off established wisdom (you should be able to rebuild your project from scratch, be constantly testing to ensure you can, and enable automated backups for all servers).

Read the Stardew Valley chapter in "Blood, Sweat, and Pixels". Turned out great, but it was harrowing for a while.


I had a similar experience with a web host long ago. I requested a restore and they said they couldn't do it. After pressing and requesting a refund on the additional fees we were paying for the backup, they declined to refund because the backups were "running fine", as evidenced by a cron job executing like clockwork. It turned out they had never checked the result code or logs of the cron job; it hadn't been running successfully at all.


How did they think that was ok? A backup that gets automatically deleted when the original is deleted is not a backup.


Yeah thats pretty... awful.


In some cases, AWS RDS snapshots work that way too.


Yep, have been on a project that was burned by this.


Although I agree with your sentiment, just preaching it won't change them. That's the real problem we need to discuss, and we have yet to figure out how to systematically educate ourselves and our colleagues to prevent a similar accident in the future.


Hard to know what is going on here. They claim that "all the code base got deleted from its server", where the server in this context is a deployment target, not a source control server. At any studio I've worked at, and even on larger hobby or student projects, fixing this would involve an automated build/deployment using <insert build pipeline, maybe Jenkins>. At worst you could go back to source, build binaries for the target manually, and set up whatever deployment configuration is necessary to stand up the service again.

The developer says that they were using uLink for multiplayer, which was a Unity plugin/framework for building services natively in Unity. It sounds like this was a binary-only dependency with no source. Perhaps they didn't store the binaries in source control (or anywhere else) and would have to re-implement the networking with some other framework. For a small team and a title with a small player base, it would be hard to justify the cost.

Lesson learned: always have full source for any critical dependencies, or a contract with the maintainer for long-term support. Also, make sure you can rebuild your deployment target from scratch and have your continuous integration process testing that early and often.
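
A rough sketch of what that CI check might look like; the repo URL, staging URL, and the provision/deploy scripts are placeholders for whatever the project actually uses:

  #!/usr/bin/env bash
  # nightly CI job: prove the service can be stood up from source control alone
  set -euo pipefail
  rm -rf build && git clone --depth 1 "$LOBBY_REPO_URL" build
  cd build
  ./provision.sh && ./deploy.sh staging       # placeholders for the real scripts
  curl --fail --silent "$STAGING_URL/health"  # smoke test: did it actually come up?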


Or just check everything in. It's a bit taboo when you have proper dependency management to also check in the dependency binaries, but I've found this is the only surefire way to get rock-solid turnkey builds.


In 2010 I didn't know about version control. I released an Android game and sold it. It did really well. I had released many updates, and hardware was changing fast.

I wanted to update the game with all new high res graphics for new devices, and support tablets too.

However, I started working on new functionality in the game, something that kind-of affected the core of every other thing that was happening in the game.

I never quite got that perfect, but I kept updating, making backups, etc. Still, no version control.

Then it turned out that the latest update I released about 4 weeks earlier had an issue with a wide range of devices that someone finally brought to my attention. This was a new Samsung device that everyone was getting.

Customers started writing in about the game not working on their new device.

I had a backup of the code before I released the last update (which had a ton of changes and new features), and then a backup AFTER the release but also AFTER I put in a lot of the broken core functionality.

Basically, I was looking at an 8-week setback: redo all the changes I had made before the last release, starting from a backup. That would take about 4 weeks, then I'd have to release, and then start again on the changes I'd already spent 4 weeks on.

The game was doing well, but I definitely wasn't 2 months ahead in income, and this was a side project.

I ultimately couldn't do anything about it and had to just press on with the new changes that did go out, but in a less than stellar way about 4 weeks after that.

In the meantime, I had to make a ton of refunds for people with certain devices, and then block that device from the dev console and put notes in the description not to buy the game if you are using a device like X.

Doing this basically brought my sales to a halt.

I ended up putting out one more update with the small changes I had that weren't perfect. People didn't love the change; I did make it so people with the new devices could download and buy again, but my rank and sales had tumbled miserably anyway. The game sold a few copies here and there for a few more months, and then within a year I stopped development completely and stopped selling the game.

So..... version control and proper backups are VERY important, especially if you have customers. This game was my ticket to being an independent game developer making a living selling games, and then it crashed because I couldn't work on an older version.


Version control is mandatory, but whether it could have helped much in that situation is a trickier question. If I understand correctly, you lost all ways to pick out the updates and rebase them on a foundation that works on both old and new devices.

But that depends on how you use your VCS of choice, and that depends on how you organize your work. Rushing big updates, you could end up with basically a version-controlled equivalent of the two releases you had: one old and one broken. Organizing work is hard under some circumstances (e.g. demand grows faster than your ability to both control your code precisely and stay on schedule, given time constraints), and it may not even be possible to avoid that problem completely by VCS means.

Moreover, it wasn't just a bug, right? It seems that it broke new devices by using something it shouldn't have used at all, and even with well-logged, well-separated commits it could be non-trivial to roll back and merge. Otherwise it would have been almost trivial to forward-fix the latest release as well.

Of course, that is my blind guess from your description, but I think I got "exactly there" too many times even with a proper VCS at hand. Sometimes picking between two distant branches/revisions is as hard as it is with bare diff -ru a/ b/, and involves expert-level VCS skills.


You were fine, you just needed a way to compare files to look at the changes, right? Which existed in 2010.


The problem here is not even your process. It's the Google/Apple walled gardens that do not let the end user install a freaking older version!


I'm not sure why you were flagged. While the OP is responsible for a lack of version control, the distribution channel is very inflexible and something developers have to fight with.

Many other distribution channels provide multiple binaries or versions and let the customer choose which to install.

Apple and Google are very simple and primarily aimed at non-engineer consumers, so it makes sense that they wouldn't allow people to select a version. It doesn't mean that this doesn't suck.


Maybe things have changed now, but as far as I can remember Google Play does let you deploy older APKs. But it's just from the developer side, it's not like users can choose which APK to download.


What happens when the older version does not understand the newer data?


So I guess no version control where at least one developer had a checked out local copy? Ouch.


The comments suggest that while most of their code is in SVN (and to forestall the usual discussion, SVN is totally defensible for game devs, most people don't know about Git LFS and it isn't a perfect solution), a tools developer made edits in production to make the lobby code work and he's no longer with the company.


The comment in question

> zede05 [developer] 26 Jun @ 3:58am @The sap is rising! We have all of our available code backed up on our SVN server. But the programmer that wrote the lobby code back in 2014 wrote some of the code directly to the lobby server. So we don't have these, and since this programmer isn't here anymore, and probably wouldn't remember what he wrote even if he were, the entire thing doesn't work unless we rewrite.


Lessons learned but that is pretty unbecoming. This isn't a hobby project or something that was released for free. It's definitely not representative of most studios in the video game industry.


That would mean they still have most of the codebase. This sounds like they don’t have anything.


It sounds to me like they did their multiplayer lobby stuff using a third-party system and did the code/config live on the server. (I removed "all" to reduce ambiguity, though.)


They were using version control (SVN) but one dev applied some updates (or complex configuration?) directly to production without backing it up or documenting changes.


https://steamcharts.com/app/237870 some stats for the game if relevant..


37 CCU avg is a dead game; it's admirable they put in a best effort to make their last players happy. That also means that, while it's a little embarrassing to lose code, it's not a big deal. And it's also not surprising they'd edit the code on the server and not have it in SVN; game development is full of sins like that.

Update: corrected to CCU from MAU


For context the current CCU makes it the 1,898th most played game on steam right now (out of 10,772 games that have at least one player).

The drop off comes very quickly!

1st most played game has 694,472 CCU (Counter-Strike: Global Offensive)

25th most played game has 19,744 CCU (Terraria)

50th most played game has 13,172 CCU (Counter-Strike)

100th most played game has 4,367 CCU (Sims 3)

500th most played game has 529 CCU (Octopath Traveller)

1000th most played game has 167 CCU (Shadowrun Hong Kong)

10000th most played game has 1 CCU


> The drop off comes very quickly!

Yep, power laws at work. Look at Twitch viewer numbers for another real time example. The distribution is always concentrated at the top, followed by a long tail. Also the case with the best selling apps, books, movies, etc.


OT, but I'd be interested in where Fortnite, Minecraft, and League of Legends would hypothetically fit in that list.

Heck, I think Roblox would have them all beaten.


In 2014 Riot released numbers of 7.5MM peak concurrent players for League of Legends (off an MAU of 67MM), and in 2016 reported an MAU of 100MM (but no new concurrency figures). Those are old stats but still useful for comparison.

Disclaimer: I work at Riot but don't have any internal numbers to share.


Wanted to jump back in here now that new public information is available, for August 2019 there was a daily average of 8MM peak concurrent players.

https://na.leagueoflegends.com/en/news/game-updates/special-...


Steam Charts shows concurrent users (CCU), not MAU.


Could be advertising for their next game.


That's really a shame. I sunk about a hundred hours into that game when it was new. The building mechanics in it were so robust it effectively contained SketchUp right in the game for building weapons, devices, vehicles, etc.

https://i.imgur.com/VqFuxWw.jpg


Unless you regularly test your backups, there's no reason to think you have backups. Unless you regularly rebuild from scratch, there's no reason to think you can rebuild from scratch.


That really depends.

If you use cloud solutions like AWS, instance images are reliable backup units without your having to test them. Managed resources like AWS databases (RDS, Dynamo...) have built-in backups that just work.

You only need to proof-test your backups if you are using untested or 100% handcrafted solutions, and in those cases you either know really well what you are doing or you should be using battle-tested solutions.


I trust RDS to backup the DB properly.

I don't trust myself to have perfectly configured RDS backups; to not have missed something critical (S3 bucket); to not have any number of other things outside the DB itself that may come into play should the worst happen.

Running through a "restore from backup" exercise every so often helps suss out where the missing pieces are.
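
The drill doesn't have to be elaborate; something along these lines (identifiers are illustrative) is usually enough to surface the missing pieces:

  # spin up a throwaway instance from a recent automated snapshot
  aws rds restore-db-instance-from-db-snapshot \
      --db-instance-identifier restore-drill-20190908 \
      --db-snapshot-identifier rds:prod-db-2019-09-08-06-10
  # point a staging copy of the app at it, run the checks, then clean up
  aws rds delete-db-instance --db-instance-identifier restore-drill-20190908 \
      --skip-final-snapshot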


It’s still good to practice. You should be able to restore a prod backup in your sleep, ideally with the push of a button. You might _have_ backups, but it’ll still be anxiety-inducing if you haven’t practiced deploying them.


FWIW...

"Backups are a tax you pay for the luxury of restore"

"How Google Backs Up the Internet" http://www.youtube.com/watch?v=eNliOm9NtCM Detailed notes: http://highscalability.com/blog/2014/2/3/how-google-backs-up...


Here's my take - it looked like their average concurrent players was around ~30 before this happened, which indicates it was nearly a dead game. It's free-to-play, and I can't say how much they made per player, but a decent amount would be $0.05 per DAU. Converting concurrents to DAU is difficult, but let's say they had 500-1000 DAU. That would be on the order of $25-50/day.

My guess is they could fix the problems, but it would cost an engineer a month? or a couple months? Considering the game was on the decline, it's hard to justify spending any effort to fix the issues.

Now the weird part. It was a F2P game - I don't know how much stuff people were buying in the game, but there's an implied contract that when you purchase a digital good, the digital good will continue to work for some reasonable amount of time. They are breaking that agreement by not restoring their servers and I am not sure what that means legally - there's potential for class action lawsuits, maybe?


It was only made free to play after the blog post. Before that it was a $10 game: https://steamdb.info/app/237870/


It seems like the marketplace needs to offer some kind of contract: any purchase in an F2P game within 30 days of the app being shut down for any reason should be refunded.


To all the commenters that whine about using version control: version control is not a replacement for backups and disaster recovery procedures.

Repositories get deleted by mistake. Company accounts get closed. VCS servers break down or catch fire or get wiped maliciously or by mistake. Disk encryption keys get lost.

Offline backups on remote locations exist for a reason.


Hilarious. Hobbyist projects for the win! In general it is helpful not to be completely ignorant of how the state of the art in your line of work is done, and perhaps to ask why it is done this way.

It's almost inconceivable to me how a company, no matter how small, can do all these things so utterly wrong that shit like this is possible.

Even as a child, when playing around with Delphi and Turbo Pascal, I dutifully made my backups onto an external drive. I did not use version control because I had no clue what that was (and, arguably, anything is better than SVN), but at least I had backups of everything.


I'd like to know more details about this but I sense we have what's important already.

Version control and backups are basically a form of insurance. Sometimes even a complete backup doesn't help. Even losing a week on a fast-changing code base can set you back a month. The path you take forward has as much value as the destination.

Insurance is completely useless right up until you need to claim. Then it is the most important thing ever and suddenly it is absolutely critical to getting back on track.

Lessons and reminders for almost anyone doing almost anything creative. History forms your memory.


The lesson here isn't "back up everything" but really is (and has always been) "make sure we can restore the system"

Because backing up is good but totally irrelevant if you don't occasionally use one of those backups to restore back to a good state.

One quick and easy way to test that is to have new developers apply the deploy/restore system to their own machine using a recent backup. (Obviously scrubbed of sensitive info.)

You get a developer online faster, another set of eyes on the process, and validation that it works. It's not perfect but pretty good.


I don't always use version control.

But I understand that when I don't, I'm taking the risk that the code will disappear with no recovery. Most of the time that's okay; some scripts have low reusability, and sometimes I need an excuse to refactor.

To run a business without it is inviting failure. Once you involve someone else the conditions to recreate get hazy.

It sounds like only one piece wasn't in version control. That might be why that developer is no longer there.


I religiously use version control, and additionally, I use PRs and issues even on projects I work on alone.

It's not about their purpose in itself, rather, it's about a structured process and discipline.

All in all, source control (and even PRs/issues) costs virtually zero, so while there are arguments against unit testing, documentation, etc., there's hardly an argument against it.

Besides, no source control, no bisect :-)


Issues I get (and practice), but PRs? Are you requesting a PR from yourself, approving the PR when you have meditated about it for a day? :)


I could see being able to look at your code diff outside the context of your editor could help you identify issues you wouldn't otherwise see. Especially if your brain flips into code review mode when looking at a GitHub PR.


> Are you requesting a PR from yourself, approving the PR when you have meditated about it for a day? :)

Heh :-)

I definitely agree that this is more on the obsessive side of being structured - but on the other hand, the cost is virtually zero (I open and close PRs via the command line).

In those conditions, I'd say that issues and PRs are both complementary and necessary parts of a certain perspective/workflow. When I open an issue, it's a request for action. PRs are the other side of the coin - they represent the action that satisfies the request (as a matter of fact, "Closes #..." is crucial for me).

Opening an issue and closing it without a PR feels, to me... asymmetric (also practically: assigning a PR to an issue gives you the reference in the interface). Ultimately it's a matter of fully embracing a workflow.

Ideologies aside though, PRs nowadays are not just an isolated concept - typically, they carry various hooks (CI, code analyzers etc.) that do help. So, in a way, somebody is meditating on my code, even if it's just a machine ;-)


I do PRs for any minor/major feature on a project I work on alone :) It helps to see a better diff in GitHub, to see CI status (using Drone), and also to review the change after a few days when I can be objective about it. The project is quite big at the moment, with a lot of moving parts, so it helps to have a process. Also, each PR has a name and a short summary of what was changed, added, or fixed, so when something breaks in 6 months it's easy to track down related PRs and refresh my memory.


How do they have the resources to write a new game but not rewrite the MP code?


Suppose they have the resources to do either one of them but not both. If they rewrite the MP code for the current game but don't write the new game, their company has no future and dies.


Older game. Smaller playerbase. Probably bad ROI.


A lot of people paid $24.95 for this game, and now they are going to abandon it, announce a sequel, and make it free for anyone who wants it. It's a triple slap in the face to existing customers. They should have rewritten the needed code or contracted it out.


One thing that absolutely has to end is that the idea that a game has now become free is a "slap in the face" to customers. Customers paid for the game then-and-there, where they bought it and when they bought it. Discounts are not "insults" to those customers and by extension neither is making the game free and (possibly?) open-source.

As for rebuilding the multiplayer: it sounds like they had about 30 concurrent players and minimal ongoing sales. That's a dead game and at that point, "cut bait" is not unreasonable; they're not exactly a giant developer with money to splash around. "Give players free keys to the new game and move on" is about as fair a path forward as they can make, and is doing pretty right by those players.

I didn't submit this story here to get the cheap-games-earn-a-lifetime-commitment crowd riled up. (And make no mistake: $25 is a cheap game. So's $60, for that matter.) I posted it because it's a great object lesson to back up your stuff. If I can be honest, I'm mildly regretting doing so after reading your post.


The practice of releasing a game in early-access undermines the notion that you are paying for a game there-and-now. Even games that get 1.0 releases are often incomplete or continue to receive free content updates. Planet Explorers was released in early access, so people who bought it were clearly investing in its future, not paying for it as it was then.

This kind of ambiguous relationship being pervasive in the industry muddles traditional notions of when the studio's commitment ends, which opens the door for misunderstanding and abuse.


PE's 1.0 came out in November 2016. It's well and truly out of Early Access and has been for nearly three years, and the playerbase has dwindled to a rounding error.

And even with Early Access, you're still paying for it as it is at the time of purchase--you're buying for the now or for the hope of the future, you are not buying for any future commitments of any sort. If this isn't well-understood already, it had best become understood, because financial exigencies don't really leave a lot of room for "but I bought it in Early Access!".

The finances of games are brutal and punishing and the race to the bottom that consumers have happily encouraged has resulted in those consumers' investment not being valued nearly as much. This was foreseeable and is inevitable. The best way to be reasonably assured that the games you like continue to be worked on, maintained, and managed is 1) pay a lot more for them up front, or 2) play games with ongoing subscription systems. As-is? It is economically non-viable to throw money after 30 concurrent users and nobody with a pocket calculator could fault them for it.


Do you really think it is a good investment to spend a few hundred thousand dollars on a multiplayer game with 44 active players?


Those people probably bought it years ago. They could've lied and said they were closing down without anyone batting an eye.


To rewrite the MP code would mean reverse-engineering their own game, and that can be more work than just moving on and starting a new project where you can properly do all the stuff you learned you did wrong before.


And God knows how many other games and apps out there are happily running right now not knowing that some guy edited code directly on the server and left no trace of what he's done...


Does a Git binary without history rewriting exist? Immutable, with only an append-to-history mode, or with some CQRS-style filtering?


You can disable garbage collection in git, which will keep all versions of the history accessible by the reflog, in principle.

https://stackoverflow.com/questions/28092485/how-to-prevent-...
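
Roughly, per repository, something like:

  # never auto-gc, never expire reflog entries
  git config gc.auto 0
  git config gc.reflogExpire never
  git config gc.reflogExpireUnreachable never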


You can set up a git server to refuse history rewriting pushes (this is a feature of gitolite).

You would still be able to rewrite stuff until it goes to the server, but once it's on the server, you wouldn't be able to rewrite and push to that server.
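
With plain git (no gitolite) the rough equivalent is a couple of settings on the server's bare repository:

  # reject forced (non-fast-forward) pushes and ref deletions
  git config receive.denyNonFastForwards true
  git config receive.denyDeletes true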


Maybe they just really didn't want to support it anymore? Sunset the service...



