Yeah, you see where this is going. The prod dump finished in time, and shortly before leaving work I started importing the data. Then I sat around for a while before I realized I had forgotten that additional "use the test environment" parameter -- and now I was importing a several-hours-old dump into the production database while the daily production run was running. I had to call company execs and explain the catastrophe to them, and they in turn had to call in the vendor that sold us the system. Those were some pretty scary hours for a 20-year-old kid. Luckily it was just a matter of aborting the production run, reloading the prod dump, and then rescheduling the production run for the day.
The next day I had to start my day at the vendor's office to get some shaming, but also a good piece of advice: "always say destructive things out loud before doing them". Then they went on to tell me stories of people they had worked with who had really messed things up, and we all had some good, evil laughs.
Mistakes build experience, and hard learned lessons even more so. You now have a pretty good conversation starter to put on your CV. Personally I'd rather hire someone who was a "removal" specialist over someone who hadn't learned the skill yet. :)
I believe both GitLab and the community in general will come out stronger from this incident. Thank you all for being so transparent about it.
Oh, absolutely. The problem is that GitLab didn't do that. And they will most likely have to learn many more lessons before they become reliable.
Also, RDS gives you a synchronously replicated standby database, and automates failover, including updating the DNS CNAME that the clients connect to during a failover (so it is seamless to the clients, other than requiring a reconnect), and ensuring that you don't lose a single transaction during a failover (the magic of synchronous replication over a low latency link between datacenters).
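Since the failover described above only requires clients to reconnect, the application side can be as simple as a retry loop. A minimal sketch (names and retry policy are my own assumptions, not any official AWS pattern):

```python
import time

def with_reconnect(connect, operation, retries=5, delay=2.0):
    """Run `operation` on a fresh connection, retrying on failure.

    During a Multi-AZ failover the CNAME flips to the standby, so a
    client only needs to reconnect. `connect` is any zero-argument
    function returning a new connection (e.g. psycopg2.connect bound
    to the RDS endpoint).
    """
    last_exc = None
    for attempt in range(retries):
        try:
            conn = connect()  # re-resolves the endpoint CNAME each time
            return operation(conn)
        except Exception as exc:  # in practice: the driver's OperationalError
            last_exc = exc
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise last_exc
```

The key point is that each attempt creates a brand-new connection rather than reusing a dead one, so DNS gets re-resolved and the client lands on the promoted standby.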
For a company like GitLab, which is public about wanting to exit the cloud, I feel like they could have really benefited from a fully managed relational database service. This entire tragic situation could never have happened if they had been willing to acknowledge the obvious (managing relational databases is hard) and let someone with better operational automation, like AWS, do it for them.
I have personally experienced a near-catastrophic situation 3 years ago, where 13 out of 15 days' worth of nightly RDS MySQL snapshots were corrupt and would not restore properly.
The root cause was a silent EBS data corruption bug (RDS is EBS-based), that Amazon support eventually admitted to us had slipped through and affected a "small" number of customers. Unlucky us.
We were given exceptional support including rare access to AWS engineers working on the issue, but at the end of the day, there was no other solution than to attempt restoring each nightly snapshot one after the other, until we hopefully found one that was free of table corruption. The lack of flexibility to do any "creative" problem-solving operations within RDS certainly bogged us down.
With a multi-hundred gigabyte database, the process was nerve-wracking as each restore attempt took hours to perform, and each failure meant saying goodbye to another day's worth of user data, with the looming armageddon scenario that eventually we would reach the end of our snapshots without having found a good one.
Finally, after a couple of days of complete downtime, the second to last snapshot worked (IIRC) and we went back online with almost two weeks of data loss, on a mostly user-generated content site.
We got a shitload of AWS credits for our trouble, but the company obviously went through a very near-death experience, and to this day I still don't 100% trust cloud backups unless we also have a local copy created regularly.
Cloud backups, and more generally all backups, should be treated like nuclear proliferation treaties: trust, but verify!
If you periodically restore your backups, you'll catch this kind of crap while it's not yet an issue, rather than when shit has already hit the fan.
At my current startup, we have triple backup redundancy for a 500GB pg database:
1/ A Postgres streaming-replication hot standby server (which at the moment doesn't serve reads, but might in the future)
2/ WAL-level streaming backups to AWS S3 using WAL-E, which we automatically restore every week to our staging server
3/ Nightly logical pg_dump backups.
9 months ago we only had option 3 and were hit with a database corruption problem. Restoring the logical backup took hours and caused painful downtime as well as the loss of almost a day of user generated content. That's why we added options 1 and 2.
I can't recommend WAL-E enough as an additional backup strategy. Restoring from a WAL (binary) backup is ~10x faster in our use case (YMMV), and the most data you can lose is about 1 minute's worth. As an additional bonus you get the ability to roll back to any point in time. This has helped us recover user-deleted data.
We have a separate Slack #backups channel where our scripts send a message for every successful backup, along with the backup size (in MB) and duration. This helps everyone check whether backups ran, and whether size and duration are increasing in an expected way.
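The Slack report itself can be tiny. A sketch, assuming a Slack incoming webhook (the URL and message format here are hypothetical, not our actual script):

```python
import json
import urllib.request

def build_backup_report(name, size_mb, duration_s):
    """One-line message with the numbers we want eyeballs on daily."""
    return {"text": f"backup {name} OK: {size_mb:.0f} MB in {duration_s:.0f}s"}

def post_to_slack(webhook_url, report):
    """POST the report to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(report).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

Posting the size and duration every run is what makes slow drift (a backup that quietly shrinks to 0 MB, say) visible to humans.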
Because we restore our staging on a weekly basis, we have a fully tested restore script, so when a real restore is needed, we have a couple of people who can handle the task with confidence.
I feel like this is about as "safe" as we should be.
If no database is specified, pg_restore will output the SQL commands to restore the database, and the exit code will be zero (success) if it makes it through the entire backup. That lets you know that the original dump succeeded and there was no disk error for whatever was written. Save the file to something like S3, along with its sha256. If the hash matches after you retrieve it, you can be pretty damn sure that it's a valid backup!
Otherwise you get blind scripts like GitLab had, where pg_dump fails. No exit code checking. No verification. No bueno!
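A minimal sketch of the verification described above, assuming custom-format dumps (`pg_dump -Fc`):

```python
import hashlib
import subprocess

def sha256_file(path, chunk_size=1 << 20):
    """Hash the dump so a copy retrieved from S3 can be compared byte-for-byte."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def dump_is_readable(dump_path):
    """Exit code 0 means pg_restore could read the whole archive.

    Without a --dbname, pg_restore only emits output to stdout (here,
    the archive's table of contents), so nothing touches any database.
    """
    result = subprocess.run(
        ["pg_restore", "--list", dump_path],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

Run `dump_is_readable` right after the dump, store the hex digest next to the file in S3, and re-hash on retrieval before restoring.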
It probably depends on the criticality of the data, but if you test, say, every two weeks, you can still end up in the OP's situation, right?
At what size/criticality should you have a daily restore test? Maybe even a rolling restore test, so you check today's backup, but then check it again every month or something?
For physical backups (e.g., WAL archiving), a combination of read replicas that are actively queried against, rebuilt from base backups on a regular schedule, plus staging master restores on a less frequent yet still regular schedule, will give you a high level of confidence.
Rechecking old backups isn't necessary if you save the hashes of the backups and can confirm they still match.
In the meantime, I hope you've developed automation to test your backups regularly. You could just launch a new RDS instance from the latest nightly snapshot, and run a few test transactions against it.
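Automating that is mostly two RDS API calls. A sketch using a boto3-style RDS client (`rds = boto3.client("rds")`); the instance identifiers are hypothetical, and you'd delete the test instance once your smoke queries pass:

```python
def restore_latest_snapshot(rds, instance_id, test_instance_id):
    """Restore the newest automated snapshot onto a throwaway instance.

    `rds` is a boto3 RDS client. After the restored instance comes up,
    run a few test transactions against it, then delete it.
    """
    snaps = rds.describe_db_snapshots(
        DBInstanceIdentifier=instance_id,
        SnapshotType="automated",
    )["DBSnapshots"]
    latest = max(snaps, key=lambda s: s["SnapshotCreateTime"])
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=test_instance_id,
        DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    )
    return latest["DBSnapshotIdentifier"]
```

Scheduling this nightly (and alerting when the restore or the test queries fail) turns "we have snapshots" into "we have restorable snapshots".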
I'm curious: did you manage to automate a restore smoke test after going through this?
1. stored on separate infrastructure so that obliteration of the primary infrastructure (AWS account locked out for non-payment, password gets stolen and everything gets deleted, datacenter gets eaten by a sinkhole, etc.) doesn't destroy the data.
2. offline, read-only. This is where most people get confused.
Backups are unequivocally NOT a live mirror like RAID 1, a slightly-delayed replication setup like most databases provide, or a double-write system. These aren't backups because they make it impossible to recover from human errors, which include obvious things like dropping the wrong table, but also less obvious things, like a subtle bug that corrupts or damages some records and may take days or weeks to notice. Your standbys/mirrors are going to copy both the obvious and the non-obvious things before you have a chance to stop them.
This is one of the most important things to remember. Redundancy is not backup. Redundancy is redundancy and it primarily protects against hardware and network failures. It's not a backup because it doesn't protect against human or software error.
3. regularly verified by real-world restoration cases; backups can't be trusted until they're confirmed, at least on a recurring, periodic basis. Automated alarms and monitoring should be used to validate that the backup file is present and that it is within a reasonable size variance between human-supervised verifications. Automatic logical checksums like those suggested by some other users in this thread (e.g., run pg_restore on a pg_dump to make sure that the file can be read through) are great too and should be used whenever available.
4. complete, consistent, and self-contained archive up to the timestamp of the backup. Differenced backups count as long as the full chain needed for a restoration is present.
This excludes COW filesystem snapshots, etc., because they're generally dependent on many internal objects dispersed throughout the filesystem; if your FS gets corrupted, it's very likely that some of the data referenced by your snapshots will be corrupted too (snapshots are only possible because COW semantics mean that the data does not have to be copied, just flagged as in use in multiple locations). If you can export the COW FS snapshot as a whole, self-contained unit that can live separately and produce a full and valid restoration of the filesystem, then that exported thing may be a backup, but the internal filesystem-local snapshot isn't (see also point 1).
Snapshots will help you against human error, so they are one kind of backup (and often very useful), but if you do not at least replicate those snapshots somewhere else, you are still vulnerable to data corruption bugs or hardware failures in the original system. Design your backup strategy to meet your requirements for risk mitigation.
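The size-variance alarm suggested in point 3 above is a few lines. A sketch (the 25% threshold is my own assumption; tune it to your data's growth pattern):

```python
def backup_size_ok(today_bytes, yesterday_bytes, max_ratio=0.25):
    """Alarm check: today's backup exists and its size is within a
    tolerated variance of the previous run."""
    if not today_bytes:
        return False  # backup missing or empty: page someone
    if not yesterday_bytes:
        return True   # first run, nothing to compare against yet
    change = abs(today_bytes - yesterday_bytes) / yesterday_bytes
    return change <= max_ratio
```

This is deliberately dumb: it won't catch corruption, only the "backup script silently produced a 0-byte file" class of failure, which is exactly what human-supervised verification is too slow to notice.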
If your cloud account, datacenter/colo, or office is terminated, hacked, burned down, or swallowed by a sinkhole... you don't want your backups going with it.
Cloud especially: even if you're on AWS and have your backups in Glacier + S3 with replication to 7 datacenters on 3 continents... if your account goes away, so do your backups (or at least your access to them).
For example, you can't load custom extensions into RDS. Also, to the best of my knowledge RDS does not support a hot standby replica you can use for read-only queries, and replication between RDS and non RDS is also not supported. This means you can't balance load between multiple hosts, unless you're OK with running a multi-master setup (of which I'm not sure how well this would play out on RDS).
Most important of all, we ship PostgreSQL as part of our Omnibus package. As a result, the best way of testing it over time is to use it ourselves, something we strive to do with everything we ship. This means we need to actually run our own things. Using a hosted database would mean we wouldn't be using a part of what we ship, and thus wouldn't be able to test it over time.
RDS has very nice Read Replicas.
For HA you can use High Availability (Multi-AZ).
So, you can get the benefit of up to 15 read replicas, and not have to pay for an extra standby server that is sitting idle.
In general, though: TANSTAAFL.
I suppose disturbingly is meant to imply "It was frustrating to me, and I think it would be frustrating to anyone in the situation of seriously using Postgres on RDS, and perhaps it ought to even decrease their opinion of the RDS team's ability to prioritize and ship features that are production-ready".
Does that make sense?
There was no workaround for getting a read replica. RDS doesn't allow you to run replication commands. So your options were "Don't use Postgres on RDS, or don't run queries against up-to-date copies of databases." There was never any announcement of when read replicas were coming. It was arguably irresponsible of them to release Postgres on RDS as a product and then wait a year to support read replicas, which is a core feature that other DB backends had already.
By all means, RDS isn't perfect. It doesn't suit my current needs. But I understand that getting these things to work in a managed way that suits the needs of most customers is not an easy task. I'll remain frustrated in some small way until RDS does suit my needs. I hope they continue to add features to give customers more flexibility. And from what I've seen, they likely will.
This is not true anymore.
I set up two read-only RDS replicas, one in a different AWS region, and another in the same region, for read-only queries, just by clicking in AWS console.
No. The cloud (AWS, GCE, Azure etc) is not "just" like your own server.
Just consider some basic details - you pay someone else to worry about things like power outages, disk failures, network issues, other hardware failures, and so on.
But... that "point" is trivial.
Did anyone ever claim that cloud servers are made of magic pixie dust? No.
The real "point" is that cloud = hardware + service, with service > 0.
As the OP describes, GitLab tries to do their own service (because service is expensive... it is), and they find out, the hard way, that the "service" part is not easy at all.
Amazon & Microsoft & Google run millions of servers each, so they can afford to hire really good people, establish really good procedures, and so on.
That, I think, is a better point.
Something like this is not a mere oversight on the part of technical leadership; it's either negligence or incompetence. Whoever is responsible for GitLab's server infrastructure should be having very serious thoughts right now.
I keep seeing people throw this around as if it's God's truth and it frustrates the hell out of me. It may be the case for your organization but everywhere I have worked (from startups to Fortune 500) the cloud allowed our engineers to focus on our product rather than infrastructure maintenance and contributed massively to our success.
The issue, I think, is that so many people just go balls-to-the-wall 1000% AWS and consider it a done deal, which is terrible, and then go around telling everyone else they should do the same thing, which is also terrible.
The fact is that you can't just lay the responsibility for all of this in Amazon's lap. We'd be even less impressed if GitLab's excuse was "Yeah, we had the Amazon nightly snapshots enabled, so we only lost 19 hours of data" (whoever coincidentally took the backup 6 hours before the incident should get enough of a bonus to make his GitLab salary market-competitive!).
Amazon does start you out with some OK-ish defaults, which is better than allowing someone with 4 days of experience to set everything up, but ultimately that's not going to mean much in unskilled hands.
When it comes down to it, every company still needs someone internal to take responsibility for their infrastructure; that means backups, security, permissions, performance, hardware, and yes, cost. If your company already has someone with those responsibilities, giving that person $500k to hire a few hardware jockeys is going to be much better than giving Amazon $3M to be the sole host for all of your infrastructure. If your company doesn't have anyone with these responsibilities, it needs to get on the stick, as GitLab has clearly demonstrated to us this month.
Also, your snapshot backup solution is trivial to implement on EC2, or anywhere else for that matter. But it is not easy to do right in some scenarios. Read https://www.postgresql.org/docs/9.6/static/backup-file.html for details. LVM or ZFS are likely needed under the DB layer.
Currently working at company number 2 with large (many terabytes) databases on RDS and can safely say this is horse shit.
The amount of time and energy it allows our engineers to spend on our actual products instead of database management is worth all of the extra cost and lock in and then some.
Edit: I just realized that you were talking about Postgres on RDS in particular. I don't have experience with Postgres so you may well be right.
a. How many hours would you guess you are saving a month?
b. What takes a factor of 10 less time on RDS than doing it by hand? Which task sees the largest time savings?
Because I always wonder when reading this: what am I missing? What haven't we done? Were we lucky? We were running MySQL and Postgres for multi-hundred-million-EUR companies with millions of users, and we did not spend a lot of effort managing them.
2. Zero effort re-deployment from backups.
3. Almost zero effort encryption at rest.
4. Zero effort hot backups, automatic fail-overs, and multiple datacenter deployments
5. Low effort migrations of massive amounts of data between DBs when someone inevitably wants to refactor something
6. Zero effort logging and log aggregation
7. Almost zero effort alerting of issues via sms/email/other
I could go on but I'm on my way to work...
When you're paying engineers north of 150K all of this adds up, and I'd much rather throw the money at Amazon to handle this and pay the engineers to focus on our actual product.
Of course, my view is biased because we only hear about the issues - there might be a 100x more people using RDS without any issues, and we never hear about them.
In general, the pattern we see is that people start using RDS, and they're fairly happy because it allows them to build their product and RDS more or less works. Then they grow a bit over time, and something breaks.
Which brings me to the two main RDS issues that we run into: lack of insight, and difficulty migrating off RDS with minimal downtime. Once in a while we run into an issue where we wish we could connect over SSH and get some data from the OS, attach gdb to the backend, or something like that. Or use auto_explain. Not even mentioning the custom C extensions that we use from time to time...
They're simply uninformed then. AWS database migration service makes zero downtime migrations trivial between just about any major databases (mysql, oracle, postgres, aurora, sqlserver, etc.)
No database solution is totally reliable. If storing data is my primary job, like it is GitLab's, I'd like to have as much control of it as possible.
The difference is that Instapaper was able to restore from backups, because their managed service performed them properly. The archive data is taking longer to restore, but that's due to design decisions Instapaper made.
If you truly need more than 30K IOPs, I would recommend leveraging read-replicas, a Redis cache, and other solutions before just "throwing money at the problem" and purchasing a million IOPs.
I'm afraid you are seriously underestimating the operational capabilities required to successfully operate a highly-available, distributed, SSD storage layer.
There was no backup because the CloudFormation template I built at the time did not have the flag that says to take a final snapshot. If you do not take the final snapshot (via console, API, or CFN) you are doomed: all the automatic snapshots taken by AWS are deleted upon the removal of the RDS instance.
This was our staging DB for one of our active projects, which I and the dev team had spent about a month getting to staging, and it was under UAT. Fuck. I told my manager and he understood the impact, so he just let me get started on rebuilding. The next morning I got the DB up and running, since luckily I had compiled my runbook when I first deployed it to staging. But it was not fun, because the data is synced via AWS DMS from our on-premise Oracle DB, so I needed to get sign-off from a number of departments.
So I learned my first lesson with RDS: make sure the final-snapshot flag is enabled. (For EC2 users: remind yourself that anything stored on ephemeral storage is going to be lost upon a hard VM stop/start operation, so back up!!!)
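If you delete instances programmatically, the same lesson applies at the API level. A sketch using a boto3-style RDS client (`rds = boto3.client("rds")`); the identifiers are examples:

```python
def delete_with_final_snapshot(rds, instance_id, snapshot_id):
    """Delete an RDS instance while keeping a final snapshot.

    Automated snapshots disappear along with the instance, so skipping
    the final snapshot really does leave you with nothing. `rds` is a
    boto3 RDS client.
    """
    return rds.delete_db_instance(
        DBInstanceIdentifier=instance_id,
        SkipFinalSnapshot=False,  # the flag that matters
        FinalDBSnapshotIdentifier=snapshot_id,
    )
```

The CloudFormation equivalent is setting a `DeletionPolicy` of `Snapshot` on the `AWS::RDS::DBInstance` resource, so stack deletion leaves a snapshot behind.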
I also learned that RDS is not truly HA when it comes to server upgrades, both minor and major. I've tested a major upgrade and saw the DB connection unavailable for up to 10 minutes. In some minor version upgrades, both primary and secondary had to be taken down.
Other small caveats, such as auto minor version upgrades, maintenance windows, automated snapshot retention being capped at 35 days, event logs in the RDS console not lasting more than a day, and converting to provisioned IOPS being expensive, are some of the small annoyances I would encourage folks to pay close attention to. Oh yeah, manual snapshots also have to be managed by yourself; kind of obvious, but there is no lifecycle policy. And building a read replica took nearly a day on my first attempt at creating one.
Of course, now that I've learned these lessons, we have auto and manual snapshots and a better schedule. I encourage you to take ownership of upgrades, even minor versions, so you know how to design your applications for better fault tolerance. In the end, the thing I liked most about RDS is the extensive free CloudWatch metrics available. I'd also recommend not using the mobile app, and if you do, set up a read-only role / IAM user; the app is way too primitive and laggy. I still enjoy using RDS, as the service is stable and quick to use, but make sure you have the habit of backing up, and take serious ownership of and responsibility for the database.
You can select your maintenance window, and you can defer updates as long as you want - nobody will force you to update, unless you check the "auto minor version update" box.
Please don't blame AWS for your lack of understanding of the platform. They try to protect you from yourself, and the default behavior of taking a final snapshot before deleting an instance is in both CloudFormation and the Console. If you choose to override those defaults, don't blame AWS.
This bit us once. Someone issued a `shutdown -h now` out of habit in an instance that was going for reboot, and it came back without its data, because "shutdown" is the same as "stop", and "stop" on ephemeral instances means "delete all my data". Since the command was issued from inside the VM, no warning or message that would've appeared on the EC2 console was displayed.
Amazon's position on ephemeral storage was shockingly unacceptable and unprofessional. They claimed they had to scrub the physical storage as soon as the stop button was pressed for security purposes, which is a complete cop-out. Of course they can't reallocate that chunk of the disk to the next instance while your stuff is on it, but they could've implemented a small cooldown period between stoppage, scrubbing, and reallocating the disk so that there would at least be a panic button and/or so accidental reboots-as-shutdowns don't destroy data. The only reason they didn't do that is because they didn't want to need to expand their infrastructure to accommodate it. Very sloppy, and not at all OK. That's not how you treat customer data.
Fortunately, AWS has moved on; I don't think that any new instances can be created with ephemeral storage anymore. Pure EBS now.
>I also learned that RDS is not truly HA in the case of upgrading servers, both minor and major upgrade. I've tested major upgrade and saw DB connection unavailable up to 10 min. In some minor version upgrades both primary and secondary had to be taken down.
You need multi-AZ for true HA. Failover within the same AZ has a small delay, as you've noted.
>I still enjoy using RDS, the service is stable and quick to use, but just make sure you have the habit of backuping and take serious ownership and responsibility of the database.
As many others in this thread have said, AWS and other cloud providers aren't a silver bullet. Competent people are still needed to manage these sorts of things. GitLab most likely would not have fared any better under AWS.
There is a significant security reason why they blank the ephemeral storage. How would you feel if a competitor got the same physical server as you, and was able to read all of your data? AWS takes great lengths to protect customer data privacy in a shared, multi-tenant environment. They are very public through their documentation about how this works, so I think it's a bit negligent to blame them because you don't understand the platform.
AWS gets paid the big bucks to abstract such concerns away in a pleasant manner. The device with customer data can sit in reserve, attached to the customer's account, for a cooldown period (of maybe 24 hours?) that would allow the customer to redeem it. AWS could even charge a fee for such data redemptions to compensate for the temporary utilization of the resource, or they could say ephemeral instances will always cost your use + 1 day. They can put a quota on the number of times you can hop ephemeral nodes.
They could do basically anything else, because basically anything else is better than accidentally deleting data that you need due to a counterintuitive vendor-specific quirk that conflicts with established conventions and habits and then being told "Sorry, you should've read the docs better."
This is an Amazon-specific thing that bucks established convention and converts the otherwise-harmless habits of sysadmins into potential data loss events. It's very bad to do this ever (looking at you, killall on Linux vs. killall on Solaris), but it's especially bad to do it on a new platform like AWS, where you know lots of people are going to be carrying over their established habits while learning the lay of the land. It is not reasonable for Amazon to tell users that they just have to suck it up and read the docs more thoroughly next time.
This is not like invoking rm on your system or database root, which is a multi-decade danger that everyone is aware of and acclimated to accounting for, and which has multiple system-level safeguards in place to prevent it: user access control, safe-by-default versions of rm that have been distributed with most major distributions lately, etc., and for which thorough backup and replication solutions exist to provide remedies when inevitable accidents do happen.
The point is that just instantly deleting that data ASAP and providing 0 chance for recovery is wanton recklessness, and there's no excuse for it. Security is not an excuse because there's no reason they have to reallocate the storage the instant the node is stopped.
If such deletions could only be triggered from the EC2 console after removing a safeguard similar to Termination Protection, that may be more reasonable, but allowing a shutdown command from the CLI to destroy the data is patently irresponsible.
Good system design considers that humans will use the system, that humans make mistakes, and it will provide safeguards and forgiveness. Ephemeral storage fails on all of those fronts. Yes, technically, it's the user's fault for mistakenly pressing the buttons that make this happen. But that doesn't matter. The system needs to be reasonably safe. AWS's implementation of ephemeral storage is neither safe nor reasonable.
Amazon has done a good job of tucking ephemeral storage away. It used to be the default on certain instance sizes. As another commenter points out, it now requires one to specifically launch an AMI with instance-backed storage. It's good that they've made it harder to get into this mess, but it's bad that they continue to mistreat customers this way, especially when their prices are so exorbitant.
Look, AWS is trying to balance the economics of a large, shared, multi-tenant platform. It would be great if they had enough excess capacity around to keep ephemeral instance hardware unused for 24 hours after the customer terminates or stops the instance, but frankly, that's an edge case, and they would be forcing other customers to subsidize your edge case by charging everyone more.
Let me stop you there. In our case, it wasn't that we didn't understand what ephemeral storage was or how it functioned, or that it would get cleared if the instance was stopped (though I've frequently met people who are confused over whether instance storage gets wiped when a machine is stopped or when it's terminated; it gets wiped when an instance is stopped).
The issue was that someone typed "sudo shutdown -h now" out of habit instead of "sudo shutdown -r now" (and yes, something like "sudo reboot" should've been used instead to prevent such mistakes). Stopping an instance, which is what happens when you "shut down", can have other ramifications that are annoying, like getting a different IP address when it's started back up, but those annoyances are usually pretty easy to recover from, not a big deal. Much different ball park from getting your stuff wiped.
Destroying customer data IS a big deal. It's ALWAYS a big deal. If your system allows users to destroy their data without being 1000% clear about what's happening, your system's design is broken. High-cost actions like that should require multiple confirmations.
Even the behavior of the `rm` command has been adjusted to account for this (though it could be argued that it hasn't been adjusted far enough); for the last several years, an extra flag has been required to remove the filesystem root.
>is to charge all customers for a minimum of 25 hours of use, even if they only use the instance for a single hour? That seems crazy.
One of several potential solutions. It doesn't seem crazy to me; at least, not in comparison to making a platform with such an abnormal design that something which is an innocent, non-destructive command everywhere else can unexpectedly destroy tons of data.
The ideal solution would be for Amazon to fix their design so that this is fully transparent to the user. Instance storage should be flushed to a temporary EBS volume on shutdown and automatically reapplied to a new instance store when the instance is spun back up (it's OK if this happens asynchronously). The EBS volume would follow conventional EBS termination policies; that data shouldn't be deleted except at times when the EBS root disk would also be deleted (typically on instance termination, unless special action is taken to preserve it).
That could be an optional extension, but it should be on by default -- that is, you could start an instance store at a lower cost per hour if you disabled this functionality, similar to reduced redundancy storage in S3, etc. Almost every company would be thrilled to pay the extra few cents per hour to safeguard against the accidental destruction of virtually any quantity of data that might be important.
>Look, AWS is trying to balance the economics of a large, shared, multi-tenant platform. It would be great if they had enough excess capacity around to keep ephemeral instance hardware unused for 24 hours after the customer terminates or stops the instance, but frankly, that's an edge case, and they would be forcing other customers to subsidize your edge case by charging everyone more.
A redemption fee would punish the user who made the mistake for failing to account for Amazon's flawed design. Under this model, such fees should be at least high enough to make up the cost incurred by Amazon in keeping the hardware idle.
This way Amazon can punish people who impugn upon its bad design choices by making them embarrass themselves before their bosses when they have to explain why the AWS bill is $300 higher this month or whatever, and the data won't be gone. Winners all around.
Another thing I'd like to point out is that you really need to plan for ephemeral storage to fail. All it takes is a single disk drive failure in your physical host, and you've lost data. If you are using ephemeral storage at all, you should definitely have good, reliable backups, or the data should be protected in other ways (like HDFS replication).
Still around, can be launched with an instance-store backed AMI:
Also, depending on the precise nature of the DMARC fuckup, it wouldn't help anyway. CloudWatch receiving mail doesn't guarantee that you will receive mail.
They should have spun up a new server to act as secondary the moment replication failed. This new server is the one you run all of these commands on, and if you make a mistake you spin up a new one.
Only when the replication is back in good order do you go through and kill the servers you no longer need.
The procedure for setting up these new servers should be based on the same scripts that spin up new UAT servers for each release. You spin up a server that is a near copy of production and then do the upgrade to new software on that. Only when you've got a successful deployment do you kill the old UAT server. This way all of these processes are tested time and time again and you know exactly how long they'll take and iron out problems in the automation.
In a perfect world everything is cluster-ready &c at the outset. In this world it usually... isn't.
EDIT: ... and I'd posit that such cluster-readiness actually isn't worth it most of the time.
EDIT: Obviously, if you really need clustering, then you need it, but IME people tend to overestimate their needs drastically. Everybody wants to be Big Data, but almost nobody actually is.
For me, personally, going from cloud servers to rented dedicated servers cut my bill by 93% – more than an order of magnitude. At same performance.
In fact, it’d be cheaper to run 10x as many dedicated servers than to use cloud solutions for me.
If your engineering time is free, then this calculation is complete. Otherwise it is not.
Does that 93% saving pay for a DB engineer, or enough of your developers' time to build the same quality of redundancy as you'd get with a DBaaS?
This calculus is going to be different for every DB and every company, but the OpEx impact of switching to dedicated servers is a bit more complex than you suggest above.
So, for me the choice was between "use cloud tools, and get performance worse than a raspberry pi", or "run dedicated, and get more performance and storage and traffic than I need, and actually the ability to run my stuff".
For less than the price of a Netflix subscription I’m able to run services that can handle tens of thousands of concurrent users, and have terabytes of storage (and enough traffic that I never have to worry about that).
And setting it up only cost me a few days of work.
For me it was a decision between being able to run services, or not being able to run them at all.
However, that paradigm is not really applicable to GitLab's OpEx calculation; they have to pay their engineers ;)
You have to remember GitLab is a 100% remote company, so they can hire DBAs from anywhere on the planet.
The cloud obviously makes a lot more sense if you have US electricity prices and Silicon Valley wages.
My point is simply that your posts above didn't address the complexity of their calculation, as they didn't factor the costs of switching to self-hosted.
I could feel the sweat drops just from reading this.
I'd bet every one of us has experienced the panicked Ctrl+C of Death at some point or another.
Back in 2009 we were outsourcing our ops to a consulting company, who managed to delete our app database... more than once.
The first time it happened, we didn't understand what, exactly, had caused it. The database directory was just gone, and it seemed to have vanished around 11pm. I (not they!) discovered this and we scrambled to recover the data. We had replication, but for some reason the guy on call wasn't able to restore from the replica -- he was standing in for our regular ops guy, who was away on site with another customer -- so after he'd struggled for a while, I said screw it, let's just restore the last dump, which fortunately had run an hour earlier; after some time we were able to get a new master set up, and although we had lost one hour of data, it was fortunately from a quiet period with very few writes. Everyone went to bed around 1am and things were fine, the users were forgiving, and it seemed like a one-time accident. The techs promised that setting up a new replication slave would happen the next day.
Then, the next day, at exactly 11pm, the exact same thing happened! This obviously pointed to a regular maintenance job as being the culprit. It turns out the script they used to rotate database backup files did an "rm -rf" of the database directory by accident. Again we scrambled to fix. This time the dump was 4 hours old, and there was no slave we could promote to master. We restored the last dump, and I spent the night writing and running a tool that reconstructed the most important data from our logs (fortunately we logged a great deal, including the content of things users were creating). I was able to go to bed around 5am. The following afternoon, our main guy was called back to help fix things and set up replication. He had to travel back to the customer, and the last thing he told the other guy was: "Remember to disable the cron job".
Then at 10pm... well, take a guess. Kaboom, no database. Turns out they were using Puppet for configuration management, and when the on-call guy had fixed the cron job, he hadn't edited Puppet; he'd edited the crontab on the machine manually. So Puppet ran 15 mins later and put the destructive cron job back in. This time we called everyone, including the CEO. The department head cut his vacation short and worked until 4am restoring the master from the replication logs.
We then fired the company (which filed for bankruptcy not too long after), got a ton of money back (we threatened to sue for damages), and took over the ops side of things ourselves. Haven't lost a database since.
Steps I personally take to avoid this:
- Avoid prod boxes like the plague
- Set up a prompt (globally) to make it extremely obvious that you're in production. Something like a red background and black text saying "PRODUCTION"
- When changing data in production (DB's, config, etc) write a script (or just commands to copy and paste) and have that peer reviewed. If anything doesn't go to plan, treat it as a red flag. This serves a dual purpose of having a quick record of your actions without hunting through logs.
- Never ever leave open sessions
- Avoid prod boxes. This is important enough for me to say twice. Most of the time it can be avoided, especially if you use configuration management tools and write tools to perform common operations.
Now, let me just cross my fingers that I don't jinx myself :-)
Also, I would make sure to have a different prompt than default for non-prod systems too. That way you know to be suspicious if it hasn't been changed from default.
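The red-prompt idea is only a couple of lines of bash. A minimal sketch, with a hypothetical file path and one possible color choice:

```shell
# Hypothetical /etc/profile.d/00-prod-prompt.sh on production hosts:
# red background, black text, an unmissable PRODUCTION tag in every prompt.
PS1='\[\e[41;30m\] PRODUCTION \[\e[0m\] \u@\h:\w\$ '
export PS1
```

Staging and dev boxes would get their own distinct tag (say, a green "STAGING"), so that an untouched default prompt itself becomes a warning sign.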
Peer review though - yes. That could help. I wouldn't say "I'm unlikely to make that mistake" - it's likely to go on the famous last words list...
So, I'll say it more clearly, and you can mark my words. It's unlikely I'll ever log into a production system, type the wrong command, and do something bad as a result.
Could I deploy code that does very bad things to production? Yes. It'll probably happen to me. Is that the situation described above? No.
I treat logging into a production system as if one wrong move could result in me losing my job. Why? Because one wrong move could result in me losing my job. I'm not joking when I say I avoid logging into a production system like the plague. It's unlikely to happen to me because its extremely rare for me to put myself in a situation where I could let this happen. There's almost always better alternatives that I'll resort to, well before doing anything like this.
Fortunately disks were slower back then, so it hadn’t deleted too many files when I interrupted it, and the computer was able to be recovered without too much inconvenience.
rm -rf ~/foo
rm -rf ~ /foo
$ tar cvfz mbox outbox mbox.tar.gz
On my system, this overwrote my full mailbox with a gzipped copy of my outbox, and a complaint that the mbox.tar.gz input file didn't exist.
That's right, the worst data loss happened while I was trying to take a backup. :(
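The trap here is that with `cvfz`, the `f` flag consumes the *next* argument as the archive name. A scratch-directory demo of the safe habit: name the archive immediately after the flags, then list it before trusting it.

```shell
# With 'tar cvfz', 'f' takes the NEXT argument as the archive name, so
#   tar cvfz mbox outbox mbox.tar.gz    # overwrites mbox with the archive!
# The safe form names the archive first, right after the flags:
mkdir -p /tmp/tar-demo && cd /tmp/tar-demo
echo "inbox mail" > mbox
echo "sent mail"  > outbox
tar czf mbox.tar.gz mbox outbox     # archive name immediately after the flags
tar tzf mbox.tar.gz                 # list the contents before trusting it
```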
Fortunately the University was using some tool that re-images each computer every time it boots, before hitting Windows, so after starting it back up all the deleted system and application files were back.
Then examine the output to see if that is the stuff I really want to delete and if it is,
up-arrow Ctrl-A right right backspace backspace rm -rf enter
Never deleted the wrong stuff again in 30 years of doing that
For example, it will expand `ls *` to `ls foo bar baz`, etc
I stopped it after about 3 seconds, but that was enough to do critical damage.
Turns out he had accidentally executed an rm of the home dir on a major web server in the background so in panic, instead of killing the right pid, he just ran to the server and pulled the power cords. :D
Ended up restoring a few home dirs from tape.
My 2 cents... I might be the only one, but I don't like the way GL handled this case. I understand transparency as a core value and all, but they've gone a bit too far.
IMHO this level of exposure has far-reaching privacy implications for the people who work there. Implications that cannot be assessed now.
The engineer in question might not have suffered PTSD, but some other engineer might have. Who knows how a bad public experience might play out? It's a fairly small circle, and I'm not sure I would like to be part of a company that would expose me in a similar fashion if I happened to screw up.
On the corporate side of things there is a saying in Greek: "Τα εν οίκω μη εν δήμω", meaning don't wash your dirty linen in public. Although they're getting praised by bloggers and other small-size startups, at the end of the day exposing your 6-layer broken backup policy and other internal flaws in between, while being funded to the tune of $25.62M over 4 rounds, does not look good.
It is not our intent to have one of our team members implicated by the transparency. That is why we redacted their name to team-member-1 and in any future incidents we'll do the same. It should be their choice to be identified or not. We are very aware of the stress that such a mistake might cause and the rest of the team has been very supportive.
I agree that we don't look good because of the broken backup policy. The way to fix that is to improve our processes. We recognize the risk to the company of being transparent, but your values are defined by what you do when it is hard.
Every day I'm growing more to like GitLab. It took me way too long to realize that GitLab has a singular focus to change how people create and collaborate.
A person purely motivated on principle to see a specific change is going to find a way to make it happen. The hard part with such ideological ventures is that you have to have the business sense to make it sustainable. I'm gradually learning to recognize both aspects present in GitLab.
When you're guided on principle, it's much easier to accept losses here and there in the right way...
> If you want to move from GitLab.com please know that you can easily export projects and import them on a self-hosted instance (and if in the future we regain your trust you can also go the other way).
...and be able to stay focused on the bigger picture! Some customers were going to react this way no matter what. Sytse's reply characterizes GitLab's response as a whole: we know we did wrong, we learned from it, and we're going to do a better job from here on out, regardless of whatever the fallout from the incident is.
Sytse, I love what you're doing and I look forward to seeing your continued resilience and dedication to your goal. The world needs more businesses like this.
> It is not our intent to have one of our team members implicated by the transparency. That is why we redacted their name to team-member-1 and in any future incidents we'll do the same.
Great, good to know. I wish all the success in the world to you and everyone involved with Gitlab.
Most companies would stay as quiet about this as possible, you guys remained transparent and this is why I'll remain a customer.
I'm glad it all worked out in the end!
Isn't that a put-you-out-of-business moment? I guess there are the enterprise installations, but the main website would be toast.
I guess there's also the chance that older backups existed, taken intentionally or by chance, that would have been worse but still recoverable from.
At my day job, we gradually stopped using email for almost all alerts; instead we have several Slack channels, like #database-log where MySQL errors go. Any cron jobs that fail post in #general-log. Uptime monitoring tools post in #status. And so on...
Email has so much anti-spam machinery, like DMARC, that makes it less likely your mail will be delivered. Something failing like a backup or database query is too important to risk it not reaching someone who can make sure it gets fixed.
My 2 cents.
At the very least you want some kind of dead-man's switch that goes off if it's seen no events in the last X amount of time. Ideally you want to be polling the box in a stateful way, although with ephemeral nodes and flexible infra being all the rage, that's fallen by the wayside a bit lately.
You could also check for evidence a run has been successful, although that does depend on what you're doing exactly.
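The dead-man's switch described above is only a few lines in practice. A minimal sketch (the job names and API are hypothetical): jobs record a "check-in" timestamp, and a watcher flags any job that has been silent longer than its window, including jobs that never checked in at all.

```python
import time

# Minimal dead-man's-switch sketch: jobs check in with a timestamp;
# overdue() flags jobs that have gone silent longer than their window.
checkins = {}

def check_in(job, now=None):
    checkins[job] = time.time() if now is None else now

def overdue(job, window_seconds, now=None):
    now = time.time() if now is None else now
    last = checkins.get(job)
    # A job that never checked in is treated as overdue, too.
    return last is None or (now - last) > window_seconds

check_in("nightly-backup", now=1000.0)
print(overdue("nightly-backup", 3600, now=2000.0))    # recent check-in: False
print(overdue("nightly-backup", 3600, now=10000.0))   # silent too long: True
```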
For our backup system, we're going to build an audit cron job on our main server that checks all our Azure containers to see if each server has pushed a file lately. It'll alert us if a file hasn't been uploaded in a few days or if it's smaller than a few MB (which is suspiciously small; we'd expect a few hundred MB for mysqldump+files).
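An audit job like that reduces to a small freshness-and-size check over a storage listing. A hedged sketch (the thresholds, server names, and the listing format are assumptions; a real version would pull the tuples from the Azure container API):

```python
from datetime import datetime, timedelta

# Flag servers whose newest backup is too old or suspiciously small.
MAX_AGE = timedelta(days=3)
MIN_SIZE = 5 * 1024 * 1024  # a few MB; real dumps should be hundreds of MB

def stale_backups(listing, now):
    alerts = []
    for server, uploaded_at, size in listing:
        if now - uploaded_at > MAX_AGE:
            alerts.append((server, "no recent backup"))
        elif size < MIN_SIZE:
            alerts.append((server, "backup suspiciously small"))
    return alerts

now = datetime(2017, 2, 10, 12, 0)
listing = [
    ("web1", datetime(2017, 2, 10, 3, 0), 300 * 1024 * 1024),  # healthy
    ("db1",  datetime(2017, 2, 5, 3, 0),  300 * 1024 * 1024),  # too old
    ("app1", datetime(2017, 2, 10, 3, 0), 12 * 1024),          # too small
]
print(stale_backups(listing, now))
```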
Messages in Monolog, like syslog have a level attached, so DEBUG, INFO, NOTICE and WARNING will only be written to a log file on disk. Anything higher, so ERROR, CRITICAL, ALERT or EMERGENCY will write to Slack (as well as log to disk). This means we only get notified of things failing and we can go on the server and see everything from DEBUG upwards which lets us mentally step through the cron job's run.
It's a very cool library. https://github.com/Seldaek/monolog
You can see the handlers here: https://github.com/Seldaek/monolog/tree/master/src/Monolog/H... which includes Slack, HipChat, IFTTT, Pushover, etc...
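The same level split can be sketched with Python's stdlib logging, for readers outside PHP (the Slack handler is replaced here by an in-memory stream; handler names are illustrative):

```python
import io
import logging

# DEBUG and up go to the "disk" log; ERROR and up additionally go to the
# "alert" channel (stand-in for a Slack handler).
log = logging.getLogger("cronjob")
log.setLevel(logging.DEBUG)

disk = io.StringIO()      # stand-in for the on-disk log file
alerts = io.StringIO()    # stand-in for the Slack channel

disk_handler = logging.StreamHandler(disk)
disk_handler.setLevel(logging.DEBUG)
alert_handler = logging.StreamHandler(alerts)
alert_handler.setLevel(logging.ERROR)
log.addHandler(disk_handler)
log.addHandler(alert_handler)

log.debug("step 1: dump completed")
log.error("step 2: upload failed")

print("disk log:", disk.getvalue().splitlines())
print("alerts:  ", alerts.getvalue().splitlines())
```

Only the error reaches the alert channel, while the disk log keeps the full trace for stepping through the run afterwards.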
I can only imagine this engineer's poor old heart after the realization of removing that directory on the master. A sinking, awful feeling of dread.
I've had a few close calls in my career. Each time it's made me pause and thank my luck it wasn't prod.
>>The standby (secondary) is only used for failover purposes.
>>One of the engineers went to the secondary and wiped the data directory, then ran pg_basebackup.
IMO, secondaries should be treated exactly as their primaries. No operation should be done on a secondary unless you'd be OK doing that same operation on the primary. You can always create another instance for these operations.
Yikes. One common practice that would have avoided this is using the just-taken backup to populate stage. If the restore fails, pages go out. If the integration tests that run after a successful restore/populate fail, pages go out.
Live and learn I guess.
It's unfortunate they had this technical issue, but it's good to see others ( besides Github ) operating in this space. I should give Gitlab a try sometime.
This is a great attitude. Too often opportunity cost isn't considered when making rules to protect folks from doing something stupid.
It's really simple to point the finger and try to find a single cause of failure - but it's a fools errand - comparable to finding the single source behind a great success.
However, if "the engineer" that caused this happens to read this, the above is not a sign that you should quit the profession and become a hermit. A chain of events caused this, you just happened to be the one without a chair to sit in when the music stopped.
> That is, much like falling dominoes, Bird and others (Adams, 1976; Weaver, 1971) have described the cascading nature of human error beginning with the failure of management to control losses (not necessarily of the monetary sort) within the organization.
If a design contains an SPOF, then it's a bad design and should not be approved until the SPOF is removed by adequate redundancy or other means.
I'm not saying this is the case here but it's all too easy to blame someone for making a mistake. Even the most experienced make mistakes but reducing your MTTR is often overlooked in favour of other seemingly more pressing concerns.
I am very happy about their open post-mortem, so that anyone can learn from it. Reading it, it looks to me like the "rm" was not the cause of the disaster; it just triggered it. The real problem was the whole setup, which failed. And that is something which falls under management's responsibility.
likewise all tty's have red backgrounds on prod.
select * from table > script
(drop all the tables)
It was in prod, he thought it was a dev DB, and the backups had never worked. After this, the edict was that all terminals for prod will be red. A simple solution.
- Automated testing of recovering PostgreSQL database backups https://gitlab.com/gitlab-com/infrastructure/issues/1102
- Build Streaming Database Restore https://gitlab.com/gitlab-com/infrastructure/issues/1152
How do you reliably check if something didn't happen? Is the backup server alive? Did the script work? Did the backup work? Is the email server working? Is the dashboard working? Is the user checking their emails (think: wildcard mail sorting rule dumping a slight change in failure messages to the wrong folder).
And the converse answer isn't much better: send a success notification...but if it mostly succeeds, how do you keep people paying attention to it when it doesn't (i.e. no failure message, but no success message)?
The best answer I've got, personally, is to use positive notifications combined with visibility - dashboard your really important tasks with big, distinctive colors - use time based detection and put a clock on your dashboard (because dashboards which mostly don't change might hang and no one notice).
>> Why did replication stop? - A spike in database load caused the database replication process to stop. This was due to the primary removing WAL segments before the secondary could replicate them.
Is this a bug/defect in PostgreSQL then? Incorrect PostgreSQL configuration? Insufficient hardware? What was the root cause of Postgres primary removing the WAL segments?
PgSQL, Mongo, and MySQL all use a transaction stream like this for replication and they all have to put some kind of cap on it or risk running out of disk space, but the cap should be made sufficiently large to allow automatic resumption of disconnected slaves without manual redumping, except in extraordinary circumstances. Log retention should be long enough to last at least a long weekend so that someone can come in and poke the DB back into action on Tuesday morning, but preferably more like 1 week. Alarms should be configured to fire well before replication lag gets anywhere near the log expiration timeout.
In particular, PostgreSQL has a feature that allows automatic WAL archiving (i.e., it confirms that the WAL has been successfully shipped to a separate system before it removes it from the master) and a feature called "replication slots" that ensures that all WALs are kept if a regular subscriber is offline. If either of these features had been correctly configured, there would've been no need to do a full resync; the secondary database would've come back and immediately picked up where it left off.
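For reference, a sketch of those two safeguards on a 9.x-era primary (the values and the archive path are illustrative, not GitLab's actual configuration):

```
# postgresql.conf (primary) -- illustrative values
wal_keep_segments = 1024       # retain WAL for lagging standbys (~16 GB at 16 MB/segment)
archive_mode      = on
archive_command   = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'
```

Alternatively, a physical replication slot (`SELECT pg_create_physical_replication_slot('standby_1');`, PostgreSQL 9.4+) makes the primary hold WAL until the named standby has consumed it; the trade-off is that a dead standby can then fill the primary's disk, so slot retention needs monitoring of its own.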
Additionally, if one must resync the full database (and I've had to do this many times), tools like pg_basebackup and innobackupex are basically required to perform the process of pulling the master's data consistently, and the old (unsynced) data directory should be allowed to linger until the full master snapshot has been confirmed and is ready to resync. It's very reckless to go around removing binary data directories until you're certain that the new stuff is running, even if you're "just on the replica".
With pg_basebackup, you run it on the replica server and it streams down the files; there's no need to log into the master server at all. With innobackupex, you need read access to the master's binary data directory, but you can achieve this safely through something like a read-only NFS mount. mydumper is a possible alternative to innobackupex that tries to capture the binlog coords and doesn't require any direct access to the host beneath the database server.
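The "let the old directory linger" rule is cheap to follow. Here is the mv-aside pattern in miniature, with hypothetical paths and a marker file standing in for the real pg_basebackup resync:

```shell
# Set the old data directory aside instead of deleting it, resync into a
# fresh one, and remove the old copy only after the new one is verified.
DATADIR=/tmp/pgdemo/main
mkdir -p "$DATADIR" && echo old > "$DATADIR/marker"

mv "$DATADIR" "$DATADIR.old"    # never 'rm -rf' before the resync succeeds
mkdir -p "$DATADIR"
echo new > "$DATADIR/marker"    # stand-in for: pg_basebackup -D "$DATADIR" ...

# Only once the standby starts cleanly and catches up:
rm -rf "$DATADIR.old"
```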
innobackupex works fine locally on the server, streaming out to netcat or ssh on the remote side. Nothing wild like read only NFS required. It also copies all binlogs. Mydumper is pretty old at this point and doesn't do most of the things innobackupex can. I wouldn't recommend it.
Are you by any chance looking for any DevOps/Ops consulting? I just founded my third startup Elastic Byte (https://elasticbyte.net) and always looking for smart people. We're a consulting startup that helps companies manage their cloud infrastructure.
I do think that's a great startup, though, and this post-mortem and incident only proves how badly it's needed. A lot of people already think they're getting something like what you're offering when they sign up with a cloud provider.
There is a bug that might have been hit, but it appears as though there were other issues at play as well.
Is that correct? http://monitor.gitlab.net/dashboard/db/backups?from=14859419...
definitely monitor your replication lag--or at least disk usage on the master--with this approach (in case wal starts piling up there).
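On a 9.x-era primary, that lag check can be a single query against `pg_stat_replication` (the function names changed to `pg_current_wal_lsn`/`pg_wal_lsn_diff` in PostgreSQL 10):

```sql
-- Bytes of WAL each connected standby still has to replay.
SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
FROM pg_stat_replication;
```

An alert threshold on `replay_lag_bytes`, set well below the WAL retention limit, gives you time to react before the primary expires segments the standby still needs.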
 - https://www.youtube.com/c/Gitlab/live
Thanks for taking the interest to check it out.
It's an unlisted YT video, so that's why it might be hard to find.
Here it is: https://www.youtube.com/watch?v=nc0hPGerSd4
I moved from AWS to Azure years ago. Mainly because I run mostly .NET workloads and the support is better. I've recently done some .NET stuff on AWS again and am remembering why I switched.
Are any organizational changes planned in response to the development friction which led to the outage? It seems to have arisen from long-standing operational issues, and an analysis of how prior attempts to address those issues got bogged down would be very interesting.
...they were moving away from the cloud, to their own servers.
It's good to be humble, to know that mistakes can happen to anyone, to learn from them, etc. But when, in 2017, you still make the same stupid mistakes that people have made a million times since 1990, mistakes that are all well documented and that whole systems have been built to avoid, then I just don't think it can be described as anything other than absolute stupidity and incompetence.
I know they have many fans who look past every mistake, no matter how bad, only because they are open about it, but come on, this is just taking the piss now, no?
Maybe part of the problem is that the industry isn't willing to learn from the experiences of others? I feel like we have 'just enough learning', where experienced folks who raise concerns are considered stuck in their outdated ways and people who make a silly mistake like this must be idiots. I think that since we lump together those who have had formal training with those who haven't, we don't encourage the value of that education. I'm also fully aware that some self-taught developers are much more competent than some college-educated ones.
Any half intelligent engineer would always first research good practices, pitfalls and existing information which has been gathered from decades of other experienced engineers before doing anything stupid on their own. It seems that GitLab is lacking this attitude.
This is all nice in theory, but in the end we're all humans. Pretending that you'll never make a serious mistake is just that: pretending.
The decades of experience are not documented.
1. notifications go through regular email. Email should be only one channel used to dispatch notifications of infrastructure events. Tools like VictorOps or PagerDuty should be employed as notification brokers/coordinators and notifications should go to email, team chat, and phone/SMS if severity warrants, and have an attached escalation policy so that it doesn't all hinge on one guy's phone not being dead.
2. there was a single database, whose performance problems had impacted production multiple times before (the post lists 4 incidents). One such performance problem was contributing to breakage at this very moment. I understand that was the thing that was trying to be fixed here, but what process allowed this to cause 4 outages over the preceding year without moving to the top of the list of things to address? Wouldn't it be wise to tweak the PgSQL configuration and/or upgrade the server before trying to integrate the hot standby to serve some read-only queries? And since a hot standby can only service reads (and afaik this is not a well-supported option in PgSQL), wouldn't most of the performance issues, which appear write-related, remain? The process seriously needs to be reviewed here.
And am I reading this right, the one and only production DB server was restarted to change a configuration value in order to try to make pg_basebackup work? What impact did that have on the people trying to use the site a) while the database was restarting, and b) while the kernel settings were tweaked to accommodate the too-high max_connections value? Is it normal for GitLab to cause intermittent, few-minute downtimes like that? Or did that occur while the site was already down?
3. Spam reports can cause mass hard deletion of user data? Has this happened to other users? The target in this instance was a GitLab employee. Who has been trolled this way such that performance wasn't impacted? What's the remedy for wrongly-targeted persons? It's clear that backups of this data are not available. And is the GitLab employee's data gone now too? How could something so insufficient have been released to the public, and how can you disclose this apparently-unresolved vulnerability? By so doing, you're challenging the public to come and try to empty your database. Good thing you're surely taking good backups now! (We're going to gloss over the fact that GitLab just told everyone its logical DB backups are 3 days behind and that we shouldn't worry because LVM snapshots now occur hourly, and that it only takes 16 hours to transfer LVM snapshots between environments :) )
4. the PgSQL master deleted its WALs within 4 hours of the replica "beginning to lag" (<interrobang here>). That really needs to be fixed. Again, you probably need a serious upgrade to your PgSQL server because it apparently doesn't have enough space to hold more than a couple of hours of WALs (unless this was just a naive misconfiguration of the [min|max]_wal_size parameter, like the max_connections parameter?). I understand that transaction logs can get very large, but the disk needs to accommodate (usually a second disk array is used for WALs to ease write impact) and replication lag needs to be monitored and alarmed on.
There were a few other things (including someone else downthread who pointed out that your CEO re-revealed your DB's hostnames in this write-up, and that they're resolvable via public DNS and have running sshds on port 22), but these are the big standouts for me.
P.S. bonus point, just speculative:
Not sure how fast your disks were, but 300GB gone in "a few seconds" sounds like a stretch. Some data may've been recoverable with some disk forensics. Especially if your Postgres server was running at the time of the deletion, some data and file descriptors also likely could've been extracted from system memory. Linux doesn't actually delete files if another process is holding their handle open; you can go into the /proc virtual filesystem and grab the file descriptor again to redump the files to live disk locations. Since your database was 400GB and too big to keep 100% in RAM, this probably wouldn't have been a full recovery, but it may have been able to provide a partial.
The theoretically best thing to do in such a situation would probably be to unplug the machine ASAP after ^C (without going through formal shutdown processes that may try to "clean up" unfinished disk work), remove the disk, attach it to a machine with a write blocker, and take a full-disk image for forensics purposes. This would maximize the ability to extract any data that the system was unable to eat/destroy.
In theory, I believe pulling the plug while a process kept the file descriptor open should keep you in reasonably good shape, as far as that goes after you've accidentally deleted 3/4 of your production database. The process never closes and the disk stops and the contents remain on disk, just pending unlink when the OS stops the process (this is one reason why it'd be important to block writes to the disk/be extremely careful while mounting; if the journal plays back, it may destroy these files on the next boot anyway). But someone more familiar with the FS internals would have to say definitively if it works that way or not.
I recognize that such speculative/experimental recovery measures may have been intentionally forgone since they're labor intensive, may have delayed the overall recovery, and very possibly wouldn't have returned useful data anyway. Mentioning it mainly as an option to remain aware of.
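The /proc trick mentioned above is straightforward to demonstrate (Linux-only; the file path is arbitrary):

```python
import os

# Linux-only demo: a file deleted while a process still holds it open can
# be read back through /proc/<pid>/fd/<n>, because the inode isn't freed
# until the last descriptor closes.
path = "/tmp/precious_data"
with open(path, "w") as w:
    w.write("precious data")

f = open(path)          # hold a descriptor open
os.unlink(path)         # "rm" the file; contents survive via the open fd

recovered = open(f"/proc/self/fd/{f.fileno()}").read()
print(recovered)
```

For a running Postgres, the equivalent would be walking /proc/&lt;postmaster-pid&gt;/fd for descriptors pointing at deleted relation files, which is exactly why not shutting the process down matters.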
That only depends on the # of files. If it's even a thousand files, any modern Linux rm -rf will remove them in less time than a blink.
> The theoretically best thing to do in such a situation would probably be to unplug the machine ASAP after ^C (without going through formal shutdown processes that may try to "clean up" unfinished disk work), remove the disk, attach it to a machine with a write blocker, and take a full-disk image for forensics purposes. This would maximize the ability to extract any data that the system was unable to eat/destroy.
Their infrastructure is cloud-based. There's no way to get a physical disk, if there is a "disk" at all and not a couple of huge fat NetApp filers providing the storage to the CPU nodes. (This is how a number of web hosts operate.)
> 1. notifications go through regular email. Email should be only one
> channel used to dispatch notifications of infrastructure events. Tools
> like VictorOps or PagerDuty should be employed as notification
> brokers/coordinators and notifications should go to email, team chat, and
> phone/SMS if severity warrants, and have an attached escalation policy so
> that it doesn't all hinge on one guy's phone not being dead.
> 2. there was a single database, whose performance problems had impacted
> production multiple times before (the post lists 4 incidents). One such
> performance problem was contributing to breakage at this very moment. I
> understand that was the thing that was trying to be fixed here, but what
> process allowed this to cause 4 outages over the preceding year without
> moving to the top of the list of things to address?
> Wouldn't it be wise to tweak the PgSQL configuration and/or upgrade the
> server before trying to integrate the hot standby to serve some read-only
> And since a hot standby can only service reads (and afaik this is not a
> well-supported option in PgSQL), wouldn't most of the performance issues,
> which appear write-related, remain? The process seriously needs to be
> reviewed here.
> And am I reading this right, the one and only production DB server was
> restarted to change a configuration value in order to try to make
> pg_basebackup work?
> What impact did that have on the people trying to use the site a) while the
> database was restarting
> and b) while the kernel settings were tweaked to accommodate the too-high
> max_connections value?
> Is it normal for GitLab to cause intermittent, few-minute downtimes like
> that? Or did that occur while the site was already down?
> 3. Spam reports can cause mass hard deletion of user data?
> Has this happened to other users?
> What's the remedy for wrongly-targeted persons?
> And is the GitLab employee's data gone now too?
> How could something so insufficient have been released to the public,
> and how can you disclose this apparently-unresolved vulnerability? By so
> doing, you're challenging the public to come and try to empty your
> database just because LVM snapshots now occur hourly and it only takes 16
> hours to transfer LVM snapshots between environments :)
> 4. the PgSQL master deleted its WALs within 4 hours of the replica
> "beginning to lag" (<interrobang here>). That really needs to be fixed.
> Again, you probably need a serious upgrade to your PgSQL server because it
> apparently doesn't have enough space to hold more than a couple of hours of
> WALs (unless this was just a naive misconfiguration of the
> [min|max]_wal_size parameter, like the max_connections parameter?)
> There were a few other things (including someone else downthread who pointed
> out that your CEO re-revealed your DB's hostnames in this write-up, and that
> they're resolvable via public DNS and have running sshds on port 22), but
> these are the big standouts for me.
> Not sure how fast your disks were, but 300GB gone in "a few seconds" sounds
> like a stretch.
> Some data may've been recoverable with some disk forensics.
> Especially if your Postgres server was running at the time of the deletion,
> some data and file descriptors also likely could've been extracted from
> system memory.
I'm aware that hot standby is supported, though it's not the default configuration for a standby server (the default, and safest, is a standby mode that you can't query at all; hot standby introduces possible conflicts between read queries and the write transactions coming in from the WAL, so if failover is your primary goal, you should run a plain, non-hot standby). I'm saying that mixing read queries in and dispersing them over hot standbys is not well supported, which is why you need third-party tools to do it.
It can also be risky if your replication lag gets out of control, and you've indicated that it easily does. PgSQL replication is eventually consistent and you risk returning stale data on reads, which could cause all sorts of havoc if it's not accounted for by the application internally.
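Applications that do account for it typically refuse to read from a replica whose lag exceeds a staleness budget. A minimal sketch of that routing decision (the threshold and node names are made up; in a real system the lag would be measured, e.g. via PostgreSQL's pg_last_xact_replay_timestamp()):

```python
# Sketch: route reads to a hot standby only when its replication lag
# is within an acceptable staleness budget; otherwise fall back to the
# primary. Lag values are supplied by the caller here -- in practice
# they would be measured on the standby itself.

MAX_STALENESS_SECONDS = 5.0  # hypothetical per-application budget

def pick_read_target(replica_lag_seconds, primary="primary", replica="replica"):
    """Return which node should serve a read, given the replica's lag."""
    if replica_lag_seconds is None:  # lag unknown: assume the worst
        return primary
    if replica_lag_seconds > MAX_STALENESS_SECONDS:
        return primary
    return replica

# Usage
print(pick_read_target(1.2))   # fresh replica  -> "replica"
print(pick_read_target(42.0))  # lagging badly  -> "primary"
print(pick_read_target(None))  # lag unknown    -> "primary"
```

The "lag unknown" branch matters: if monitoring itself is broken, serving possibly-ancient data silently is worse than loading the primary a bit more.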
> We've had a few too many cases like this in the past. We're aiming to resolve those, but unfortunately this is rather tricky and time consuming.
This may take some upfront work, but it's pretty routine. A serious commercial-level offering should not need to take itself offline without announcement in order to restart the single database server and apply a configuration tweak.
> Code is written by developers, and developers are humans. Humans in turn make mistakes. Most project removal related code also existed before we started enforcing stricter performance guidelines.
The point is not that humans make mistakes, nor that bugs exist. The point is that such a feature was released without considering its easily-exploitable potential and the permanent consequences of its exploitation (permanent removal of data). That should trigger a process review.
> There's no point in hiding it. Spending a few minutes digging through the code and you'll find it, and probably plenty other similar problems. If somebody tries to abuse it we'll deal with it on a case by case basis.
There's a lot of risk in drawing attention to this type of vulnerability. I think GitLab should be taking this more seriously. All code has bugs, but this isn't a bug; it's an incomplete, dangerously-designed feature that can easily be used by a malicious actor to permanently destroy large quantities of user data. Your CEO has just highlighted it to the whole world while it's still active and exploitable on the public web site.
Reading the code isn't a dead giveaway because it takes a lot of effort to find the specific code in question and realize what it means, and because the general assumption would be that GitLab.com is running a souped-up or specialized flavor of the code and that such dangerous design flaws must have already been resolved on a presumably high-traffic site. However, this post highlights that it hasn't been, and that's bad. This is effectively irresponsible self-disclosure of a very high-grade DoS exploit.
> Probably just a naive configuration value since we have plenty of storage available.
Having the storage readily available means that the hard part is already done! Each WAL segment is 16MB. You have about 350 GB of unused disk. Set wal_keep_segments and min_wal_size to something reasonable and you won't need to do this obviously-risky resync operation every time you have a couple of hours of heavy DB load.
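The sizing math is worth spelling out, because it shows how cheap WAL retention is relative to that free space. A back-of-the-envelope sketch (the segments-per-hour figure is an illustrative assumption, not GitLab's actual write rate):

```python
# WAL segments are 16 MB each by default in PostgreSQL.
SEGMENT_MB = 16

def retention_gb(wal_keep_segments):
    """Disk cost, in GB, of telling the master to retain this many segments."""
    return wal_keep_segments * SEGMENT_MB / 1024

# Suppose heavy write load churns ~1000 segments/hour (assumed figure).
# Retaining a 6-hour cushion for a lagging replica costs:
print(retention_gb(6 * 1000))  # 93.75 (GB) -- a fraction of ~350 GB free
```

In other words, even a very generous wal_keep_segments setting would have let the replica catch up instead of forcing a full resync.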
> Revealing hostnames isn't really a big deal, neither is SSH running on port 22. In the worst case some bots will try to log in using "admin" usernames and the likes, which won't work. All hosts use public key authentication, and password authentication is disabled.
See discussion at https://news.ycombinator.com/item?id=13621027. The worst case is not a bruteforced login, it's an exploited daemon that leads to an exploited box that leads to an exploited network that leads to an exploited company. The secondary concern would be a DoS attack; everyone now knows that you have only one functioning database server that everything depends on, and that that server's IP is x.x.y.y. That's enough to cause trouble even without exploits or zero days.
> When using physical disks not used by anything else, maybe. However, we're talking about disks used in a cloud environment. Are they actually physical? Are they part of larger disks shared with other servers? Who knows. The chance of data recovery using special tools in a cloud environment is basically zero.
Yes, this complicates things significantly. An EBS volume could probably be treated much like a dd image, though as far as I know there is no way to "pull the plug" on an EC2 server (maybe it's exposed through the API). I've never used Azure, so I don't know whether this would be practicable there.
> That only works for files still held on to by PostgreSQL. PostgreSQL doesn't keep all files open at all times, so it wouldn't help.
Indeed. While PgSQL doesn't keep all files open at all times, it does keep some files open, and they may or may not have contained useful data. I personally would've also been interested in trying to freeze the memory state (something you can do with a lot of raw VMs that you can't do with physical servers, but admittedly probably not something the cloud provider exposes).
> Root Cause Analysis
> [List of technical problems]
Mark my words, the board members from the VC firms will be removed by the VC partners for letting the kids run the show. Then the VC firms will put an experienced CEO and CTO in place to clean up the mess and get the company on track. Unfortunately, they will probably have wasted a couple of years and be down to their last million dollars before they take action.
I am not a GitLab customer, I am not a startup junkie, and I'm usually considered one of the more conservative (in action, not politics) engineers in my peer group in technology adoption.
The cloud is just someone else's computer.
However, I've also seen graybeards who should have known better fuck something up. I've seen a team of smart people who in a moment of crisis made the wrong decision. I am currently in an organization that is full of careful people and have still seen data loss.
I went trawling through LinkedIn for GitLab employees, and they certainly have their fair share of senior engineers. If you want to fault them for being a remote company, that's fine, but is it that different than a fortune 500 company that has developers in the Bay Area, Austin, India, China, Budapest, and remote workers in other locations?
Or is a company only legitimate if it's in an open space in the Valley?
As your beard greys you realize absolutely everywhere is a mess. Everybody is an imposter and nothing matches the ideals you think should exist. The most capable people are just as prone to fat fingering critical commands as the greenhorns.
People have the wrong attitude towards failure, and it's actually quite harmful. If you don't actively study failure and make avoiding it your company's #1 priority, you're absolutely doomed to commit a serious error at some point. And usually, that's fine.
We're talking about a distributed version control system. Half the point is resilience to data loss. Compound that with the final result, which was a site down for a day and the loss of 6 hours of data. I've lost a day of work before: worked hard for a few hours, then accidentally deleted it. If you haven't, you're probably lying to yourself. I didn't fall on my sword. It's just not that big of a deal. If it happens frequently? Sure. But it's going to happen once to a lot of people.
One of the very most important aspects to avoiding failure is being amiable when it happens. Fear of failure causes quite a bit of failure and stupid behavior to try to hide and avoid it.
I also simply don't understand the vitriol towards remote work.
They didn't just lose data. They lost data and all of their actual backups were invalid. They had to restore from a system image that was taken for non-backup purposes and that, as luck would have it, happened to function as a backup in this instance. Not having working backups for months-long stretches rises to the level of negligence or incompetence on the part of whoever is supposed to be supervising their infrastructure.
We all know that backups in the general sense are crucial and that they don't get done nearly often enough, but being lazy about backing up the home directory on your laptop is a lot different than allowing the company to sit without working backups for months.
I'm not saying that this doesn't happen to senior engineers who are victims of bad management, but qualified leadership doesn't allow it.
On top of that, it emerges that this condition occurred because they don't have good practices around when to log in to the master database server, they remove binary data directories before they pull down new copies, they don't know how to configure PgSQL and have to do a full standby resync after a couple of hours of high DB load because they don't have WAL archiving, replication slots, or even a semi-sane wal_keep_segments/min_wal_size set, they have no automated backup sanity check (let alone a schedule of human-verified backup restores) and other inadequate monitoring and alarming practices, and do I really need to go on? I could, because in this thread alone there are several other major faux pas mentioned.
I'm not sure how many of these sloppy, amateur errors you want to allow to stack on top of each other before you start thinking that GitLab is semi-responsible for this and that it's not within the typical senior-person margin of error, but it passed that threshold a long time ago for me.
GitLab severely underpays any candidate not based in a top-10 real estate market (we're talking 50-60% under market), because they punish candidates based on how much cheaper the real estate in their home market is than in New York City. The consensus is that this impedes their ability to attract good talent, and I would say the events of the last couple of weeks have demonstrated that with spectacular clarity.
At least in my case, the impression has nothing to do with their operation as a remote company -- I'm a full-time remote worker and I learned about GitLab's atrocious salary formulae when I was checking them out as a potential employer because I wanted to move to an all-remote company (instead of the partially-remote company I work in now).
I'm sure that most of GitLab's engineers are good engineers relative to their experience levels. I'm also sure a small handful who accidentally align with their salary formula are senior in their particular fields. And I'm thirdly sure that no one with any inkling of experience in running a stable, reliable, production-level service and infrastructure has been allowed any fractional amount of influence in their infrastructure and deployment procedures.
I have never, ever, seen this. Every firm has its own internal politics, but rarely if ever would they do this. They are more likely to just ignore it.
There is a belief that it takes a decade to tell if a VC is any good or not, and that includes "learning experiences" (all on the LP's dime of course).
> Then VC firms will put an experienced CEO and CTO in place
Now this I have seen. It even works sometimes (e.g. Eric Schmidt/Google).
I have had a firm invest where all decisions were made by a single partner. I also had a firm invest (a sizable sum!) where other senior partners never met me and only learned what my company even did when I gave a presentation at one of their LP meetings. Also, some very large funds allow senior partners to make small seed investments ("science projects") without formal approval from the partnership.
But in my first month there, I kept pressing him on what our disaster recovery plan was. His answers were weak at best. It was never tested, and he had only broad ideas of how much time a full recovery would take. I don't understand his reluctance to test full disaster recovery, but as everyone knows: unless you have tested DR, you don't have DR.
It was very scary, but in the several years I was there, we never had the database go down hard and lose data. That was more blind luck than anything else, though. If we had had a data outage, it would probably have been far worse than GitLab's.
For instance, I just looked at their job listings, and they advertise a range for annual compensation; the high end of that range looks about right for the locations I checked (SF, and London in the UK) for PE positions.
The tl;dr is that they start out with a kind of OK but not very interesting rate for New York City. Then they punish residents of other cities based on how much cheaper the rent in their city is than the rent in NYC.
For example, if the cost of rent in your city is 35% the cost of rent in NYC (as determined by the third-party rent index they reference), your salary multiplier will be 0.35, meaning GitLab will offer someone in NYC 130k for that job, but they'll only offer you 45.5k. The experience modifiers range from -20% to +20% so they're not going to help much.
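Spelled out as code, the described formula looks like this (the 130k base and 0.35 index come from the example above; treating the ±20% experience modifier as a simple multiplier is my assumption, not GitLab's documented behavior):

```python
def gitlab_style_offer(nyc_base, rent_index, experience_modifier=0.0):
    """Offer = NYC base rate * rent index * (1 + experience modifier).

    The modifier is clamped to the quoted -20%..+20% range. This is a
    sketch of the formula as described in the thread, not GitLab's
    actual calculator.
    """
    experience_modifier = max(-0.20, min(0.20, experience_modifier))
    return nyc_base * rent_index * (1 + experience_modifier)

print(gitlab_style_offer(130_000, 0.35))        # 45500.0
print(gitlab_style_offer(130_000, 0.35, 0.20))  # 54600.0 -- even maxed-out
                                                # experience stays far below NYC
```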
As NYC is literally one of the top five most expensive real estate markets on the planet, most non-NYC cities get totally pummeled by GitLab's salary calculator, and the result is what we see here: an enterprise with $30M in funding that can't figure out how to make backups.
They are out of their minds if they think the top rate for a Senior engineer in Salt Lake City with above average experience is $78k. Most other areas also seem pretty low from what I've looked into but I suppose a few areas could be outliers in their calculator.
Their docs on the calculator are here: https://about.gitlab.com/handbook/people-operations/global-c...
It's a bit strange that they cut or raise your pay by that calculator when you move cities.
That is odd. Usually, if you make more based on your previous location, a company won't actually claw anything back; you just likely won't be getting any raises.
Having worked on both coasts, I can say that the quality is the same. Culture and quantity appears to be the biggest difference.
I wouldn't call their salary range a spit in the face, but its probably around $20k below market rates where I live. Benefits are competitive in my opinion.
1) It's hard to tell how they assign rank. If they accept any programmer that shows up, the rates are fine. If they have Google level interviews to filter for only Google level candidates, who will join at "junior" level, it's terrible.
2) The numbers are in dollars, thus they are utterly meaningless. It's not a job, it's gambling with the exchange rate and the exchange fees.
They openly publish their database hostnames in this postmortem (db1.cluster.gitlab.com and db2.cluster.gitlab.com). These actually resolve via public DNS. The last straw: each server has an open sshd listening on port 22 (the fact that password auth is disabled is of little consolation).
A production database server should NEVER HAVE a public IP address to start with. This is simply unacceptable and proves they don't have a single person qualified to handle infrastructure. Their only concern is that their developers can ssh into every production server without having to deal with vpns or firewalls.
Huge red flag that your data cannot be trusted.
There's absolutely nothing wrong, whatsoever, with having a public IP address on a production database server.
The only things that should be public facing are things that clients need to access directly. In most cases, that's just an HTTP server. In GitLab's case, it's an HTTP server and a git server.
One of the most important principles of infrastructure security is to minimize the attack surface. No matter how locked down you have it, there are always zero days and other exploits out there. This is a concern even if you block the database port at the firewall but leave some other services (like SSH) open; if any of those services get compromised, it has the potential to allow for the compromise of the rest of the box.
If there's no need for the public to connect to the server, there's no need to take the risk of leaving any of its services open. And if there's no need to have its services open, there's no need for the box to even be addressable from the public internet (i.e., no reason to have a public IP address).
Put the server on the internal network and connect over a secure mechanism like a VPN, and not only do you not have to worry about strangers connecting to your servers, you don't have to worry about whitelisting or blacklisting individual IPs in your firewall (instead, you whitelist the applicable internal subnets, which should also be restricted based on resource access level). You don't have to worry about your firewall's rules getting wiped for whatever reason and accidentally letting the whole planet in (common if you use iptables, since most distros require the admin to manually configure iptables-restore to run on boot). You don't have to worry about someone zero-daying your SSH or FTP daemon, or about the box being hit by network-level attacks like DDoS, which can sometimes target whole subnets. Just much tidier and safer all around.
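The internal-subnet whitelist idea is easy to express. A sketch using Python's ipaddress module (the subnets are made-up examples, not GitLab's actual network layout):

```python
import ipaddress

# Hypothetical internal subnets allowed to reach the database tier.
DB_ALLOWED_SUBNETS = [
    ipaddress.ip_network("10.0.1.0/24"),   # app servers
    ipaddress.ip_network("10.0.2.0/24"),   # VPN concentrator pool
]

def may_reach_db(client_ip):
    """True if client_ip falls inside an allowed internal subnet."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in DB_ALLOWED_SUBNETS)

print(may_reach_db("10.0.1.17"))    # True:  app server
print(may_reach_db("203.0.113.9"))  # False: public internet
```

The same membership rules would normally live in firewall or security-group configuration rather than application code; the point is that the database tier rejects everything that isn't an internal subnet by construction.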
Secondly, why would anyone use a load balancer? Almost no website in existence actually needs one.
Almost every public website in existence uses multiple web servers with load balancers to spread traffic between them. That's the only way to do failover and handle more traffic than a single box can take.
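The mechanics being described are simple enough to sketch: round-robin balancing across a pool of backends, skipping any that are marked down (backend names are hypothetical):

```python
import itertools

class Balancer:
    """Toy round-robin balancer that skips backends marked unhealthy."""

    def __init__(self, backends):
        self.healthy = {b: True for b in backends}
        self._cycle = itertools.cycle(backends)

    def mark_down(self, backend):
        self.healthy[backend] = False

    def pick(self):
        # Try each backend at most once per pick.
        for _ in range(len(self.healthy)):
            b = next(self._cycle)
            if self.healthy[b]:
                return b
        raise RuntimeError("no healthy backends")

lb = Balancer(["web1", "web2", "web3"])
print(lb.pick())  # web1
lb.mark_down("web2")
print(lb.pick())  # web3 (web2 is skipped)
```

Real balancers (HAProxy, nginx, ELB) add health checks, connection draining, and weighting, but the failover idea is exactly this: a down backend simply stops receiving picks.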
> There's absolutely nothing wrong, whatsoever, with having a public IP address on a production database server.