Hacker News
Tell HN: Docker just ate 19GB of production data
75 points by fhackenberger on July 22, 2019 | 53 comments
Be very careful with the live-restore feature of docker. Running 'docker volume prune' just removed all my named volumes, which were used by running containers.

See https://github.com/moby/moby/issues/38883

Automation of any sort will sometimes accidentally your data, whether due to periodic hiccups, system instabilities and bugs, operator misunderstandings or errors, or random cosmic ray strikes.

The exact reason it blows up isn't even necessarily all that important, other than in its effect on what you should be doing to reduce the probability of downtime. Well-engineered systems are routinely developed from less than completely reliable parts. Stuff fails, we design for it.

It's certainly not a reason not to use it, if it results in a net gain in your ability to get things done and maintain control and transparency over your deployed systems.

But it's certainly a good reason (among a long list of good reasons) to make sure you have a good backup routine in place, including regular testing of both backup integrity and your ability to restore a working prod system from those backups quickly.

>Automation of any sort will sometimes accidentally your data,

Distributed scalable automation will accidentally your data slightly more often. The more stuff you have the more edge cases and bugs you have.

Scale big, fail big, as I like to say.

Accidentally what? =)

Delete, but the word delete is deleted. Yo dawg.

accidentally the whole thing

You weren’t around in the slashdot days were you?

your data.

This definitely sounds like a bug.

docker volume prune says:

"Remove all unused local volumes. Unused local volumes are those which are not referenced by any containers"

If it removed a local volume that was being used by a container that is kinda bad.
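If you must prune on a host that also runs stateful containers, one defensive habit is to check the prune candidates against a protect-list before deleting anything. A minimal sketch in plain shell; here `volumes` stands in for the output of `docker volume ls -q --filter dangling=true`, and every volume name is made up for illustration:

```shell
# Check prune candidates against a protect-list before deleting anything.
# `volumes` stands in for `docker volume ls -q --filter dangling=true`;
# all names below are hypothetical.
volumes="cache_tmp pgdata build_scratch"
protected="pgdata uploads"

for v in $volumes; do
  case " $protected " in
    *" $v "*) echo "REFUSING to prune protected volume: $v" ;;
    *)        echo "would prune: $v" ;;
  esac
done
```

Only after reviewing that list would you hand the survivors to `docker volume rm` one by one, rather than running a blanket `prune`.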

1. Why are you running docker volume prune in production?

2. Why are you running docker on ad-hoc machines you need to prune?

3. Why do you even need root access on production machines to fiddle around with docker commands?

While this is obviously a bad bug (and there are many with Docker), it seems more of an operational procedures failure than anything else. You could be saying:

“Beware of rm -rf /, it just deleted 20gb of production data”

Ok. Sure. But why are your tools and procedures putting you in a position to make that mistake?

One of the most bothersome parts of HN is when someone tells us about something that happened, and out comes a ream of second-guessing replies: "Why didn't you just do this?" and "Why didn't you just do that?" and any number of "It's so easy to just thing instead!"

We don't know his environment. We don't know his company's policies. We don't know his hardware, connectivity, or budget issues. These kinds of passive aggressive responses are almost never helpful.

When you boil it down, the title here is "giving people access to run arbitrary, manual, and presumably unrestricted maintenance commands in production leads to issues".

That’s not a surprise, and maybe the issue at the core here is not really Docker. That’s all.

I agree with both of you. It's not helpful when we don't know the context, and the OP wasn't necessarily in control of it. But if you are someone in control of the context (which you aren't, really, if you are a line-level employee), you should be aware this is a bad pattern. And if you are a line-level employee and this is being imposed on you for some reason or other, you should sound the alarm as best you can: "Hey, for the record, this is a monumentally bad idea. Just saying."

I've seen plenty of stuff in my career where I've gone on record to say "hey - we really shouldn't do this". Nothing got done about it. But hey, I did what I could.

Recently I learned about Rasmussen's dynamic safety model. I think this is a very handy mental model to have. It's the human factors that make what we do really hard. Often line level practitioners know better than they are allowed to do in practice and trying to fight organizational politics to Do The Right Thing can be an uphill battle.

Sure, but regardless of people doing dumb things, it's still worth asking "why did docker delete non-orphaned named volumes?" -- though you could also question whether someone was actually mistaken about them not being "orphaned" - you could probably arrange an unfortunate timing collision between someone running prune and a container being respawned.

Right, that's what raising an issue with the software maintainers is for.

Aside from anecdotes, there's little value in further discussion beyond the PSA that is the original post; save for prevention/recovery of such events.

It almost sounds as if the daemon was in the process of starting the containers and the prune command was issued. If it were run with `-f` and the container wasn't running those volumes would be deleted. I tried this on a test system and didn't get the results in the issue.

Well, no. rm -rf / is a completely different beast. It is documented and expected for starters.

They may have valid reasons to do that, even if not common.

thinking of the `rm -rf` one, here is a fun take:

  export WORKDIR=Home/me/proj
  rm -rf /$WORKDIR

If something unsets $WORKDIR, or never sets it at all, wave buh-bye to your everything. And before you say "who would do that?!" -- I believe I heard that happened to a build of RedHat that also had some kind of force-push and auto-pull-and-build on their version control, so every connected person had their copy of the software nuked. If not for the non-connected individuals, the entire codebase would apparently have been gone. Or so the legend goes.
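One cheap guard for that pattern is POSIX parameter expansion with `:?`, which aborts the command instead of expanding an unset variable to nothing. A sketch (the subshell is just so the demo script itself survives the abort):

```shell
# ${WORKDIR:?msg} makes the shell abort the command with an error when
# WORKDIR is unset or empty, so rm never sees a bare "/".
unset WORKDIR
( rm -rf "/${WORKDIR:?WORKDIR is not set}" ) 2>/dev/null \
  && echo "rm ran" || echo "guard tripped, nothing deleted"

WORKDIR=Home/me/proj
echo rm -rf "/${WORKDIR:?}"    # prints: rm -rf /Home/me/proj
```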

Also the Steam Linux client.

I really, really hope you are not relying on Docker alone to protect 19G of data. Docker volume operations are the equivalent of playing with sudo rm -rf; shit's going to happen once in a while.

I am a Docker newbie, but I thought it wasn't considered best practice to use Docker for anything where you care about the data in the container. I've only used it for APIs, batch jobs, etc.

You're correct that container root file systems should be considered ephemeral, and writing anything that needs to be persisted to them is a smell. However you can mount persistent volumes to a container specifically for the purpose of deploying stateful applications and referencing persistent data. How safe you can consider that data to be depends on the underlying tech that is provisioning the storage. For example a GCE persistent disk with the retain flag set is probably pretty damn durable. I would still back it up, however.

What advantage does Docker really give you for long-lived, mostly stable resources like databases?

For batch processing, my usual pattern has always been to move the data from (slower) network storage to local storage, process, move results back to network storage.
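That pattern is simple enough to sketch; here temporary directories stand in for the network and local storage, and the "processing" step is just an uppercase transform:

```shell
# Copy-in / process / copy-out batch pattern. mktemp dirs stand in for
# network storage (src, dst) and fast local scratch space (work).
src=$(mktemp -d); dst=$(mktemp -d); work=$(mktemp -d)
echo "input" > "$src/data.txt"

cp "$src"/* "$work"/                                    # 1. pull from network storage
tr 'a-z' 'A-Z' < "$work/data.txt" > "$work/result.txt"  # 2. process on local storage
cp "$work/result.txt" "$dst"/                           # 3. push results back

cat "$dst/result.txt"    # prints: INPUT
```

The local scratch copy also means a crash mid-job never leaves half-written results on the network share.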

Well, whichever advantages of containers are important to you, I don't think you lose them just because the process reads and writes persistent data. It seems like you're really questioning whether it's as compelling a use case, which is a bigger topic and depends on a lot of specifics.

Docker volumes are persistent, unless they're not :-)

I've never been willing to consider docker volumes persistent. In the big picture, a requirement of "posix filesystem semantics" and "persistent" is a pretty inconvenient and/or expensive requirement.

I am super paranoid about docker volumes. But I have application-level backup which is tested daily.

Well, I guess it also doesn’t help that I’ve only used Docker with AWS Fargate (aka “Serverless Docker”) - where nothing is persistent.


Well, I wasn’t completely crazy. There was just a subtlety I missed.


This is an ill-informed blogpost. Don't believe everything on the internet, really.

I misread your comment, double LOL. I read "I thought it WAS considered best practice", which is obviously not true.

Not everyone's startup can necessarily afford purchasing lots of machines for each independent service, and they use docker to get around that cost barrier.

But persistent data is so important that if your startup is so underfunded you can't set up a proper, reliable database, isn't that a bigger issue? Why add the complication of Docker for production databases, and why take the risk?

Well, I don't think named docker volumes are the way to go. They should map to a path on disk (a bind mount), and you should back up the native file system.

And/or automate a pgdump-type setup
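For the pg_dump route, even a single crontab entry goes a long way; a sketch of a nightly dump (schedule, database name, and paths are all placeholders):

```shell
# Hypothetical /etc/cron.d/pg-backup entry: compressed custom-format dump
# nightly at 03:00, restorable later with pg_restore.
0 3 * * * postgres pg_dump -Fc mydb > /backups/mydb-$(date +\%F).dump
```

Pair it with an occasional test restore; a backup you've never restored is a hope, not a backup.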

In the Moby issue you mention that you are using live restore (https://docs.docker.com/config/containers/live-restore/) which is most likely where the problem is. Docker daemon restarts, existing containers are kept alive, but the restarted Docker daemon doesn’t know about those existing containers yet and thus thinks their volumes are unused.

Not sure what kind of company you work at, but I'd export a copy of your logs so you don't get canned

This makes it sound like it's quite common to use docker containers operating in a heavily stateful fashion. Is that indeed common nowadays? (Though, the state in this case is only counted on to persist in the named volumes.)

That's completely fine given that the important state is on volumes (named, persistent, bind mounted, whatever).

Well, you are supposed to be able to delete all your volumes, containers and networks and then regenerate it by running the recettes (edit:I mean.. recipes :D) (Dockerfile, docker-compose, kubernetes, volume backup, etc.).

Well, you are supposed to be able to delete all your volumes, containers and networks and then regenerate

So then Docker is designed to treat all of those as disposable.

I just searched "recette" and only came up with French cooking references.

> So then Docker is designed to treat all of those as disposable.

I don't really know if it's designed like that but I treat them as disposable and unreliable so I must have a way to resuscitate the thing when something bad happens.

> I just searched "recette" and only came up with French cooking references.

Perhaps a phonetic spelling of "resets" by a French person :)

Something like that :D. I was thinking "recette" as in "recipe", but somehow only the word "receipt" came to my mind, so I thought "oh, it must be one of those words that are the same in both languages".

I am a bit tired.

It's french for "recipe".

You just won the “I dropped the production db” achievement.

It’s surprisingly easy with docker, especially when dealing with .... legacy systems.

My browser ate 16GB of RAM while I've been reading this. The system crashed, but the tabs were still there after a reboot. I'm not even mad anymore.

> please assign this bug to an engineer.

The joys of open source users..

Seems like a reasonable ask.. I’m not sure what the problem with this is?

Maintain an open source project and you will understand the problem of users having the ‘just fix it’ attitude.

I have maintained open source projects before. Docker is a venture-backed company trying to sell its product which is mostly just hosting and support for their free product... and in addition, the user did ask nicely, they did not make demands. There's a big difference between asking that an engineer be assigned to a task and demanding something be fixed immediately for no bounty.

Computers are wonderful. They can do the same work that would require a thousand people to accomplish in the same amount of time.

Flip side . . .

Computers are terrible. They can screw things up so bad it would require a thousand people to accomplish in the same amount of time.

`docker volume prune` is specifically there to remove volumes, so backing up before using it seems to be mandatory, just in case. But yeah, if this is a bug, it's a nasty one.

This bug specifically says it affects anonymous volumes. If you had it delete a named volume that sounds like a new issue.


Nobody said anything about big data.
