Goodbye Docker: Purging Is Such Sweet Sorrow (zwischenzugs.com)
161 points by leandot on July 27, 2019 | 89 comments



> was likely due to a script that had got out of hand starting up too many containers.

So there wasn't any actual problem with Docker, it was the OP's own problem that they "solved" by switching container platforms instead of just fixing their own buggy script?


I facepalmed when I read that...

This is an all too common problem in development, and it's made me very skeptical of coworkers at times who use "X technology doesn't work" or "X is garbage" as a justification to switch technologies or spend time prototyping several alternatives.

I'm not gonna say Docker is a perfect tool by any stretch either.


Very true.

I know a lot of people in school who had to use a generally well-regarded technology, but hate it because they had to use it in a class. For example, they might see someone on GitHub and go "oh, is that GitHub? I hate using git", when in reality they had to use it in a group project with 4 other people who had never used it and had no understanding of branches, merging, etc.

I have seen the same thing with LaTeX, Python, and Vim.


>git...LaTeX, Python, and Vim

To be fair, LaTeX, git, and Vim, beyond shallow use, have fairly steep learning curves and can initially seem like a huge inconvenient mess before one learns, through trial and error, enough of the basics to really unlock enough utility to justify their use.

In a perfect world we'd all first sit down with the manuals/tutorials, absorb everything from the get-go, and hack away in bliss. But most people, myself included, don't have enough intrinsic interest to muster the will to sit through documentation, paying enough attention to absorb a bunch of seemingly irrelevant details about some involved tech tool, when we have one simple goal to accomplish now. So instead we piece everything together with a combination of Google searches and keyboard mashing until things work, and optimize on top of that later.

In my experience this can be one of the factors that define a so-called 10x dev: the willingness and ability to plod through docs with genuine interest before using a powerful tool. I imagine such a crowd is overrepresented on HN but quite rare in the general population.


They're really hard-to-use tools, and half obsolete by some metrics.

Vi predates keyboards with arrow keys and a numpad. You can do most of what vim can do in any editor if you can use your keyboard effectively. Then there is the question of vi vs. vim. Some schools force students to use vi, which is really a different and antiquated beast.

LaTeX has lost a lot of relevance since Office 2010, which added a great equation editor and better handling of sections/subsections.


You totally can't do "most of what vim can do in any editor if you can use your keyboard effectively". For example: fXctX - jump to the next X on the current line, then change the text up to the following X. You can, fortunately, get a pretty decent vim emulation in various IDEs.

TeX still renders equations better than Office does. Also it's programmable.


I totally agree with you. Those are _hard_ tools, and if on top of that you have the attitude of "I am just going to learn this for this class because the teacher told me to", you are definitely going to dislike it.

I think you are spot on with the intrinsic interest.


This same pattern also hurt Scheme's reputation. For a while, when I mentioned I was doing big production work in Scheme, it was common for someone to mention hating it from school.

(ProTip: Maybe you hated it from school because it was used as a tool for teaching theory and beginner stuff, and you were given difficult homework assignments. Maybe you'd like it more if you learned how to use it practically, while you were working on things you actually wanted to do.)


I think it's valid, if I'm understanding the context.

OP seems to run a lot of infrastructure on his "home servers" not because he directly needs it but as a learning/playground environment.

So here he's chosen to learn about some new tools rather than track down and fix a misbehaving script. That seems reasonable to me.

(Though I'm inferring here. If that really was his thinking, he could have stated it clearly and prevented the confusion.)


> I facepalmed when I read that...

I encourage you to read the entire article, so you understand his actual thought process in deciding to replace Docker, and the pros of the new solution.


It was on his home servers, and he has a valid point regarding the redundant daemon in Docker. So I don't understand why you facepalm; it's not your business what he runs at home or how he rocks his boat.


I'm facepalming because he literally went through all the effort to replace Docker without fixing the underlying problem which is his script.

At a minimum it means he doesn't understand how his own code works which is going to be problematic if your career involves coding.

The "it's none of your business" argument falls flat the moment you decide to broadcast to the entire world what you're doing and then get it onto the front page of a website known for debating the content of said articles.


You should read articles without assuming Hacker News is the intended audience. They usually aren't submitted by the author.


> I'm facepalming because he literally went through all the effort to replace Docker without fixing the underlying problem which is his script.

Well, he didn't say that the recurring problem came back, so maybe it was Docker after all?

Either way, I agree somewhat #facepalm


No, some container went rogue and in the process dragged down the Docker daemon, which annoyed him, PLUS he'd learnt that it was possible to run containers rootless and daemonless, SO he changed his setup. Please RTFA.



Not the default, hence not well tested. AND you still have a Docker daemon running. Furthermore, last time I checked, Docker does not support user namespacing in a released version yet.

I maintain that "duh, fix your script, dummy" is not the right attitude. The author clearly states it's used for a build farm. That is bound to fail ...

I read "Docker daemon was using 100% CPU" to be the recurring incident, not a given script. And when that happens, it ends up cascading and stuff gets OOM-killed before you know it.


Last I checked it isn't out yet and Podman offers this already.


You missed the point. RedHat has replaced the docker command with their own tool that does the same thing. He's just trying the new tools, and it's poorly framed as a deliberate migration. It's not a migration and there is no effort involved. Docker is simply gone; the tools he mentions are the new de facto standard and he will have to live with them anyway.


It seems that "abandoning Docker" is the latest fashion fad, this is not the first article of this kind on HN recently.

I look at each of those articles carefully and my takeaway every time is the same: there are great alternatives to Docker if you have unlimited time and are willing to accept lots of limitations.


Sure. Gotta use that time for HN comments.

I agree with the actual comments on Docker's design.

But I’m in infra and security so I’m biased.

It’s my job as a tech nerd to care about that stuff? Or maybe I’m leaning into the role too much.

Why even bother working in tech when this march forward is what everyone complains about but has been here forever?

Growing up means getting over this shit internally.


>Anyway, I didn’t necessarily blame Docker for it, but it did add force to an argument I’d heard before:

Why does Docker need a daemon at all?


I am similarly confused as to why docker needs a daemon. For what's provided in the API, I don't see the need...it could all happen in the client.

Contrast with LXD. It has a daemon as well, but it tracks containers and state across multiple hosts, so the need is obvious. The single-host LXC doesn't have a daemon.


Well, for one, it can restart the container when it fails or on boot. I don't see how having no process minder improves anything; you could argue systemd shouldn’t be a daemon too and sysv init scripts are better too.
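
For reference, that restart behavior is just a run policy the daemon enforces; a minimal sketch with standard Docker flags (the container name and image here are only examples):

    # restart the container if it exits, and start it again after a reboot
    docker run -d --restart=always --name my-service nginx:alpine

    # or: like always, except it stays down if you stopped it deliberately
    docker run -d --restart=unless-stopped --name my-service nginx:alpine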


> you could argue systemd shouldn’t be a daemon too and sysv init scripts are better too.

You certainly could.


And you'd be wrong... The init system will always be a daemon, for obvious reasons ;-)


You're begging the question that systemd should be the init system


I'm not. The original claims were:

> systemd shouldn’t be a daemon too

A. False.

> and sysv init scripts are better too

B. Debatable, but it starts from a false premise: SysV init was a daemon plus the scripts.


Systemd not being a daemon doesn't make any sense; both it and SysV are init systems which are by necessity daemonized processes.


Well technically PID 1 is not a daemonized process because there is no one who could have daemonized it.


That's the point. The same principle applies to Docker. Although conceivably the daemon could be limited to process/container management, rather than also handling image management.


Well, the argument that Red Hat makes is that there already is systemd as a daemon for process management; all other functionality that docker provides does not necessitate a daemon. Thus Podman, Buildah etc.
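
To make that concrete, a rough sketch of letting systemd be the process minder for a daemonless Podman container (the unit name and image are made up; recent Podman versions can also generate units like this with podman generate systemd):

    # /etc/systemd/system/my-app.service
    [Unit]
    Description=my-app container

    [Service]
    ExecStartPre=-/usr/bin/podman rm -f my-app
    ExecStart=/usr/bin/podman run --rm --name my-app registry.example.com/my-app:latest
    ExecStop=/usr/bin/podman stop my-app
    Restart=always

    [Install]
    WantedBy=multi-user.target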


Auto-restart isn't even default behavior though. I'd argue for "start a daemon when needed" behavior.


In such cases I tell people that computers do what they are told. Especially when your code does something you didn’t expect.


Hm, I remember reading other unrelated reasons the author mentions for switching away. Maybe the article was edited after you read it.

Is this gonna be another systemctl situation?

“It’s new! Hiss!!!!”


I’ve managed to resist the hype and still not ever used docker for anything ...

I still really struggle to understand what the practical benefits to this kind of containerization actually are ...

It seems like people reach for it because they want to have some kind of “compile target” into which they can stick “all the things” their application needs to run — which is supposed to then help them “deploy” into their development environment or onto their production infrastructure in a way that serves the goal that the applications within the image should behave the “same way” in either location ... but does anything about this kind of container abstraction actually help with doing this? Don’t you still inevitably end up having to manage assumptions about the differences between these environments in order to make this work in practice (oh if you are in development environment make sure you don’t actually submit payment to stripe, if you are deploying to this cloud provider make sure you get secrets from <here> instead of <here> ...) ...?

Do you actually get any good abstractions out of the container which empower better solutions to deployment challenges? “I don’t have to think about which hosts the (random) http client used in my application is configured to talk to because I can just magically retarget it at the container level by manipulating networking configuration” — is that a thing — does it actually work? To my understanding the extent to which you _can_ do that requires you to write your application a certain way with a crazy service discovery layer like istio — and you’ve got to make sure you build your application completely to use service discovery fabric ... but if you built your application to use a service discovery fabric do you gain anything extra by also using docker at that point ...?

And what about the impact to developer ergonomics? Is it easy and smooth to use debuggers to quickly edit code running in an image ...? Do people regularly run production docker images locally to debug production application issues ...? Are there solutions that allow one to, say, attach to a remote QA tester's Chrome instance and then automatically attach a debugger to the set of production containers handling the requests associated with that browser ...?

Those are the kind of features I want ... I’m not sure exactly if the containerization abstraction model would really help me get there or just create another set of configuration knobs that _also_ have to be correctly aligned for me to get the right environment specific behaviors out of my application ...


> I still really struggle to understand what the practical benefits to this kind of containerization actually are ...

Here's why I started using it years ago. I had a CentOS 6 machine that I wanted to run Plex, Subsonic and Transmission on, but I couldn't, because they had different (EDIT: and conflicting) requirements for various packages. I might have been able to hack it, but it was looking really tricky.

Enter Docker. I have all three running in separate Docker containers. And it just works. I've never had a single problem. I made a yum exception for the Docker packages I use, so I can control when they are updated, which is about once a year.
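
(That "yum exception" is typically just an exclude line in yum's config; a sketch, with a package glob that depends on which Docker repo is in use:)

    # /etc/yum.conf
    exclude=docker*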

The alternative was to build a new box that met all the requirements, but that seemed like a big waste of resources (electricity and my money).


If you didn't have that requirement of conflicting packages, would you still have used docker?

Isn't a once-a-year update a large security risk?


I probably wouldn't have, had there been no package conflicts.

As for security risk on the yearly update: all the services are only locally accessible. None are exposed to anyone but me. I do apply CentOS updates daily, though.

My only risk is an attacker on my LAN, and I think I have that locked down well.


One would hope they're still taking security updates but on CentOS I kind of doubt they bothered to implement that level of granularity.


> “I don’t have to think about which hosts the (random) http client used in my application is configured to talk to because I can just magically retarget it at the container level by manipulating networking configuration” — is that a thing — does it actually work?

Yes, this works, and we've been doing it for a long time, since well before containers became a thing. When you're running services at any significant scale you can't have clients connecting to hard coded host addresses. The only way to get durability and scalability is to place the service behind a load balancer, have the clients find the load balancer through a stable DNS record, and have the load balancer direct traffic to healthy instances of the service. That's essentially what you have described above, without any containerisms.

What do containers add to this picture? Nothing. Containers themselves have almost nothing to do with networking and how traffic gets to your services, beyond establishing a net namespace, a veth pair, and a bridge to connect the pair to the host interface. That's just basic plumbing, not traffic management. Containers are about process isolation, not networking.

The networking problem for containers is addressed by container orchestration platforms like kubernetes, or compose/swarm. From a networking perspective what orchestrators bring to the table is basically an abstraction layer over the underlying provider resources for creating and tearing down all the stuff that implements the client->load balancer->service pattern.
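
To illustrate that abstraction layer, a minimal Kubernetes sketch of the client -> stable DNS name -> healthy instances pattern (the service name and ports are invented):

    apiVersion: v1
    kind: Service
    metadata:
      name: payments            # clients just resolve "payments" via cluster DNS
    spec:
      selector:
        app: payments           # traffic is spread across the healthy pods with this label
      ports:
        - port: 80              # what clients connect to
          targetPort: 8080      # what the service's containers listen on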


I have used them maybe in a weird way. It might be a really stupid way too, but it made my life way simpler.

I had some legacy code that was being repurposed and I couldn't touch the production environment, not even to build again.

I could have tried building new VMs, but in the end I defined the build process with a Dockerfile. Environment variables, compilers, external dependencies, all of it done when you ran "docker build". There was a small script to orchestrate that and copy out the results too.

The build environment was consistent, it was easy to tweak, and I could build for CentOS 6, CentOS 7, and SUSE 12 on any machine. That let me use my test box to implement and test any environment changes before they moved to production. It also let the primary developers build their own versions whenever they had made a change that needed testing.
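
For anyone curious what that looks like, a stripped-down sketch of the pattern (the distro, packages, build target, and paths are invented; in practice there was one Dockerfile per target OS):

    # Dockerfile.centos7: a pinned, reproducible build environment
    FROM centos:7
    RUN yum install -y gcc make rpm-build
    ENV BUILD_FLAGS="-O2"
    WORKDIR /src
    COPY . /src
    RUN make dist

    # small wrapper script: build the image, then copy the artifacts back out
    docker build -f Dockerfile.centos7 -t legacy-build:centos7 .
    id=$(docker create legacy-build:centos7)
    docker cp "$id":/src/dist ./out-centos7
    docker rm "$id"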

It's not at all like any of the use-cases I read about, but it made my life so much easier.


Being able to package an application and dependencies in a container, then run locally or using one of several AWS Compute services is pretty powerful.

I think containers have been through the hype cycle, but they definitely have their uses. Anybody else use Thinstall (now VMWare ThinApp) on Windows? It did something similar by presenting your application with virtualized versions of the Windows subsystems, and was super useful when you were faced with DLL dependency issues.


The biggest benefit for edX, which is where I started using Docker and gained the most experience, was being able to get developers up and running with our key services in a couple hours as opposed to days. Everything is scripted, and ephemeral. Broke something? Refresh the environment. Want to try a new configuration? Build a new image. This was not as easy in our previous Vagrant devstack.

> Don’t you still inevitably end up having to manage assumptions about the differences between these environments in order to make this work in practice...?

Yes, and I would do that regardless of my deploy target. If you follow the Twelve-Factor App methodology [1], your settings are always in a config file or passed in as environment variables. If you're using external services (e.g. Stripe) while developing, set the appropriate variables to your test credentials.
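
For example, a sketch of the same image pointed at test credentials purely through the environment (variable names invented):

    # development: same image, test credentials injected from outside
    docker run -d \
      -e STRIPE_API_KEY=sk_test_xxx \
      -e DATABASE_URL=postgres://dev:dev@localhost:5432/app \
      myapp:latest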

> Do you actually get any good abstractions out of the container which empower better solutions to deployment challenges? “I don’t have to think about which hosts the (random) http client used in my application is configured to talk to because I can just magically retarget it at the container level by manipulating networking configuration”

The solution to this problem is the same regardless of whether you use Docker or a cloud provider. Use a load balancer to direct traffic to a group of hosts responding to a specific port.

> And what about the impact to developer ergonomics?

Admittedly, debugging is not perfect (at least not with my Docker Compose setup in PyCharm). That said, there are debuggers that attach to Docker containers.

[1] https://12factor.net/


> Admittedly, debugging is not perfect (at least not with my Docker Compose setup in PyCharm)

A bit off-topic, but how so? It works fine for me using the setup described here https://www.jetbrains.com/help/pycharm/using-docker-compose-...


Some of these may be outdated now.

We had an issue with anchors in the YAML, but that seems to have been resolved.

The remote interpreter only started the one service. In the case of the example you linked, the db container is not run.

The other issue I recall was due to PyCharm needing to install its own debug tools. These were not available to be installed at image build time.

Again, it’s been a while since I’ve used PyCharm and Docker together, so these may all be resolved.


I have things like Postgres, Redis, RabbitMQ, etc. running in Docker on my local machine for development. I have a recommended configuration/install prebuilt and kept updated for me, and if I want to test my apps against a new release of Redis, for example (major or minor), it's trivial to run parallel versions.
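
As a sketch of what "parallel versions" looks like in practice (ports and names chosen arbitrarily):

    # the version my apps normally develop against
    docker run -d --name redis4 -p 6379:6379 redis:4

    # a newer release on a different host port, to test against side by side
    docker run -d --name redis5 -p 6380:6379 redis:5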


It's useful for bypassing a system administrator or devops when adding new dependencies.


The actual reason is that sysadmins don't have arcane requirements for what you can install in your Docker images. It will continue to be popular until sysadmins accommodate developers, or until the sysadmins catch on and start restricting Docker environments.

Ok, that's kind of cynical, but still true to a large extent if you ask me.


HDD : Hype Driven Development.

"Yeah, we should just give up on Docker and rebuild all the container ecosystem using IBM technology."


Well, if it was cheaper, I'd run my "containers" on a z series cluster.

Automatic failover during hardware failure, no need to program anything special. HA without the need for having to do any work.


It's not hype, it's just realizing that "Docker the container runner" is mostly a relatively thin layer above kernel APIs.

If you're not embracing the full Docker but using Kubernetes, the "runc" could be any of the dozens of alternative launchers.

You don't have to rebuild everything. That is what standards are for. Of course, this commodification is a problem for Docker Inc., which is why they're "hyping" their brand ;-)


My coworker likes to call it RDD. Resume Driven Development.

Just stick with something long enough to comfortably add it to your resume, then drop it and never maintain what you made.


I still don't "get" docker. I've deployed systems with it, and it works fine (except maybe for Postgres containment; there still seem to be some "bare metal" assumptions that Postgres makes which Docker may invalidate), but why not use Ansible/Vagrant to build images? It appears (to me) to be the same thing, but with new mental overhead and/or cult involved.

There are lots of ways to make sure that images get built, tested, and deployed by scripts that keep config and directory layout in stand-alone (twinned) services. Why there is a "docker" cult (which may be the wrong word - it may be very, very valid) escapes me.


I just started using Docker again in production. And it's pure garbage when you hit a corner case, but works well when you don't.

I'd like to stop using it as well.


Example?


The reason why lazydocker exists: "Something's not working? Maybe a service is down. docker-compose ps. Yep, it's that microservice that's still buggy. No issue, I'll just restart it: docker-compose restart. Okay, now let's try again. Oh wait, the issue is still there. Hmm. docker-compose ps. Right, so the service must have just stopped immediately after starting. I probably would have known that if I was reading the log stream, but there is a lot of clutter in there from other services. I could get the logs for just that one service with docker compose logs --follow myservice, but that dies every time the service dies, so I'd need to run that command every time I restart the service. I could alternatively run docker-compose up myservice, and in that terminal window, if the service is down, I could just up it again, but now I've got one service hogging a terminal window even after I no longer care about its logs. I guess when I want to reclaim the terminal real estate I can do ctrl+P,Q, but... wait, that's not working for some reason. Should I use ctrl+C instead? I can't remember if that closes the foreground process or kills the actual service.

What a headache!"

And war stories like these are similar to what I have experienced: https://thehftguy.com/2016/11/01/docker-in-production-an-his...


Another example - Stuff like this: https://stackoverflow.com/questions/19688314/how-do-you-atta...

I just hit this one today.


> podman pull downloads get all layers in parallel, in contrast to Docker’s.

Hmm? I could have sworn docker pulled multiple layers at once the last time I used it.


Docker pulls three layers at a time, not all of them.


Only because that's the default setting. You can pass in `--max-concurrent-downloads` with any number you want.
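
For completeness, a sketch of both ways to set it (the value is arbitrary):

    # as a daemon flag
    dockerd --max-concurrent-downloads 10

    # or persistently, in /etc/docker/daemon.json
    {
      "max-concurrent-downloads": 10
    }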


We use docker to spin up and down tens of thousands of short lived (~10 minutes) containers per day. Docker 17.05.0-ce on Ubuntu 16.04 tolerates this fine, but Docker 18.06.0-ce hangs after only a few hours. I've not bothered troubleshooting in detail yet; I've just pegged the docker version and moved on with my life. Still, I've lost a lot of trust in docker.


Interesting. Where exactly does it hang? When you create the container, or when you run it?


> I’d never really got to the bottom of it

Perhaps it might have been worth the extra time to get to the bottom of it instead of switching and, in the end, not noticing any big differences?

Also, I noticed there was an ad at the end of the post for the book Docker in Practice. Ironic placement.


Wait, isn't the author of the blog post the author of the book?


Yes, wow! The blog post is by Ian Miell, one of the authors of Docker in Practice.


I glanced over at the Docker in Practice book on my nightstand and there is his name. Surreal.


Docker is the Myspace of container engines. It's stupidly unreliable, at least on macOS. I've resorted to running system prune every week or so to keep it from getting totally out of control.

Reminds me of NPM and rm-rfing node_modules on every build. I can't wait till something comes along to replace it.


> Docker is the Myspace of container engines. It's stupidly unreliable, at least on macOS. I've resorted to running system prune every week or so to keep it from getting totally out of control.

Well, Docker for Mac is really just the CLI talking to the daemon, which runs in a Linux VM. But the integration is far from optimal.


I've been using the latest Docker, sometimes with the bundled Kubernetes, on macOS for years without such issues.

Of course it's still just a Linux VM.


It may be related to how many containers I run. Usually at least ten, sometimes 30. Usually the problem isn't the daemon itself, but all the garbage integration attached to it.


Docker on anything other than Linux is a meme right now; the support is pretty bad, and stuff straight up doesn't work for arcane reasons that aren't mentioned anywhere.


Have you tried kaniko for docker builds? I'm using it to do docker builds on Kubernetes in my CI pipeline.

https://github.com/GoogleContainerTools/kaniko
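
In case it helps, a rough sketch of running the kaniko executor as a one-off Kubernetes pod (the repo, registry, and names are placeholders; pushing also needs registry credentials mounted at /kaniko/.docker/config.json):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kaniko-build
    spec:
      restartPolicy: Never
      containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args:
            - --dockerfile=Dockerfile
            - --context=git://github.com/example/repo.git
            - --destination=registry.example.com/example/app:latest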


Works great with Skaffold


I personally like keeping my root partition minimal, and ever since Docker came into the picture I keep running into space issues. So the allure of switching for me isn't technical; it's the peace of mind of storing images in userspace.


Why not simply change the storage path by setting the data root in your configuration?

https://docs.docker.com/engine/reference/commandline/dockerd...
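
e.g. a minimal /etc/docker/daemon.json (the path is just an example):

    {
      "data-root": "/home/me/docker-data"
    }

Restart the daemon afterwards, and images/containers live under that path instead of /var/lib/docker.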


You could still keep your root partition to a minimal size, just create a larger /var partition.


It is possible to limit a container's resources: CPU, memory, network.

https://docs.docker.com/config/containers/resource_constrain...

Docker can really grind if it starts hitting swap. I have not added resource constraints but I've been planning on doing it.
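
From that page, the constraints boil down to flags like these (values arbitrary, image name made up):

    # cap memory at 512 MB with no extra swap, and limit to half a CPU
    docker run -d --memory=512m --memory-swap=512m --cpus=0.5 my-image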


So did he even fix the original issue?


I just wish docker wasn't slow as shit on Macs (then again, I guess relative to vagrant it's not that bad).


Because they have to use a hypervisor to get the benefits of the Linux kernel. If you want to speed things up and get native benefits then try switching to Linux or convince Apple to add native containers.


Is Docker faster on Windows/WSL than Macs?


Not that I know of but it is definitely much faster on Linux.


He replaced Docker with something that's basically the same thing?! I don't get it.


The article did a pretty good job of explaining the differences. Did you just skim it quickly? Mainly, no need for a daemon or running as root.
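
For instance, with Podman the same command shape works as an unprivileged user, with no background daemon involved (image chosen arbitrarily):

    # run and inspect containers as a normal user; nothing running as root
    $ podman run -d -p 8080:80 nginx:alpine
    $ podman ps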


It's not him. Docker was retired last year in case you are not aware of that.

RedHat replaced docker with their own software. The command is still "docker" but it doesn't run docker. Since RedHat basically runs Linux, docker is dead; it will be transparently removed from all the Linux distributions soon.


tl;dr

My script/code doesn't work efficiently; I changed to a similar runtime and, in summary, noticed no real difference.



The first rule of lobste.rs is...




