So there wasn't any actual problem with Docker, it was the OP's own problem that they "solved" by switching container platforms instead of just fixing their own buggy script?
This is an all-too-common problem in development, and it's made me very skeptical of coworkers at times who use "X technology doesn't work" or "X is garbage" as a justification to switch technologies or spend time prototyping several alternatives.
I'm not gonna say Docker is a perfect tool by any stretch either.
I know a lot of people in school who have to use a generally well regarded technology, but hate it because they had to use it in a class. For example, they might see someone on GitHub and go “oh is that GitHub? I hate using git” when in reality they had to use it in a group project with 4 other people who had never used it and had no understanding of branches, merging, etc.
I have seen the same thing with LaTeX, Python, and Vim.
To be fair, LaTeX, git, and Vim all have fairly steep learning curves beyond shallow use, and can initially seem like a huge, inconvenient mess before one learns, through trial and error, enough of the basics to unlock the utility that justifies their use.
In a perfect world we'd all first sit down with the manuals and tutorials, absorb everything from the get-go, and hack away in bliss. But most people, myself included, don't have enough intrinsic interest to muster the will to sit through documentation, paying enough attention to absorb a bunch of seemingly irrelevant details about some involved tech tool, when we have one simple goal to accomplish right now. So instead we piece everything together with a combination of Google searches and keyboard mashing until things work, and optimize on top of that later.
In my experience this can be one of the factors that define a so-called 10x dev: the willingness and ability to plod through docs with genuine interest before using a powerful tool. I imagine such a crowd is overrepresented on HN but quite rare in the general population.
Vi predates keyboards with arrow keys and a numpad. You can do most of what vim can do in any editor if you can use your keyboard effectively. Then there is the question of vi vs vim. Some schools force students to use vi, which is really a different and antiquated beast.
LaTeX lost a lot of relevance after Office 2010, which added a great equation editor and better handling of sections/subsections.
TeX still renders equations better than Office does. Also it's programmable.
I think you are spot on with the intrinsic interest.
(ProTip: Maybe you hated it in school because it was used as a tool for teaching theory and beginner stuff, and you were given difficult homework assignments. Maybe you'd like it more if you learned how to use it practically, while working on things you actually wanted to do.)
OP seems to run a lot of infrastructure on his "home servers" not because he directly needs it but as a learning/playground environment.
So here he's chosen to learn about some new tools rather than track-down and fix a misbehaving script. That seems reasonable to me.
(Though, I'm inferring here. If that really was his thinking, he could have stated it clearly and prevented the confusion.)
I encourage you to read the entire article, so you understand what his actual thought process was in deciding to replace Docker, and the pros of the new solution.
At a minimum it means he doesn't understand how his own code works, which is going to be problematic if his career involves coding.
The "it's none of your business" argument falls flat the moment you decide to broadcast to the entire world what you're doing and then get it onto the front page of a website known for debating the content of said articles.
Well, he didn't say that the recurring problem came back, so maybe it was Docker after all?
Either way, I agree somewhat #facepalm
I maintain that "duh, fix your script, dummy" is not the right attitude. The author clearly states it's used for a build farm. That is bound to fail ...
I read "Docker daemon was using 100% CPU" to be the recurring incident, not a given script. And when that happens, it ends up cascading and stuff gets OOM-killed before you know it.
I look at each of those articles carefully and my takeaway every time is the same: there are great alternatives to Docker if you have unlimited time and are willing to accept lots of limitations.
I agree with the actual comments on Docker's design.
But I’m in infra and security so I’m biased.
It’s my job as a tech nerd to care about that stuff? Or maybe I’m leaning into the role too much.
Why even bother working in tech when this march forward is what everyone complains about but has been here forever?
Growing up means getting over this shit internally.
Why does Docker need a daemon at all?
Contrast with LXD. It has a daemon as well, but it tracks containers and state across multiple hosts, so the need is obvious. The single host LXC doesn't have a daemon.
You certainly could.
> systemd shouldn’t be a daemon too
> and sysv init scripts are better too
B. Debatable, but it starts from a false premise: SysV init was a daemon plus the scripts.
Is this gonna be another systemctl situation?
“It’s new! Hiss!!!!”
I still really struggle to understand what the practical benefits to this kind of containerization actually are ...
It seems like people reach for it because they want to have some kind of “compile target” into which they can stick “all the things” their application needs to run — which is supposed to then help them “deploy” into their development environment or onto their production infrastructure in a way that serves the goal that the applications within the image should behave the “same way” in either location ... but does anything about this kind of container abstraction actually help with doing this? Don’t you still inevitably end up having to manage assumptions about the differences between these environments in order to make this work in practice (oh if you are in development environment make sure you don’t actually submit payment to stripe, if you are deploying to this cloud provider make sure you get secrets from <here> instead of <here> ...) ...?
Do you actually get any good abstractions out of the container which empower better solutions to deployment challenges? “I don’t have to think about which hosts the (random) http client used in my application is configured to talk to because I can just magically retarget it at the container level by manipulating networking configuration” — is that a thing — does it actually work? To my understanding the extent to which you _can_ do that requires you to write your application a certain way with a crazy service discovery layer like istio — and you’ve got to make sure you build your application completely to use service discovery fabric ... but if you built your application to use a service discovery fabric do you gain anything extra by also using docker at that point ...?
And what about the impact to developer ergonomics? Is it easy and smooth to use debuggers to quickly edit code running in an image ...? Do people regularly run production docker images locally to debug production application issues ...? Are there solutions that allow one to, say, attach to a remote QA tester's Chrome instance and then automatically attach a debugger to the set of production containers handling the requests associated with that browser ...?
Those are the kind of features I want ... I’m not sure exactly if the containerization abstraction model would really help me get there or just create another set of configuration knobs that _also_ have to be correctly aligned for me to get the right environment specific behaviors out of my application ...
Here's why I started using it years ago. I had a CentOS 6 machine that I wanted to run Plex, Subsonic and Transmission on, but I couldn't, because they had different (EDIT: and conflicting) requirements for various packages. I might have been able to hack it, but it was looking really tricky.
Enter Docker. I have all three running in separate Docker containers. And it just works. I've never had a single problem. I made a yum exception for the Docker packages I use, so I can control when they are updated, which is about once a year.
The alternative was to build a new box that met all the requirements, but that seemed like a big waste of resources (electricity and my money).
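A setup like that can be sketched with a Compose file; the service layout matches the story above, but the image names, ports, and paths here are illustrative assumptions, not the commenter's actual config:

```yaml
# Hypothetical sketch: three services with conflicting host-level
# dependencies, each isolated in its own container.
services:
  plex:
    image: plexinc/pms-docker        # assumed image name
    network_mode: host               # Plex discovery works best on the host net
    volumes:
      - ./media:/data
  subsonic:
    image: example/subsonic          # assumed image name
    ports:
      - "4040:4040"
  transmission:
    image: linuxserver/transmission  # assumed image name
    ports:
      - "9091:9091"
```

Each container carries its own dependency tree, so the conflicting package requirements never meet on the host.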
Isn't a once-a-year update a large security risk?
As for security risk on the yearly update: all the services are only locally accessible. None are exposed to anyone but me. I do apply CentOS updates daily, though.
My only risk is an attacker on my LAN, and I think I have that locked down well.
Yes, this works, and we've been doing it for a long time, since well before containers became a thing. When you're running services at any significant scale you can't have clients connecting to hard coded host addresses. The only way to get durability and scalability is to place the service behind a load balancer, have the clients find the load balancer through a stable DNS record, and have the load balancer direct traffic to healthy instances of the service. That's essentially what you have described above, without any containerisms.
What do containers add to this picture? Nothing. Containers themselves have almost nothing to do with networking and how traffic gets to your services, beyond establishing a net namespace, a veth pair, and a bridge to connect the pair to the host interface. That's just basic plumbing, not traffic management. Containers are about process isolation, not networking.
The networking problem for containers is addressed by container orchestration platforms like kubernetes, or compose/swarm. From a networking perspective what orchestrators bring to the table is basically an abstraction layer over the underlying provider resources for creating and tearing down all the stuff that implements the client->load balancer->service pattern.
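As a concrete sketch of that abstraction layer, a Kubernetes Service gives clients one stable DNS name and spreads traffic across healthy pods; every name below is made up for illustration:

```yaml
# Clients resolve "payments" via cluster DNS; kube-proxy routes
# traffic to whatever healthy pods currently match the selector.
apiVersion: v1
kind: Service
metadata:
  name: payments        # hypothetical service name
spec:
  selector:
    app: payments-api   # matches pod labels, not container IDs
  ports:
    - port: 80          # stable port clients connect to
      targetPort: 8080  # port the container actually listens on
```

The client->load balancer->service pattern is the same as before containers; the orchestrator just creates and tears down the plumbing for you.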
I had some legacy code that was being repurposed and I couldn't touch the production environment, not even to build again.
I could have tried building new VMs, but in the end I defined the build process with a Dockerfile. Environment variables, compilers, external dependencies, all of it done when you ran "docker build". There was a small script to orchestrate that and copy out the results too.
The build environment was consistent, it was easy to tweak, and I could build for centos 6, centos 7, and suse 12 on any machines. That let me use my test box to implement and test any environment changes before it moved to production. It also let the primary developers build their own versions of it whenever they had made a change that needed testing.
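A build environment like that might look roughly like the following Dockerfile; the base image, packages, and paths are invented for illustration, not taken from the actual project:

```dockerfile
# Hypothetical sketch of a reproducible build environment.
FROM centos:7
# Toolchain is pinned by the base image, not by whatever the host has.
RUN yum install -y gcc gcc-c++ make rpm-build
ENV BUILD_TYPE=release
COPY . /src
WORKDIR /src
RUN make clean && make all
# A small wrapper script would run `docker build`, then `docker cp`
# the artifacts out of a container created from this image.
```

Swapping the FROM line to `centos:6` or a SUSE base gives the other targets without touching the build machine.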
It's not at all like any of the use-cases I read about, but it made my life so much easier.
I think containers have been through the hype cycle, but they definitely have their uses. Anybody else use Thinstall (now VMWare ThinApp) on Windows? It did something similar by presenting your application with virtualized versions of the Windows subsystems, and was super useful when you were faced with DLL dependency issues.
> Don’t you still inevitably end up having to manage assumptions about the differences between these environments in order to make this work in practice...?
Yes, and I would do that regardless of my deploy target. If you follow the Twelve-Factor App methodology, your settings are always in a config file or passed in as environment variables. If you're using external services (e.g. Stripe) while developing, set the appropriate variables to your test credentials.
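A minimal sketch of that pattern (the variable names and defaults are examples, not anything from the thread): every environment-specific setting is read from the environment, with safe test-mode defaults so a dev box never talks to real Stripe.

```python
import os

def load_config(env=None):
    """Twelve-factor style: all environment-specific settings come from
    environment variables; defaults are safe test-mode values."""
    env = os.environ if env is None else env
    return {
        # Hypothetical names: production overrides these via its own
        # environment, development silently falls back to test values.
        "stripe_key": env.get("STRIPE_API_KEY", "sk_test_placeholder"),
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
    }

# Same image, different behavior, driven only by the environment:
dev = load_config({})
prod = load_config({"STRIPE_API_KEY": "sk_live_abc",
                    "DATABASE_URL": "postgres://db.internal/prod"})
```

The container image stays identical across environments; only the injected environment differs.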
> Do you actually get any good abstractions out of the container which empower better solutions to deployment challenges? “I don’t have to think about which hosts the (random) http client used in my application is configured to talk to because I can just magically retarget it at the container level by manipulating networking configuration”
The solution to this problem is the same regardless of whether you use Docker or a cloud provider. Use a load balancer to direct traffic to a group of hosts responding to a specific port.
> And what about the impact to developer ergonomics?
Admittedly, debugging is not perfect (at least not with my Docker Compose setup in PyCharm). That said, there are debuggers that attach to Docker containers.
We had an issue with anchors in the YAML, but that seems to have been resolved.
The remote interpreter only started the one service. In the case of the example you linked, the db container is not run.
The other issue I recall was due to PyCharm needing to install its own debug tools. These were not available to be installed at image build time.
Again, it’s been a while since I’ve used PyCharm and Docker together, so these may all be resolved.
Ok, that's kind of cynical, but still true to a large extent if you ask me.
"Yeah, we should just give up on Docker and rebuild all the container ecosystem using IBM technology."
Automatic failover during hardware failure, no need to program anything special. HA without having to do any work.
If you're not embracing the full Docker but using Kubernetes, the "runc" could be any of the dozens of alternative launchers.
You don't have to rebuild everything. That is what standards are for. Of course, this commodification is a problem for Docker Inc. The reason why they're "hyping" their brand ;-)
Just stick with something long enough to comfortably add it to your resume, then drop it and never maintain what you made.
Lots of ways to make sure that images get built and tested and deployed by scripts that keep a lot of config and directories in stand-alone (twinned) services. Why there is a "docker" cult (which may be the wrong word - it may be very valid) escapes me.
I'd like to stop using it as well.
What a headache!
And war stories like these are similar to what I have experienced: https://thehftguy.com/2016/11/01/docker-in-production-an-his...
I just hit this one today.
Hmm? I could have sworn docker pulled multiple layers at once the last time I used it.
Perhaps it might have been worth the extra time to get to the bottom of it instead of switching and, in the end, not noticing any big differences?
Also, I noticed there was an ad at the end of the post for the book Docker in Practice. Ironic placement.
Reminds me of NPM and rm-rfing node_modules on every build. I can't wait till something comes along to replace it.
Well, Docker for Mac is really just the CLI bound to a daemon running in a Linux VM. But the integration is far from optimal.
Of course it's still just a Linux VM.
Docker can really grind if it starts hitting swap. I have not added resource constraints but I've been planning on doing it.
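Those constraints can be declared right in a Compose file; the numbers below are arbitrary examples to tune per workload, and the image name is assumed:

```yaml
# Hypothetical limits to keep one container from dragging the host
# into swap; values are illustrative only.
services:
  build-worker:
    image: my-builder:latest   # assumed image name
    mem_limit: 1g              # hard RAM cap for the container
    memswap_limit: 1g          # equal to mem_limit = no swap allowed
    cpus: 1.5                  # CPU quota
```

Setting `memswap_limit` equal to `mem_limit` is what actually prevents the container from touching swap at all.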
Red Hat replaced Docker with their own software. The command is still "docker", but it doesn't run Docker. Since Red Hat basically runs Linux, Docker is dead; it will be transparently removed from all the Linux distributions soon.
My script/code doesn't work efficiently; I changed to a similar runtime and, in summary, noticed no real difference.