Terrible performance. One of my engineers recently thought that 150ms was terrific for an HTTP request. Break out of the container and it was <10ms. YMMV.
Fragile everything: Because one expects a “pristine” environment, often any slight change causes the entire stack to fall apart. This doesn’t happen at the start, but creeps in over time, until you can’t even update base images. I’ve seen it a lot. It ends up only adding an additional layer of complication.
There are definitely reasons to do this... But when a pedantic developer that needs everything to be “just right” does it, it often becomes a disaster, leading to shortcuts and a lack of adaptability.
There’s also the developer that has no idea WTF is going on. They use a standard Rails/PHP/NodeJS/etc container and don’t understand how it works. Sometimes, they don’t even know that their system can run their stack natively. I’ve been on teams that have said “Let’s just use Docker because X doesn’t know how to install Y.”
Docker is fantastic for many things, but let’s stop throwing it at everything.
You also point to a lot of problems that are container-independent and lay them at the feet of docker, which is unfair.
Upgrading the OS is always hard unless you have some awesome, declarative config and you managed to depend on zero of the features that have changed. It doesn't matter if you're in a container or not, switching from iptables in Centos 7 to nftables in Centos 8 is going to introduce some pain.
And somehow we get mad at people for not knowing how to install things, but the complexity of installing them is itself a problem. More steps means more inconsistency, which means it's more likely that "it works on my machine, but breaks on yours."
Yes, but this is true generally; it's not specific to containers. Any dev environment naturally tends toward disorder with unsynchronized versions, implicit dependencies, platform-specific quirks, etc. It takes an effort to keep chaos at bay.
At least with containers you have a chance of fully capturing the complete list of dev dependencies & installed software. I'm interested in how CodeSpaces/Coder.com solves these issues.
Then standardization crept into development. Two years later, it was essentially impossible to run it outside Docker images built by Bamboo and deployed by Jenkins on on-prem OpenStack. Components were tightly coupled (the database wasn't configurable anymore, the filesystem had to look a certain way, etc.), and it required very specific library versions, which have largely never been updated since and by now can't easily be updated at all. No individual team had an overview of everything inside the container anymore (we ended up with 3 Redis, 1 Mongo and 1 Postgres in that container; the project to split it apart again was cancelled after a while). Production and development were the same container images, but in completely different environments.
If you want code paths to work, you need to exercise them regularly through tests. Likewise, if you want a flexible codebase, you need to use that flexibility constantly. Control what goes into production, but be flexible during development.
My experience is the opposite. I once started a job with totally outdated software that couldn't be run anywhere other than the old server it was already running on, and that hadn't been touched since 2008. In the end we were able to bring everything back up to date and create containers that:
- are easy to update
- allow devs to work on their favourite OS (Windows, Linux or macOS)
- don't require someone to regularly help devs fix their dev environment
On a native setup, you get a feel for the fact that X config file might be in different places, or that Y lib is more robust and more widely available than lib Z. You end up with a more robust application because you have been "testing" it on a wide range of systems from day one.
I don't see how that point can be argued at all, particularly if the project is expected to be deployed with Docker.
When developing inside docker, you are fooled into thinking that various things about your environment are constants. When it comes time to update your base image, all these constants change, and your application breaks.
No, you really aren't. You're just using a self-contained environment. That's it. If somehow you fool yourself into assuming the ad-hoc changes you made to your dev environment will be present in your prod environment even though you did nothing to ensure they exist, then the problem lies with you and your broken deployment process, not the tools you chose to adopt.
A bad workman blames his tools. Always.
Updating libraries or the base image that one's code depends on always has the risk of breaking from API changes or regressions, and in a container, at least it's easy to reproduce the issue.
Always using containers makes it harder to tell when you're making your setup brittle. If your environment is always exactly the same, how will you notice when you introduce dependencies on particular quirks of that environment? If your developers use different operating systems, different compilers, etc., you have a better shot at noticing undesirable coupling between the system and its environment.
If you run on Linux right now but think you might one day switch to running natively on Windows server... Ok sure, but who's in that position?
The upgrade treadmill is exactly that, a treadmill--it's exercise. The alternative, not exercising, is poor health and an early death.
Software was written for their workstations, which ran their UNIX OS, IRIX. This is where Maya and many other awesome programs were built. Maya now runs on Windows, Linux, macOS, etc.
Cross platform code is fantastic.
I'm not sure I agree with the original poster though. I both dislike doing dev inside a container and dislike complicated manual dev environment setups. Containers for deps like dbs are more reasonable. This is faster perf-wise, more friendly for fancy tooling/debuggers and such, and it introduces just enough heterogeneity that you may catch weird quirks that could bite you on update in the future.
But you should be able to spin up/down new deploys easily, without having to do manual provisioning and such, which means the env on your servers should be container-like, even if it's not directly a container. Pristine and freshly-initialized. And then if you regularly upgrade the dependency versions, from the Linux version to third-party lib versions to runtime versions, you will still avoid the brittleness.
Try Linux and all those lags, spikes, and inconsistencies are magically gone.
But for dev? Yeah. Linux on Linux is less impactful if you’re building something simple like a blog.
If you're not using Linux (presumably you're using macOS), your "containers" are actually VMs, so it's unsurprising that the performance suffers somewhat (not to mention that file accesses are especially slow with the Docker-on-Mac setup). The performance impact of being inside a container on Linux is as close to zero as you can get.
Arguably this is one reason why containers are popular in the first place. Devs don't want to spend time dealing with dirty environments.
It's not that bad for everyone.
For example, on my Windows dev box, I have HTTP endpoints in volume-mounted Flask and Phoenix applications that respond in microseconds (i.e. less than 1 millisecond). This is on 6 year old hardware and the source code isn't even mounted from an SSD (although Docker Desktop is installed on an SSD).
On Linux, I have not noticed any runtime differences in speed, except that starting a container with Docker takes quite a bit longer than starting the same process without Docker. Apparently there's a regression: https://github.com/moby/moby/issues/38077
But for the OP it may be perfect. On the blog he indicates that he's a CS professor. I could imagine that in a research environment maybe he gets better mileage out of this than someone coding in a for-money work environment.
Not that there's anything wrong with that at all, it's just a different kind of person with different strengths, but DL is not one of them.
Did you happen to develop on Macs? Because Docker for Mac has a known network performance issue.
... but latency is still an issue regardless. It’s why there’s a premium to go bare metal with cloud providers.
People now call native installs "bare metal" installs.
I've developed for Linux on the machine directly, and in a container.
Once you get the container mentality and start writing Dockerfiles, it creates a pretty predictable, organized haven.
Also, the author seems to be using Ubuntu. I wonder if he has considered Multipass? https://multipass.run
The Docker image isn't fragile, it's your software that risks becoming fragile if it's too strongly reliant on a specific environment.
Anything that inserts itself into iptables feels like a no no. That’s meant I’ve not really put much effort into LXD or Docker beyond discovering they are kind of heavy.
The latter had poor IPv6 support the last time I tried to use it (9 months ago). It’s there, but it felt like a second class citizen.
LXD just feels like Ubuntu to LXC’s Debian, so I also didn’t play with it beyond the initial few hours.
LXC itself is a joy. I run Alpine, Debian, and Ubuntu depending on my needs. Everything is disposable, with actual data either in GitHub or on a filer. I don’t even bother changing the hostname on the VPSs I provision to use it. Boot, install a firewall, create containers, and forget the original host OS even exists.
I’d really recommend getting to grips with the low level (non-LXD) LXC stuff, especially when you are one “vagrant up” away from having LXC tools on your non-Linux OS!
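For anyone who hasn't touched the low-level tools, the basic lifecycle is only a handful of commands. A rough sketch (the distro, release and arch passed to the download template are just examples):

# download a minimal rootfs and create a container named "dev"
lxc-create -n dev -t download -- -d alpine -r 3.19 -a amd64
# start it and get a shell inside
lxc-start -n dev
lxc-attach -n dev
# when you're done, stop it and throw it away
lxc-stop -n dev
lxc-destroy -n dev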
I have a setup which works great with VirtualBox. I have pfSense installed in a guest VM and the host machine routes through it -- without that guest VM running the host machine can't connect anywhere. It's really handy to have a consistent firewall interface despite every host OS having a different idea of what a firewall should do or look like.
Docker works with the VirtualBox setup too.
I tried to do the same thing with libvirt for four weekends and eventually gave up. I couldn't get libvirt to play nice with iptables at all.
LXC is for containers, of course, instead of VMs. But since Docker works with the VM-guest-as-a-router setup, perhaps I'll really give LXC a try too.
A bit of a funny thing to say, given that effectively the same group of developers work on LXD and LXC -- and most are employed by Canonical. LXD was never meant to replace LXC, it solves its own set of problems.
I personally like using LXD because it reduces the need for me to write my own scripts to do trivial container management (and it can manage images and containers on a ZFS pool by itself), but if you're more comfortable with LXC then you do you. But I disagree that LXC is significantly more "low level" than LXD -- it just requires more manual work and the configuration format is more transparent about the container setup, but you ultimately have the same capabilities in LXD.
LXD certainly looks great from the documentation. I would use it if I had dynamic container requirements, or had a lot more containers to manage.
One odd decision is that, I think, the lxc command line tool — a significant improvement over the lxc-<thing> suite of commands that come by default with LXC — only ships with LXD.
That has a smell about it that’s a bit funny.
Arguably it should've been called "lxd-client" or "lxdc" but they probably just felt it was too wordy -- most admins that used LXC and wanted to switch to LXD felt more comfortable typing "lxc <command>". And those who used LXC had no need for a top-level "lxc" command because they used the individual programs directly. Again, since the people working on the projects are the same there's nothing wrong with borrowing the name from the other project. :D
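For reference, day-to-day use of that borrowed "lxc" command looks something like this (the image alias is just an example):

lxc launch ubuntu:22.04 dev     # create and start a container from an image alias
lxc exec dev -- bash            # get a shell inside it
lxc list                        # see what's running
lxc delete --force dev          # stop and throw it away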
(Not running systemd? Not modern enough. Get with the times, boomer.)
Nix is a similar idea and makes the environment identical for all developers. It makes on-boarding trivial. And speeds are "native" on all platforms.
As a nice side effect: it normalizes all tools like sed, grep and find.
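To make that concrete, a throwaway environment with the same toolset on every machine is one command away. A sketch; the package names are just whatever your project needs:

# everyone gets the same sed/grep/find/python, regardless of host OS
nix-shell -p gnused gnugrep findutils python3 --run 'python3 --version'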
With that said, as a consumer of a Nix setup, it's about as easy if not an easier experience than using Docker.
But properly learning to create your own Nix packages, etc, involves a very, very steep learning curve. Far higher than learning the basics of a Dockerfile.
 Catalina issues notwithstanding, though there is a 4 command solution now: https://github.com/NixOS/nix/issues/2925
Linux' VM hosting subsystem, KVM, as well as its VM guest drivers, support all the features needed for zero-overhead VM environments out-of-the-box, including PCI passthrough if you want completely native disk and network I/O.
The problem with containers as a performance and behavior testing environment is that everything is using the same kernel. Kernel behavior is a significant factor, and sometimes a huge factor, in the performance of various applications.
My preferred method is simply to download the VSCode Remote Extension Pack: https://marketplace.visualstudio.com/items?itemName=ms-vscod...
Then in a project, simply `ctrl+shift+p` -> Add container configuration files. Then `ctrl+shift+p` -> Rebuild Container
Couldn't be easier.
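For anyone who hasn't tried it: the generated config is just a small file checked into the repo. A minimal hand-written one might look roughly like this (the image name is an example, not a recommendation):

mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/python:3.11"
}
EOF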
The only downside is the lack of a GUI; in Python it's often nice to just do e.g. plot.show() rather than export to a file and view that file. But for most of my non-visual programming work, developing in Docker is amazing. Once it works for me, I can guarantee it will work for all other developers, and more importantly, my dev environment is really similar to my prod environment.
It's so comfortable now, with the advantages of containers I can spin up and destroy, ensuring consistent environments.
For the lack of a GUI, I first tried VNC and an X Window client/server setup, but found them a bit awkward.
As a sibling comment mentioned, I find it simpler to put together and serve a "web frontend" from the container.
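One low-effort variant of that, assuming the plots are already written to files: serve the output directory over HTTP from the container and open it in the host browser. A sketch (image, port and paths are placeholders):

# after plots are saved to ./output/ inside the project
docker run --rm -p 8000:8000 -v "$PWD/output:/srv" -w /srv python:3.11 \
    python -m http.server 8000
# then browse http://localhost:8000 on the host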
I really don’t understand why all the hate with working in containers. If you can bring up and tear down quickly, it’s almost free like branching in git. It’s consistency by design!
-v /tmp:/tmp -e DISPLAY=$DISPLAY
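Spelled out, that X11 trick looks something like this on a Linux host (a sketch; the image and script names are placeholders, and depending on your X server's access control you may also need "xhost +local:"):

docker run --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -u $(id -u):$(id -g) \
    my-gui-image python plot_demo.py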
The productionizing happens on my local VScode, which is a lot more "decked out".
In practice it’s slow, laggy, and a maintenance time sink.
I haven’t tried it yet, but something like NixOS seems far better suited to this problem than containers.
You don't need to go whole NixOS to get those benefits. Installing Nix on your OS can also provide it. Although experience is better on NixOS.
Also worth noting that the "permissions issues" he mentions are handled automatically by Docker for Mac. I had the exact problem of files created inside the container being owned by root on Linux, and all my mac-using colleagues just stared at me like I had horns on my head. "It works on my machine" still exists, even with Docker.
I made a couple of Docker images to work with the official VS Code plugin to develop for C++, Rust, Go, Java, PHP and Python
It is command-line compatible with Docker (and uses the same image format), but instead of launching containers through a daemon, it launches containers directly as the calling user (needs user namespaces, Linux 3.8+).
Now you can share volumes between host and container, and don't run into those pesky permission problems that come from different UIDs on the inside and outside.
Very useful in CI contexts, for development environment and so on.
The other issue you do have is configuring them for each application, but luckily most teams have had at least one or two people take up the role of maintaining the images and the startup scripts.
This machine has hyper-v installed, so I created a Windows 10 VM and worked in that fullscreen, permanently.
Gods but I hate working for this mega corp!
As for distro versioning... surely you only get the user-land package versions, and the kernel is still the version of your main OS?
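Right, and it's easy to verify: only the userland comes from the image, the kernel is shared with the host. For example:

uname -r                                  # host kernel version
docker run --rm ubuntu:22.04 uname -r     # same kernel, Ubuntu userland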
It was the first use case for docker that I thought might make sense. The second was the headache we were likely to start running into on our CI server when dealing with different versions of Node/NPM for different codebases, a drift that is inevitable as older codebases get less support.
Does anyone have any general advice on the best way to deal with Docker file permissions issues on recent Ubuntu LTS distros? As a new Docker user this has proven surprisingly intractable. As mentioned in the quoted text, simply setting the UID / GID to the same doesn’t seem to do the trick most of the time. I see podman has been mentioned in this thread but is there a native, no-dependency, no-new-tool way to handle this? I feel like I must be missing something simple. Grateful to hear anyone’s experience.
docker run \
--net host \
-v /etc/passwd:/etc/passwd \
-v /etc/group:/etc/group \
-v /usr/local/share/fonts:/usr/local/share/fonts \
-v /usr/share/fonts:/usr/share/fonts \
-v /run/user/:/run/user/ \
-v /tmp:/tmp \
-v /home:/home \
-u $UID \
-e DISPLAY=$DISPLAY \
-t container \
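A lighter variant I've had luck with, if you don't want to mount /etc/passwd and /etc/group: just run as your host UID/GID so anything written to the bind mount is owned by you. A sketch (the image and command are placeholders):

docker run --rm \
    -v "$PWD:/work" -w /work \
    --user "$(id -u):$(id -g)" \
    ubuntu:22.04 touch created-by-me.txt
# ls -l created-by-me.txt on the host now shows your own user, not root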
Once you start developing your shared repos from a container, you realize that it's much easier to automate running things from it than to develop inside of it, so it's actually easier to get people to build CI/CD pipelines now where before they'd wait for someone else to do it for them.
And the only problem there is there's no easy + good + free self-hosted CI/CD software out there. Yes, you probably use X just fine, but X probably doesn't scale, or isn't enterprise compatible, etc. The biggest barrier to automation is both a business problem and a technical problem.
Smacks of "you're holding it wrong" thinking about consumers. Professionals should sharpen the saw by continually customizing their environment.
I’ve tried this before, but there’s still a lot of overhead in maintaining the pristine state. For example, troubleshooting why a Python package won’t run, you end up installing and upgrading a lot of other packages. You’re not sure if that helped or it was something else — now what? You’ll spend time wondering if you want to carry your changes over or deal with the drift.
It was such a relief
1. Different packages being installed on Windows vs Linux.
2. Different packages being installed by pip vs (some other package manager).
3. Simple user mistakes in the requirements file (not strict versions).
Moving to Docker fixed #1 and #2, and the constant rebuilding of the environment meant we quickly identified issues in requirements files. Working in Docker is the equivalent of fail-fast programming imo, just applied to the environment.
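The fail-fast effect is strongest when the requirements file is fully pinned, so every rebuild either reproduces the same environment or fails loudly. A sketch of the workflow (the version number is illustrative):

# loose (drifts silently):   flask
# pinned (reproducible):     flask==2.3.3
pip freeze > requirements.txt       # capture exact versions once the env works
pip install -r requirements.txt     # every container rebuild gets the same set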
Indeed. Nothing will drive me to containers on a new project faster than trying to get Python working on my mac.
Funnily enough, Docker can trace its roots back to frustrations with Python packaging (dotcloud was originally an easy way to deploy Python apps).
A properly configured VM has near native performance nowadays.