RedHat seems to be pushing a standard ecosystem for Linux: systemd, Wayland, SELinux, GNOME, and now maybe podman. I've been on Linux for a while; it's a welcome change from all the fragmentation I'm used to. Whereas others try to work around the kernel and implement their own things in parallel (see: Canonical's AppArmor, LXC, OpenZFS), RedHat just goes with what Linux already has like SELinux/cgroups v2/btrfs, which I think is more likely to last and just feels better. If RedHat goes away, I'm fine since I'm ultimately only relying on Linux features. If Canonical goes away, then I'd have to switch to a different stack. That's probably why government, enterprises, Amazon, etc. still prefer RedHat.
This reduces the amount of code they are responsible for maintaining, and everyone involved gets the benefit of open source collaboration.
They have a blog post about it here:
More like, Redhat/IBM defines what Linux has simply by paying most Linux devs, and of course they take advantage of it in userspace and sys management.
cgroups/namespaces, systemd, GNOME (and its integration with systemd), plus others are double-edged swords: on one hand they create new functionality, but on the other hand the price to pay is concentrating Linux know-how in one place, and greatly diminishing portability of Linux apps vs other Unix OSes and even other Linuxes that don't want to go along with the program of absorbing ever more functionality into kernels and system frameworks for no real reason other than monopolization.
Unix was designed as a simple portable operating system in a very short time. With 30 years of development, the situation today could also be interpreted as Linux devs just not being able to stop adding code. Time will tell whether Linux can be maintained once the original generation of devs steps down. I'd feel more comfortable if we'd let kernels stay minimal rather than becoming kitchen sinks. Apart from better portability, this would've also enabled younger devs to start from scratch rather than having to maintain daddy-o's monstrosity.
Thank god for that! An OS should build on its strengths, not be a least common denominator for portability's sake. If that were OK, we might as well all just use one OS.
To your point, it seems like eventually this ends poorly for the incumbents, when they become too bloated and too difficult to extend or modify, since you have to work around some over-engineered high-availability process clustering to touch anything.
In Linux's case, I think the kernel itself is actually fine, but the user land is increasingly tightly glued to an increasingly convoluted set of RedHat stacks.
I mean, it always has -- RedHat made the modern posix threads library, Linux's PAM, and several other core parts, but at least those were pretty modular and "unixy", by comparison to the monolith being developed now.
Some other Linux vendors now also do so, but they started much later and even now do so to a much lesser degree.
Lots of people don’t want to recreate their stack every few years for the latest cool thing.
On the other hand isn’t btrfs becoming the default in Fedora 33, thus eventually making a comeback in RHEL?
It's also noteworthy that the BTRFS change was driven by community members, mainly from Facebook.
So if you want to use Btrfs, you might as well use it on something besides RHEL.
RHEL instead focuses on XFS, which dates back to 1993.
Its Stratis idea is just a joke compared to what ZFS is.
They should finally admit (like Canonical has) that OpenZFS is, for now, the way forward for filesystems.
The Linux world should put its religion (the GPL) aside here.
Besides the above filesystem 'issue', PulseAudio, systemd and SELinux are not hated without reason ... many people just 'love' what Lenny has to offer.
When we shifted to RHEL 8, we attempted to move this over to Podman and it went miserably (this was back in November 2019). The main reason is that podman-in-podman doesn't work and had bugs (at least back in Nov 2019). Maybe it's fixed now, but this was our experience. We ended up doing quite a bit of analysis on podman only to conclude it's simply not there yet relative to docker (ecosystem and ergonomics).
There are quite a few corner cases that docker simply supports beautifully out of the box but that podman either doesn't support or just has bugs in.
I like what the project is trying to solve by being daemonless, but it is not the simple drop-in replacement for docker that RedHat markets it as (alias docker=podman).
We ended up sticking with docker professionally, and personally I am still using docker over podman. The ecosystem and ergonomics are just far too nice to give up for podman.
I encourage you to not take the standard workflows as a given and really think about what you need and I bet you either end up with a use case that can be covered by rootless podman or something that requires real VMs anyways.
Having build and push as part of the same job is frustrating and I view it as a sign that a CI system is built with the expectation of having everything happen post commit by shoveling money into a CI auto-scaler. I know there's `--no-push`, but that's a poor substitute for independent `build`, `tag`, `push` build steps IMO.
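To make that concrete, here's a minimal sketch of what I mean by independent steps (image and registry names are placeholders, and $COMMIT_SHA is assumed to be set by the CI environment):

    # build once, addressed by the commit being tested
    docker build -t myapp:"$COMMIT_SHA" .

    # tag separately, e.g. only after tests pass
    docker tag myapp:"$COMMIT_SHA" registry.example.com/myapp:latest

    # push as its own step, with its own credentials
    docker push registry.example.com/myapp:latest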
Do you have any way of running / debugging locally with GitLab CI plus kaniko? Can you run your build pipeline locally on your workstation against uncommitted code?
IMO I can build a way better local workflow that allows me to run builds _before_ committing with Drone (`drone exec`). I can toggle between a locally bound Docker daemon or a DIND environment that's going to be virtually identical to the DIND environment on a build runner. The `push` step doesn't run locally plus the secrets needed to push are only accessible from an official runner. I can run it on Windows or Linux (and likely Mac) too.
I've been trying to find a self-hosted CI system that's really good at building Docker (or OCI) images and I don't think any exist. They all have shortcomings. Having build, tag, push act like an atomic build step is one of the areas where I think most fail. So many claim to enable repeatable builds but none actually do AFAIK. Whoever writes the Dockerfile needs to know a lot about how images are built to have the slightest chance at creating a repeatable build. A great example is having `apt-get update` in a Dockerfile. That command _always_ returns zero, so by itself it makes builds non-repeatable.
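To illustrate that point (a sketch; the pinned version string below is hypothetical and would really come from `apt-cache policy`):

    # Dockerfile:
    #   FROM debian:bullseye-slim
    #   RUN apt-get update && apt-get install -y curl
    #
    # This builds "successfully" today and tomorrow, but the two images can
    # differ: apt-get update exits 0 regardless of what the index now contains.
    docker build -t repro-test .

    # A partial mitigation is pinning exact package versions, e.g.
    #   RUN apt-get update && apt-get install -y curl=7.74.0-1.3+deb11u10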
Sometimes I have a tough time reconciling the development industry because things just don't make sense to me. I remember people complaining about Gradle start times so much they came up with the Gradle daemon. Now no one bats an eye at CI based build systems where you have to commit your code, wait for a runner to get provisioned, wait for Docker or the OCI runtime to spin up, and wait for your project to actually build on some anemic VM.
People used to complain about seconds because the wait was "too slow" for good local iteration, but now waiting for minutes is a "good" build system. Seriously WTF?
I guess I got on a bit of a rant...
I want this framed or sewn onto a pillow or something.
It's amazing what we can build, it's baffling what we have built.
No, the idea is to build and push images you can test directly afterwards under the same conditions. With caching and such, build times shouldn't be too long.
Gitlab CI just runs shell commands; it's pretty trivial to pull the same image it's using in the job and run the same commands locally.
If you have long CI times, that can hinder development productivity and should be improved as much as possible, or a local replica of the CI needs to be created.
In fact, they already provide a utility for doing this: gitlab-runner exec.
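e.g., from the repository root (assuming a job named `build` exists in your .gitlab-ci.yml):

    # run a single CI job locally inside the docker executor
    gitlab-runner exec docker build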
Now what happens when two people push code that makes changes to the containers at the same time?
* Switch to a shell runner
* Put the CI dockerfile into your repo
* Provide an entry script for CI that builds the container on-demand (and manages caching/cleanup) and then runs the tests/whatever inside that container
The point here is that docker/podman provide you with everything you need as long as you have full control. By using gitlab's default CI, you relinquish this control.
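A minimal sketch of the entry-script approach (paths, image name and test command are placeholders):

    #!/bin/sh
    # ci.sh: build the repo's CI image on demand, then run the tests inside it
    set -eu

    IMAGE="ci-image:$(git rev-parse --short HEAD)"

    # rebuild only what changed; the layer cache keeps no-op rebuilds cheap
    docker build -t "$IMAGE" -f ci/Dockerfile .

    # run the test suite inside the freshly built container
    docker run --rm -v "$PWD":/src -w /src "$IMAGE" ./run-tests.sh

    # naive cleanup of leftover intermediate layers
    docker image prune -f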
After using dind for some time we chose to just mount /var/run/docker.sock and keep using the host machine’s docker instance (mostly for the cache), but all in all dind was working fairly well.
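For reference, the socket-mount variant is a one-liner (the `docker:cli` image is just an example of a client image):

    # containers started this way are siblings, not children: they run on the
    # host daemon, so the host's image cache (and root access) is shared
    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps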
To be honest, to say “you shouldn’t be doing that” is missing the point; one should be able to do anything they want. In my opinion, the root cause here is docker’s client/server model, which is fixed by other container runtimes such as podman and rkt (which unfortunately is deprecated). One should be able to just launch containers as if they were just another process.
You started out with asserting "you're too far into Docker", we bring up valid use cases for docker-in-docker, and then you saying "This herd mentality [..] is really fundamentally problematic" is really not adding a lot to the discussion.
For instance the "how do I compose multiple docker containers" is trivial when you can just execute a script that runs docker or podman. If you really want, you can use docker-compose.
Userland docker builds
I'm not sure why it doesn't do this by default. Performance I guess.
We're using podman in containers inside gitlab ci.
We're _also_ running tests of containers inside containers in containers using gitlab-ci.
The main workaround we've applied is using crun as the runtime rather than runc.
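In case it saves someone a search, the runtime can be picked per invocation or set globally (sections/paths as of podman 2.x, as far as I know):

    # per-invocation: --runtime is a global flag, before the subcommand
    podman --runtime crun run --rm alpine echo hello

    # or system-wide via containers.conf:
    #   [engine]
    #   runtime = "crun"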
In my experience, that is not true at all. Docker-in-docker allows me to deliver smaller images that can fit into a CI flow as language plugins instead of shipping a beastly 5G docker image with every possible language runtime I need to support for my CI tool.
Docker-in-docker is a workaround to make docker work in CI.
Basically a security nightmare and bad design that podman doesn't have.
Docker in Docker in CI is like a lock on a door. It keeps honest people from being naughty, and is fairly efficient about it.
I don't think the question is "should I run CI in docker in docker," it's whose CI I should run in docker in docker. My coworkers and I can share docker images. Customers or freeloaders cannot. So if that's in your problem domain, then you're right, it's a bad idea. But it isn't for most people.
This works great if you own or rent the hardware, but most cloud providers don't allow nested virtualization.
Although there are tools to convert docker images to vm images. I expect if I were running community CI infrastructure, getting really familiar with those would be high on my priority list.
The huge issue with that is security which is why it's only really practical for a single user or a small group of trusted users. A secondary issue is that (I think) builds can't run simultaneously because they can trample each other when tagging images (since all images are on the runner's daemon).
If I had to build a Docker focused CI system I'd think about using Weave Ignite (AWS Firecracker) to spin up VMs for runners with the Docker socket bound like described above. That way you get all the convenience of binding the Docker socket, but the isolation of a VM that gets thrown away after the build step (or pipeline) finishes. That idea also fits well with local running / debugging IMO because you can bind to the Docker socket on your development workstation (assuming you're not running a large build of parallel tasks which might be an unrealistic assumption).
You could separate those into two builds, but the reason they are together is so people think about deployment, and in case any structural changes to the code need to coincide with deployment changes. For instance, breaking changes in APIs. I need a new version of tool/library and I need to change how I call it.
I don't think kubernetes is a solution in the context of building an image (turning a rootfs tree into a .tar.gz file).
Unless you are using kaniko, which extends the kubernetes api to add the capability of creating images, but that is handled by kaniko itself via the same api.
my beastly 12GB image that even includes Matlab wants a word with you
Perhaps in the next 10 years we will be rediscovering packages. :P
If you are in the business of charging complex prices per bits over the network, then docker seems to be quite a good investment and making it as popular as possible is a good strategy to print money. /s
To be fair, at least it allows me to avoid lots of the brokenness of Python packaging.
tl;dr pip silently breaks my environments, mostly connected to upgrading numpy and other scientific/data science libraries.
I use docker for most of my clients' work but for in house stuff I just use nix.
That has not been the case for a good while now... Docker has been running directly on a hypervisor on the Mac.
In Linux iirc, a hypervisor can share such resources with the host system (since they are both Linuxes).
podman-compose doesn't support as much as I needed for my current deployments - although offhand I cannot recall what was missing.
I'm looking forward to trying things again in a few months, but those corner-cases can be real pains to deal with.
Apologies for the patronising comment, but do you really mean that? Docker in Docker works but is intended for the developers of Docker to debug Docker itself. Usually for running Docker from within a container, you just hook up the Docker client to the TCP port of the Docker daemon running outside the container, which isn't strictly Docker-in-Docker.
I ask this in case you're trying a wild use case (if you really are running true Docker-in-Docker), or are making an unfair comparison (if you're just using the usual Docker client in a container talking to the daemon on the host). In the latter case, I must admit I don't know what the idiomatic alternative would be for podman, given that I know nothing about it except that it's daemonless (and even that I only learned by reading your comment).
Sure, there are some details you might want to control, like image caches and such being shared with the host; I just find there is a lack of documentation and best practices for how to do nested docker, whether that is even intended to work, or whether mounting docker.sock is an unsupported hack. Most information about this is scattered across shady blogs.
For the CI examples brought up, the use cases are fairly obvious: you have a jenkins installation with X plugins installed, running as a container. Within this jenkins you are building multiple different projects which all require their own respective image to build. As a project developer I don't even want to know whether this jenkins is bare metal, a VM or a container. Here docker is used more to bundle all the dependencies, not for strict security with perfect containerization.
This way you don't need to grant "docker" group to the "ci" user and you avoid having your cluster compromised one commit away :)
I also never said that it requires two levels of Docker or that it requires DinD. I was responding to the general question of the parent who was asking a question of someone who was running DinD. I responded to the GP below that Kaniko also solves this problem so clearly I'm not advocating for running DinD or that this is even needed.
Also if your application is shipped as orchestrated containers (like docker-compose), or as multiple containers in a 'pod' (e.g. sidecars), you may want the ability to run containers from containers as part of CI.
A while ago, I, unfortunately, decided to add "proper tests" to all of my Ansible roles and decided to use Molecule (2.22, at the time, IIRC). As I don't use Docker, I was using "lightweight VMs" I had created (w/ Packer, converted into Vagrant boxes) with VirtualBox for all of this testing.
I spent I don't know how many hours across several days learning the "toolchain", getting everything setup and working properly, adding full test coverage, and so on. Not long afterwards, they released Molecule 3.0 which required using Docker. :/
I'm much more confident with packer now though. Next time I do any major work on our Jenkins infrastructure I'm ripping out docker-in-docker for the workers and replacing it with packer built images.
Heh. We use DIND with docker compose to have a container which has KUBERNETES inside. And it even works.
How's that for a wild use-case?
EDIT: That's done to create a local dev environment, with K8s, localstack, infra, etc. Instead of having multiple machines or deploying everything outside containers.
That said docker-in-docker doesn't work without running privileged or forwarding the host port.
It's a non-starter for me, there are the obvious security problems, but also practical non-security issues.
Forwarding the port causes encapsulation issues: a build job can finish and leave stuff running, and it can also interfere with other jobs on the system.
Using privileged containers isn't an option when using things like ECS fargate.
I did try that and it doesn't work, and podman-compose isn't as usable. So I switched back to docker.
Podman is (mostly) a drop-in replacement for docker. However, docker-compose is a separate package from docker that requires explicit installation. So too is podman-compose separate from podman, though unfortunately it still needs more work.
It doesn't quite work yet today because we are still implementing some of the REST verbs, but it's close. It's definitely a strategic direction for the roadmap. Stay tuned.
Their tools solve complex problems and are free to use - so I'm thankful that they exist - but I can't help but think that there is some lack of elegance & design that causes lots of complexity - there is no "unix philosophy" of abusing different tools or components to solve problems - it's more like either you use the high-level APIs with certain non-obvious assumptions (i.e. worked for us, good luck :) or feel free to hack on it if you grok our complex low-level frameworks and libraries... I went after a NetworkManager bug once and it was a tour de force between C, glib and dbus with zero documentation. systemd and Keycloak feel very similar. Powerful if you fit their use case - horrible if you need to tinker with it. But to be honest I've got no idea how to solve these complex problems otherwise. It's probably the best we can do at the moment. Or are there any non-cloud/non-SaaS solutions that actually have all the features?
Even professionally, I tend to steer clear of immature Red Hat projects.
We evaluated Keycloak but went with a vendor solution. OpenShift I believe was also evaluated at my firm and hit a dead-end.
The cost in time / $ / config to maintain and operate at the time was not pretty (all this no doubt has changed a lot).
It always seemed like added complexity to me in exchange for free hosting. I liked the idea in concept more than in practice. But we also weren't big enough to really justify it so eventually we switched to simpler VPS hosting.
The latter is the best option as it's much more scalable and doesn't require ugly proprietary hacks. Docker in Docker requires vertical scaling and more complex management for intermediate states/maintenance.
Podman probably isn't ready to replace Docker yet, but rootless containers are the easy-mode for federation of clusters. If you can supply the other features you need (and I think most can) it's probably worth it in the long run.
And the CLI compatibility is great. Until it isn't. At work we switched to Podman for a small deployment because Docker didn't yet work with cgroupsv2 and many hours were spent debugging Podman-specific issues. In the end switching to cgroupsv1 would have been significantly less work.
Therefore claiming you can `alias docker=podman` is a bit disingenuous. You can, but only if you don't do anything Podman can't handle, and what it can and cannot handle isn't immediately obvious.
All that said, I wish the project the best and hope it reaches the maturity where this alias actually does work.
EDIT: Upon further reflection, I think Docker doesn't really work with nftables either, so that one isn't on Podman. It just so happened we made that switch at the same time. Regardless, there were other problems. I'll check to see if I can find any records of the problems later.
One thing I want to point out though for anybody not familiar with the differences between podman and docker, for the most part alias docker=podman will "just work" except for these situations:
1. docker-compose. podman-compose attempts to cover this but I've heard it's not quite there yet
2. Mounting the docker socket into the container. Podman is daemonless which means that won't work. There is work going on right now to allow Podman to be driven in a similar way if needed, but I recently tried to set it up and hit a bug. CRI-O brings a daemon, and it is used extensively in Kubernetes and OpenShift, but not so much outside of that.
I've swapped back to docker on Fedora and CentOS by forcibly installing it on those unsupported platforms, because podman-compose doesn't work, containers just didn't work as expected in several situations (I forget the details), and portainer wouldn't work against it.
Either they should have sped up podman development, or they shouldn't have deprecated docker at this point. Maybe RedHat just didn't like a third party taking stewardship of the container business.
On the other hand, I don't think you fully understand where I'm coming from. "Optimizing for stagnation", "stuck on the same plateau". Idk, I just want to run a container. I'm totally fine staying on the same Docker plateau if the alternative is something like Podman making me re-learn what I already know how to do.
Unlike Git, which, from the start, was different and BETTER than SVN, can you really make the case that Podman would be BETTER if it didn't try to follow Docker conventions? If not, what are we talking about here?
- Build layer cache doesn't seem to work. If I rebuild locally with podman, it correctly detects cache hits and the build is fast. On our Jenkins server (RHEL 8) with podman 2.0.5 it doesn't: it randomly misses the cache, causing builds to take 20x longer than with docker CE.
- Podman is insanely slow at building images in general. COPY emptyfolder/ /emptyfolder/ takes 2 seconds. We have dozens of things to COPY and it's stupid slow compared to docker CE. Buildah doesn't seem any better.
- Systemd integration has bugs. If you use the default generated systemd unit file, it does not kill processes when exiting and leaves them dangling. Even after removing the strange KillMode=none it says to put in there, it still leaves processes dangling. Podman sometimes loses track of the container. It will list nothing in "podman ps" but the processes will still be running.
> Images to utilize as potential cache sources. Podman does not currently support caching so this is a NOOP.
On my local computer (Arch) podman is v2.1.1, which seems to have whatever bug I was hitting fixed.
So I guess my complaint isn't about podman specifically-- It had bugs and they were fixed, and that's great. But I hate that RHEL 8 touts it as a docker replacement, and won't carry docker in their repositories, when the version they have in their production releases is so broken.
We eventually sledgehammered docker CE's CentOS repo into our RHEL 8 jenkins server and now everything works perfectly.
Running podman on the production webservers seems to work okay though-- apart from the process killing problems.
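(For context, the generated unit files mentioned above come from `podman generate systemd`; roughly, with a hypothetical container named "web":)

    # render a unit file for an existing container named "web"
    podman generate systemd --name web > ~/.config/systemd/user/web.service

    # then manage it like any other user service
    systemctl --user daemon-reload
    systemctl --user start web.service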
You can get access to Moby in Fedora, but it just wasn't viable to include docker in RHEL 8, due to both legal and community risks.
Last time I checked both podman and CRI-O double fork and have reimplemented process supervision from scratch (through conmon) whilst they could get all those features for free if they didn't daemonize themselves and let systemd handle running things in the background.
I found this very surprising. I still don't understand why they made that choice.
At least they do play nice with the whole "systemd owns the Cgroups tree" story. A thing that was always a bit painful with docker.
Fun note: systemd-nspawn can actually run OCI containers directly as well these days. However, I'm not sure if it's feature-complete.
As far as docker-compose support goes, in Podman 2.0 we have added APIv2, a socket-activated REST API. This API has a compatibility mode that implements the "Docker API", meaning podman can be set up to listen on docker.sock and launch containers. We also have the more advanced Podman API for supporting concepts like pods. The API should work with docker-py based scripts and with Compose. We are getting lots of community support in fixing up our inconsistencies.
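Roughly, the rootless setup looks like this today:

    # enable the socket-activated API service
    systemctl --user enable --now podman.socket

    # point docker-compose / docker-py clients at podman's socket
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
    docker-compose up   # now talks to podman instead of dockerd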
This is a fully open source project and we love to get contributions.
* you are limited to 1024 FDs for your entire container. So running make -j40 usually ends up with various errors due to running out of file descriptors. You can of course raise the limit for your own user, but this may not be trivial on a shared system.
* getting podman to obey a raised user FD limit is non-trivial on CentOS 8. podman is not well supported on CentOS; you need the latest Fedora if you want the various bugs/limitations fixed. (Per-container sketch at the end of this comment.)
* fuse-overlayfs is a single process for the entire container and quickly becomes the bottleneck for any IO intensive operation (e.g. don't try running AFL inside podman, it'll be way too slow and peg the CPU at 99% running fuse-overlayfs).
Fuse shouldn't be necessary, though, if I could just give it access to the files that my user has access to (of course you'd lose the snapshotting/committing ability); perhaps using -v would improve performance?
Using docker on CentOS 7 was a lot easier (and faster) than using podman on CentOS 8.
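Re the FD limit above: the per-container knob is `--ulimit`, though as noted, getting the user-level limit raised in the first place is the hard part. A sketch:

    # raise the soft:hard nofile limit inside a single container
    podman run --rm --ulimit nofile=65535:65535 alpine sh -c 'ulimit -n'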
* Podman's official stance is to use Kubernetes YAML. They provide tools to generate this, as well as to convert docker-compose files to kube YAML (sketch below).
* podman-compose is not official, but it works well. Note that it's not 100% compatible with docker-compose, but a lot of compose files run out of the box without modifications.
* ansible with podman
* I personally use the podman cli with makefiles to get docker-compose-like functionality
* also, podman v2 supports a docker-compatible rest api, so it's possible that docker-compose can be modified to support podman in the future
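The kube round-trip from the first point looks like this ("mypod" is a placeholder):

    # snapshot a running pod or container into Kubernetes YAML
    podman generate kube mypod > mypod.yaml

    # ...and recreate it later from that YAML
    podman play kube mypod.yaml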
For my stacks, the only thing I can't get working is routing a specific container through a host-defined network (not a container). I'm certain podman can do it, I just haven't had the time to "fiddle" with it.
With docker (compose) i just define an ipam network (external) matching the host adapters network, and connect the containers to that network, and the spice begins to flow.
* It doesn't require special privileges to run
* It runs containers as the same UID as the calling user, so directories mounted into the container can't be polluted with files belonging to other UIDs, as Docker tends to do
* It's easy to get it to only pull images from custom registries. With docker, this requires some fiddling (at least last I tried).
The only thing missing is Debian packages in Debian Stable (currently available in Bullseye aka testing).
Docker was created to give an extra life to legacy applications that depend on outdated packages; the concept is gluing all dependencies within a compressed rootfs and shipping that, with low effort.
Docker became popular because of that, but it seems it did more harm than good for the ecosystem in terms of security; that's why podman might have a great future. The docker interface is bad, and comparing it with systemd is quite a stretch. ;)
Anyhow, aren't systemd units ini files? The journalctl ships with man pages though.
Aren't containers just processes with namespaces and cgroups?
I suppose a more secure runtime doesn't hurt.
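(They pretty much are. As a crude illustration, util-linux alone gets you most of the way there, minus the cgroup resource limits:)

    # a shell wrapped in fresh PID, mount, UTS and network namespaces
    sudo unshare --pid --fork --mount-proc --uts --net sh -c '
      hostname minibox   # UTS namespace: hostname change stays private
      ps aux             # PID namespace: only this shell and ps are visible
    '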
I don't know, systemctl is always a pain to interact with. The Docker CLI is pretty intuitive--you have different resources and different verbs for interacting with those resources. This includes logs. No need to use a separate (and confusing) tool nor dig through man pages. I'm of the philosophy that it's better for a tool to be intuitive than obscure-but-has-manpages.
> Anyhow, aren't systemd units ini files? The journalctl ships with man pages though.
I guess I meant that INI isn't a standard file format--different parsers behave differently, and structured data is often coded in strings in some bespoke format.
> Aren't containers just processes with namespaces and cgroups?
My mind isn't made up that containers are the ideal process unit, but I do like some things about them (and the container ecosystem more generally). That they come with their dependencies bundled is pretty nice, but I think the toolchain needs to improve to mitigate security concerns and so on. Something like Nix would be helpful, but Nix also has its own problems. Again, it's directionally correct. In any case, users don't need to be managing their own namespaces and cgroups.
And systemd actually comes with two image-launching systems: systemd-nspawn and systemd-portable. And then with systemd-machined you can add software that needs virtualization too.
The interface to journald is more complicated than it should be but it’s also really powerful — docker logs doesn’t hold a candle to the kind of filtering it can do.
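For example, each of these is a single flag (unit/binary names are placeholders):

    # errors and worse from one unit since the current boot, as JSON
    journalctl -u nginx.service -b -p err -o json-pretty

    # everything a particular binary logged in the last hour
    journalctl /usr/sbin/sshd --since "1 hour ago"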
Overall systemd is a stupidly powerful and featureful supervisor compared to Docker. Just the dependency management alone should demonstrate that. Then you get mounts, swap, socket activation, more powerful restarting policy and the whole suite of isolation and security features.
I thought the point of logging to stdout (i.e. docker logs) is that you just take that output and dump it to another server for processing and filtering.
journald/journalctl seem to be a solution a few decades late to the party. For a single user machine or a single app prod environment, I would take a plaintext log any day of the week. At least I can remember how to grep the damn thing. And then when you get to a distributed system, what's the point of journalctl? You would hopefully have all of that logging aggregated together in one place with a much nicer web interface for it all.
journalctl | grep
The big ease-of-use win for journald is that it captures process stdout. No need to run daemons in the foreground ever.
sudo journalctl -u $service -f | grep
Note also that if I just want the log stream, I have to pipe through less to get the full log messages. There's also a flag for it, but I can't remember it; my workaround is easier than digging through man pages.
Again, no big deal, just friction. Like everything in the systemd ecosystem—everything is manageable, but it’s tedious.
There is, frankly, a lot of appeal to me in a simple INI file that starts up something I unzipped.
Maybe it already has something like this, but I feel like systemd could provide an API endpoint for applications to send a simple status when they are "ready" - at which point it would be up to the developers to provide that at the right time.
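systemd does have exactly this: the sd_notify protocol, available to shell scripts via systemd-notify. A minimal sketch (the init/main-loop helpers are hypothetical):

    #!/bin/sh
    # /usr/local/bin/myservice.sh -- paired with a unit containing:
    #   [Service]
    #   Type=notify
    #   NotifyAccess=all    # needed: systemd-notify is a separate process
    #   ExecStart=/usr/local/bin/myservice.sh
    slow_initialization         # hypothetical setup work
    systemd-notify --ready      # only now do dependent units start
    exec main_loop              # hypothetical long-running part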
Perfect. Thank you!
at some point systemd is an embrace-and-extend formula to control the linux ecosystem
And of course there's a Podman Machine: https://developers.redhat.com/blog/2020/02/12/podman-for-mac...
The conclusion that I have to reach is that there are more docker users on Macbooks than I realized.
Wow, that's a great adoption hack; just alias one program as another.
Edit: apparently slirp4netns is now recommended... Maybe I'll give it another try.
Having a daemon or not is a technical detail that most people do not care about, in my opinion. And it has advantages too, like accessing Docker remotely or from another VM on the same host, or directly from the host, which is nice for Docker on Mac or Windows.
The only reason I moved to docker years ago was that I wanted a tighter reproducible workflow (i.e. docker-compose up) for all the other developers. It turns out vagrant has (at some point in the last 5 years) solved this problem too, with packer (which I didn't know about 5 years ago for whatever reason).
Makes me long for going back to that. At least once you built your base image, it was done and you could just make fast linked images from there. I bet with Alpine you could get a vagrant up in a few minutes tops, then it boots in seconds.
Never liked Ruby as a config language, but doing complex things like setting up shared networks and folders was a breeze comparatively I felt.
Never used vagrant in a production capacity though. Everywhere I ever worked always deployed to bare metal or essentially we bought our services (cloud functions, semi-managed containers etc)
So many setups would be achievable with simple ifs.
The lack of filesystem isolation and volume support are the last things keeping me from jumping ship.
1) why the hell would you want Docker-in-Docker
2) I can’t live without Docker-in-Docker
It's great to hack around with.
SystemD in particular is very polarizing and I don't want to start a flamewar, but it helped me "get" init systems.
Podman seems to have a better security model than Docker, so we were trying it out at work too.
Redhat is clearly very invested in this.
There was excellent documentation and good tooling around all this.
The podman 'create' command docs do not list a '--digest' argument. I found no example of specifying the digest as part of the image name.
Docker does not support this. You can get an image file and calculate the SHA-256 digest of it. But Docker does not let you say "start a container using this particular SHA-256". That's because the SHA-256 that docker uses internally is just a hash of dockerd's internal metadata about the image. That metadata is different on every machine. I felt extremely disappointed when I discovered this. And I lost a lot of respect for the Docker developers.
If you want to deploy based on image hash with Docker, you must add your own verification step before creating each container instance. Terraform's docker plugin does not perform this step. And the local dockerd will not check the SHA-256 on reboot. If you just follow Docker's "best practices" and use the provided tools, all of these sources will needlessly have root on everything you deploy:
- Your docker image repository
- Any machines with credentials to push to your docker image repository (CI system, engineer machines, anyone with access to engineer machine backups)
- Anyone with permission to push any public image that you use. The way most CI systems are configured, all jobs get the same credentials. So this includes your build and test jobs. That fancy linter Docker image you're using? Anyone who can push to its Dockerhub account can replace the image version you selected with one that roots all of your systems.
- Dockerhub itself
Why would RedHat switch to a Docker replacement without this crucial security feature?
Am I missing some important point that makes all of this ok?
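(Partial answer, for what it's worth: both CLIs do accept a registry manifest digest as part of the image reference, which pins to the content as stored in the registry, though that doesn't address the local-metadata complaint above. A sketch, using alpine as an example:)

    # pull by tag once, then record the content-addressed reference
    podman pull docker.io/library/alpine:3.12
    ref=$(podman image inspect --format '{{ index .RepoDigests 0 }}' alpine:3.12)
    echo "$ref"    # docker.io/library/alpine@sha256:...

    # from then on, create containers by digest rather than by mutable tag
    podman create "$ref"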
I tried using it as a Docker replacement, but various tools that use docker (e.g. the dockerized pip in the serverless framework) and complex docker-compose files (dockerized Magento) were broken.
At work we don't get root access, so I thought it would be perfect, but the sysadmins couldn't figure out how to configure it either, so it was a bust there too.
Work also uses a special docker RUN command that can use the host's ssh keyring to, e.g., install something from a private github repo. That doesn't seem to exist in podman.
That is amazing; thanks for pointing it out:)
That's the theory (and Red Hat's marketing). I wish I could just be using CentOS/RHEL and not worry about the choices Red Hat makes. But the shit is piling up (sorry for my harsh words, no offence intended).
So, let's containerise something simple on top of the "centos" container image, let's say postfix. Oh, postfix on centos needs systemd for logging. But systemd in a container is a nightmare. You can get systemd to work in a container if the host system is using containerd and you pass certain special files from the host to the container. So just use Podman instead of Docker, right? Podman has functionality to make systemd in a container work. OK, I can run my container on my fat workstation (a Centos system with Podman). But is it still portable? Can I just run it in Github Actions, AWS ECS, a Windows machine with Docker Desktop? Nope. It's not portable.
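(The Podman side of that, for reference, is a single flag; the image name is hypothetical:)

    # podman mounts /run, /tmp and the cgroup hierarchy the way systemd
    # expects; "always" forces it even if the entrypoint isn't detected as init
    podman run -d --name postfix --systemd=always my-centos-postfix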
Let's put the systemd rant aside (I really like the systemd CLI UX, but the architecture (dbus, etc.) seems to serve only one use case properly (desktop systems) and seems heavily wrong for the container use case).
Let's rant about the architectural choices Red Hat is making for containers.
Before Red Hat started to get involved, there was the open source, (now) cross-platform containerd that a lot of tools are built on top of. It consumes the low-level runtime runc, and it provides both an API for Kubernetes (CRI) and additional things. It is highly pluggable and therefore the "only container runtime you'll ever need". And it's purely community governed (CNCF).
What is Red Hat doing? Are they building their container tools on top of containerd? Nope. They created their own low-level container runtime (crun) and their own mid-level container runtime (cri-o). But cri-o only covers what's needed for Kubernetes. So to cover building container images, etc. (the "working with containers on a single machine" use case), additional tools are needed (podman, buildah, skopeo), and they have to implement the missing functionality themselves. So Red Hat's OpenShift (Kubernetes distribution) builds on an entirely different stack than Podman.
Fragmentation everywhere. Was that really necessary?
Why should I care as an end user? Well, I can inspect and debug all tools that use containerd the same way (i.e. using the "ctr" tool). I can even, relatively easily, write my own tools that use containerd (via its grpc api) and access the resources (containers, images, etc.) that other tools are managing, to solve my special custom needs.
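e.g., the same few commands work no matter which containerd client created the resources:

    # docker (moby) and kubernetes traffic live in different namespaces,
    # but the inspection commands are identical
    ctr --namespace moby containers list
    ctr --namespace k8s.io images list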
For the Red Hat stuff everything is different. I could work with cri-o using crictl and write custom tools against the CRI interface. For Podman I would need to use Podman's API (which has annoyingly changed entirely between v1 and v2).
OK, enough ranting (I could go on about how hard it is to get an up-to-date version of podman on a supported RHEL system even with app streams, how the UBI images don't provide podman/buildah/skopeo, etc. etc. etc.).
Please don't get me wrong: Podman, cri-o, crun and OpenShift are all amazing technologies. And it's great that they are mostly open source via upstream projects. But I wish this whole fragmentation hadn't happened. For me as an end user it's a nightmare.