I'm extrapolating that an advantage of Podman is that it should not require root permissions. But almost every call to podman in the article involves sudo. Can anyone clarify?
Another benefit of Podman is that it doesn't use the client-server model: it's just fork+exec, no daemon needed. Dan has a bunch of talks that are great introductions to podman and container security in general:
[EDIT] - I did not realize that non-root podman was a 1.0 feature -- as others have noted, podman <1.0 required sudo.
That should change as rootless matures.
Second thing -- I haven't looked into it much, but do you know how docker-in-docker support is coming along in podman? Is this thread (https://github.com/containers/libpod/issues/746) the right one to watch?
My understanding is that many, if not most, operations should not require sudo as of version 1.0.
For example, try the commands at https://github.com/containers/libpod/blob/master/docs/tutori...
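For instance, a minimal rootless smoke test might look like this (a sketch; assumes podman >= 1.0 is on PATH, and skips cleanly if it isn't):

```shell
# Rootless smoke test: every command below runs as an ordinary user, no sudo.
if command -v podman >/dev/null 2>&1; then
    podman pull alpine            # image lands in the per-user store under $HOME
    podman run --rm alpine id     # root inside the container maps to your uid outside
    podman images                 # lists images from the per-user store
else
    echo "podman not installed; skipping"
fi
```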
Would be nice if someone developed a testsuite that exercises all the usual FS operations a container uses (permissions, setuid, setcap, symlinks, etc.) and compares vfs vs. fuse-overlayfs vs. in-kernel overlayfs vs. Docker.
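A rough sketch of what such a suite could look like, using podman's global --storage-driver flag (the driver names and the specific checks here are just placeholders, not a real testsuite):

```shell
# Run the same filesystem checks under each storage driver and diff the results.
if command -v podman >/dev/null 2>&1; then
    for driver in vfs overlay; do
        podman --storage-driver="$driver" run --rm alpine sh -c '
            touch /tmp/f
            chmod 4755 /tmp/f && stat -c %a /tmp/f     # does the setuid bit survive?
            ln -s /tmp/f /tmp/l && readlink /tmp/l     # do symlinks resolve?
            mkdir /tmp/d && chmod 0700 /tmp/d && stat -c %a /tmp/d  # plain perms
        ' > "fs-$driver.txt" 2>&1 || echo "FAIL under $driver"
    done
    diff fs-vfs.txt fs-overlay.txt && echo "vfs and overlay agree"
else
    echo "podman not installed; skipping"
fi
```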
See this article for more details https://opensource.com/article/18/10/podman-more-secure-way-...
Much of the community around containers is focused not on the technology or on using it efficiently, but on who is marketing it and how many layers of complexity and buzzwords can be piled on top. A bash script that builds containers thus becomes 'declarative', and running a copy of a container becomes 'immutable'.
A non-standard OS environment, single-process environments, the uncontrolled use of layers, and ephemeral storage all add dubious complexity to containers for questionable benefit, and increase management overhead and fragility at the base of your stack. Now, five years later, it's back to LXC, but 'reinvented' by Red Hat. So we get another round of hype to reveal the inadequacies that should have been known five years ago, while throwing no more light on the core issues, lest users get wind that it's just the LXC project in new clothes.
Now I have a huge appreciation of them, thanks to a better understanding of the underlying concepts. LWN.net is an incredible resource for anyone interested in Linux--both history and the latest updates.
Clearly we have, and clearly Docker solved a use case that LXC was in no way solving. So no, we are not "back to LXC", we are two generations ahead of LXC.
All the improvements in containers are happening in the kernel, for instance the long awaited support for cgroup namespaces. Improvements in layers will happen in the overlayfs project with better compatibility for filesystems including possible support for NFS. All this work is happening outside the spotlight while userland runs away with the hype and credit.
None of this matters? It's just the "user land running away with hype and credit", right? Not because all those actually solved an issue for developers, rather than being a random kernel subsystem that you could use if you knew the arcane incantations to summon it, and that only worked on Linux.
A little bit of research will show it's far simpler to use and manage than Docker, and because it offers a standard OS environment with support for standard networking, daemons, and logging, with no enforced use of layers or ephemeral storage, it doesn't need an app hanging around to manage its networking and other subsystems.
It offers seamless migration of workloads from VMs, is compatible with the ecosystem of apps and orchestration systems without the need for any special daemon handling, network or storage management that comes from a custom environment, thus simplifying container use and management.
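To illustrate the "standard OS environment" point: with plain LXC a container boots a normal init and runs ordinary daemons, along these lines (a sketch; assumes the LXC userspace tools and the download template are available, and the container name and distro are arbitrary):

```shell
if command -v lxc-create >/dev/null 2>&1; then
    # Create a container from a full distro rootfs, not a single-process image.
    lxc-create -n web -t download -- -d ubuntu -r focal -a amd64
    lxc-start -n web                  # boots init, not one pinned process
    lxc-attach -n web -- ps aux       # a normal multi-process system inside
else
    echo "lxc tools not installed; skipping"
fi
```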
Wait, what's that? It doesn't work on Mac or Windows? It didn't have an even half-decent website until 2015? It doesn't have anything like a Dockerfile that you can version-control? Or anything to compose services together from a simple YAML file?
Damn. I guess it turns out people don't really care about the technical purity of a solution, they care about something that solves their issues. Better luck next time.
Podman does allow you to build containers, but my suspicion is it’s intended for easier transitioning from docker (you can alias docker=podman and it just works). Also the build functionality is basically an alias for “buildah bud” so it’s more of a shortcut to another application than re-implementing the same functionality.
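A quick illustration of that drop-in compatibility (a sketch; assumes podman is installed, and skips cleanly if not):

```shell
if command -v podman >/dev/null 2>&1; then
    alias docker=podman                     # the usual migration one-liner
    podman run --rm alpine echo "same CLI surface as docker"
    podman build --help >/dev/null          # 'build' wraps the buildah-bud code path
else
    echo "podman not installed; skipping"
fi
```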
Edit: more reading on the intended uses of each tool if you feel like understanding them better
> Some of the commands between the two projects overlap significantly but in some cases have slightly different behaviors. The following table illustrates the commands with some overlap between the projects.
And this makes no sense at all if you’re purposely designing a tool.
That functionality probably wouldn't be necessary at all if Docker didn't pollute the common understanding of containers in the first place.
What is blurry to you about the purpose of either tool?
> Each project has a separate internal representation of a container that is not shared. Because of this you cannot see Podman containers from within Buildah or vice versa.
> Mounts a Podman container. Does not work on a Buildah container.
^ This here is one of the problems: my interpretation is that the containers are not compatible.
The tool feature sets overlap with subtle differences, according to the article, which blurs the line on what each one is for. They need to pick a direction: if you're making a build tool and a runtime, then the build tool must only build and the runtime must only run, or else just make one tool. Intentional and truthful design (meaning the words mean only what they say) limits the chaos that happens in the wild, and these tools aren't doing that. It may seem clear to you, but the article is literally about how it's not clear and how they overlap confusingly. So you're going to come across a mess at some point due to this mistake; either that, or they could explain their rationale for the overlap, but they don't.
Buildah containers and buildah run are far different in concept from podman run. buildah run == Dockerfile RUN. So we don't support a lot of the additional commands that are available for podman run, and we decided to keep the format different. Podman maintains a large database of container state, which we felt would confuse matters when it came to podman run.
I tell people: if you just want to build with Dockerfiles, then just use podman and forget about buildah. Buildah and its library are for building OCI images and hopefully for embedding into other tools in addition to podman, like OpenShift Source2Image and ansible-bender, as well as for allowing people to build container images using standard bash commands rather than requiring everyone to use a Dockerfile. podman build only supports Dockerfiles.
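A sketch of what such a bash-scripted build looks like with buildah, no Dockerfile involved (the base image, package, and image name are arbitrary examples):

```shell
if command -v buildah >/dev/null 2>&1; then
    ctr=$(buildah from alpine)                        # ~ FROM alpine
    buildah run "$ctr" -- apk add --no-cache curl     # ~ RUN apk add curl
    buildah config --entrypoint '["curl"]' "$ctr"     # ~ ENTRYPOINT ["curl"]
    buildah commit "$ctr" my-curl-image               # produces an OCI image
    buildah rm "$ctr"                                 # clean up the working container
else
    echo "buildah not installed; skipping"
fi
```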
Why are you concerned about buildah’s internal representation of a container, unless you’re contributing to the codebase?
If you break containers down into three main jobs, with a sort of fourth meta-job:
RUN (& FIND) - podman
BUILD - Buildah
SHARE - Skopeo
If you think about it, that's what made docker special, it was the ability to FIND, RUN, BUILD, and SHARE containers easily. So, that's why we have small tools that map to those fairly easily.
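Mapped to commands, that division of labor looks roughly like this (a sketch; the Dockerfile in the current directory and the registry URL are placeholders):

```shell
if command -v podman >/dev/null 2>&1; then
    podman search alpine                        # FIND an image on registries
    podman run --rm alpine true                 # RUN it
    buildah bud -t myimg .                      # BUILD from a Dockerfile in cwd
    skopeo copy containers-storage:myimg \
        docker://registry.example.com/myimg     # SHARE it to a registry
else
    echo "container tools not installed; skipping"
fi
```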
[To note, I don't work for Red Hat]
I personally look forward to trying it out.
Might also want to delete cni-plugins from AUR now that it was also moved to community.
Having supported Docker in production, I've found that its stability wasn't quite where it needed to be, yet Docker Inc. seemed more interested in adding more and more features than in stabilizing what they already had. I would definitely welcome a less complex base for running containers in production.
That, and, a few years ago, the number of new bugs that would show up whenever a new version of Docker had to be validated for Kubernetes support. Eventually Docker adopted a less stressful dual release cycle.