Podman and Buildah available in RHEL 7.6 and RHEL 8 Beta (redhat.com)
89 points by siddharthgoel88 21 days ago | 50 comments

The post links to "Podman - The next generation of Linux container tools" (https://developers.redhat.com/articles/podman-next-generatio...), in which the author notes that Docker "requires anyone who wants to build a container image to have root access. That can create security risks".

I'm extrapolating that an advantage of Podman is that it should not require root permissions. But almost every call to podman in the article involves sudo. Can anyone clarify?

It's probably people using podman incorrectly. You can use sudo with podman, but you don't have to. Check out a recent talk on podman from the last KubeCon by Dan Walsh (https://www.youtube.com/watch?v=HIM0HwWLJ7g&t=1885).
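For reference, a minimal rootless sketch (assuming podman 1.0+ is installed and user namespaces are enabled on the host; the image name is just an example):

```shell
# Run a container entirely as an unprivileged user: no sudo, no daemon.
podman pull docker.io/library/alpine:latest
podman run --rm docker.io/library/alpine:latest id

# Inside the container you appear as root, but on the host the process
# runs under your own UID, mapped via /etc/subuid and /etc/subgid.
# (podman unshare needs a reasonably recent podman.)
podman unshare cat /proc/self/uid_map
```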

Also, another benefit of podman is that it doesn't use the client-server model; it's just fork+exec, no daemon needed. Dan has a bunch of talks that are great introductions to podman and container security in general:

- https://www.youtube.com/watch?v=ZE2nI1SwSVk

- https://www.youtube.com/watch?v=msdaf3lBOn0

- https://www.youtube.com/watch?v=YkBk52MGV0Y

- https://www.youtube.com/watch?v=40ynATurI7k

[EDIT] - I did not realize that non-root podman was a 1.0 feature -- as others have noted, podman <1.0 required sudo.

Well, rootless podman worked before 1.0 too, but we have had many issues to work out. Overall it works very well.

Yeah, basically the "sudo" commands all over the Internet are just legacy. Historically, root was required (as with docker; most people don't realize they are talking to a daemon that runs as root), but recently we added running as a regular user (aka rootless). Still, the vast majority of docs you will see refer to running with sudo or as root.

That should change as rootless matures.

Since there are two Red Hat/podman powerhouses in this thread: first of all, thanks for all the hard work! It's not like I'm anti-Docker, but it has some hang-ups that I'll be happy to get away from when I have the chance. I feel like being tied to Docker for so long made it harder to understand other tools like ctr/crictl/containerd early on.

Second thing -- I haven't looked into it much but do you know how the docker in docker support is in podman? Is this thread (https://github.com/containers/libpod/issues/746) the right one to watch?

The author explained that his version of podman (0.9.3) required root, but version 1.0 doesn't. And, indeed, version 1.0 does not. I use it regularly. It's an amazing improvement over the docker tooling. [disclaimer: I work for Red Hat]

(not a Red Hat employee)

My understanding is that many, if not most, operations should not require sudo as of version 1.0.

For example, try the commands at https://github.com/containers/libpod/blob/master/docs/tutori...

In some situations rootless works better, e.g. I was not able to use podman with sudo in a container that had `ping` in it [1]. OTOH rootless uses `fuse-overlayfs` instead of the in-kernel overlay2 driver, and initially I ran into a few bugs since it is not yet a fully POSIX-compliant filesystem (much like how the initial overlay kernel driver was buggy, and it wasn't until overlay2 that it started working reliably with docker). Having said that, I've been using only podman as an experiment at home, and upstream is very responsive in fixing bugs. There is also a `vfs` backend that can be used as a fallback until the bugs in fuse-overlayfs get fixed.
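For anyone who wants to try that fallback: rootless storage is configured per user, and the graph driver can be switched there. A sketch, assuming the standard ~/.config/containers/storage.conf location:

```shell
# Switch the rootless graph driver to vfs (slower, but avoids
# fuse-overlayfs bugs); rootless podman reads this per-user config file.
mkdir -p ~/.config/containers
cat > ~/.config/containers/storage.conf <<'CONF'
[storage]
driver = "vfs"
CONF

# Verify which driver podman actually picked up.
podman info 2>/dev/null | grep -i graphdrivername
```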

It would be nice if someone developed a test suite that exercises all the usual FS operations a container uses (permissions, setuid, setcap, symlinks, etc.) and compares vfs vs. fuse-overlayfs vs. in-kernel overlayfs vs. docker.


As mentioned, Podman 1.0 should not require sudo. But another advantage of Podman is that even when it does use sudo, there is traceability back to the user who invoked it (by checking the loginuid). This works because Podman forks a user process to execute the container instead of making a call to the Docker daemon, which then executes it itself (and carries the init loginuid).

See this article for more details https://opensource.com/article/18/10/podman-more-secure-way-...
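The traceability is easy to see for yourself; a sketch along the lines of that article (output values are illustrative):

```shell
# With podman, the container process is a child of your login session,
# so the kernel's audit loginuid survives into the container.
sudo podman run --rm alpine cat /proc/self/loginuid   # e.g. 1000 (your uid)

# With a daemon-based engine, the process descends from the daemon,
# which was started by init, so the loginuid is unset:
# sudo docker run --rm alpine cat /proc/self/loginuid  # 4294967295
```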

Right, and there are many reasons to run podman as "real" root as well. Using containers within systemd unit files is one use case. Setting up a Docker daemon to run a single container at boot is way overkill. If you just use podman within a unit file to start/stop containers, it works really well, and nothing sits around using up memory and allowing others to communicate with it.
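A sketch of such a unit file (service name and image are illustrative; newer podman versions can also generate these for you):

```shell
# Write a minimal unit that runs one container at boot with no daemon.
cat > /etc/systemd/system/myapp-container.service <<'UNIT'
[Unit]
Description=myapp in a podman container
After=network.target

[Service]
ExecStart=/usr/bin/podman run --rm --name myapp docker.io/library/nginx
ExecStop=/usr/bin/podman stop -t 10 myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
UNIT

systemctl daemon-reload
systemctl enable --now myapp-container
```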

This just runs full circle back to the LXC project, which Docker 'forked' without attribution in 2013. It runs unprivileged, runs daemonless, supports layers and advanced networking, and, more important, offers a standard OS environment.

The whole community around containers is focused not on the technology or on understanding how to use it efficiently, but on who is marketing it and how many layers of complexity and buzzwords can be added on top. A bash script to build containers thus becomes 'declarative', and running a copy of a container becomes 'immutable'.

A non-standard OS environment, single-process environments, the uncontrolled use of layers, and ephemeral storage all add dubious layers of complexity to containers for questionable benefit, and increase management overhead and fragility at the base of your stack. Now, 5 years later, it's back to LXC but 'reinvented' by Red Hat. So we get another round of hype to reveal the inadequacies that should have been known 5 years ago, while throwing no more light on the core issues lest users get wind it's just the LXC project in new clothes.

Docker did more to hinder my understanding of containers than the advantages I gained from casually using it for a few years. Recently I started using podman and buildah, and I read about many of the kernel enhancements over the last 2 decades that allow us to finally implement containers.

Now I have a huge appreciation of them, thanks to a better understanding of the underlying concepts. LWN.net is an incredible resource for anyone interested in Linux--both history and the latest updates.

LOVE to hear this! The podman team thanks you for your feedback ;-)

The overwhelming success of Docker (released 2013) over LXC (released 2008) is a great counterpoint to your theory that "we've just come full circle and have learned nothing".

Clearly we have, and clearly Docker solved a use case that LXC was in no way solving. So no, we are not "back to LXC", we are two generations ahead of LXC.

Two generations ahead of LXC in what way? There has been no such thing. There's no point making blanket statements without substantiating them, which is exactly what ails the container discussion: hype-driven statements with no technical content, perpetuating the cycle of misinformation.

All the improvements in containers are happening in the kernel, for instance the long awaited support for cgroup namespaces. Improvements in layers will happen in the overlayfs project with better compatibility for filesystems including possible support for NFS. All this work is happening outside the spotlight while userland runs away with the hype and credit.

Great news about the kernel! I take it you're purposefully ignoring the improvements to the tooling (docker machine, docker for mac, docker on windows), the ecosystem (docker hub, docker security stuff), the Docker format itself, docker-compose, etc etc etc.

None of this matters? It's just the "user land running away with hype and credit", right? Not because all those actually solved an issue for developers, rather than being a random kernel subsystem that you could use if you knew the arcane incantations to summon it, and that only worked on Linux.

The problem here is you are familiar with Docker but not LXC which has suffered serious misinformation from the Docker ecosystem.

A little bit of research will show it's far simpler to use and manage than Docker [1], and because it offers a standard OS environment with support for standard networking, daemons, logging, no enforced use of layers or ephemeral storage it doesn't need an app to hang around and manage its networking and other subsystems.

It offers seamless migration of workloads from VMs, is compatible with the ecosystem of apps and orchestration systems without the need for any special daemon handling, network or storage management that comes from a custom environment, thus simplifying container use and management.

[1] https://www.flockport.com/guides/say-yes-to-containers

Awesome! With its 5+ year head start, no doubt it's managed to capture developer mindshare like few other technologies before it, revolutionizing development and deployment!

Wait, what's that? It doesn't work on Mac or Windows? It didn't have even a half-decent website until 2015? It doesn't have anything like a Dockerfile that you can version-control? Or anything to compose services together from a simple YAML file?

Damn. I guess it turns out people don't really care about the technical purity of a solution, they care about something that solves their issues. Better luck next time.

It would be nice if this were one tool, rather than two overlapping tools with some incompatibilities between them.

They can both build images, and the commands may differ, but the resulting images are all compatible. Under the hood podman uses buildah to build images. What's the specific complaint?

In that it adds complexity to have two tools vs. one. And in the wild it will add risk of mistakes and wasted time due to mix-ups, since they overlap and have incompatibilities. I'd rather give a team one tool; the only reason this is two tools is that they were independent projects, but they really should be one conceptually.

What you're describing is pretty much contrary to the philosophy behind podman, buildah, skopeo, etc. though which is to have fairly narrowly scoped tools that serve a specific purpose rather than a big application that does everything.

I may not fully understand what the tools can do, but the scoping seems overly narrow. Also, in that Unix philosophy you don't duplicate functionality, slightly incompatibly, between tools to the point that you need paragraphs and tables to explain when to use which one.

Buildah specializes in building OCI images. Podman allows you to pull/run/modify containers created from OCI images. These are distinctly separate tasks, and it seems a lot more straightforward to me than having a daemon (always running, as root...) that handles both tasks.

Podman does allow you to build containers, but my suspicion is that it's intended to ease transitioning from docker (you can alias docker=podman and it just works). Also, the build functionality is basically an alias for "buildah bud", so it's more of a shortcut to another application than a re-implementation of the same functionality.

Edit: more reading on the intended uses of each tool if you feel like understanding them better https://podman.io/blogs/2018/10/31/podman-buildah-relationsh...
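A quick sketch of the overlap being described (the image name is illustrative):

```shell
# These two should produce equivalent images from the same Dockerfile;
# podman's build command calls into buildah's code under the hood.
buildah bud -t myimage:latest .
podman build -t myimage:latest .

# And for drop-in migration from docker:
alias docker=podman
docker run --rm myimage:latest
```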

I think that explanation is a little clearer; however, the repos and the article don't make this clear, and the fact that podman also builds images makes it less crisp.

> Some of the commands between the two projects overlap significantly but in some cases have slightly different behaviors. The following table illustrates the commands with some overlap between the projects.

And this makes no sense at all if you’re purposely designing a tool.

podman uses buildah to implement "build a container like Docker" functionality... what aspect of that is difficult to understand?

That functionality probably wouldn't be necessary at all if Docker didn't pollute the common understanding of containers in the first place.

See the table of the subtle differences: why does podman create images that aren't compatible, for example? Regardless of what Docker does, if you make tools that are for specific use cases, why blur the lines?

The images are compatible. I’m not sure where you’re seeing otherwise.

What is blurry to you about the purpose of either tool?

I don’t think you’re reading the article, it says:

> Each project has a separate internal representation of a container that is not shared. Because of this you cannot see Podman containers from within Buildah or vice versa.

> Mounts a Podman container. Does not work on a Buildah container.

^ this here is one of the problems, the containers are not compatible is my interpretation.

The tool feature sets overlap with subtle differences according to the article, and that blurs the line on what each one is for. They need to pick a direction: if you're making a build tool and a runtime, then the build tool must only build and the runtime must only run, or just make one tool. Intentional and truthful design (meaning the words mean only what they say) limits the chaos that happens in the wild, and these tools aren't doing that. It may seem clear to you, but the article is literally about how it's not clear and how they overlap confusingly. So you're going to come across a mess at some point due to this mistake; or they could explain their rationale for the overlap, but they don't.

The difference is that buildah's only job in the world is to build OCI images. Podman is more about running containers, so its containers are a lot more generalized.

Buildah containers and buildah run are far different in concept from podman run. buildah run == Dockerfile RUN. So we don't support a lot of the additional options that are available for podman run, and we have decided to keep the formats different. Podman has a large database that we felt would confuse matters when it came to podman run.

I tell people: if you just want to build with Dockerfiles, then just use podman and forget about buildah. Buildah and its library are for building OCI images and, hopefully, for embedding into other tools in addition to podman, like OpenShift Source2Image and ansible-bender, as well as for allowing people to build container images using standard bash commands rather than requiring everyone to use Dockerfiles. podman build only supports Dockerfiles.
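A sketch of the scripted, Dockerfile-free workflow being described (image and package names are illustrative):

```shell
# Build an OCI image with plain shell commands instead of a Dockerfile.
ctr=$(buildah from docker.io/library/alpine:latest)  # working container
buildah run "$ctr" -- apk add --no-cache python3     # like a Dockerfile RUN
buildah config --entrypoint '["python3"]' "$ctr"     # set image metadata
buildah commit "$ctr" my-python:latest               # snapshot to an image
buildah rm "$ctr"                                    # clean up the container
```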

It sounds like you’re mixing up containers and images.

I’m just taking the article at face value, they use the word container and say they’re not compatible. So maybe the article could be better, not sure.

The format shared between the tools is an OCI image. Earlier you stated the images are incompatible, which is false. Then you switched to worrying about the internal representation of a container differing between the tools.

Why are you concerned about buildah’s internal representation of a container, unless you’re contributing to the codebase?

In all fairness, the blog is a bit confusing. I know that podman and buildah both comply with the OCI image spec, and that podman in fact calls buildah, which makes the various discussion around visibility etc. somewhat confusing to me. It may well be irrelevant, in which case perhaps there's a clearer way of explaining the relationship.

We get this question all the time, and I totally understand the frustration. In a nutshell, here's the breakdown. I will highlight this in blog entries as RHEL8 comes out and emphasizes podman, buildah and skopeo, so you will see more :-)

If you break containers down into three main jobs, with a sort of fourth meta-job:

RUN (& FIND) - Podman
BUILD - Buildah
SHARE - Skopeo

If you think about it, that's what made docker special, it was the ability to FIND, RUN, BUILD, and SHARE containers easily. So, that's why we have small tools that map to those fairly easily.
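Mapped onto commands, the split looks roughly like this (image and registry names are illustrative):

```shell
podman search nginx                                    # FIND
podman run --rm -p 8080:80 docker.io/library/nginx     # RUN
buildah bud -t myapp:latest .                          # BUILD
skopeo copy containers-storage:localhost/myapp:latest \
    docker://quay.io/someuser/myapp:latest             # SHARE
```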

Does Buildah build images in a completely unprivileged environment? Other tools like Kaniko have a few gotchas: although they don't need the docker daemon, they still need root access, which does not make them truly secure.

Buildah (1.5+) and Podman (1.0+) will work in an unprivileged environment on Linux systems with kernel 4.18+, or on kernels with the relevant user-namespace changes backported (I believe the latest RHEL 7 kernels qualify here).
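A few quick host checks before trying rootless (paths are the standard ones; availability of the sysctl varies by distro):

```shell
uname -r                                    # want 4.18+ or a backported kernel
cat /proc/sys/user/max_user_namespaces      # must be greater than 0
grep "^$(id -un):" /etc/subuid /etc/subgid  # need a subordinate ID range
command -v newuidmap newgidmap              # setuid helpers from shadow-utils
```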

[To note, I don't work for Red Hat]

There's some userspace tooling needed as well. E.g. Buildah can't build images as non-root on RHEL 7.6 yet because newuidmap/newgidmap is missing. This will get fixed any day now: https://bugzilla.redhat.com/show_bug.cgi?id=1498628

Look for a backported shadow-utils in RHEL 7.7, which is scheduled for the summer. The RHEL 8 Beta currently supports rootless podman 1.0.

Yep, works perfectly on Fedora 29 with podman from the default repos

Is there a connection between the RH stack and Podman, or would it just work on Ubuntu, Debian, or even OS X/WSL?

Given that podman is in Arch Linux's AUR[0], and on their home page they refer to installing it with apt-get (Debian), I imagine everything is handled in the podman tool, its userspace dependencies, and the kernel.

I personally look forward to trying it out.

[0] https://aur.archlinux.org/packages/libpod/
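For the record, the commonly documented install routes at the moment (package names may vary by release; on Debian/Ubuntu you may need an extra repository):

```shell
sudo dnf install podman       # Fedora / RHEL 8 Beta
sudo yum install podman       # RHEL 7.6 (extras channel)
sudo apt-get install podman   # Debian/Ubuntu, where packaged
sudo pacman -S podman         # Arch Linux (community repo)
```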

Podman was moved to the community repository a few days ago.


It's news to me! I was maintaining the libpod package on the AUR and I just saw the deletion request now. Happy it's in community now, but it would have been nice to hear about it before it got deleted from the AUR!

Might also want to delete cni-plugins from AUR now that it was also moved to community.


Docker, R.I.P.?

Around 8 years ago it was all about Puppet and similar, then Docker, K8s, eventually something else will be the next trend.

For some reason the Kubernetes community doesn't like Docker. I'm not sure why, though. AFAIK Docker was what started the whole container hype, and docker-swarm had some nice features like dependency tracking that I'm personally missing in Kubernetes (there's probably a project for that already which I haven't tried out yet, though).

My understanding is that Docker was the preferred container engine for k8s, but Docker did not reciprocate the interest, instead preferring to push Swarm.

Having supported Docker in production, I've found that the stability wasn't quite where it needed to be, but Docker Inc. seemed more interested in adding more and more features rather than stabilizing what they already had. I would definitely welcome a less complex base for running containers in production.

Docker was at times contentious, like when they injected an embedded DNS server into containers of some network types and refused to add a switch to turn that off:


That and, a few years ago, the amount of new bugs that would show up when they had to validate a new version of Docker for kubernetes to support. Eventually Docker adopted a less stressful, dual release cycle.

Google has an internal equivalent of Docker which they use instead, and they are one of the biggest contributors to Kubernetes. I'm sure there are more reasons, though.

