
Podman and Buildah for Docker Users - devy
https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users/
======
cabraca
They claim "Podman provides a Docker-compatible command line front end and one
can simply alias the Docker cli, `alias docker=podman`". That claim alone
shows they don't know or don't care how people use container engines outside the
k8s space. I know a lot of orgs that simply use docker-compose files to spin
up simple setups. There is podman-compose[1], but it's "still under development".
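
To be fair, for basic CLI usage the alias does hold up (a sketch; assumes
podman is installed):

    alias docker=podman
    docker run --rm -it alpine sh    # transparently runs podman

But anything built around Compose files or the socket is another story.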

Then there is software using the docker socket.

Portainer? No Podman support [2]

Testcontainers? No Podman support [3]

Traefik? No Service discovery for you [4]

Should I go on?

Yeah, it's rootless, and I like the idea Podman and Buildah represent. What I
don't like is the way they break at least part of the ecosystem.

[1] [https://github.com/containers/podman-compose](https://github.com/containers/podman-compose)

[2] [https://github.com/portainer/portainer/issues/2991](https://github.com/portainer/portainer/issues/2991)

[3] [https://www.testcontainers.org/supported_docker_environment/](https://www.testcontainers.org/supported_docker_environment/)

[4] [https://github.com/containous/traefik/issues/5730](https://github.com/containous/traefik/issues/5730)

~~~
rcarmo
The lack of Compose and Traefik support is the main reason I don't switch my
personal projects to podman (work stuff is all K8s).

That, and neither podman nor buildah being at least as easy to install on
Ubuntu LTS as Docker.

Compose is just lovely for personal stuff. Swarm is also pretty decent (and
easier to run for small clusters, if somewhat unstable networking-wise).
Podman can't compare.

~~~
GordonS
I really feel like Docker Compose and Swarm are underrated.

For running groups of related containers during development, Compose is
wonderful - Compose files are simple and easy to understand even if you've
never seen one before.

Swarm is also good for small-scale production deployments - it's just so
simple to deploy and update. "secrets" and "configs" are also really useful,
but of course I see the appeal of a centralised system such as Vault for
complex deployments.

I've never tried to use Swarm at scale, so I don't know what kind of issues
you might face that k8s would solve.

~~~
bkircher
Can’t I just do all that docker-compose does with a Makefile and a couple of

    docker network create
    docker run

and so on? With much less magic involved?
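
Concretely, I'm imagining something like this behind a `make up` target (a
sketch; the network, image, and container names are made up):

    docker network create appnet
    docker run -d --name db --network appnet postgres:12
    docker run -d --name web --network appnet -p 8080:80 myapp:latest
    # teardown:
    docker rm -f web db && docker network rm appnet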

~~~
nickjj
> Can’t I just do all what docker-compose does with a Makefile

There's a lot of value in just being able to run a set of unified Docker
Compose commands and have things work the same in every case with the same
YAML configuration.

But technically yes, you could replicate that behavior. However, Docker Compose
does a bunch of pretty nice things: for example, when you run `docker-compose up`
it will intelligently recreate containers that changed but leave the others
untouched. Then there's the whole concept of override files, etc.
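
An override file, for the unfamiliar, is a second YAML file that Compose
automatically merges over the base one (a sketch; the service name and values
are made up):

    # docker-compose.yml
    services:
      web:
        image: myapp:latest

    # docker-compose.override.yml -- merged in by `docker-compose up`
    services:
      web:
        ports:
          - "8080:80"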

It would take a fair amount of scripting to emulate all of that behavior, along
with getting the signal handling and the piping of multiple services to one
terminal output working well. Even today Docker Compose has issues with that
after years of development.

You should run docker-compose with --verbose one day just to see what really
happens under the hood. Compose is doing a lot. I wrote about this a while
back, including example output of running a single-service app with --verbose:
[https://nickjanetakis.com/blog/docker-tip-60-what-really-hap...](https://nickjanetakis.com/blog/docker-tip-60-what-really-happens-when-you-run-docker-compose-up)

------
freedomben
Some useful further reading on the "why" of Podman and Buildah (for those
interested in hearing the case):

[http://crunchtools.com/docker-support/](http://crunchtools.com/docker-support/)

[http://crunchtools.com/why-no-docker/](http://crunchtools.com/why-no-docker/)

I work for Red Hat, but I find myself pretty centrist on this issue. There are
good arguments on all sides. Red Hat isn't doing Podman and Buildah etc.
because they want to crush or destroy Docker. There are legitimate arguments,
and they have been open about them.

One big one is security. You may think it's overly paranoid to be concerned
about having a daemon (especially one running as root, though rootless Docker
is either here or near), but keep in mind different people have different
requirements. If you're a bank securing billions of dollars, that attack
surface is scary.

~~~
InTheArena
I find this argument not very convincing, given that I have been in the
audience when a Red Hat engineer giving an overview of OpenShift and related
technologies began a presentation by making the audience “swear” not to call
things Docker containers, but just containers.

Are there things that would be better with a different model than
runc:containerd? Sure. But is that really the primary factor here? I very much
doubt it.

Red Hat and Google wanted Docker gone, and have spent the capital to do so. A
good business move for OpenShift, GKE, and RHEL, but not necessarily in the
long-term interest of the open source community.

~~~
zaro
> A good business move for OpenShift, GKE, and RHEL, but not necessarily in
> the long-term interest of the open source community.

But the same applies to Docker (the company). In the end, their decisions are
also business moves, and those are not necessarily better for the open source
community. And they have also proven that they don't always work in the
interest of the community - remember "I don't accept systemd patches"?

------
InTheArena
Red Hat continues to try to eliminate Docker as a competitive threat. Water is
wet. They are winning, but it still leaves a bad taste in my mouth. This won't
be good over the long run for the Linux community.

~~~
xnxn
Huh. I think Red Hat's work here is a boon to the community. Docker was trying
to become synonymous with containerization (they would have me say
"dockerization"). We benefit from open specs and multiple implementations.

------
esamatti
> If you are a Docker user, you understand that there is a daemon process that
> must be run to service all of your Docker commands. I can’t claim to
> understand the motivation behind this but I imagine it seemed like a great
> idea, at the time, to do all the cool things that Docker does in one place
> and also provide a useful API to that process for future evolution.

I guess that would be for the Windows and macOS support, as it should make
things easier to implement in a cross-platform way when you can just proxy the
CLI commands to a daemon running in a Linux VM, even when you are on Windows
or macOS?

~~~
ffk
There were a few reasons for this. The original golang version of docker
required root, full stop. There was no difference between the client and
server.

The first reason was to reduce the privileges of the client interface. This
opened the possibility of later restricting what unprivileged users could do.
The communication over the socket is just HTTP, which allows for remote
management of Docker containers.
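
Since it really is just HTTP, you can poke the daemon directly over the socket
(assuming the daemon is running and your user can read /var/run/docker.sock):

    # same data as `docker ps`, straight from the REST API
    curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json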

A second reason was to build a strong contract between the client and server.
The client became syntactic sugar for the REST calls, which helped stabilize
Docker. This would ultimately lead to enabling macOS and Windows support via a
VM on the host.

Another major goal was to enable the docker in docker use case which helped
significantly when developing on docker itself.

Source: I was there. :)

~~~
marmaduke
Since you were there, can you comment on discussions you might have had where
tradeoffs were chosen that would have led to Buildah instead of Docker, e.g.
prioritization of cross-platform support? Your comment seems written in a very
after-the-fact way, but surely someone was agonizing over these choices before
they became history.

~~~
ffk
Sorry for the slow response, just got off an airplane. These were not after
the fact. Instead, we knew these were the potential benefits and we moved
over. I personally wasn't happy about having to run a daemon to get a
container, but we saw that the benefits far outweighed the downsides of the
approach. The other approach on the table was for Docker (pre client/server
split) to continue requiring sudo. I don't recall anyone suggesting an
alternative approach, though.

------
alias_neo
I've started using podman for my personal projects where I want to deploy just
a single service in perhaps a couple of containers on a VM.

I rebooted a box one time, and some sort of state tracking in the Podman
networking decided that my listening port was still in use even though no
container was running. The only way I fixed it was to uninstall the networking
tool Podman uses (I forget its name; slirp4netns or something) and Podman
itself, then reinstall and start again.

I use Docker daily, professionally. It has its issues, but given the lack of
docker-compose-like files and the anecdotal reliability issues I've seen, I
can't see Podman taking over from, or even competing with, Docker for some
time yet.

~~~
ipbabble
I'm sure that project would benefit from your feedback and information on what
looks like a bug. What did the upstream Podman community say? Did they
understand your issues? Were they able to reproduce the error? Were they able
to fix it?

~~~
alias_neo
Honestly, I haven't been in touch with the community yet. I needed it working
quickly so I solved my issue and I moved on. If it happens again, I'll be
expecting it and I'll collect data to report it.

------
mwcampbell
Does Red Hat have a solution for using Kubernetes (or OpenShift) in
development? Something similar to Skaffold or Tilt? AFAIK, those tools depend
on Docker to do builds.

~~~
tyingq
Not Red Hat provided, but there's Minishift:
[https://github.com/minishift/minishift](https://github.com/minishift/minishift)
and CodeReady Containers:
[https://github.com/code-ready/crc](https://github.com/code-ready/crc)

~~~
Ezku
Anecdotally, both of these were difficult to get running at all and offered a
subpar developer experience on macOS. :(

~~~
mroche
What difficulty (or subpar experience) are you running into with CRC?
Initially it was a bit weird, but getting up and running is fairly simple.

1) Go to Red Hat’s cloud site (linked on GitHub) and download the latest CRC
release and your pull secret.

This does require having at least an empty Red Hat account, which may bother
some.

2) Extract the release and run `crc setup` and let it do its thing.

3) Run `crc start -p /path/to/pull/secret` and wait for it to finish getting
up and running. The first start may take 10 minutes; follow-up starts take
about 4. You can pass other options as needed.

4) Run `eval $(crc oc-env)` and start working with the developer:developer
credentials (or the provided kubeadmin creds). Use `crc console` to get to the
web UI.

We use this on Linux and macOS here just fine for local development and
testing. As always, though, YMMV. CRC still has some quirks that need to be
ironed out, but it is generally usable. So far for us the weird parts are the
limited self-signed certs and the lack of cluster metrics, but both are known
issues to the project. Another is that you can't have a VPN process running,
as the startup will restart network services on your system.
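
Condensed, the whole flow is just (using the same placeholder pull secret path
as above):

    crc setup
    crc start -p /path/to/pull/secret
    eval $(crc oc-env)
    crc console    # opens the web console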

------
GordonS
The only thing I know about podman is that it doesn't require a daemon like
Docker does.

Could someone who knows more summarize any other benefits over Docker?

~~~
coldtea
That's exactly what the article does...

~~~
GordonS
The article is comparing Podman to Docker, and the discussion of benefits is
largely centred around the part I already know - there is no daemon.

I was wondering if those that have actually worked with podman had more
insights.

------
whalesalad
I can’t get over the name buildah. It’s so bad.

------
ipbabble
Thanks for all the feedback. I will try to address many of the comments in a
follow up blog. I will also try to address some of them here in reply to the
comments.

First I want to mention a couple of things:

1) My blog didn't recognize the diversity of meanings "Docker" has for the
Docker community. This is a problem when there is confusion between Docker the
company, the Docker community, Docker as a collective of products, and Docker
as a single command-line project/product. My blog was specifically focused on
Docker CLI users - the Docker command line tool that so many of us grew to
love. To say I "don't know or care how people use containers" is an
unfortunate conclusion to draw. I'm sorry if my restricted use of the term
made it seem that way. I will say I wrote all of the original Docker CLI
manual pages, so I can claim a very deep knowledge of the Docker CLI. I had to
test almost every aspect of the CLI in order to write those manual pages, and
as a result I filed several bugs too. But I do understand that the Docker CLI
is just one part of the tooling that many Docker community users take
advantage of. My definition of Docker was limited in my blog. It did not
address projects like docker-compose etc.

2) It is unfair to say that Red Hat employees set out to destroy/ruin/whatever
Docker. Very early on we wanted to help the Docker community. Red Hat provided
a lot of validation to the Docker community by jumping on board, providing a
lot of technical expertise, and including it in RHEL and OpenShift. People
like Dan Walsh and others tried very hard to explain both the enterprise
features required by risk-averse users and also how to build a sustainable,
inclusive community model. Unfortunately much of our enthusiasm to help make
Docker successful, based on our proven track record, fell on deaf ears.
Perhaps there was a suspicion that we were looking after our own
self-interests, but it really was a genuine effort to share our experiences in
the community. Our open source first approach is always in the interest of the
community and our customers, and we know that that benefits us too. We know
that strong inclusive communities benefit everyone. We sometimes get this
wrong. But most times it works out - consider our move from our OpenShift
cartridges technology to Docker. We didn't try to kill Docker; we knew it had
the right approach. We wanted to make it better through open source community
contributions. And we invested in Docker very heavily. Eventually some of our
customers' concerns with security could not be met with Docker's daemon
approach (whether dockerd or containerd), and so we had to address those
requirements.

I have continued to talk about the value of Docker to the container community
and how they revolutionized the industry because of their unique value add on
Linux containers.

3) There are areas that Podman still needs to address. Some have been worked
on - podman-compose, a Mac client, etc. Plenty of work to be done. If you're
interested, then please consider contributing to Podman (libpod),
podman-compose, etc. at [https://github.com/containers](https://github.com/containers)

-ipbabble

------
samtrack2019
Podman doesn't support macOS, either via a VM layer or natively, so that's a
deal breaker for me. Docker is really about the developer desktop experience,
which Red Hat (IMO) is not great at.

------
Proven
I wonder if users of non-RPM distros would be eager to use these tools. Even
if these tools are a little better (maybe they are), my guess would be that
only medium-sized K8s users may find it worthwhile to switch.

Large users usually build their own, and small users are too lazy to chase
incremental improvements (if they do change, they'll move to public build
services).

~~~
detaro
(without first-hand experience with it) I could see it being popular with
people not going all-in with containers. We have a few machines with services
that are conventionally managed and a few services being stuffed into docker
containers, and it feels slightly odd that some services are managed through
systemd and some through docker commands.

~~~
viraptor
Why not provide systemd service files for your docker services as well?
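
A minimal unit along these lines works (a sketch; the service and image names
are made up):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=myapp container
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStart=/usr/bin/docker run --rm --name myapp myapp:latest
    ExecStop=/usr/bin/docker stop myapp
    Restart=always

    [Install]
    WantedBy=multi-user.target

Then it's `systemctl start myapp` like everything else.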

~~~
detaro
I guess one could use `--interactive` calls to docker to keep the process
attached, maybe? Hadn't thought of that before.

