Hi, podman-apple-silicon developer here! I want to share some FAQs about this project. :)
Q: Does this run amd64 docker images or aarch64 docker images?
A: aarch64 images currently, but I'm going to patch podman to make it possible to run both amd64 and aarch64 images. All I have to do is make the QEMU invocation and the Linux image configurable, so it won't be very hard. However, if you run amd64 images, you will have to bear the performance overhead of CPU emulation.
Q: Is this a toy, or are you actually going to maintain this?
A: I'm a DevOps engineer, and I made podman-apple-silicon to actually use it in my day job.
Q: Are you going to merge this to the upstream?
A: I'll keep trying, but it won't be easy unless QEMU merges Alex Graf's Hypervisor.framework patch.
I may forget to check HackerNews, so please feel free to ask me anything about podman-apple-silicon at https://twitter.com/simnalamburt
> I'm going to patch podman to make it possible to run both amd64 and aarch64 images
From what I understand, if you had working QEMU-static and binfmt, wouldn't cross-architecture containers just work? I've used that a lot in chroots, and I'm confused as to why that wouldn't just transparently work in this case.
Are you talking about just making that process easier? Does podman enforce extra checks that prevent you from using binfmt?
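For what it's worth, the binfmt route described here can be sketched like this; the multiarch/qemu-user-static helper image and podman's `--arch` flag are what I'd reach for, but treat the exact commands as an illustration rather than podman-apple-silicon's actual mechanism:

```shell
# Register qemu-user-static binfmt_misc handlers on a Linux host (privileged,
# one-time per boot). This lets the kernel hand foreign-arch binaries to QEMU.
podman run --rm --privileged docker.io/multiarch/qemu-user-static --reset -p yes

# With the handlers in place, pulling and running a foreign-arch image works:
podman run --rm --arch amd64 docker.io/library/alpine uname -m
# should print: x86_64
```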
If you just switch the binary you're 90% of the way there, but there are some chores left to make it perfect:
1. The arguments given to QEMU have to change with the CPU arch. For example, AArch64 uses ‘-accel hvf’ while amd64 on Apple Silicon must use ‘-accel tcg’; AArch64 requires the ‘-cpu’ option while amd64 does not; etc. And currently, the QEMU arguments are half-way hardcoded into the podman source code.
2. You have to change the Linux image when you change the CPU arch. Currently, podman always downloads a Linux image whose CPU arch matches the host's. This is where configuration needs to be added.
3. aarch64 uses UEFI while amd64 doesn't need to (I don't know why)
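To make the first point concrete, here is roughly what the two invocations would look like; the flags and image file names are illustrative, not podman's exact hardcoded arguments:

```shell
# aarch64 guest on Apple Silicon: hvf acceleration, explicit -cpu, UEFI firmware.
qemu-system-aarch64 \
  -machine virt -accel hvf -cpu host \
  -bios edk2-aarch64-code.fd \
  -drive file=podman-machine-aarch64.qcow2,if=virtio

# amd64 guest on Apple Silicon: software emulation (TCG), no -cpu needed,
# legacy BIOS boot by default.
qemu-system-x86_64 \
  -machine q35 -accel tcg \
  -drive file=podman-machine-x86_64.qcow2,if=virtio
```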
> 3. aarch64 uses UEFI while amd64 doesn't need to (I don't know why)
Sounds like amd64 is falling back to BIOS, while aarch64 never really had any other standard than UEFI. You should be able to run UEFI for amd64 too if you force the machine type to pc-q35-6.1 (or whatever QEMU version you're using).
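In my experience the machine type alone isn't quite enough; you also point QEMU at UEFI firmware (OVMF). A hedged sketch, with a firmware path that varies by install:

```shell
# Boot an amd64 guest with UEFI instead of legacy BIOS by supplying OVMF.
qemu-system-x86_64 \
  -machine pc-q35-6.1 \
  -bios /usr/share/OVMF/OVMF_CODE.fd \
  -drive file=disk.qcow2,if=virtio
```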
Tangentially, I wonder if Moby underestimated the amount of human hours that were instantly allocated to alternatives as soon as they announced Docker Desktop was going paid. (I don't know if this project is a consequence of that announcement or not.)
Hopefully Podman will be able to capitalize on this event and get the polish needed for widespread use.
I suspect the push for podman was more about how docker ignored cgroups v2 for so long that Fedora eventually turned it on anyway, breaking docker, and told users to switch to podman.
I think a really big part of it was when Red Hat asked Docker to accept their patch allowing people to run docker with local registries only (no docker.io), and was told Docker would not be accepting that patch, and to go pound sand if they didn't like it (eh, so maybe not so forcefully).
The first thing I tried to figure out when looking into Docker for work was how to limit the registries it would look at to only be our own when used in production, and I was surprised to find out you can't (at least not without a hack to make it think it's using a mirror and just hitting your registry first).
Docker does this a lot. For example, we were trying to turn off gzipping images on the wire when pulling, because it actually cost more when done over the intranet.
You can't. And modifying the source was so convoluted that we gave up.
Then we needed to clean up docker (before there were commands to do that) when it started to eat up all of the disk space.
To our (un)surprise, Docker uses 3 (!!) different storage formats, many of them having redundant information, and editing one of them would cause the other to be corrupted.
One was a binary database format that was specific to Go and didn't have any utility CLI to work with, so you had to write a programmatic interface with it just to edit it.
Or how about the fact that even if you issue commands directly to docker over the HTTP Unix socket, it will deadlock if you issue too many commands to it? This became our nightmare when trying to implement one of the first iterations of custom deployment backends at ZEIT. In fact, the entire project failed because of docker (there was no great alternative at the time).
I am honestly wondering more about your specific use case. Does the CPU cost outweigh the storage cost? Is it a timing problem (speedup) or are you at such large scale (on premise)?
Maybe what I'm getting at is: I can think of 99 problems but docker gzip ain't one :) how was this a priority (at some point)?
Pretty sure it's CPU vs network not CPU vs storage. A fast internal network can be cheap compared to the cpu required to pack/unpack images on every download. I wonder if choosing a fast-to-decompress algorithm like zstd would change that.
I agree that it should be possible to disable the default registry, but I'm not sure I agree with allowing you to override it. (These requests appear to be conflated in various comments.) Use your own registry by specifying the domain first `myregistry.example.com/repo/image`; an unadorned `repo/image` being globally reserved as shorthand for `registry.docker.io/repo/image` seems fine. Allowing overriding the meaning of `repo/image` would be a support nightmare for both moby and internal IT, just use qualified names.
I believe that was acknowledged in some of the pull requests, along with a problem around correctly using credentials for repositories, and various solutions were proposed (see one of my other comments for links). Ultimately, the reason given in the pull requests I saw was along the lines of "it will fracture the namespace and hurt the community".
Disabling all registries except for those whitelisted and requiring full names for those would probably have been sufficient for this problem, and not fractured the community IMO. There's a difference between what you allow in dev and what you allow in production, where you should have a chance to vet all new requirements and ensure they are appropriate. It's just unacceptable for some organizations to allow stuff to be as ad hoc as that, as much as Docker might want to inject itself into their processes at that level.
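For what it's worth, podman already supports exactly this via containers-registries.conf; a minimal sketch of the whitelist approach (registry names are illustrative):

```toml
# /etc/containers/registries.conf
# No unqualified search: a bare `repo/image` resolves nowhere, full names required.
unqualified-search-registries = []

# Block the public default outright.
[[registry]]
prefix = "docker.io"
blocked = true

# Allow only the internal registry.
[[registry]]
prefix = "myregistry.example.com"
location = "myregistry.example.com"
```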
> Disabling all registries except for those whitelisted and requiring full names for those would probably have been sufficient for this problem, and not fractured the community IMO.
Yes this is what I was getting at. You don't need to override the meaning of image names, you just need to be able to disable registries outside your control and prefix all your images. It's like an additive vs subtractive blindness effect that caused people to miss this solution.
It wasn't missed, from what I saw in the issue and pull request. It was offered, and ignored.
Part of me thinks that's a shame, because it seems like the for-profit entity behind the project was the only reason for doing so, but at the same time, the outcome isn't bad, I think. Having multiple high-quality choices with different driving causes (and organizations) behind them is more beneficial in the long run than just one. Competition is good, even in open source, most of the time.
Anyone is one typo away from installing random junk from the internet on your machines. No one should be using docker in production while it can connect to a public registry where you have zero control of its contents.
Did you read my comment? Because I can't find any interpretation of your response that makes sense assuming you comprehend the actual content of my statements.
I did read the comment. And I comprehended the content of the statements. And I'm scared that this obvious security risk isn't horrifying to you.
> I agree that it should be possible to disable the default registry, but I'm not sure I agree with allowing you to override it. (These requests appear to be conflated in various comments.) Use your own registry by specifying the domain first `myregistry.example.com/repo/image`; an unadorned `repo/image` being globally reserved as shorthand for `registry.docker.io/repo/image` seems fine. Allowing overriding the meaning of `repo/image` would be a support nightmare for both moby and internal IT, just use qualified names.
Literally anyone in your company can forget to say `myregistry.example.com/` at any moment. And then your whole infrastructure runs on some random image that you didn't vet. You're a typo away from having your machines owned, your entire infrastructure falling over, your data being exposed to anyone.
This is no way to live and it's no way to run a company.
Every distro had it turned off for ages because turning it on would break docker. So eventually Fedora decided docker was never going to add it and turned it on anyway. Then, shortly after, docker added support.
Between Fedora and docker, it didn't help that docker always made its first release for the current Fedora version about a month before Fedora was going to make a new release (i.e. with Fedora's 6-month release cycle, the corresponding docker release for that Fedora version was 5 months late).
Podman's been a great tool (on Linux) for a while, it's my daily driver. Rootless, no daemon and networking nonsense, and docker-compose can be replaced with real K8s pod definitions for the most part. I'm actually really happy to see the zeal that has come to it from docker's changes - thank you docker ;)
> docker-compose can be replaced with real K8s pod definitions for the most part
Could you elaborate on this part? Are you running in Kubernetes or somehow using the pod definition format with Podman? I'd like a way to declaratively specify my Podman pods without docker-compose and friends.
Podman has built in support for K8s' Pod definitions. I've never used it so not sure how good (or bad) it is, but it is possible.
It is also able to generate pod definitions from created containers, as well as generate systemd units that you can then enable allowing systemd to manage your pods/containers.
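A quick sketch of that round trip (the container/pod names are made up):

```shell
# Create a pod imperatively...
podman pod create --name web -p 8080:80
podman run -d --pod web docker.io/library/nginx:alpine

# ...export it as a Kubernetes Pod definition...
podman generate kube web > web.yaml

# ...and recreate it declaratively later, or generate systemd units instead:
podman play kube web.yaml
podman generate systemd --name web --files
```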
podman-compose is sadly not as good as docker-compose. It simply converts compose YAML files into podman commands.
As an alternative, as of podman v3 (rootful) and v3.2 (rootless), podman has an optional socket you can enable. The API is docker-compatible, thus allows for full docker-compose support, and will accept any other application that talks to the docker API directly.
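Enabling it looks roughly like this on a systemd distro (rootless):

```shell
# Start the docker-compatible API socket for the current user.
systemctl --user enable --now podman.socket

# Point docker-compose (and anything else speaking the docker API) at it.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker-compose up -d
```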
Is anyone successfully using podman with docker-compose in the "locally remote" style that Docker Desktop makes simple? E.g. running `podman machine start` on a Mac, then `docker-compose up` to start all the containers in the current dir's `docker-compose.yml`, with mounts from the host and a default network. I'm following https://github.com/containers/podman/issues/11389 and https://github.com/containers/podman/issues/11397
Not everyone has to use the same tool. Even if you continue to use only docker, podman helps you by competing with docker and driving them to improve their software (ex: cgroupsv2).
Also, I believe podman works in WSL with some tweaks.
Moby has the power of using a well-established name ("I want to have Docker on my desktop, let's google that, oh, hi Docker Desktop!") that also appears in a lot of tutorials and training material, both aspects that developers tend not to spend time on.
Given how often I still stumble over massively obsolete documentation and "helpful" articles from 15 to 20 years ago, I'd say they are safe.
Edit: It works. Had a bit of trouble since I wanted to uninstall the "real" QEMU first, but `lima` still depended on it, and then installing the patched QEMU needed to update the version of `lima` I had installed, which then tried to reinstall QEMU, which failed because of some symlinks which were now owned by the patched QEMU...
This is the first time I hear of nerdctl and it's _very_ interesting.
M1 aside, does it work fine on regular arm64 Linux? I run a small Raspberry Pi 4B homeserver and I would have used podman for improved security, were it not for the poor/incomplete Compose support, while nerdctl seems to explicitly support it.
I'm curious why every post on podman has such a positive thread. We were kind of forced to use podman, and while we enjoy rootless containers conceptually, they have caused us a lot of issues. After every restart of a node where a user had pods running, that user won't be able to use podman. Oftentimes, bugs can only be solved by completely resetting your user. It's also not as straightforward to configure as the documentation lets on.
Overall I don't have a strong opinion on the software; it's just confusing to me how much praise it receives.
Podman does rely on a lot of files in the /tmp/run-... directory and this has caused us some similar issues. There was some info somewhere on how to change this to somewhere more suitable.
Sorry it's not too helpful, but might be some clues for you.
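If it helps anyone else: containers.conf has a tmp_dir knob under [engine] that I believe relocates those runtime files (the path below is just an example):

```toml
# ~/.config/containers/containers.conf
[engine]
tmp_dir = "/var/tmp/podman-run"
```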
Presumably there is a config file somewhere that is causing issues. Resetting the user might be a way of saying rm -rf /home/someuser. That said, I've never used podman and this is the first that I've heard of it.
Mostly because the dev linked to the Homebrew installer when announcing(?) it on her Twitter account[0], but also because the forked repo[1] that contains the patches just shows the original README.
This isn’t an official port to the M1 — it’s a custom version patched by a different dev.
I assume it's pretty slow, not on storage but on compute. x86 docker/podman on M1 must use QEMU software emulation; Rosetta doesn't support x86 virtualization.
Behind the scenes, it's using QEMU with Alex Graf's patches for hvf (Hypervisor.framework) support, so it's Virtualization, not emulation. In other words, the performance is really good ;-)
BTW, in case you don't want to depend on a fork, upstream podman is going to gain M1 support (in the sense of 'podman-machine' knowing how to start aarch64 VMs with hvf) very soon.
Why do you think that it uses x86? I assume it just runs ordinary ARM Linux in a VM, which just works. (I also don't really understand this post's title; surely podman has worked on ARM Linux since the beginning.)
You can run docker on Windows without Docker Desktop. I think the only thing that requires Docker Desktop on windows is if you want windows containers, but I don't really think that's a common usecase.
Oh wow, really? I thought Docker Desktop was only a UI that helped you start/stop Docker, I didn't realize there's no OSS version at all. That's much worse than I thought, wow.
I guess I'll have to switch to Podman too, even though I don't use Mac, just because we need a unified approach across OSes in our company and can't afford to have Mac-using developers be second-class citizens.
I love this comment. 15 years ago we were refusing Windows-only tools to protect the Linux users, today we are refusing Linux-only tools to protect the Mac users.
how do you mean learn it? So far I've only used these as drivers to run other things so my only concern is that QEMU is supposed to be slow. I don't really do anything directly with it.
My mistake, I took your usage of QEMU as a shorthand for managing and manipulating VMs, and wanting to use VMware et al due to not wanting to use the “QEMU ecosystem”
So my somewhat shaky understanding is that Apple does have some form of jails in Darwin, because they use it on iOS (hence, "jailbreaking"), but for some reason doesn't ship it in desktop Darwin (aka macOS).
Apple’s sandbox focuses on isolating the OS from non-platform binaries. It doesn’t have namespaces or cgroups.
Jailbreaking on iOS is mostly about that sandbox. It doesn’t relate to BSD jails.
On macOS, Apple made the sandbox more lenient and implemented it a bit differently than on iOS. But both have roughly the same goals. They’re also alike in that both use the same kernel-level framework (MACF) to do their job.
But the MACF is completely off-limits to everyone outside Apple. Not even accredited kext developers can use it. So I think that no one except Apple could possibly add container-style isolation to macOS.
A lot of people assume that, but it's only partially true.
Darwin's userland is taken from FreeBSD, the kernel is from NeXTSTEP, although it also borrowed some things from FreeBSD, but I don't think they incorporated jails[1].
Darwin and MacOS aren't really all that related to NeXTSTEP, on purpose.
NeXTSTEP had proprietary UNIX code in it, and required a UNIX license from AT&T to distribute. Additionally, the display manager (infamously) was based on Display Postscript, which also required a license from Adobe.
My understanding (which is by no means 100% certain! the following is my best guess) is XNU and Darwin were almost complete "rewrites" of NeXTSTEP, preserving the "idea" but with new code.
NeXT's Mach 2 based kernel and BSD userland, which seems to have been very similar to the system developed by Avie Tevanian at CMU and used on their VAXen, was replaced with XNU, which used a Mach 3 derivative from DEC's OSF/1 project, coupled with a new (non-AT&T encumbered!) BSD userland and kernel "module"/personality based on an amalgam of then forks of 4.4BSD-Lite/386BSD, namely FreeBSD and NetBSD (several big bits of libc are from NetBSD).
Point being, I doubt they've copied/pasted large chunks of Free/NetBSD into Darwin/XNU since the late 90s/early 2000s. There has been code flow between the two, but I doubt they'd backport big features like jails (and Linux emulation).
I don't think it is, they started with NeXTSTEP as a base and just copied interesting bits from FreeBSD. If FreeBSD was the base then that would be surprising.
macOS doesn't have jails as FreeBSD does. But they are using some kind of isolation for Mac applications, so they cannot see other applications' data.
Do you know if the VMs have accelerated graphics? The new version of Hypervisor.framework in 12.x supports accelerated macOS guests but I didn't know if they figured out a way to do it without Apple's tricks.
This looks more like the Podman version of Docker Desktop (which is closed source, and no longer free), effectively, as it's handling the virtualization aspects for you according to[0].
But the Docker engine (that runs containers on the local machine) is only available on Mac and Windows via Docker Desktop which is not free-as-in-beer anymore for all.
This is incorrect. The Docker engine shipped in Docker for Mac is built from the exact same Docker Engine in the open-source release. It’s the native Mac application wrapping the engine in a single-purpose hypervisor that is closed source.
Why? Anyone can assemble an equivalent from available open-source tools:
* Virtualbox
* Your favorite Linux distro
* Docker engine on the Linux VM
* Docker CLI on the Mac host
* A variety of filesystem sync solutions (I don’t remember their names but there are several)
Alternatively there’s also docker-machine.
The closed source app gets you the convenience of not having to set that all up. If you don’t like installing closed source apps you probably prefer to set things up yourself anyway. So what’s the problem?
It's perfectly possible to use podman rootless. Presumably docker too, but I only have experience with podman. That is, the podman process does not have root privileges.
Maybe excessive, maybe a good prophylactic for potentially damaging NPM packages (for example) to not get root on my CI infrastructure.
It certainly is a problem. We want to run local development stacks with Docker, but since it runs as root, it leaves files in your home directory (database and other files mounted from the guest) that are owned by root, so you can never delete them.
Rootless Podman does this as well, but with subuid-owned files. For instance, you run postfix in a container and it has files owned by postfix. You can't change those files outside of the container. You can make yourself root in the namespace and delete them with podman unshare, however. But it isn't optimal from a UX perspective either.
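For anyone hitting this, the unshare trick looks like this (the path is illustrative):

```shell
# Re-enter the rootless user namespace, where your uid maps to root,
# so subuid-owned files become deletable.
podman unshare rm -rf ./postfix-data
```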
If you create a user inside of your Dockerfile and switch to that your files will be owned by whoever is assigned to uid:gid 1000:1000 on your dev box if you use a volume mount. This solves the problem in nearly every case because your primary dev box user is almost always going to be 1000:1000. It'll work on native Linux, Windows (WSL) and macOS using Docker Desktop or not.
Oh, good call. I wasn't doing this because 1000 is not always the user's uid, but it almost always is, and at worst you'll need root to delete the files. Thanks!
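The pattern described above, sketched as a Dockerfile (the base image and user name are arbitrary; the 1000:1000 mapping is the assumption being discussed):

```dockerfile
FROM alpine:3.18
# Create an unprivileged user whose uid:gid (1000:1000) matches the typical
# first user on a Linux dev box, so volume-mounted files stay deletable.
RUN addgroup -g 1000 app && adduser -D -u 1000 -G app app
USER app
WORKDIR /app
```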
Calling podman “the open-source docker alternative” is disingenuous. The Docker engine which it competes with is also open-source. The only closed Docker product is their desktop wrapper, for which podman is not an alternative.
This has little or nothing to do with "podman people". The stuff people are talking about in the top comments have everything to do with people (and corporations) trying to figure out how to run podman efficiently on MacOS. It certainly can be done by bundling QEMU or through a VM.
Docker isn’t true open source in my opinion. With true open source you can compile your own version of the software with your own changes and it works just like the official release. You can’t do this with Docker Engine afaik.
Planning to buy a new MacBook for a family member, I have some questions for whoever is into the Apple world: is it true that the next generation of MacBooks is going to have classic Esc&F# buttons, MagSafe, an SD card reader, and HDMI? When is it expected to be released? How is a MacBook Pro better than a MacBook Air of the same specs (RAM&SSD)? We were going to buy a new MacBook now, but the classic parts returning sound really motivating to wait.
The current MacBook Pro 13 with M1 does not differ significantly from the MacBook Air M1: https://9to5mac.com/2021/09/01/m1-macbook-air-vs-m1-macbook-... . You get one more core and active cooling with a fan, but in daily use it is difficult to generate the load that triggers this cooling. The situation will definitely change with the new MacBook Pro 14/16 to be shown in October or November.
You mean the Pro cannot throttle even when you want it to? It sounds nice to have full power at my disposal when I actually want it, but most of the time I want a laptop to run near its minimum power to prevent heating, avoid noise, and save battery. Even when I run a computation-heavy task, I still want to be able to force it to run slowed down and take its time. Is this not possible with the Pro? Even with third-party tools?
I see. I personally am a PC user and I have a habit of controlling the throttle manually (it's not necessary, but easy and handy). I have been doing this for almost 20 years, using handy third-party panel applets and the systems' (both Linux and Windows) built-in tools. So I'm surprised to learn MacBooks still don't allow manual throttle control. I usually prefer to keep my thermal regime below what the vendors pre-define.
I don't mean controlling the fans. I mean voluntarily throttling down the frequency of the CPU (and telling it it should not up-throttle even when the load goes high) so the computer actually stays cold even without the fans.