Hacker News
Show HN: Container Desktop – Podman Desktop Companion (container-desktop.com)
464 points by istoica 74 days ago | 200 comments



Kubernetes is planned - my devops wants me to add it badly!

Author note - Most of you here are power users, for whom a UI is a visual poem that you may or may not need. This is not a commercial project and it follows no business goals. But that does not mean concessions on quality: it tries to offer minimal resource usage everywhere, an easy experience, and good UI/UX.

It explains everything it does behind the scenes if you enable the developer console. That can help one learn, so that at a certain moment one understands and automates with scripts and specs.

But everyone these days is either seen as too smart or too dumb; I don't see users like that. Everyone started somewhere, and a gradual learning experience is best.

I broke so many radios and toys when I was a kid, and I learned so much by looking at what was inside.

It is a project done by one dude, after work and when it rains outside (in Belgium it rains a lot).


I don't live my life entirely on the command line either, but GUIs for Docker are just an interesting niche to me. I just don't understand what the Venn diagram is of people who want Docker containers running locally, know that's what they want, and know how it all works, but then don't want to type the small handful of commands at the prompt needed to get it running...


I don't necessarily want docker containers running locally as some hobbyist; they might just be part of the process, and if the GUI helps me move through that process efficiently without having to commit more commands to memory, I'm happy about that. CLIs are great, but when nearly everything has one, that small set of commands becomes quite a lot in aggregate.


there are ways to minimize memorization, most importantly:

1. keep logs, docs, and records of whatever you are doing. most commands are repetitive.

2. Copilot or ChatGPT; they help a lot with command lines and simple utilities.

3. Amazon Q sucks in comparison.

4. it used to be Google, but now LLMs do it better. less scrolling and fewer ads/spam.
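For point 1, the shell can do most of the record-keeping for you. A minimal bash sketch (the history sizes are arbitrary choices; Ctrl+R then searches this file interactively):

```shell
# flush every command to the history file as soon as it runs,
# so nothing is lost when a terminal closes
export PROMPT_COMMAND='history -a'
# a big history is what makes it a usable log
export HISTSIZE=100000
export HISTFILESIZE=100000
# recall past invocations later, e.g.:
grep 'docker run' ~/.bash_history 2>/dev/null | tail -n 5 || true
```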


Yes, either all that above or just GUI for rare occasions.


yes, when it's possible. but GUIs may not exist or may not be better than the console, as in the case of ffmpeg. the best, of course, is a smart assistant who can take verbal commands, either human or LLM.

but my post was about doing complex tasks in general. try to offload. another piece of advice for developers: write comments, even in your small hobby projects. that way you don't have to memorize it all. I learned this the hard way. I usually also keep separate documentation with plans, ideas, algorithms, and useful info. reminder: this is for hobby projects.

and one important thing: touch typing is a must-have. it makes all of this much easier.


I do think these are good suggestions for anyone getting started with CLIs. Initially they struck me as a bit redundant, but only because the point I meant to make was that, having already been doing most of that for ages, I'm happy to delegate it to a good GUI if one comes along, since that is more enjoyable, less error prone (mostly), and less tedious. Notes, comments, and LLM-generated commands are lovely, but needing to rely on them less, particularly in situations where some common subset of tasks gets better information layout, interface, and progress/state feedback, is worth paying for sometimes.

FFmpeg is a good example, though, of one I'm happy to just keep notes on; since I literally only ever use it for one or two types of tasks, I'm happy to have it sit behind the scenes. Others might use it in more versatile ways, for which I'd be grateful to have those options readily available in my terminal.


Life is full of many things to do, so not everyone has the luxury of prioritizing logging everything they do. Option 2 or a GUI are very feasible options for busy people.


you don't have to "prioritize logging" to have logs; calling this a "luxury" is quite bizarre. You can simply use tools that do bash history search, or one of the many copy-paste memorizer tools, and you'll save many hours out of those "many things to do" simply by typing ctrl+s. Some people are busy simply because they want to be.


My preferred solution is to have a GUI, then I don't need to do anything extra to remember CLI commands.


I would have assumed the same, but Docker makes ~100m ARR on docker desktop so it’s def not niche.

https://sacra.com/research/docker-plg-pivot/


Docker Desktop includes the easy-to-run Docker Engine / Docker Machine. I think it is fair to assume that most of the revenue is not from users who want a GUI but from users who want a stable Docker Engine experience.


Anecdotal, but my experience, as someone who provides DevOps professional services to many organizations, is that Windows users who need containers know they are called Docker and just download that. Most of them absolutely need a GUI. Most of them don't know that Docker Desktop requires a license, and I convert them to Rancher Desktop.


Nothing wrong with paying for good software.


It's nearly a crime when a government pays tax money for a product that sees low usage and has zero advantage over the free alternative.


just because it costs money, doesn't mean it's good software.


That seems like a non sequitur.


How do I install and run Docker containers on Windows without Docker Desktop? I've made attempts in the past but never actually succeeded, and just ended up using Docker Desktop.


Step 1. Uninstall Docker Desktop completely (including images, builds, storage, and containers) and reboot. Step 2. Install Rancher Desktop.

If you also need Docker emulation with Podman too:

Step 3. Install Podman. Step 4. Install Podman Desktop.

Now: a. Either work with Rancher Desktop (open it), and Docker is also available on the command line (docker, docker compose, etc.), b. Or start Podman Desktop to configure Podman (or just use the command line to configure it).

Now in cmd you not only have docker and friends but also podman and friends.

Bonus: you have Kubernetes tools too, and you're fully FOSS.

Happy composing :-)

PS: I think you cannot start both at once. I have both installed and never looked back. Windows 10 x64 Pro.


> want Docker containers running locally, know that that's what they want, and know how it all works, but then don't want to do the small handful of commands at the prompt needed to get it running

Consider the case of a team of people collaborating on a software stack - the prototypical use case includes Docker Compose at the simplest and a full K8s stack at the extreme. There is quite often a minimum of 3 containers here: frontend, API/backend, and a database server. If you start to add observability tools, async/batch/event execution, caching, automated integration testing, etc., the number of "layers" in the stack grows quickly. In addition, each component may have unique per-environment or even per-user customizations.

Often one or two people will manage the stack itself and provide instructions on how to get the whole thing working for others, using a specific, defined selection of easy-to-use tools that require minimal prerequisite knowledge.

"Install X, run Y, get to work."

It saves a lot of time for the intern on the UI team who just wants to add a component to one page and test it locally, and not also have to learn how to deploy the entire stack from scratch.
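A minimal docker-compose.yml for the three-container baseline described above might look like this (service names, images, and ports are illustrative, not from the thread):

```yaml
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - api
  api:
    build: ./api
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

With a file like this checked in, "Install X, run Y" reduces to installing a runtime and running `docker compose up`.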


Frontenders who need to run backends are, in my experience, exactly such a cohort.


I use Docker Desktop on both my macbooks, despite shunning IDEs in favour of a decent text editor and the command line. I use it for 2 reasons: to manage the Linux VM, and to twiddle the occasional setting. For running the containers themselves, or running `system prune` when everything gets cluttered up, I use the CLI.


Same reason: I don't want to fight every time Docker needs upgrading. On Linux I simply use the docker package, but on Windows and Mac, Docker Desktop is my go-to route. I'm trying Podman Desktop, though; I use Fedora and sometimes used Podman, until I stopped because of Nvidia. Same reason: I don't fight over my tooling.


Orbstack is absolutely worth the money on MacOS, fwiw


I agree on docker/podman, but for Kubernetes, Lens is really useful. It isn't a substitute for knowing the command line but can be much quicker.


I use non-Linux systems, for which Docker Desktop was the best way to have Docker running without having to do much work.


GUIs excel at exploring. Exploring is a very very large part of what I do when running containers locally.


Your false assumption is that most of its users know what Docker is and how it works.


relatedly, I prefer using the GitHub app instead of the CLI.


The problem is, I forget the commands all the time because I use them on rare occasions. It takes the "how did this work again?" moment off your shoulders.


> But everyone these days is either seen as too smart or too dumb

Very succinct and poetic way to describe so much in this space these days.


This looks really slick! Quick question for you: the site mentions that other engines are planned. I'm curious what those might be. I would guess something like directly interfacing with containerd or kata, but would love to know more. If I could request one, it would be to directly use systemd, since it now covers all the necessary features to run containers quite nicely.


Incus looks promising, see here https://linuxcontainers.org/incus

As for systemd, podman is amazing https://docs.redhat.com/en/documentation/red_hat_enterprise_...
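One way podman hooks into systemd is Quadlet: drop a unit-like `.container` file in place and systemd runs the container as a service. A minimal sketch (file path, image, and port are placeholders, assuming a recent podman with Quadlet support):

```ini
# ~/.config/containers/systemd/web.container
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, `systemctl --user start web` manages it like any other service.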


>> Kubernetes is planned - my devops wants me to add it badly!

Do you mean the Podman "Kubernetes like" functionality (e.g. podman play kube..) or Kubernetes itself?


I never finished it, but I had a lot of fun documenting a basic-ass K8S (well, K3S) setup that costs about 20€/mo on Hetzner.

You don't really learn about sysadmin through it, or even about docker that much, but you get an idea of how you might easily run a few different things on a server while only needing to know YAML, and not some custom DSL like chef or puppet.


> only needing to know YAML, and not some custom DSL like chef or puppet.

YAML may be a known syntax, but using it still requires domain-specific knowledge, and it is still a domain-specific language expressing domain-specific concepts: what the expected keys and values are allowed to be, and how they are interpreted.


YAML isn’t the DSL, it’s just the language used to express declarative config because the tooling is ubiquitous and it’s rare that anyone uses it as anything more than a nicer version of JSON.

For Kubernetes, it’s CRDs that are written in YAML and they conform to a specification.


I did something similar between jobs—built a k8s "cluster" on my home Linux box using kops+qemu. It didn't make me an experienced admin, but it was really enlightening and fun! Projects like these are a great way to learn.


Ansible?


I don’t get why people need Kubernetes integrations. Kind works just fine. You run it from the terminal and it starts a “cluster” as one or more containers. You can define port bindings and volume mounts via the yaml config. Job done.
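For reference, the port bindings and mounts mentioned above are a few lines of kind config (the ports and paths here are made up):

```yaml
# kind-config.yaml; create the cluster with:
#   kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # NodePort inside the cluster
        hostPort: 8080         # reachable on localhost:8080
    extraMounts:
      - hostPath: ./data
        containerPath: /data
```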

Also, nice work on Container Desktop!


Poorly documented is one possibility. Also if you find an issue with anything that's not "testing k8s" the devs will tell you you're not supposed to use it for that.


> It explains all it does behind the scenes if you enable the developer console. It can help one learn so at a certain moment one understands and automates with scripts and specs.

An excellent way to learn indeed! Good luck with your project.


Well said. I agree completely.


[flagged]


Like everything in tech, it's all about tradeoffs and understanding how you want to scale your business. Of the three startups I've been at, 2 adopted k8s early on and 1 didn't. Of the two that adopted k8s, at one I would say k8s was our key differentiator in terms of our GTM motion and how our platform powered the business. $MM customer needs a setup running as close to their current infra's region for just about any reason? Yeah, sure, we can spin that up in a week. This was often the key bit that allowed us to take customers from our key competitors before the competitor even knew we were in play. The 2nd one..... giant waste of money that gave me a decent pay check.

The third startup that opted _not_ to adopt k8s is stuck at $100M in revenue and can't land customers fast enough to offset churn. This is entirely because the COO has held the mentality "k8s bad, amirite?" and stuck with a patchwork of ansible scripts to manage configuring VM farms that ran our stack. Years of tech debt piled up and every new $MM customer coming in that needs to run in a specific region for $reasons, takes 6 months to setup and cost so much that we'd lose money on the deal. I genuinely believe this startup would be closer to $500M in revenue in the years since I've left had they invested in migrating from containers running in VM's to k8s. But instead they had to lay off 30% of their staff and get another round of funding, and are stagnating.


NB, did the first company manage their own k8s clusters, or used cloud provider's controllers?


You don’t need k8s to scale. Can you make other poor choices? Sure.


I have found that people who are adamant haters of k8s, usually truly do not understand k8s or the issues it solves.

It's not for everyone - but having a knee-jerk allergic reaction to anything k8s is silly.

That said - k8s isn't just about scaling, or for "web scale" companies. If you are a person that believes that, it means you are the type of person I am talking about.


I don’t hate k8s, I hate that everyone blindly adopts it without understanding the complexity it adds. I understand it well enough to know YAGNI for most applications. Get over yourself, just because you disagree with me doesn’t mean I’m stupid.


It seems you don't really understand it at all. That's the problem.


Your basically-absolutist stance is about as unenlightened as the one that you’re arguing against.


Sorry for being pedantic, but you don't learn much by looking at the inside of a radio, because it's mostly electronic components except for the knobs, antenna, and dial. Without understanding how the electronics work, you're just looking at parts. Mechanical parts, like a bicycle, are much easier to reason about. Not knowing your background: could you build a radio if given a box of parts? I certainly can't.


I don’t think you’re being pedantic. You’re just making a weird assumption that the radio itself is the only resource. I learned a ton from this as a kid. And I learned from Radio Shack. You stare at it, you go research, you try to fix it, you fail. Talk to someone who knows stuff. Repeat until it works or you work on a new one.

It’s really no different than how I taught myself to fix a chain or replace a spoke. Or know to use WD-40 to clean, but then apply an oil to keep stuff lubricated and protected.

With the internet, it’s a lot easier. I can look up spec sheets just googling component markings and see the sample circuits.

I’ve stared at the Linux kernel a ton. I messed with some stuff. I couldn’t write a kernel myself, but I program better from doing it and I can troubleshoot things easier knowing the components and topology.

Off the top of my head, I can fumble around and make a crappy amplifier from parts in my closet, or write a crappy FAT-like file system. I’d probably struggle a bit with a nice new bike. I think gear shifters and stuff are a lot fancier than an old 10 speed.


Maybe he's talking about a crystal radio? Those are relatively trivial to put together.


Looks cool, but how is the Kubernetes support? One of the major reasons we use Docker Desktop at work is to host a local Kubernetes cluster with services deployed there. We also support Rancher Desktop since it uses k3s, and k3s is arguably a nicer Kubernetes distribution than the one set up by Docker Desktop.

With that said, I have recently tried OrbStack, and it is able to start up near instantly, while Kubernetes spends at most 2 seconds to start up. The UI is minimal, but it offers just enough to inspect containers, pods, services, logs, etc. It also is very lightweight on memory usage and battery. I personally cannot return to either Docker or Rancher Desktop after having tried OrbStack.

OrbStack also allows using Kubernetes service domains directly on the host. So no need to use kubectl port-forward, and applications running on the host can use identical configuration to what's inside the Kubernetes cluster.

The battery savings, dynamic memory usage, fast startup time, and QOL of OrbStack is pretty much my standard for a Docker Desktop alternative. I am not sure if container-desktop satisfies all of these requirements. (Rancher Desktop certainly doesn't)


+1 for OrbStack, it’s one of the few software subscriptions I pay for, and is worth every penny. Leagues ahead of Docker Desktop.


I demoed Orbstack to my whole department of 100+ engineers, now we've canceled our Docker Desktop account and switched everyone over. Zero complaints.



I'm torn between https://k0sproject.io and https://k3s.io to use in CI and production.

Any suggestions or personal experience?


I'm a fan of k3s, mostly because of Rancher Desktop, but there are more useful features, like a full k3s distribution within a single Docker container. It includes some nice QoL features, like pre-loading images from a mounted folder. Great for CI.


k0s is especially easy to deploy thanks to k0sctl, whether it's single node clusters, or multi node clusters. I haven't looked back ever since I started using it.


I love kind! Used it a lot when I was writing my thesis on Kubernetes schedulers.


Curious to see your thesis!


It's not much, just a simple bachelor thesis https://repositorio.ufsc.br/bitstream/handle/123456789/24495....

I mostly wanted to provide a software/hardware playground for my advisors who were working on their own thesis about algorithms for energy-aware IoT edge deployments.

The TLDR is that you can write algorithms to minimize various parameters within a Kubernetes cluster, like energy consumption.


Literally or figuratively?


What about minikube?


Minikube is more for dev environments than prod. So k0s over it anytime. For dev envs, I adopted KinD, I can even run it in CI for tests.


I've been using Rancher Desktop as an alternative to Docker Desktop, https://rancherdesktop.io/ on macOS and Windows, it's pretty solid.

It has some kinks to work out but I got it working with IDEs too (e.g. the Intellij IDEA Docker Compose integration to work with it).

What I also like is that existing scripts etc. that use the docker-compose CLI work with Rancher Desktop too, as it uses nerdctl https://github.com/containerd/nerdctl


Rancher Desktop is great, because kubernetes just works. Not only that, you can "docker build" an image, and then immediately spin it up as a kubernetes pod, without spending ten minutes googling the correct commands to correctly "load" the image.


Yup, +1 for Rancher Desktop. Works as smoothly as Docker Desktop on macOS.


Been using Rancher Desktop for 2 years, can definitely recommend this as an alternative to Docker Desktop.


We just completed the switch to Rancher where I work. 1200-ish engineers, mostly on Macs. So far it's worked out pretty well... fewer hiccups than I expected.


Does it use the same "containers are really just running in a Linux VM" approach as Docker Desktop on macOS?


unless you run osx on a Linux kernel, it will always be so.

not a personal attack on you, but it blows my mind how clueless the current generation of developers become after the docker phase.


personal attack or not, you could have just left that last bit off and had a good comment.

There's always been a mythos of a true developer. Here's a rant from 1983 about how real programmers don't use Pascal. https://www.pbm.com/~lindahl/real.programmers.html

Kids these days...


I don't understand this comment on any level.

Containers will only ever be on a linux kernel or VM? Never natively on ANY other OS? Only Linux containers exist?

Developers were more clueful about containers before Docker made them wildly popular?


“Only Linux containers exist?“

In practice, yes.


Windows containers absolutely exist in practice.


Yeah, but how often are they needed?


My last job we ran very significant public workloads on windows containers. I don’t know the number of requests but it’s a multi million user application all around the world.


Interesting; I may be biased because I've been involved in helping teams containerize as part of a cloud migration, and in only one or two cases has there been a real 'need': basically for running a Windows service that was eventually retired in favour of a lambda triggered by consuming a message from a queue.


We were waaaaay too big to fit in lambda layers. Our containers were 8GB when I left, and that was using all sorts of tricks on the host infra to share data between running containers.

The root of the problem was we had third party tools which were windows only.


> unless you run osx on a Linux kernel, it will always be so

Linux is not the only OS that has container like things. FreeBSD had jails years earlier, Solaris had something else which I don't remember any more, and for all I know macOS may have their own native equivalent as well.

Bear in mind that Apple introduced an official hypervisor framework a few releases ago, so they could be doing something similar for containers. It wouldn't be a bad idea. :)


I really like the whole Rancher ecosystem. Setting up a cluster with rancher is such a pleasant experience.


Currently it is the best alternative I have used when it comes to matching the Docker Desktop experience on Windows.


I would also encourage people to look at Podman Desktop, which has pretty good support from Red Hat.

https://podman-desktop.io/


support from red hat is not a good thing :nervouslaughteremoji


If you’re on macOS, then Orbstack is a nice alternative to Docker Desktop

(I’m not affiliated with Orbstack)


I would love to use it but I loathe subscriptions, especially for something I’d need work to pay for. I would happily pay a one-time $50-100 and get a perpetual license so I don’t have to deal with the headache…


IMO if Docker is important to you then Orbstack is worth it.

The debug shell feature alone makes it better than any alternative, and hopefully that subscription money is put towards more unique features.

https://docs.orbstack.dev/features/debug


if i understood that page, debug shell is... "exec" with a nicer .bash_profile and an injected text editor binary???


It's installing additional packages which may not have been included in your base image.

> Debug Shell works by injecting a debugging environment using:
> NixOS for a large package collection, and flexibility with filesystem paths

https://orbstack.dev/blog/debug-shell?utm_source=relnotes


It's an alternative to https://docs.docker.com/reference/cli/docker/debug/, which is also a paid feature.

Debugging slim or distroless images is quite the pain, so a tool like this is worth it if you're frequently working on such images.


Orbstack is wicked good. I love it. I compile to 4 platforms with it (Ubuntu/Mac x x86_64/arm) and it's the fastest emu/docker thing.


Of course Orbstack is fast, it uses LXD, not actual VMs. In fact, Orbstack on Mac is what made me switch to LXD (Incus) on Linux to replace Docker and virt-manager.


Wrong, Orbstack does use VMs.

https://docs.orbstack.dev/architecture

> OrbStack uses a lightweight Linux virtual machine with a shared kernel to minimize overhead and save resources, similar to WSL 2 (Windows Subsystem for Linux).


No. It uses a VM to virtualize a Linux kernel running LXD containers. Those are not virtual machines.

https://github.com/orbstack/orbstack/issues/461#issuecomment...


The VM you just referred to is a virtual machine, that’s what VM stands for.

I think you forgot how this thread got started:

> If you’re on macOS, then Orbstack is a nice alternative to Docker Desktop

We’re talking about running OCI (“Docker compatible”) images. The page you just linked to makes it apparent that you are talking about something orthogonal: OrbStack’s “machines” feature (https://docs.orbstack.dev/machines/).

The original topic is that OrbStack’s support for Docker containers is fast (implied: faster than Docker for Desktop), which cannot be explained by the lack of a VM, as both use a Linux VM to run one or more Docker containers.


lxd supports containers and virtual machines


colima is also good https://www.swyx.io/running-docker-without-docker-desktop

also no affiliation and have not tried orbstack


Colima offers the best experience as a Docker alternative. Lima offers the equivalent of WSL, where both Docker and Podman are supported. I like Lima a lot as I deal with both, but Colima rocks for simplicity. I think Colima + Container Desktop are a perfect replacement on Mac for traditional Docker Desktop users.


Colima has been great for supporting x86 images on Apple Silicon, like OracleDB 19, instead of building arm64 images.

The flexibility of container runtimes and host architecture (via QEMU) has proven useful.


Yeah, I use this to support an extremely old C++ project on x86_64 docker images, and it's tolerable if not speedy.


Switched to it, and paid for the license. I agree with others about not wanting to get subscriptioned to death, but I feel like it's worth $8/month.

I've also used Colima, and if Orbstack wasn't an option, I'd be happy to keep using it.


It's nice, but only for personal use.

Be aware that you need a license if you use it at work.


As is true with a lot of developer tooling. Including Docker Desktop itself.


Another enthusiastic +1 for OrbStack. It's fantastic.


GPU support would be a real benefit, but for anything not needing that, Orbstack's become my strong preference.


Is there anything you can actually _do_ with the Apple GPUs outside of macOS? I know the Asahi Linux person was working on a driver for it, but is it in a useful state?


Yes. In fact it's accelerated and supports OpenGL 4.6, while macOS tops out at OpenGL 4.1, and really mostly only supports Metal nowadays. With Asahi you can use OpenGL and Vulkan.

https://arstechnica.com/gadgets/2024/02/asahi-linux-projects...


Oh neat! Thanks for the tip!


I'm currently using colima, and none of the other alternatives that I have found support forwarding UDP ports, which I use a lot, so that's a bummer!

Thankfully, lima has landed a new port forwarder with UDP support! [0] I'm hoping to be able to use it soon once it makes it into a release.

[0]: https://github.com/lima-vm/lima/commit/13e9cbcabc6a0a05ec389...


I've really enjoyed using Orbstack: https://orbstack.dev/

it also has support for Linux VMs and kubernetes (although i haven't tried that yet)


what does this offer that podman desktop does not?

https://podman-desktop.io/


Last I checked podman's support of docker-compose.yml was very limited to say the least. Has it changed?


There are two approaches to using compose w/ podman:

Replace docker-compose with podman-compose -- somewhat limited capabilities, but works in a lot of cases.

Use docker-compose against podman via podman's system service, which provides a docker-compatible API endpoint (https://docs.podman.io/en/v5.2.1/markdown/podman-system-serv...). This basically has full docker-compose capabilities, but you do need to run the socket service as a specific user account, which ends up running all the pods.
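A sketch of that second approach on a systemd host (rootless; the socket path varies by system, and the privileged steps are shown as comments):

```shell
# 1. expose podman's docker-compatible API socket (rootless):
#      systemctl --user enable --now podman.socket
# 2. point docker-compose v2 (or the docker CLI) at that socket:
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
# 3. unmodified compose files should now work:
#      docker compose up -d
```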


I found the most stable to be a third option: 'podman compose' with the docker-compose v2 CLI as the "backend", connecting to the actual podman socket. This happens if you run 'podman compose' with 'docker-compose' in PATH and DOCKER_HOST set to your podman socket, since 'podman compose' just shims through to whichever command it finds available.

Both podman-compose (the Python project) and docker-compose-v1 have significant gaps in the compose spec.


What parts did you find lacking? I haven't had any issues using podman-compose to launch stuff using unmodified docker-compose.yml files.


Yeah, I'm using it and it's nearly everything I need.


What does podman desktop offer that WSL does not (at least for those of us on Windows)?


Ease of use. Even if it's just used as a GUI for WSL, that doesn't mean it doesn't add value.


Orthogonal rant: Podman allows host mounts during image build, whereas docker does not. Ran into a big headache where a monorepo using podman leveraged this to create container images from source and the equivalent docker implementation had to copy the monorepo into the docker build context every time.

We needed to use Docker for M1 support (probably should've tried Colima, etc).



I may be wrong, but I think BuildKit gives Docker that functionality.
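If so, it's the BuildKit `RUN --mount` feature. A sketch (the base image and build command are placeholders):

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.22 AS build
# bind-mount the build context instead of COPYing it in;
# the source is visible during this RUN only and isn't baked into a layer
RUN --mount=type=bind,target=/src \
    cd /src && go build -o /bin/app .
```

Note this mounts the build context, not arbitrary host paths, so it's narrower than podman's build-time volume mounts.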


I'd bind-mount the tree into the context. (I assume Docker won't follow simple symlinks.)


While I'm basically fine with Colima on Mac, this seems like a nice alternative to Docker Desktop.


After some initial pains with colima, I tend to agree. Mostly, just needing to specify some VZ args[0] so I could run x86_64 docker images on my M-series.

Is there something in these desktop UIs that colima is completely missing?

[0] `colima start --vm-type=vz --vz-rosetta`


"some initial pains" = the Colima VM running out of resources running kind, so I had to raise the CPU and RAM, and then raise the fds in the VM itself to get it to work. but now it works!


Could this be the answer I needed to run an SQL Server image that refused to run on my M3 MBP? I was about to, sadly, try Docker Desktop, because of that.


That is exactly why I needed it, too! :D

Be sure to increase RAM over the default 2GB as well, that SQL Server container is hungry and will crash without enough resources dedicated to it.


Honest question: what’s wrong with Docker Desktop? Looking at all the alternatives suggested, it’s not clear to me why any other tools are better. I’m not using k8s locally, just docker compose. To connect to our remote k8s cluster, I use the IntelliJ k8s extension (I just need to do some basic dev tasks; I’m not administrating the cluster).


One big difference is the licensing. Docker Engine itself is apache licensed (and hence free to use at a company of any scale), but Docker Desktop requires a paid plan if your company has more than 250 employees or more than $10 million in annual revenue [0].

[0]: https://docs.docker.com/engine/#licensing


Which like, seems entirely fair, but when there are suitable enough replacements that cost $0, why pay for it? Sure there are big picture reasons, but companies often don't think that long-term.


Priority tech support when everything blows up is usually the number one reason.


I have a hard time thinking of cases where you need support or priority support for developer tooling like Docker. It’s not like Docker Desktop is running in production.


“The update failed on 200 desktops.”

“Performance is crap when running BlahBlah Management Suite.”

And so on. You don’t necessarily call support when one dev has an issue, you call when they all do.


Docker Desktop requires a paid licence for companies with over 250 employees. While that's totally fair, it can add red tape if you want to use it in a project.

I'm not completely sure about licensing for Container Desktop but the footer suggests MIT license.


For me, it was consuming so much memory. Switching to OrbStack helped fix that


FreeBSD jails? :p


it's not free


Rancher Desktop is fine. I did the migration within 30 minutes.


Nice!

Unfortunately I got this error upon opening the Mac app:

  Uncaught Exception:
  TypeError: Cannot read properties of null (reading 'setImage')
  at NativeTheme.<anonymous> (file:///Applications/Container%20Desktop.app/Contents/Resources/app.asar/build/main-5.2.3.mjs:22:537771)
  at NativeTheme.emit (node:events:519:28)
Nothing seems to be wrong, but that was surprising.

Also, it's not obvious from the site that Container Desktop doesn't bundle Podman along with it, unlike Docker Desktop. The analogy with the latter and the subtitle "Podman Desktop Companion" on the site made me think it might include a bundled Podman installation.

That said I do like the idea, and I'm definitely looking forward to trying it. For context, I'm not a Kubernetes user, mostly just Compose and plain `docker run` for ad-hoc things.


Thanks, just released 5.2.4 to address the Flatpak issue you mention above. I am sorry for that; it is extremely hard to support so many formats on Linux.

I am reading up as much as I can to be able to publish to Flathub, but there is a lot to learn to do it properly.


I understand that packaging cross-platform apps is hard. Just note that I was talking about the macOS package, not Flatpak.


It was the same issue, solved for mac also.


Unfortunately, FYI, the ARM64 app is now rejected by macOS Sonoma on my M1, saying it's "damaged" and can't be opened. However, the x86 version seems to work, presumably under Rosetta.


Unfortunately this is because I can't afford to digitally sign Mac apps. There is a trick to make it work in USAGE.md, but it is up to you. 200 euros per year for a pro bono project is ridiculous. You can also easily build your own dmg file from the sources, as it is an open source project in the end.


Does it support VSCode Devcontainers? That's the only reason I haven't been able to switch to an alternative.


Is this supported by DD?


Yes. It is the most supported option.


colima + docker CLI goes a long way.

  $ colima start
  $ docker context use colima

And that's it.

And Kubernetes? No thank you, life is already hard as it is.


Every time I tried Colima it stopped working after a few days. Not just for me either. Back to Docker Desktop which never gave me a single issue in many years.


My team switched our medium sized org over to Rancher Desktop with no major issues after about 10 months. We don't need kubernetes though.


Tangentially related side question: how do you make NFS mounts work in podman without running it as root (which makes running podman over docker kind of pointless)? Or what do you use, other than NFS or Samba, to share a base filesystem from somewhere else on the network into a docker container?


Maybe these blog posts¹² from Red Hat will help. I haven't tried yet, just found these earlier.

¹: https://www.redhat.com/sysadmin/rootless-podman-nfs

²: https://www.redhat.com/sysadmin/nfs-rootless-podman


I don't think that's quite it. I have NFS mounts defined in my compose files, i.e. in the container, /media is a volume docker creates from an NFS mount defined in the docker compose. That didn't work without podman having root, last I checked a few years ago.
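For context, the pattern being described looks roughly like the sketch below (the server address and export path are placeholders, not from the thread). As I understand it, the NFS mount itself is the part that needs elevated privileges, which is why it breaks under rootless podman:

```yaml
# Hypothetical compose snippet: a named volume backed by an NFS export.
# 192.168.1.10 and /export/media are illustrative placeholders.
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,nolock"
      device: ":/export/media"

services:
  app:
    image: docker.io/library/alpine
    volumes:
      - media:/media
```

The workaround the Red Hat posts describe is mounting the NFS share on the host (with root) and bind-mounting the path into the rootless container instead.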


It's not fully baked. Sigh

- Buggy as heck with bad error messages.

- Bad UX with inadequate help.

- Requires extra tweaking and installing more stuff to get going, which defeats its entire purpose.

- Confusing.

- Can't browse or choose tags of images.

It's not a viable alternative yet, but maybe it will improve sometime in the future.


Creating some tickets would help improve what you find problematic. I understand your frustration; you would want it to just work, but life isn't always how we want it to be. It is a free and open source project, no hidden goals, driven only by passion and love for tech.


Not affiliated with the project, but thanks for the feedback! Now they have some more items in their TODO list, which will make their product better.


I saw this, I think posted here the other day, looked interesting. https://github.com/ajayd-san/gomanagedocker

A TUI alternative.


I like TUIs a lot too; they work great for remote connections and just feel good. I even mention one in the Readme of the repo. Maybe someone could create an "awesome TUI tools" type of page for container management UIs.


Is it OK to run the Windows version on a normal desktop (not in a VM)? Does it uninstall cleanly? Thx


Yes, get it from the Microsoft Store; it is digitally signed, auto-updates, and offers you the best Windows experience.


Is Ubuntu 24.04 supported? (Docker Desktop doesn't support 24.04 currently)


It is what I am using, so by default, yes!


Why does Docker feel like it was designed by people with no Unix background?


I don't know if your comment was intended to imply that Docker goes against the Unix philosophy in some way (a debatable point, but not really one I share), or if you mean that the tools don't follow a lot of common Unix conventions.

When Docker was only a few years old, I did keep running into lots of small things which implied that the people developing docker in fact did NOT have a Unix (or even Linux) background. Things like source code files having the wrong type of newlines (or a mix of types), and forgetting to add a newline to the last line in a file. (A correct Unix text file has a newline at the end of _every_ line, even the last one.) There were of course more giveaways than this, I just remember the newline stuff irritating me the most.


Why is newline at the end relevant?

I remember not having a newline breaks some tools... but why? It can't be because of unix philosophy!?


Unix's first job in life was as a documentation processing system. It was made to be very good at dealing with text. All of the tools which process text expect every line to end in a newline. The last line is not exempt from this. All classic Unix text editors will automatically append a trailing newline to any text file you create with them. Some modern tools may be _tolerant_ of omitted trailing newlines, but you shouldn't rely on that. A text file should always have a newline as its last character. Otherwise, it's not _really_ a text file.

POSIX defines it more succinctly than I do: A text file contains one or more "lines" and every "line" is terminated by a "newline."

https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1...

https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1...


Some old tools had bugs where they'd read a line (up to the newline) and then process it, so if the last line didn't end with a newline, they'd never process it. So a manual workaround for bugs became the convention.


Back in ~2002 this was the case with cron. Found out the hard way when all the backup tapes we desperately needed were completely empty.


No, that is not a bug and there was never such a thing as "manually" adding a trailing newline. All tools add newlines automatically where they should be.

Unix has ALWAYS defined a line of text as being terminated by a newline. The last line in the file is not an exception.


so that you can cat(1) multiple files at once, and their bookends don't get glued.
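A quick illustration of that failure mode (file names are just examples):

```shell
# A file whose last line lacks a trailing newline glues onto whatever
# is concatenated after it; a proper text file does not.
printf 'alpha\nbeta'  > a.txt   # last line NOT newline-terminated
printf 'gamma\n'      > b.txt   # proper text file
cat a.txt b.txt
# prints:
# alpha
# betagamma
```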


+1 for OrbStack.


For non-GUI use, lazydocker is perfect


how is this different from the usual podman client ?


Author: It isn't, if you are a power user; the podman client should provide all you need. It also supports docker. The difference is that this is a GUI, like Docker Desktop, but unlike it, batteries are not included. I don't dislike DD, but to each their own.


Orbstack


Is this another crap Electron app?


Author of the project here, yes! Electron done well is amazing; try to write a cross-platform GUI app in a RAD way these days with other tech. I am experienced in Flutter too and would still stick to the current stack. Maybe one day, when the text engine is better in Flutter, I would port it. You can't be that cross-platform, cross-OS, cross-arch with anything better than Electron these days. Electron done well shouldn't create any problems.


Personally, I just build all my software so it includes its dependencies, and then you don't need Docker or any complex image manager. Don't rely on a bunch of crap being installed in the system path! Much, much simpler this way, imho.


Personally, I just ship every user a small Chromebook that runs my software so I can guarantee the environment is the same every time.

(I get your point, but Docker has made distribution way easier in a lot of ways, and you accept some tradeoffs for that convenience)


You can have convenience and reliability with fewer tradeoffs!


That's basically what a docker image does in a more formalized, isolated, and repeatable fashion.


True. But Docker comes with a lot of complexity. And it comes with a meaningful performance hit on macOS and Windows. And it doesn't work at all on Android/iOS.

It's so sad that running software on Linux is so wildly complicated and unreliable that things like Docker had to be invented. :(


For most uses, WSL2 on Windows is pretty close to a bare metal installation

https://www.phoronix.com/review/windows11-wsl2-zen4

WSL2 runs under the Windows hypervisor as a VM, but so does Windows itself since Windows 11, so there shouldn't be many performance issues running stuff on Windows vs. WSL2. The major bottleneck is if you need to move files between the Windows VM and the Linux VM.


My interest in running Linux binaries on Windows is zero. I run native windows binaries. Why would I want to run via WSL2 when I can do it natively?

Why people constantly insist on adding unnecessary layers of abstraction is beyond me.


Nobody is forcing you to use anything; I just wanted to underline that the performance hit you mentioned is not really there. As we are in a public forum, there is value in keeping things factual.

As for why to do it: if you develop server apps, Linux is the standard (as an example, Redis does not have a native Windows version). I say this as a developer of Windows-based microservices on the cloud; my company is actively looking to migrate to Linux due to the lack of tooling in the Windows space (and also the license cost of Windows Server). Like it or not, that is the way it goes. If you don't need it, great for you, but for the rest of us those layers are a life saver.


In most scenarios it is definitely good enough, but even in just my own personal experience over a decade, I need to asterisk all three of your listed benefits.


I think that's the right way to do it from the software distributor's side, but most software distributors don't do it like you.

So, from a consumer's point of view, if you want to use their software, then docker is the lesser evil compared to all the others. Notably, it's much better than binaries with dynamic libraries that don't come included in the bundle itself.


As a user, I'd rather use a container than figure out how to run a binary. The onboarding process is typically so much easier, and most enterprise folks already have container infrastructure in place. For big customers, getting a Kubernetes namespace can involve significantly less friction than a VM these days.


> then figure out how to run a binary

It should never be more complicated than "run the binary". Running programs shouldn't require infrastructure or VMs or Docker images. Deploying a program should be, and can be, as simple as sharing a zip file, extracting, and running.

It's not that hard!


> better than binaries with dynamic libraries that don't come included in the bundle itself.

Binaries should always include the dynamic libraries they require. Docker is one way to include them. But you can also just include them the vanilla way. Works great! Very easy and reliable.


On more projects and teams than you'd usually expect, this is more than fine.


I'm sorry, but this doesn't work. Over the last 10 years or so I was fucked over by countless pieces of "software that includes all its dependencies" that stopped working when I upgraded some other totally irrelevant software, because "well duh, it obviously uses the system libc" or whatever. Example: critical .AppImage binaries stopping working after random system upgrades. Nothing running on my computer is ever fully isolated, not even Docker. So any isolation guarantee I get is a guarantee I'll take. You claim today that your software is isolated, but I don't know if 3 years down the road I'll upgrade my freaking text editor and your program will stop working because that one library from 1987 has to be exactly version A.X but my text editor upgraded it to A.Y. Thanks but no thanks.


> your program will stop working because that one library from 1987 has to be exactly version A.X but my text editor upgraded it to A.Y.

Perhaps you misunderstand. This issue is fully solved by including dependencies and not relying on anything in the system path. Programs should not touch the system path. If a program requires library A.Y then it should include and use A.Y. But it should not touch the system path and thus should not impact any other program. Nor will it be impacted by other programs wanting A.Z.


It's often literally not possible to ship everything. You wouldn't want to spin up a second X11 (or Wayland) server, for example, because you can't usefully have two of them talking to the same video card at the same time.


The number of things that can't be shipped is extremely small. And I don't think that Docker is a silver bullet for Wayland vs X11 issues? Although I'm not sure about the fine details as I don't have a ton of experience there. Shouldn't you be using an abstraction that can automatically support which ever is available?

I tend to ship code that needs to run on Linux + macOS + Windows + Android. So Docker is a total non-option. And it's totally fine! Very easy in fact.


It's the same thing everywhere — there are some dependencies you can't ship. On Linux, you can't ship the window server (because you need to share it with all of the other apps also running). On mac, you can't ship Core Foundation. On Windows, kernel32.dll etc. I assume Android is similar — I haven't tried figuring out what a purely static app on Android would be, since I think the bootstrap is Dalvik…

It's literally impossible to _not_ depend on the system path.


Let me rephrase. If a dependency can be bundled then it should be bundled.

The "Linux Way" is to depend on a bunch of random garbage pooped by lord knows bullshit script into one of several global search paths. This is bad, stupid, and wrong. Programs should include as many of their dependencies as is possible.

The number of dependencies that a program can not deploy and must assume are provided by the system are extremely minimal and special case. It's a short and static list.

In general no script or program should add libraries into the global search paths. On Windows user programs do not add random crap to System32. On Linux the existence of /usr/lib is an abomination that should not exist.

Is that better? I'm fairly certain you understand what I'm trying to say.


I know that the Linux way is not perfect, but I don't see how companies can't do better than the distro maintainers. Most repository packages are driven by some kind of build scripts; I don't expect it would be that hard to create one for your software for the most popular distros. Anyone using an obscure distro is familiar enough with Linux to set up container or chroot environments. I like the fact that my environment is a complete one, not silos where the developer is more than happy to let the software linger. At least macOS forces developers to upgrade; Microsoft's backward-compatibility promise keeps so much cruft around in the system.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
