Rancher Desktop, a Docker Desktop Replacement (rancherdesktop.io)
646 points by emersonrsantos 4 days ago | 218 comments





Dropping by to express healthy interest in this project.

Rancher have a pretty good track record so far:

  - the Rancher platform itself (https://rancher.com/) is a really powerful and user-friendly way to manage container clusters of all sorts, giving you a self-hosted dashboard for both your cloud and on-prem clusters, for a variety of Kubernetes distributions; you can even manage the available drivers and create deployments graphically
  - the K3s distribution (https://k3s.io/) is in my eyes one of the best ways to run Kubernetes yourself, both in development environments and in production. I benchmarked K3s alongside Docker Swarm as part of my Master's thesis, and it was surprising to see that its overhead was actually very close to that of Docker Swarm (a more lightweight orchestrator that's included with Docker and uses the Docker Compose format), only exceeding it by a few hundred MB with similar deployments active, which makes K3s passable for small nodes
As for this particular project, it's very positive to see that it supports all of the big OSes, though the 0.6.0 version tag would still advise caution for a while before treating it as a full Docker Desktop replacement.

Admittedly, it's also nice to see that Docker and the ecosystem around it is still supported and is alive and kicking, since for many projects out there it's a perfectly serviceable stack with tools that a lot of people will be familiar with, as opposed to having to migrate to Podman, which is still stabilizing and isn't quite there yet. Now, that may be a controversial take, and Docker Inc also have their fair share of challenges, about which there was a very nice writeup here: https://www.infoworld.com/article/3632142/how-docker-broke-i...


They've also got Longhorn [0], a distributed container-attached storage solution that's very simple to understand and easy to deploy. Performance is another matter, but that's true of all the general networked storage solutions (Ceph included).

Rancher has earned a well-deserved good impression in my mind, though early on I avoided them somewhat.

[0]: https://github.com/longhorn/longhorn


Wasn't it the code name for Vista?

Yes, but a different Longhorn. The early 00s Windows codenames all had a PNW skiing theme, and that Longhorn is a well-known bar at the base of Whistler mountain.

More than performance, I am worried about security. They don't seem to have considered it at all when building it: https://github.com/longhorn/longhorn/issues/1805

Probably not a good recommendation until this hole is plugged.


Also there's k3d (k3s in Docker). It lets you run all kinds of k3s clusters on the desktop with just a simple command. Great way to test your k3s deployments.
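For a rough sense of how little ceremony that involves, here's a minimal sketch (the cluster name and node counts are arbitrary):

  # create a throwaway cluster with one server and two agent nodes
  k3d cluster create demo --agents 2

  # k3d updates your kubeconfig and switches context, so kubectl works right away
  kubectl get nodes

  # tear everything down when you're done
  k3d cluster delete demo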

Looking over k3s, it really pushes the IoT/embedded aspect. Any comments on what makes it different from a server-oriented k8s solution? Why wouldn't I run it on servers?

I guess for the most part it's just marketing, since it's a good fit for IoT too; you can definitely run it on servers as well, as I am doing!

What makes it different from other K8s distros is listed here: https://rancher.com/docs/k3s/latest/en/

And here's an architecture overview: https://rancher.com/docs/k3s/latest/en/architecture/

At a glance:

  - by default packaged as a single binary, so it's extremely easy to install (see the sketch after this list); it even uses SQLite for storage, though that can be swapped out for other datastores as well
  - includes the functionality that you'd expect, like local storage, load balancing and ingress, while getting rid of some of the unnecessary plugins that you'd get in other distros
  - as a consequence of the above, it has a small runtime footprint, so running K8s clusters on VPSes with 2 to 4 GB of RAM is no longer a pipe dream; it also has far less overhead on the actual nodes that you want to manage (think along the lines of a few hundred MB)
  - also bundles a variety of tools for managing your cluster more easily, so you don't need to install those separately
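To illustrate the single-binary install, here's the documented quick-start path, run on the node itself (it sets k3s up as a systemd service, SQLite-backed by default):

  curl -sfL https://get.k3s.io | sh -

  # the bundled kubectl talks to the new single-node cluster
  sudo k3s kubectl get nodes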

I noticed k3s is snappy and responsive on a $5 DigitalOcean VM, whereas full k8s really bogs down on such a small machine

k3s, by default, uses SQLite for storage instead of etcd. This is one of the ways you can get a performance improvement.

If you are going to run a cluster, you'll likely want your database to be an HA cluster. While k3s can support other databases, like PostgreSQL, it doesn't have handling for clusters in the config.

This is a limitation I see for k3s in clusters, right now.

I would love to see cluster handling in k3s. etcd can be a scaling bottleneck and replacing it with PostgreSQL could help it scale much higher.
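For reference, pointing k3s at an external PostgreSQL datastore is just the documented --datastore-endpoint flag; a minimal sketch (the hostname and credentials are placeholders). Note that for SQL backends this is a single connection string, so any clustering/failover of the database itself has to live behind that one endpoint:

  k3s server \
    --datastore-endpoint="postgres://k3s:password@db.example.internal:5432/k3s"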

Disclaimer: I work on Rancher Desktop, which uses k3s.


As someone using k3s on a single node right now, could you clarify/expand on "it doesn't have handling for clusters in the config"?

Assuming k8s still uses watches on etcd keys, how is that implemented for SQLite? I suppose polling is a possible way if it's low-TPS and desktop-focused.


At my job, we use k3s not for IoT or embedded, but for deployment to back-office "servers" in field offices for deployments of services that don't need HA in that environment.

I'm running it on an Oracle Cloud Free Tier server; it's nice because the 1 GB of RAM they include is less than the requirement for k8s but plenty for k3s.

If you go the ARM route, that'll give you a 4 CPU / 24 GB free tier, which you can slice into multiple instances.

any chance we could look at your thesis when it is done? :)

I finished it about a year ago, but due to reasons outside of my control (administrative decisions), I had to write it in my native language, Latvian, so it's probably a tad useless to a wider audience.

In case anyone desires to mess around with PDFs and machine translation:

  - research praxis: https://files.kronis.dev/s/AJRCs84D7WngLzD
  - development praxis: https://files.kronis.dev/s/Xz6mtbAamoA7Pe8
  - the full text: https://files.kronis.dev/s/ioiW96dpnD5YcLk
Here's a tl;dr summary of what I did during it: essentially, I set out to improve the way applications are run within the infrastructure of the company that currently employs me. To achieve that, I researched ways to utilize systemd services for everything and to improve configuration management with Ansible, then introduced containers into the mix and compared their orchestrators (Docker Swarm and K3s in this case), as well as how to manage them, in this case with Portainer. Then, after finishing that work for the company, I proceeded with some further research of my own: whether it's feasible to develop bespoke server configuration management tools and integrate them with container management technologies, plus further benchmarks on how well the orchestrators actually handle containers running under load, since this doesn't often get tested.

There are probably a few reasons for those choices:

  - at work, it was really disappointing to see environments where servers are started manually
  - similarly, in the current day and age it's not acceptable to keep domain knowledge about how to start services and where the configuration lives to yourself
  - manual configuration management simply increases the risk of human error greatly, especially given turnover
  - in contrast, containers are a surprisingly usable way to introduce "infrastructure as code"
  - that said, their orchestrators still need a lot of work and it's probably a good idea to compare them
  - I was also curious to see whether it would be hard to create my own automation tool like Ansible, but one that executes remote Bash scripts through SSH
  - lastly, I was a participant in the development of https://apturicovid.lv/ which was a Latvian COVID contact tracing solution; I was curious about the choice of Ruby, so I wanted to create my own mock system to see how it'd perform under load in a real world scenario, with tools like k6 https://k6.io/
More importantly, I wanted to see how much load a GPS-based approach (as opposed to Bluetooth) with PostGIS would generate and whether Ruby would still be okay, as described on my blog https://blog.kronis.dev/articles/covid-19-contact-tracing-wi...

And, without further ado, here are my findings:

  - if you're not using Ansible or another alternative for managing the configuration of your environments, you should definitely look into it
  - if you can't or don't want to run containers, at least consider having systemd services with environment config files on your servers
  - Docker Swarm and the lighter distributions of Kubernetes, like K3s, are largely comparable, even though Kubernetes takes a little more in the way of resources
  - tools like Portainer can make managing either or both of them a breeze, since it's one of the few solutions that supports Docker, Docker Swarm and Kubernetes clusters
  - despite that, you'll still probably want to configure your applications (e.g. HTTP thread pool sizes, DB pool sizes) and introduce server monitoring or APM into the mix
  - as for Python, it's pretty good for developing all sorts of solutions, even for remote server management: with Paramiko (for SSH), jsonpickle (serialization), Typer (CLI apps) and other libraries
  - for my tool, I used JSON as an application data format, so one command could pipe its JSON output (e.g. returned tokens from the cluster leader server for follower servers) as input for another; there are few cases where this can work, but when it does, it is pretty nice
  - that said, any such tool that you write will probably just be a novelty and in 90% of the cases you should just look at established solutions out there
  - load testing is actually pretty feasible with tools like k6, but only as long as you just want to test Web APIs - anything more complex than that might be better tested with something like a server farm running instances of Selenium
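To give a feel for the k6 workflow mentioned above, here's a minimal smoke-test sketch (the endpoint is hypothetical):

  cat > smoke.js <<'EOF'
  import http from 'k6/http';
  import { sleep } from 'k6';

  // hit a single endpoint once per iteration
  export default function () {
    http.get('https://api.example.internal/health');
    sleep(1);
  }
  EOF

  # 50 virtual users for 30 seconds
  k6 run --vus 50 --duration 30s smoke.js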
Here are a few Git repos of the projects:

  - A very simple COVID infection tracking system which generates heatmaps to attempt to preserve anonymity: https://git.kronis.dev/rtu1/kvps5_masters_degree_covid_1984
  - The K3s benchmarks for it: https://git.kronis.dev/rtu1/kvps5_masters_degree_covid_1984_load_test
  - The Python configuration management tool: https://git.kronis.dev/rtu1/kvps5_masters_degree_astolfo_cloud_servant
Oh, and here's a test environment for the mock system that shows the generated heatmaps, you'll need to press the "Start" button to preview the data, though you can change the visualization parameters at runtime: https://covid1984.kronis.dev/

Shameless plug: I had the exact same problem with wanting to deploy some apps to a server (at home, in production at work, or on IoT/Raspberry Pis), and I didn't like any of the options (Ansible is too dependent on the machine's state, Kubernetes is too complicated and heavy), so I wrote 200 lines of code and made this, which I love:

https://gitlab.com/stavros/harbormaster

It basically pulls the repos you specify and runs `docker-compose up` on them, but does it in an opinionated way to make it easy for you to administer the machines.


Docker compose doesn’t get the love it deserves. The most recent versions even dropped requirement to specify the schema version, it’s a beautifully compact way now to describe services and their relationships.

I have used docker-compose for my one-man projects for years; it removes any need for me to remember specific deployment steps for a given project. One of the biggest time savers in my entire career.

They are finally bringing the functionality into the core docker CLI with a native "docker compose" (not docker-compose) command - why that took until 2020 I have never understood; I can only assume internal politics. If Compose had been integrated sooner, maybe adoption would have been wider.

I've actually come across your project before and keep meaning to try it for my PiHole/HomeAssistant/WireGuard setup etc, will check it out!


+1 for docker-compose or Swarm as executable (always up to date) infrastructure documentation

It's always nice to see Docker Compose and tools that are built around it, same way as with Swarm!

Here are a few others that occupy a similar space:

  - https://caprover.com/
  - https://dokku.com/
Now, if only functions as a service got a bit more love, then things would be really interesting!

I use Dokku and like it a lot, but the automatic ingress complicated things a bit, and I couldn't easily have more elaborate setups like with Docker Compose. However, if you need a Heroku alternative, I wholeheartedly recommend it.

> Admittedly, it's also nice to see that Docker and the ecosystem around it is still supported and is alive and kicking

When the controlling organization is starting to go down the user-hostile route (e.g. paid update opt-outs), it is more or less sad to see that Docker is alive and kicking. Developers should run away as fast as possible.


I see it as a legitimate way to pay programmers. You still have the core functionality, but the kind of features that the enterprise likes are paid for and support the development of the open source. Why should everyone work for free to enrich the Kleiner Perkins of the world?

Let me say it; I love to see it. There's just something that strikes me as harmfully greedy about the idea of "Docker, a gajillion dollar company." There's just no way that THAT kind of service could be tweaked to return that much to stakeholders without seriously screwing up the experience for people who need it to just work.

Is there something I'm missing on the Docker story? Seems like they built something that everyone uses as an integral part of their workflow, and they're looking to get paid for it, which, I may be out of date here, was the entire ethos of this entire site for quite some time. Did the founder of Docker kick a bunch of puppies or something? Is there some reason I'm missing why we should be angry about being asked to pay for something that everyone uses for everything and derives a lot of value from?

(This isn't to take away from Rancher, by the way - good on 'em, I'm all for competition in developer tools)


> Seems like they built something that everyone uses as an integral part of their workflow, and they're looking to get paid for it, which, I may be out of date here, was the entire ethos of this entire site for quite some time

1. Not every widely-used tool needs to be a VC-powered unicorn startup. I'd be pissed if Linus/the Linux Foundation started to demand a per-core licensing fee. I'd probably convince my organization to switch to a BSD, on principle.

2. No one likes a bait-and-switch, and lately, lots of companies see Free/Open Source Software as a "growth hack technique" rather than an actual philosophy, because they'll otherwise face headwinds with a closed-source product. This is akin to the underwear gnome strategy:

  1. Author Open Source product
  2. Get wide adoption
  3. ???
  4. Profit

The problem with this is that if Docker Inc goes under, you can say goodbye to Docker Hub: https://hub.docker.com/

Sure, there are alternative repositories, and for your own needs you can use anything from Sonatype Nexus, JFrog Artifactory and GitLab Registry to any of the cloud-based ones, but Hub disappearing would be a hundred times worse than the left-pad incident in the npm world.

Thus, whenever Docker Inc releases a new statement about some paid service that may or may not get more money from large corporations, I force myself to be cautiously optimistic, knowing that the community of hackers will pick up the slack and work around those tools on a more personal scale (e.g. Rancher Desktop vs Docker Desktop). That said, it might just be a Stockholm syndrome of sorts, but can you imagine the fallout if Hub disappeared?

Of course, you should never trust any large corporation unless you have the source code that you can build the app from yourself. For example, Caddy v1 (a web server) essentially got abandoned with no support, so the few people still using it might have had to build their own releases and fix the bugs themselves, which was only possible because of source code availability, before eventually migrating to v2 or something else.

Therefore, it makes sense to always treat external dependencies, be they services, libraries or even tools, as though they're hostile - of course, you don't always have the resources to do that in depth, but for example seeing that VS Code is not the only option, because we also have VSCodium (https://vscodium.com/), is encouraging.


Docker Hub going down would be a disaster for sure, but I consider "pull image/library from a 3rd party hub over the internet on every build" to be an anti-pattern (which is considerably worse with npm, compared to Docker). That said, if this is where the value is being provided, perhaps they ought to charge for this service? I guess it's difficult because it's easily commoditized.

> but can you imagine the fallout if Hub disappeared?

I wish that would actually happen - not forever - if it'd go down for a day or 2 with no ETA for a fix, the thousands of failed builds/deploys would force organizations to rethink their processes.

I think Go's approach on libraries is the way forward - effectively having a caching proxy that you control. I know apt (the package manager) also supports a similar caching scheme.
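For Go specifically, that's a one-line setting; a sketch assuming a hypothetical internal proxy URL:

  # route module downloads through a proxy you control,
  # falling back to the origin for anything it doesn't have
  go env -w GOPROXY=https://goproxy.example.internal,direct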


That sort of happened already when docker started rate limiting by incoming IP:

https://www.docker.com/increase-rate-limits

Large orgs started hitting the rate limits since many devs were coming from the same IP. Most places probably put in a proxy that caches to a local registry.


That's what we did, put a proxy in front that caches everything. Now that Docker Desktop requires licensing, we're going down the road of getting everyone under a paid account.

I'm sure Rancher is great for personal desktop use, but there's no reason large companies can't pay for Docker.


Or even small. At work, I advised that we just pay for Docker Desktop. We got it for free for a long time. Our reason for not paying is that we're an Artifactory shop, so their Docker Enterprise offering wasn't really attractive to us. But we're easily getting $5/dev/mo worth of value out of Docker Desktop.

And I don't really see this as an open source bait and switch, either. Parts of Docker are open source but Docker Desktop was merely freeware.

That said, I believe in healthy competition, and so it was quite worrisome to me that Docker Desktop seemed to be the only legitimate game in town when it came to bringing containerization with decent UX and cross-platform compatibility to non-Linux development workstations. So I'm happy to see Rancher Desktop arrive on the scene, and very much hope to see the project gain traction. Even if we stay with Docker, they desperately need some legitimate competition on this front in order to be healthy.


> but can you imagine the fallout if Hub disappeared?

> I wish that would actually happen - not forever - if it'd go down for a day or 2 with no ETA for a fix

Do people not run their own private registry with proxying enabled? If Docker Hub went down at this point, I think my company would be fine for _months_. The only time we need to hit Hub is when our private registry doesn't have the image yet.


It just never seemed worth the effort when we are paying Docker Hub to be our private registry.

The problem is that most of the companies that rely on the Hub aren’t helping it stay afloat.

You are obviously not part of this problem.


You can already cache Docker Hub via the Docker registry container very easily. In fact, given the number of builds, it would be foolish not to do this, to avoid GBs of downloads all the time.
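For anyone who hasn't set it up, a minimal sketch using the registry image's documented pull-through cache mode (the port and container name are arbitrary):

  # run the official registry image as a mirror of Docker Hub
  docker run -d --name hub-mirror -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    registry:2

  # then point the daemon at it in /etc/docker/daemon.json and restart it:
  #   { "registry-mirrors": ["http://localhost:5000"] }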

> Hub disappearing would be a hundred times worse than the left-pad incident in the npm world

This is really overdramatic. If Docker Inc. went out of business and Docker Hub was shutdown then the void would be filled very quickly. Many cloud providers would step in with new registries. Also, swapping in a new registry for your base images is really easy. Not to mention the tons of lead time you’d get before docker hub goes down to swap them. Maybe they’d even fix https://github.com/moby/moby/issues/33069 on their way out, so we can just swap out the default registry in the config and be done with it.


If only we could have a truly distributed system for storing content-addressed blobs ... perhaps using IPFS for Docker images. That way you could swap the hosting provider without having to update the image references.

I'd love for others more knowledgeable to chime in, since this feels close to the logical end state for non-user-facing distribution. At a protocol level, content basically becomes a combination of a hash/digest and one or more canonical sources/hubs. This allows any intermediaries to cache or serve the content to reduce bandwidth/increase locality, and could have many different implementations for different environments to take advantage of local networks as well as public networks, in a similar fashion to recursive DNS resolvers. In this fashion you could transparently cache at the host level as well as at e.g. your local cloud provider to reduce latency/bandwidth.

Sounds a lot like BitTorrent.

I'm not super well versed, but I thought BitTorrent's main contribution was essentially the chunking and distributed hash table. There is perhaps a good analog in the different layers of a Docker image.

Isn't this what magnet links for torrent files have provided for years? Maybe even a decade? https://en.wikipedia.org/wiki/Magnet_URI_scheme

> Also, swapping in a new registry for your base images is really easy.

This is the exact problem! Sure, MySQL, PHP, JDK, Alpine and other images would probably be made available, but what about the other images that you might rely on, whose developers might simply no longer care about them or might not have the free time to reupload them to a new place?

Sure, you should be able to build your own from the source and maintain them, but in practice there are plenty of cases when non-public-facing tools don't need updates and are good for the one thing that you use them for. Not everyone has the time or resources to familiarize themselves with the inner workings of everything that's in their stack, especially when they have social circumstances to deal with, like business goals to be met.

In part, that's why I suggest that everyone get a copy of JFrog Artifactory or a similar solution and use it as a caching proxy in front of Docker Hub or any other registry. That's what you should be doing in the first place anyway: it avoids the Docker Hub rate limits and speeds up your builds, since you're not downloading everything from the internet every time.

Otherwise it's like saying that if your Google cloud storage account gets banned, you can just use Microsoft's offering, while it's the actual data that was lost that's the problem - everything from your Master's thesis, to pictures of you and your parents. Perhaps that's a pretty good analogy, because the reality is that most people don't or simply can't follow the 3-2-1 rule of backups either.

The recent Facebook outage cost millions in losses. Imagine something like that for CI/CD pipelines - a huge number of industry companies would not be able to deliver value, work everywhere would grind to a halt, and shareholders wouldn't be pleased.

Of course, whether we as a society should care about that is another matter entirely.


Using an abandoned image that nobody cares to update carries its own set of problems (e.g. security).

As I said, if it's not exposed to the outside world and doesn't work with untrusted data, that claim is not entirely valid.

Imagine something like this getting abandoned, or someone running a year-old version of it: https://github.com/crazy-max/swarm-cronjob/blob/master/READM...

Its only job is to run containers on a particular schedule, no more, no less. There are very few attack vectors for something like that, considering that it doesn't talk to the outside world or process any user input data.

Then again, it's not my job to pass judgement on situations like that, merely acknowledge that they exist and therefore the consequences of those suddenly breaking cannot be ignored.


If you depend on it, you should keep a local copy around that you can host if needed.

Things get abandoned all the time. When you make them part of your stack, you are forever on the hook for keeping them alive yourself, until the point at which you free yourself from that burden.


Hub disappearing would be the best thing that happened to Docker in years. People really shouldn’t be running the first result from Hub as root on their machines.

I wish there were a version of Hub with _only_ official images.


I doubt a rubber stamp of "officialness" would make the situation much better.

> Docker Hub

Given that it is extremely trivial to run your own container registry, I think the focus on this as some great common good is overstated. As it is, 99% of the containers on it are, for lack of a better word, absolute trash, so it is not very useful as it stands.


VSCodium doesn't add anything; it just builds the VS Code source without telemetry and provides a real FOSS build of VS Code. If VS Code development stopped, then VSCodium would stop too.

> The problem with this is that if Docker Inc goes under, you can say goodbye to Docker Hub: https://hub.docker.com/

So you think that Docker Hub is Docker Inc's entire value proposition? And if Docker Inc is nothing more than a glorified blob storage service, how much do you think the company should be worth?


Oh, not at all! I just think that it's the biggest Achilles' heel around Docker at the moment, one that could have catastrophic consequences for the industry.

It'd be about as bad as that one time when Debian updates broke GRUB and my server could no longer boot: https://blog.kronis.dev/everything%20is%20broken/debian-and-...

Imagine that, but industry wide:

  - you can no longer use your own images that are stored in Hub
  - because of that, you cannot deploy new nodes or new environments, or really test anything
  - you also cannot push new images or release new software versions; what you have in production is all there is
  - the entire history of your releases is suddenly gone
I don't pass judgements on the worth of the company, nor is there any actual way to objectively decide how much it's worth, seeing as they also work on Docker, Docker Compose, Docker Swarm (maintenance mode only though), Docker Desktop and other offerings that are of no relevance to me or others.

Either way, I suggest that anyone have a caching Docker registry in front of Docker Hub or any other cloud-based registry, for example the JFrog Artifactory one. Frankly, you should be doing that with all of your dependencies, be they Maven, npm, NuGet, pip, gems etc.


Most widely-used tools are not VC-powered unicorn startups and nobody said they needed to be. You're free to create all the tools you want, while others can raise money to develop theirs.

If open-source helps the product grow and the community benefits then what's the problem? Who lost here? And why are there headwinds with closed-source products anyway? Open-source doesn't mean free, so what's the objection?

Docker the company executed poorly in monetizing their product but there's a lot of undue hate compared to the value it has created. If you don't like it when it's closed-source, and you don't like it when it's open-source, then what do you want exactly?


> Open-source doesn't mean free

Counterpoint: yes it does. Hardly anyone pays for external open-source products. Managed solutions, yes, but we've seen multiple times that trying to close an open system so you can charge for it is very unpopular. For example, my workplace has a company-wide edict against using or even downloading the Oracle JDK.


Open source literally means they show you the source code. It doesn’t have to mean anything beyond that.

"open source" is by now understood to mean https://opensource.org/osd

... and let you redistribute it. Which implies that everyone else can have a copy, and therefore usually the binaries.

it doesn't, that's called source-available

To be fair, docker desktop never was open source.

As I understand it, that's not entirely correct; the Docker Desktop we all use and ... well, _just use_ ... is built from a number of components that are or used to be OSS: Docker, Docker Compose, docker-machine, and Kitematic, amongst others. Granted, Docker Desktop is more polished than Kitematic was, but it's also had years of VC money thrown at it, so that it's almost as bloated in appearance as the Dropbox client.

And that’s sort of the problem. I don’t want the Docker Desktop that exists. I want something that does all of the behind-the-scenes stuff that Docker Desktop does and gives me a nearly-identical experience to developing on Linux even though my preference is macOS. I might even pay a _reasonable_ subscription for it.

But the Docker Desktop that is? Not exactly something that I think is worth paying for.


Free means free - but if versions 1.0 to X are free today and version X+1 is paid tomorrow, that is a bait-and-switch. There's no hate here, it's just that I (and any competent client company) have no way of knowing if they "won't alter the deal any further".

The problem is not with the open-source approach: in chasing growth, they commoditized both areas they could have monetized - the client and the service. If they had charged for either (or both) at first, they wouldn't have gained traction, and some other company would have eaten their lunch.


So they commoditized and failed; or they could've been commercial from the start, someone else would've commoditized it, and they'd still fail. So what? That's the point of a startup: they tried to build something and it didn't work out as a business model.

The community still benefited greatly from all the development and new projects that came from this. And what is this other company that would've eaten their lunch? How would that company survive exactly?

The only objection seems to be the license change, which is still free for the vast majority. Only larger commercial users have to pay, but that seems commensurate with the value they gain from it. Should companies never try to alter terms as the market changes? I don't see why people are entitled to products and services forever, and then hate the company if they try to be sustainable but also hate them if they abandon it.


> I don't see why people are entitled to products and services forever, and then hate the company if they try to be sustainable but also hate them if they abandon it.

Nobody is entitled to anything. Users aren't entitled to free services/products in perpetuity, but the other side of the coin is that companies also aren't entitled to those users. Nor are companies entitled to be free of any criticism.


> How would that company survive exactly?

Let me distill my thinking: a tool does not have to be a company, or be backed by a single-product company.

IMO, the more successful tools tend to be backed by a maintainer and contributors who work on them in their free time, or by a consortium of companies that do not directly make money from the tool but are willing to put money into it. Docker-like functionality can be replaced by such models, so we are not stuck in a perpetual cycle of ${ToolName}, LLC.


Community edition and paid Enterprise plug-ins with support is a standard pattern in the OSS market.

Not quite bait & switch as you put it, and frankly, polishing and idiot-proofing tools for production workloads is expensive, and requires competent professionals who definitely need to feed themselves and their families.


Just because everyone does it doesn't mean that it isn't wrong and illegal.

That's an extreme claim. How is offering community and enterprise editions potentially wrong or illegal?

It's usually a full open source solution and then the enterprise edition gets introduced later to make money. In other words, it's dumping to gain market share and then later trying to use the cornered market to extract profit.

There is absolutely nothing wrong or illegal with companies offering new additional products.

You're not cornered when you can freely choose to buy the enterprise product depending on whether you get any value from it or stick with the open-source version which remains available and has plenty of competition (like Rancher).

In fact that entire issue with Docker is that they have too little value to charge for and too much competition to defend against, the exact opposite of dumping to clear out the market.


Indeed. I've worked with several "enterprises", and often the OSS option was discarded for lack of support for a certain enterprise use case or integration with other commercial products, or for lack of professional support for production issues and SLAs.

Obviously this stuff costs effort and time, you can’t expect that to be available for free.

It's also a great opportunity to commercially exploit your knowledge and taste and make a living out of it, rather than curse "the powers that be" on a daily basis while struggling with closed software whose only purpose has always been milking as much profit as possible.


> Not every widely-used tool needs to be a VC-powered unicorn startup.

Ok, which tools need to be a VC-powered unicorn? I'm serious.


If CUDA were a startup, it could be a VC-powered unicorn (not sure about the deserving, but they'd have a decent shot at monetization). Unfortunately, my tool knowledge is not broad, due to the limitations of the few tech stacks I'm familiar with.

Honorable mentions: R and Rust, maybe? But I don't see how they'd make the money back (which perhaps is the challenge Docker is running into)

edit: Also SQLite!

2nd edit: I completely misunderstood your question, I think. The answer is "none" - there are no tools that need to be unicorns, at least for those that are downloaded and can be run locally. Those that I listed could be.


People hate it if you take the free toys away, especially after they've started using them because they were useful and free and now have grown used to them.

+ opinions about Docker are generally not universally positive, which doesn't help. It's not a niche thing only used by superfans, but something that has been promoted and pushed widely. Not everybody who uses it likes it, and it's more infrastructure than "cool tool".


Many of us pay for developer tools so it's not a question of wanting to monetise their product. It's the way they've done it. Constantly harassing people to update their desktop versions as a means to drive them to pay is unacceptable.

I have never seen such user-hostile behaviour from an established company like this and I for one will never ever give them a cent to reward them for it.


They also have an "upgrade" button in Docker Desktop that means purchase rather than a version upgrade. I understand that they aren't the first to call buying a licence an upgrade (although I'm not sure you get much software-wise for doing so to warrant the term), but when you've closed the update nag screen and then open it again, it sort of looks like the button is for that.

Chrome for example has update button in similar placement.


There were competitors back then; I remember when Docker was announced along with their huge investment. To a large degree that broke us: we could not compete with free. Nor was Docker safe to bundle in our application; now, about a decade later, it barely would be, but at least others have done it now.

To me they are the definition of worse is better, but clearly only for a while.


Moving from a freely installed tool to a paid, licensed tool changes the workflow. It's a pain in the ass to manage seats.

Not only that, but it was a bait and switch. Lots of people are using docker because of the previous licensing rules and might have chosen another tool if the new licensing was already in place.

I don't fault docker but I can see why people are annoyed and looking for a drop in replacement as well.


Then everybody realized Docker was just idiosyncratic configuration layered on top of Linux cgroups and if they decide to encumber their base software we will just use the latter directly. And this is how podman/containerd/... came to be.

There are no gifts; Docker isn't owed billions for trying to monetize cgroups.


> Then everybody realized Docker was just idiosyncratic configuration layered on top of Linux cgroups

That is true and that is why the Docker runtime is still free.

However this is about Docker desktop, which is mostly used on Mac and other platforms, where docker actually manages virtual machines etc.

And yeah, all software is just a layer on top of moving bits around ... some of those layers create value, some less.


Docker, as the most used container tool, became a utility and hence it is expected to be free and unencumbered so it remains a helpful tool instead of a hindrance and liability to daily development for – well, perhaps the majority of developers by now. To try and monetise it would just assure that a free and open version will take over; which, I guess, is what we see happening now.

"Angry" implies we're somehow out of line for believing that they are charging too high a price for a product, especially one that we provide much of the value to.

Think of those "by the pound" frozen yogurt and toppings places. One day they charge a fair price and sometime later they up their prices and start charging different amounts for different stuff, etc.

I'm not angry if I dip on over to the grocery store and get my own stuff instead. Disappointed that we lost something cool, maybe, but that's on THEIR dumb choices.


People are generally jealous. Docker feels small enough that "anyone could code it over the weekend", and so they think there is no way this is worth so much money. Of course, nobody actually tries to code something like this over the weekend, or once they do, they just shut up.

The cognitive dissonance and entitlement with some is just funny to watch. They have no problem buying stuff from billion dollar companies who treat workers as crap and don't pay taxes, but a smaller developer wants to make money off of their hard work? Nah...

It's probably because a good chunk of the community is composed of privileged people, who never experienced the struggle.


I’d pay a reasonable amount for a headless docker desktop for macOS.

That’s not on offer. Instead we get a client that’s almost as bloated as the Dropbox client.


Some people think developers should always work for free to enrich the executive and investing classes. I think that making enterprise pay for stuff that matters for enterprises while still providing open source for society should be seen as a positive, sustainable model.

> something that everyone uses

Thankfully that's not the case. Even Kubernetes, which still uses containers, moved away from Docker.


the open-source monetization playbook is pretty simple: you charge for the higher-tier features that local Joe doesn't need but big corporate management/lawyers/compliance mandate.

anything from centralized authentication/auditing/monitoring/provisioning/reporting to the checklist certifications governments demand.


Developers expect to get free tools.

I for one don't expect to get free tools, but I do expect that the licensing of something (cough Elastic, Docker cough) doesn't change overnight in a very hostile manner, leaving me feeling ripped off.

I for one mostly use GPL or similarly licensed tools for my development workflow, to be able to make sure my build infrastructure doesn't rot and can be completely replicated by someone when I open the code (with an xGPL license, no less).

OTOH, I pay for some good developer tools, since they make my life easier. However, they're not irreplaceable and they're definitely not "code pipeline infrastructure" tools.


Partly because devs tend to labour under the misapprehension that:

a) pretty much any closed source project or online service can be done just as well by gluing together parts of open source projects…

Which may be true in some cases, but the real misapprehension is the final part:

b) …and it won't be difficult to do.

The classic is the HN comment about Dropbox, but I see at least one comment a week saying something similar. No doubt there are many more.


It's not a misapprehension: selling tools to tool-makers requires that you walk a very fine line. A lot of closed-source tools have pissed off developers enough to be replaced by usually-superior open-source versions, e.g. Git replacing BitKeeper because kernel developers were not happy with BK.

Outside of very specialized tools, open source tools tend to attract more contributors and quickly overtake incumbents (see compilers). JetBrains is one of the few companies that bucks the trend, and one I gladly give money to regardless of my increasing VSCode usage.

Had Dropbox been marketed at developers only, it would have been a spectacular failure in a world where inotify, rsync, cron and scp already existed.


> JetBrains is one of the few companies that bucks the trend, and one I gladly give money to regardless of my increasing VSCode usage.

Large parts of IntelliJ are open-source as well, and sometimes they do get used in other open-source editors. Afaict they're pretty good citizens in terms of F/OSS.


Can you clarify? Are people saying that Dropbox is easy to replace with OSS? Like ALL its functionality?

It's a reference to this historic comment from when Dropbox was first shown on HN: https://news.ycombinator.com/item?id=9224

This is the (in)famous comment: https://news.ycombinator.com/item?id=9224 - though, as I wrote, I don't think it's particularly special; it's just the one that gained notoriety. I actually responded to one just this week; they're rife.

The optimism bias is a problem even for programmers.


Is this true? I know developers that pay for IDEs, managed git, jira, and CI/CD services. Developers that use Postgres might pay for postico etc.

Popular tools are free. There are few exceptions; the most notable one is IntelliJ IDEA, but even IDEA seems to be eclipsed by VS Code lately.

I struggle to remember a single paid Java library I've used in the last 10 years. Everything is free.

There was commercial ecosystem around Delphi. Paid IDE, paid components. There's paid Lisp IDEs. But it's definitely not mainstream today.


IDEA has an open source edition.

(and of course, don’t forget about Visual Studio)


As a developer I pay for tools the same way I expect to be paid for my own work.

The tools aren't created in vacuum and the creators also have bills to pay, just like I do as well.


Why? Docker provided a lot of value for free, screwed up their own business model and almost went bankrupt before being bought for scraps by Mirantis.

Then they still make their technology available for free (that's right, Docker is still free and they donated all the important stuff) and start asking for money just for the wrapping and UI stuff in Desktop. And just from big corps who can afford it. And still they get a lot of flak for it!

These people just can't get a break. I don't understand why a lot of people in OSS community hate on them so much. What is up with that?


Right, because Rancher Labs is not a gajillion dollar company: https://www.crunchbase.com/organization/rancher-labs

Rancher already sold.

Suse has a functioning business model and makes money from real paying customers who want services in return, not just VCs who want more money in return. Docker has a valuation without a way to live up to it, which is scary because we all have to wonder what awful stuff they're going to have to do in order to make enough revenue for their investors to get their money back.


Even though it's true that SUSE has a functioning business model, it's worth mentioning that SUSE now belongs to https://en.wikipedia.org/wiki/EQT_Partners, a global investment organization, and they (EQT) are probably aiming just for a nice exit.

...and that also already happened, in the form of an IPO in Germany: https://www.google.com/finance/quote/SUSE:ETR

"Docker, a gajillion dollar company" - LOL they barely get along.

While I love Docker as a dev tool, putting all the strings into your hand to juggle customer environments, I fail to see the use case for Docker on Macs and Windows, which is what I understand Docker Desktop to be about. After all, docker images are about saving the resources/memory for a full VM and using Linux kernel-level compartments instead.


Development.

When you're developing in a container, you can guarantee that the environment is the same on your PC and in the cloud.

Also, the same argument you made for shared resources on Linux applies: when you're running Linux in a VM locally on your desktop because you want Linux tools or applications, you want as much of that memory efficiency as possible.


I develop on my Windows machine and I use Docker for testing. We have a microservices architecture in production, and using Docker is easier and less messy for running parts of our infrastructure locally, such as Postgres, Redis, Consul, NATS and some of the needed microservices, instead of using VMs.

"After all, docker images are about saving the resources/memory for a full VM and using Linux kernel-level compartments instead."

That's not what container images are for.

Container images give you a self-contained deployment artifact. The rest is marketing BS.


It's awfully handy being able to pop open little recipe OS configs while using my Mac.

They have failed to monetize one of the most important products in web development. They have never turned a profit. I don't know about the greedy part; it is more about survival at this point. That is capitalism.

If I were them, I would've just sold the company to Microsoft a long time ago. I suspect that everyone, including users, would be better off that way.

They missed the boat on this; there were rumors Microsoft offered as much as $4bn back in ~2016. Whether that's true or not, if there was an exit to be had, that was around the time it would have been viable.

A fully integrated docker desktop + vscode would be a game changer for all involved. Docker + VSCode is already good, but more integration would be way better.

I thought this was the next logical step after Windows started incorporating containers - for several years you saw the two companies get closer together until suddenly they no longer seemed to be on speaking terms. Docker really missed their opportunity here.

Not sure what you mean, Docker and Microsoft collaborated closely on Windows containers and Microsoft still recommends docker desktop:

https://docs.microsoft.com/en-us/virtualization/windowsconta...


Doesn't Microsoft already make money from private (read: enterprise) GitHub Docker registries / Azure Container Registry while Docker Hub is serving free stuff for them?

An enterprise registry is about access control, which Microsoft knows how to do, and they already have enterprises in their ecosystem, no?


Is there a non docker desktop path forward for windows containers on windows 10 machines?

Both of you are 100% right. Will be interesting to see how this seemingly no-win scenario plays out for the ecosystem and toolchain, beyond just the Docker products.

And this is how lots of great products and ideas get killed. Failing to monetize them.

Singularity containers haven't been mentioned yet: they are unique in that they don't change on use, can bind-mount resources, and run without root privileges. You can actually execute the image files and include them in the well-known Unix pipe sequences in your favorite shell. Really powerful stuff.

They block you from even registering your username unless you pay $25k to be in the "verified" program. I'm mad they hold my name hostage, and I'm not ready to spend $25k

What do you mean by "hostage"?

I mean my name isn't open: on Hub there is no user/edoceo page, but one cannot register that name either.

And they say they respect trademarks and to open a ticket - no response.

But, I can join the verified publisher for only $25,000 USD.

Not literally hostage but still feels dirty to me.



Yeah, I have no idea what the parent means. We have a company-branded namespace on Docker Hub with just a dev account.

There is indeed this verified publisher program, which is crazily expensive and I'm not sure what benefits it offers, which is probably the whole explanation of why dockerhub is not in a very good place :)


I'm saying, my name is blocked unless I pay

I was amazed by the news at first, but seeing it again, it doesn't seem to be a direct replacement for Docker Desktop. It makes use of K3s under the hood, which is lighter than full-blown Kubernetes, but still quite heavy. My nearly empty K3s cluster is consistently consuming ~10% CPU time on my local machine. This is not acceptable to me, so a proper replacement for Docker Desktop should be more lightweight.

Ach, the title seems to be mis-selling it.

Docker Desktop is all about Docker, Compose and Swarm, and has Kubernetes functionality built on top as an extra. Rancher Desktop seems to be all about Kubernetes - can it even run standalone Docker containers, or Compose/Swarm services?

EDIT: answering my own question - I checked the nerdctl GitHub page, and it states that not only is Swarm not supported, but it will not be. So Rancher Desktop is unfortunately not a drop-in Docker Desktop replacement for everyone :(


Swarm is a cluster solution made on top of Compose so it isn't useful on a local machine.

It's very useful on a local machine if you're using Swarm in production, as Swarm supports some additional things that Compose doesn't.

The differences are:

- Replicas: could be made with YAML templating

- Update/rollback policies other than stop-first, very useful on a local machine?

- Resource limits: stayed in v2 to sell Swarm, although there's cgroup_parent which has to be created manually


There are yet more differences.

For example, Swarm supports `configs`, while Compose does not - and I use `configs`, and would much rather not have to have separate service definitions for dev and prod to work around such insufficiencies.
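To make the difference concrete, here's a minimal stack-file sketch (contents are illustrative, and it assumes a node already in swarm mode via `docker swarm init` and an existing ./default.conf) using replicas, start-first updates and configs:

  cat > stack.yml <<'EOF'
  version: "3.8"
  services:
    web:
      image: nginx:alpine
      deploy:
        replicas: 3
        update_config:
          order: start-first
      configs:
        - source: site_conf
          target: /etc/nginx/conf.d/default.conf
  configs:
    site_conf:
      file: ./default.conf
  EOF

  docker stack deploy -c stack.yml demo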


It also comes with nerdctl (a Docker-compatible CLI). It interacts with containerd directly and is a drop-in replacement for the docker CLI.
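A few examples of the drop-in feel (image and tag choices are arbitrary); it even ships a compose sub-command:

  nerdctl run -d --name web -p 8080:80 nginx:alpine
  nerdctl ps
  nerdctl build -t myapp .

  # Compose files work too
  nerdctl compose up -d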

Oh that's nice, would it be possible to run containerd only, without running any Kubernetes controllers? I suspect the controllers are the root cause of the huge resource consumption, because they are constantly checking for the system components and trying to reconcile them.

Yep! If you're on macOS, check out Lima; it's a project by Akihiro Suda, who is one of the main contributors to nerdctl (and the original author, IIRC).
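If you want to try it, a minimal sketch with Homebrew (the default Lima template ships containerd and nerdctl inside the VM):

  brew install lima
  limactl start default

  # run containers in the VM from the macOS side
  lima nerdctl run -d --name web -p 8080:80 nginx:alpine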

Rancher Desktop macOS version is based on Lima.

Rancher's, and Podman's, efforts on Mac are exciting, but you can also just run the Docker daemon in a VM on Mac and install the free docker CLI using homebrew. This is what I do using Canonical's Multipass to create the VM. I wrote a simple script that makes it a simple `dockerhost create`. https://github.com/leighmcculloch/dockerhost

Thanks for sharing. Does this also support volumes and set up port mapping for localhost?

No port mapping, the ports are available on the IP of the VM. Volume mounts are supported by multipass, but dockerhost doesn't set them up automatically, yet!

Whatever Docker was thinking when changing the license and pricing of Docker Desktop, they didn't consider the impact of the alternatives long enough. It's just a matter of time before a replacement of the same or better quality arrives. In the end, Docker's relevance on developer machines will shrink over time.

Agreed. They should have just sold ads for the unpaid edition users to tolerate.

Where would the ads be displayed? I almost never open the Docker Desktop application itself. It starts up automatically and mostly does what it's supposed to do. I connect to it through the command line.

Would have been the same move in my book, I wouldn't have tolerated that.

Podman also has a docker desktop alternative in the works (Already works on Mac IIRC)[0]. Will be interesting to see how these two solutions play out.

0: https://github.com/containers/podman/issues/11494


I tried Podman on OSX. It was easy to set up, but the deal breaker was no bind mounts. I'm sure that functionality will come, but for dev purposes it's not a Docker Desktop replacement on OSX just yet.

Yes, exactly. I found that Lima + Docker are the perfect solution for me in terms of user experience.

When Podman fixes this issue (with sshfs, like Lima does?) I'll return to it.


Just so people know, if you use Cockpit on Linux, you can add Podman support and it essentially provides a Podman GUI.

This works today, and works very well.

https://github.com/cockpit-project/cockpit-podman


Last time I tried, Podman didn't work on M1 because of a missing upstream patch. I have been able to make Lima work directly out of Homebrew without compiling anything locally.

I've been using podman on wsl2 as my docker desktop replacement _mostly_ painlessly since the news of docker desktop's licensing changes.

Recently we've been advised to stop using Docker, so I've been on the lookout for alternatives. One of the biggest problems we have is that some of our in-house tooling is written with Windows as an afterthought, so it's often easier to run things under WSL and then through Docker. But the double abstraction makes some of our apps extremely slow on those Windows machines. Does anyone know if something like Rancher/K3s is any better in this regard?

I'm pretty sure Docker on Windows is going to run in a VM no matter what. And it's worth noting that running on Docker does not make any significant performance difference compared to running on raw Linux in the same environment.

Hate to be "that guy" but if you want speed in a Linux environment then just run Linux ;). If you're really stuck with Windows and need Docker, I'd say try hyper-v, VMware, etc and see which is fastest.


> But the double abstraction makes some of our apps extremely slow on those Windows machines.

If you keep your files in WSL 2's file system Docker is extremely fast. It's nearly as fast as native Linux and even apps with thousands of files will reload web servers nearly instantly on file change using volumes.


Unless there's a way to share the files from Windows into WSL, I can't do that. We need to be able to edit the files from within Windows (people want their VSCode/Sublime Text/Notepad++). I've tried to tell them to get Linux machines or Macs, but that doesn't work for everyone either.

For the VSCode situation, there is a Remote extension by Microsoft for editing files that live on the WSL2 side.

For everything else you can just use the network mount to WSL2. So the performance hit will be on the editor I/O side but not the application runtime.


> We need to be able to edit the files from within Windows (people want their VSCode/Sublime Text/Notepad++).

You can browse directly to the WSL container filesystem through Windows by browsing to:

\\wsl$

Example: click Start --> Run --> "\\wsl$"

Is that what they're looking for?


I use VS Code's native integration with WSL. So VS Code runs as a native Windows app with nearly everything you'd expect, but the actual files being read/saved are inside WSL. Works great with Vagrant, Docker for Desktop, Git etc.

Yep, this works really well and for directly accessing files in WSL 2's file system from Windows, @lenova's sibling comment goes over that process.

To expand on that, I also have this path in my Windows explorer "Quick access" list: \\wsl$\Ubuntu-20.04\home\nick

It's a shortcut to my home directory inside of WSL 2 for quick access. It's useful in the cases where I want to drag / drop a photo or something from Windows into WSL 2's world.


Some of this depends on where your slowdown is.

For example, on Windows you'll likely use NTFS, which is a slower file system than ext4 on Linux. Then, NTFS gets mounted into WSL, which isn't fast either. So, filesystem operations through this path in WSL are going to be slow.

This is just an example and may not be your setup.

Rancher Desktop uses WSL to run Kubernetes and containerd. This is similar in setup to Docker Desktop. So, if you have performance issues in one setup you're likely to have them in the other. And, it may be related to WSL and the VM than to either of these apps.

Disclosure, I work on Rancher Desktop.


If you want to install the Windows release[0] silently in a script, then use this (where X.Y.Z is the version number):

  .\Rancher.Desktop.Setup.X.Y.Z.exe /S --force-run --allusers
It took me an hour to figure out the necessary arguments until I learned about some Electron-specific behavior in the installer[1].

[0] https://github.com/rancher-sandbox/rancher-desktop/releases

[1] https://stackoverflow.com/a/66906851


* I had this wrong.

It's just:

  Rancher.Desktop.Setup.0.6.0.exe /S
The --force-run option you specifically don't want for a silent install. Also, the --allusers option doesn't work.

Good to see people stepping in after the docker desktop change, things I care about:

- transparent networking, i want to access kube services on 127.0.0.1, i'd rather not run a proxy/manage things myself

- access to the containerd socket to use non-rancher tooling, for image building etc.

early days for this project, i'll have to spin it up and see how it goes!
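For reference, the manual proxying being avoided here looks something like this (service name hypothetical):

  # Forward a cluster service to 127.0.0.1 by hand
  kubectl port-forward svc/my-service 8080:80
  curl http://127.0.0.1:8080/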


For those wondering what the difference between k3s and k8s is: https://www.youtube.com/watch?v=FmLna7tHDRc

According to the very sparse docs, M1 Mac support is planned but not yet implemented.

At my current company we started moving our setup to k8s... And using docker-compose required another deployment platform.

So at the end of 2019 we started using k3s... but developers complained about heavy resource usage and slow startup time.

So in the end we moved back to docker-compose... This means our local dev environments are NOT the same as our k8s prod clusters.

But if this works well, and k3s is more lean, it could unify our infra stack from developer machines to production. Same tooling everywhere...

Love it


Recently, I found Tilt [0] to be a good partner for running "all services locally". It can be compared to "webpack for backend" (live-reloading, a lot of configuration possibilities [1]). Tilt uses Tiltfiles for configuration, which are written in the Pythonish Starlark language, and you use them to express any project-specific logic:

- You want to run a bunch of services directly? Use local_resource()/local().

- You have a Procfile? There is a procfile() function.

- You have a docker-compose.yml with services like databases? You can run it too, with docker_compose().

- You want to split your Tiltfile across multiple files and include them all together when running tilt up? There is load()/load_dynamic().

- You need some web UI for frontend devs and a nice log browser? It is there too.

- You want to update your local cluster with a newly built image on file save? No problem, Tilt will do that via the k8s_yaml() function.

- You need to do some extra steps before running a service (e.g. data seeding)? Use the aforementioned local()/local_resource().

- You need a sample project that uses Tilt? Sure, there is a tutorial for C# [2]. Do you use jib instead of docker-build? Here it is [3].
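To make that concrete, here is a minimal Tiltfile sketch exercising a few of those functions (all names and paths are hypothetical):

  cat > Tiltfile <<'EOF'
  # Starlark: rebuild the image whenever sources change
  docker_build('example.com/myapp', '.')
  # Apply manifests to the local cluster; Tilt swaps in the fresh image
  k8s_yaml('k8s/deployment.yaml')
  # Forward the app's port and group it as a resource in the web UI
  k8s_resource('myapp', port_forwards=8000)
  # One-off local step, e.g. data seeding
  local_resource('seed-db', 'python scripts/seed.py')
  EOF
  tilt up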

Also, I have not been very lucky in getting the local cluster to resemble the real one 1:1. You can get close, but as long as you run in the cloud you will have a different configuration locally (additional annotations, various quirks that exist on GCP but not in kind/k3s). However, making dedicated dev environments in the cloud might be very costly and incur a lot of additional tinkering.

[0]: https://tilt.dev/

[1]: https://docs.tilt.dev/api.html

[2]: https://docs.tilt.dev/example_csharp.html

[3]: https://github.com/tilt-dev/tilt-example-java/blob/master/10...


From my experience, trying to boot up the entire platform on each developer's laptop (or on remote cloud servers per developer) is something doomed to fail and cause pain, whatever you use.

My best experience so far has been to start services individually and work with mocks and tests, and to do integration on a shared staging environment or, even better, to have a way to deploy multiple versions to the same env (like you can do with App Engine, Vercel, etc).

I hope I never ever again have to work at a place where I need to replicate the entire platform for my development environment.


Moved to Docker in VirtualBox for my manual / integration testing on Mac and never looked back. Now I'm looking at minikube.

Not very convenient, but not a show-stopper.


> Rancher Desktop is an electron based application....

Oh, well. I guess they copied everything that was "good" then.


What's new is that their latest preview release adds Linux support[0]. This still isn't mentioned on the main site linked above nor in the main github README yet :)

[0] https://community.suse.com/posts/rancher-desktop-v06-includi...


There was a recent HN post about minikube as a drop-in replacement for Docker Desktop. FWIW I spent some hours digging into minikube and learned a bunch, but hit a wall with some networking stuff and tabled it. Stoked to give Rancher Desktop a try, because even though Docker Desktop serves my needs for now, it is a resource hog and I'd be stoked for a good alternative.

I actually just switched from k3d back to minikube the other day. Both are great, but have some little problems that block me for hours. They are both moving in the right direction though.

k3d/k3s starts up fast and feels minimal. I still have problems building docker images and pushing them into the cluster.

Minikube solves this well:

  minikube image build . -t localhost:5000/myapp:latest
  minikube image push localhost:5000/myapp:latest

I will be very happy when one of these runs locally without com.docker.hyperkit sitting at 70% CPU even while idle.


Unfortunately, without Windows Container support, I can't consider this a full replacement for Docker Desktop.

A bit off topic, but does anyone know what tool the author uses to display the stats at the top of the terminal?

It's the iTerm2 built-in status bar: https://iterm2.com/3.3/documentation-status-bar.html

Oh, I missed this feature. Thank you very much!

I wish Rancher hadn't adopted Kubernetes. Their Cattle was perfect for running distributed services on bare metal. It was simple and just worked. After the move to k8s I couldn't get anything to work and it was flaky. I am still running some projects on Cattle and they have a few years of uptime.

(Early employee)

We liked 1.x/cattle as much as the next person but it was clear 4 years ago that k8s was going to win the mind-share & market.

Competing with that was a slow and painful path to irrelevance. Which was the path Mesos chose, along with Docker/Swarm to a large degree.

(Please stop running 1.x, especially if exposed to the internet; it hasn't been updated in ~3 years now and everything up and down the entire stack it supports are going to be full of CVEs.)


What is the WSL2 integration story here?

That's one of the most important use cases on Windows going forward.


As mentioned in the linked page: "Windows Subsystem for Linux v2 is leveraged for Windows systems. All you need to do is download and run the application."

The graphic under that quote shows containerd and k3s running under WSL2, talking to Rancher Desktop.


ah cool. i didn't understand the language. oops.

Really nice to see a bit of competition here!

Slightly unrelated, but also from the same company, Rancher (their main product) has been really good for deploying bare metal Kubernetes setups [almost] 100% container based.


I've seen a couple projects around this; are any of them also tackling docker-compose-based workflows? Or will that stuff work transparently with these replacements?

Haven't tried this yet but it mentions using nerdctl which does support compose.
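If it behaves as the documented docker-compose drop-in, usage would look roughly like this (service name hypothetical):

  nerdctl compose up -d          # start services from ./docker-compose.yml
  nerdctl compose logs -f web    # tail one service's logs
  nerdctl compose down           # stop and remove the services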

Does it support volume mounts from Windows and macOS to containers?

> nerdctl run -it --rm -v "${HOME}:/mnt" ubuntu

^^ volume mounts appear to "just work" on macOS


It seems to be a replacement for the Kubernetes that comes with Docker Desktop. But what are the steps to just build and run a simple Dockerfile? An example would have been good.

The video demonstrates it, using the "nerdctl" CLI which seems to be API equivalent to the docker CLI but understands k8s namespaces (thus hiding the k8s containers by default).

I agree that having an example in writing would have been helpful.
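For what it's worth, a minimal sketch of building and running a Dockerfile with nerdctl (image name hypothetical):

  nerdctl build -t myapp:latest .          # build from ./Dockerfile
  nerdctl run -d -p 8080:80 myapp:latest   # run it, publishing port 80
  nerdctl ps                               # containers in the default namespace
  nerdctl --namespace k8s.io ps            # containers managed by Kubernetes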


Any idea if nerdctl can run Compose services and Swarm stacks?

Nerdctl claims to be able to run `docker-compose.yml` files as a drop-in replacement. How true that is (and how volume mounts work, if at all) I don't know.

The video doesn't demonstrate that and instead pivots to talking about k8s. I don't know.

Is this related to Rancher, maker of the k8s mgmt platform by the same name? I don't see any reference to it on their official website.

Yes, it is the same company, which is now part of SUSE. From the footer:

> © 2021 SUSE. Rancher Desktop is an open source project of the SUSE Rancher Engineering group

It is strange that they don't make it super clear. But it is briefly mentioned in the video at the top of the hero section of the page.


There's this at the bottom

>© 2021 SUSE. Rancher Desktop is an open source project of the SUSE Rancher Engineering group.


One main use for Docker we have is running integration tests on local docker-compose containers and/or using the testcontainers library.

Can Rancher solve this use case?


I don't think it runs Docker. You can run minikube and use it for Docker workloads, which should work with testcontainers.

https://minikube.sigs.k8s.io/docs/commands/docker-env/
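With that, a testcontainers-style flow would look roughly like this (image name hypothetical):

  # Point the local docker CLI at the Docker daemon inside minikube
  eval $(minikube docker-env)
  docker build -t myapp:latest .   # image lands directly in the cluster's runtime
  # Revert the shell back to the host daemon
  eval $(minikube docker-env --unset)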


Half-OT: I've seen many "run Linux in the browser" projects. Does anyone know if there is a project to run a container on WASM?

I did not see it mentioned anywhere, but does this have the same performance problems as Docker Desktop for Mac?

I immediately got a good impression of their service because their home page loads so fast.

I'm a simple man.


And with Linux support! Awesome!

I talked about this in a recent blog post on transitioning away from Docker Desktop - https://jason-umiker.medium.com/replacing-docker-desktop-wit...

why do I keep seeing the promotion of tools that require QEMU to run like it's no problem?

What's wrong with requiring qemu?

It emulates the entire hardware stack, so it's even slower than virtual machines, which in turn are slower than Docker on Linux.

Only if it's rather oddly configured. It can emulate the entire stack, but it doesn't have to as long as your guest and host architectures match.

Isn't that KVM? Why don't they say they use KVM if that's the case?

Because they use QEMU, which in turn uses KVM where appropriate, or some Microsoft thing on Windows, or an Intel equivalent on Intel Macs. EDIT: and since this app is for now only for Mac and Windows, if I read the page right, it's obviously not using KVM.
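A quick way to see which accelerators a given QEMU binary actually supports:

  qemu-system-x86_64 -accel help
  # Typical entries: kvm (Linux), whpx (Windows Hypervisor Platform),
  # hvf (macOS Hypervisor.framework), tcg (pure-emulation fallback)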

Do you have licensing concerns? Because they at least claim the tool works fine on macOS and WSL2, so I don't see an obvious technical issue.

No update nags!

what happened to podman

Been using it on wsl2 with Ubuntu 21.04



Set your browser to not autoplay videos if you don't like them. Then you never have to see them.

A more reliable way is to tell your browser to never start autoplaying videos.

So? Just turn that off in your browser settings.

It says it only runs on Windows and Mac?... The top 2 platforms which run docker horribly...

I would urge you to compare this to Docker Desktop - https://www.docker.com/products/docker-desktop

> Docker Desktop is an application for MacOS and Windows machines for the building and sharing of containerized applications and microservices.

The "desktop" part of each refers to "systems other than linux."

You can run rancher on a linux machine just as you can run docker on a linux machine - without the "desktop" part of it. As I understand it, the "desktop" nature of it is the glue between running a virtual machine on a non-linux system and the underlying virtual magic.


What is horrible about Docker on a Mac? I've never had a problem with it, and use it regularly as my daily driver OS for development of apps that get deployed as containers.

Not GP, but for me performance is absolutely abysmal compared to just using podman in an openSUSE MicroOS virtual machine (what I'm using now instead of Docker).



