Kubernetes is planned - my devops wants me to add it badly!
Author note - Most of you here are power users, for whom a UI is a visual poem that you may or may not need.
This is not a commercial project, it is not following any business goals.
But this does not mean concessions to quality: it tries to offer minimal resource usage everywhere, an easy experience, and good UI/UX.
It explains all it does behind the scenes if you enable the developer console.
It can help one learn so at a certain moment one understands and automates with scripts and specs.
But everyone these days is treated as either too smart or too dumb; I don't see users that way. Everyone started somewhere, and a gradual learning experience is best.
I broke so many radios and toys when I was a kid and I learned so much by looking at what was inside.
It is a project done by one dude, after work and when it rains outside (In Belgium it rains a lot).
I don't live my life entirely on the command line either, but GUIs for Docker are just an interesting niche to me. I don't understand what the Venn diagram is between people who want Docker containers running locally, know that that's what they want, and know how it all works, but then don't want to do the small handful of commands at the prompt needed to get it running...
I don't necessarily want Docker containers running locally as some hobbyist; they might just be part of the process, and if the GUI helps me move through that process efficiently without having to commit more commands to memory, I'm happy about that. CLIs are great, but when nearly everything has one, those small handfuls of commands add up to quite a lot in aggregate.
Yes, when it's possible. But GUIs may not exist, or may not be better than the console, as in the case of ffmpeg. The best, of course, is a smart assistant who can take verbal commands - either a human or an LLM.
But my post was about doing complex tasks in general: try to offload. Another piece of advice for developers is to write comments, even in your small hobby projects - this way you don't have to memorize it all. I learned this the hard way. I usually also keep separate documentation with plans, ideas, algorithms, and useful info. Reminder: this is for hobby projects.
And one important thing: touch typing is a must-have. It makes all of this much easier.
I do think these are good suggestions for anyone getting started with CLIs. Initially they struck me as a bit redundant, but only because the point I meant to make was that, because I've already been doing most of that for ages, I'm happy to delegate it to a good GUI if one comes along, since that is more enjoyable, less error prone (mostly), and less tedious. Notes, comments, and LLM-generated commands are lovely, but needing to rely on them less, particularly in situations where you can perform some common subset of tasks with better information layout, interface, and progress/state feedback, is worth paying for sometimes.
FFMpeg is a good example though of one I'm happy to just have my notes on, but since I do literally only ever use it for one or two types of tasks, I'm happy to have that sit behind the scenes. Others might use it in many versatile ways, for which I'd be grateful to have those options readily available in my terminal.
Life is full of many things to do, so not everyone has the luxury of prioritizing logging their life for everything they do. Option 2 or a GUI is a very feasible option for busy people.
you don't have to "prioritize logging" to have logs, calling this a "luxury" is quite bizarre. You can simply use tools that do bash history search, or one of the many copy-paste memorizer tools, and you'll save many hours out of those "many things to do" simply by typing ctrl+s. Some people are busy simply because they want to.
Docker Desktop includes the easy-to-run Docker Engine / Docker Machine. I think it is fair to assume that most of the revenue is not from users who want a GUI but from users who want a stable Docker Engine experience.
Anecdotal, but my experience, as someone who provides DevOps professional services for many organizations, is that Windows users who need containers know that they are called Docker and just download that. Most of them absolutely need a GUI. Most of them don't know that Docker Desktop requires a license, and I convert them to Rancher Desktop.
How do I install and run Docker containers on Windows without Docker Desktop? I've made attempts in the past but never actually succeeded, and just ended up using Docker Desktop.
Two options (see the sketch after this list):
a. Either work with Rancher Desktop (open it) and Docker is also available on the command line (docker, docker compose, etc.)
b. Or start Podman Desktop to configure Podman (or just use the command line to configure it)
Now in cmd you not only have docker and friends but also podman and friends.
Bonus: you have Kubernetes tools too, and you are fully FOSS.
Happy composing :-)
PS: I think you cannot start both at the same time. I have both installed and never looked back. Windows 10 x64 Pro.
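For option (b), a minimal sketch of what the first run looks like from cmd/PowerShell, assuming Podman's CLI is on PATH (the alpine image is just an example):

```sh
# One-time setup: create and boot the WSL2-backed podman machine
podman machine init
podman machine start

# Containers now run without Docker Desktop
podman run --rm -it alpine sh
```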
> want Docker containers running locally, know that that's what they want, and know how it all works, but then don't want to do the small handful of commands at the prompt needed to get it running
Consider the case of a team of people collaborating on a software stack - the prototypical use case includes Docker Compose at the simplest and a full K8s stack at the extreme. There is quite often a minimum of 3 containers here: frontend, API/backend, and a database server. If you start to add observability tools, async/batch/event execution, caching, automated integration testing, etc., the number of "layers" in the stack grows quickly. In addition, each component may have unique per-environment or even per-user customizations.
Often one or two people will manage the stack itself and provide instructions on how to get the whole thing working for others, using a specific, defined selection of easy-to-use tools that require minimal prerequisite knowledge:
"Install X, run Y, get to work."
It saves a lot of time for the intern on the UI team who just wants to add a component to one page and test it locally and not also have to learn how to deploy the entire stack from scratch.
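For a sense of scale, a minimal sketch of the simplest version of such a stack as a Compose file (service names, images, and credentials are illustrative, not from the comment above):

```yaml
# docker-compose.yml - the minimal three-layer stack described above
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - api
  api:
    build: ./api
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
```

The "Install X, run Y" here reduces to: install Docker (or a drop-in), run `docker compose up`.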
I use Docker Desktop on both my macbooks, despite shunning IDEs in favour of a decent text editor and the command line. I use it for 2 reasons: to manage the Linux VM, and to twiddle the occasional setting. For running the containers themselves, or running `system prune` when everything gets cluttered up, I use the CLI.
Same reason: I don't want to fight every time Docker needs upgrading. On Linux I simply use the docker package, but on Windows and Mac, Docker Desktop is my go-to route. I'm trying Podman Desktop too; I use Fedora and sometimes used podman until I stopped because of the Nvidia issues. Same reason as always: don't make me fight my tooling.
I agree on the docker/podman point, but for Kubernetes, Lens is really useful. It isn't a substitute for knowing the command line, but it can be much quicker.
This looks really slick! Quick question for you: the site mentions that other engines are planned. I'm curious what those might be. I would guess something like directly interfacing with containerd or kata, but would love to know more. If I could request one, it would be to directly use systemd, since it now covers all the necessary features to run containers quite nicely.
I never finished it, but I had a lot of fun documenting a basic-ass K8S (well, K3S) setup that costs about 20€/mo on Hetzner.
You don't really learn about sysadmin through it, or even about docker that much, but you get an idea of how you might easily run a few different things on a server while only needing to know YAML, and not some custom DSL like chef or puppet.
> only needing to know YAML, and not some custom DSL like chef or puppet.
YAML may be a known syntax, but using it still requires domain-specific knowledge; it is still a domain-specific language expressing domain-specific concepts, in terms of what the expected keys and values are allowed to be and how they are interpreted.
YAML isn’t the DSL, it’s just the language used to express declarative config because the tooling is ubiquitous and it’s rare that anyone uses it as anything more than a nicer version of JSON.
For Kubernetes, it’s CRDs that are written in YAML and they conform to a specification.
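To make that concrete, here is a sketch of a plain (built-in, not CRD) Kubernetes Deployment: the YAML syntax is generic, but every key (`apiVersion`, `kind`, `spec.replicas`, ...) is domain-specific and validated against the Kubernetes API schema. Names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
```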
I did something similar between jobs—built a k8s "cluster" on my home Linux box using kops+qemu. It didn't make me an experienced admin, but it was really enlightening and fun! Projects like these are a great way to learn.
I don’t get why people need Kubernetes integrations. Kind works just fine. You run it from the terminal and it starts a “cluster” as one or more containers. You can define port bindings and volume mounts via the yaml config. Job done.
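For anyone curious, a sketch of such a kind config, using kind's v1alpha4 format (ports and paths are placeholders):

```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # a NodePort inside the "cluster"
        hostPort: 8080         # reachable as localhost:8080
    extraMounts:
      - hostPath: /tmp/kind-data
        containerPath: /data
```

Then `kind create cluster --config kind-config.yaml` brings the "cluster" up.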
Poorly documented is one possibility. Also if you find an issue with anything that's not "testing k8s" the devs will tell you you're not supposed to use it for that.
> It explains all it does behind the scenes if you enable the developer console. It can help one learn so at a certain moment one understands and automates with scripts and specs.
An excellent way to learn indeed! Good luck with your project.
Like everything in tech, it's all about tradeoffs and understanding how you want to scale your business. Of the three startups I've been at, two adopted k8s early on and one didn't. Of the two that adopted k8s, at one I would say k8s was our key differentiator in terms of our GTM motion and how our platform powered the business. $MM customer needs a setup running as close as possible to their current infra's region, for just about any reason? Yeah, sure, we can spin that up in a week. This was often the key bit that allowed us to take customers from our key competitors before the competitor even knew we were in play. The 2nd one... giant waste of money that gave me a decent paycheck.
The third startup, which opted _not_ to adopt k8s, is stuck at $100M in revenue and can't land customers fast enough to offset churn. This is entirely because the COO has held the mentality "k8s bad, amirite?" and stuck with a patchwork of Ansible scripts to manage configuring the VM farms that ran our stack. Years of tech debt piled up, and every new $MM customer coming in that needs to run in a specific region for $reasons takes 6 months to set up and costs so much that we'd lose money on the deal. I genuinely believe this startup would be closer to $500M in revenue in the years since I left had they invested in migrating from containers running in VMs to k8s. But instead they had to lay off 30% of their staff and get another round of funding, and are stagnating.
I have found that people who are adamant haters of k8s, usually truly do not understand k8s or the issues it solves.
It's not for everyone - but having a knee-jerk allergic reaction to anything k8s is silly.
That said - k8s isn't just about scaling, or for "web scale" companies. If you are a person that believes that, it means you are the type of person I am talking about.
I don’t hate k8s, I hate that everyone blindly adopts it without understanding the complexity it adds. I understand it well enough to know YAGNI for most applications. Get over yourself, just because you disagree with me doesn’t mean I’m stupid.
Sorry for being pedantic, but you don't learn much by looking at the inside of a radio, because it's mostly electronic components except for the knobs, antenna, and dial. Without understanding how the electronics work, you're just looking at parts. Mechanical parts, like a bicycle's, are much easier to reason about. Not knowing your background: could you build a radio if given a box of parts? I certainly can't.
I don’t think you’re being pedantic. You’re just making a weird assumption that the radio itself is the only resource. I learned a ton from this as a kid. And I learned from Radio Shack. You stare at it, you go research, you try to fix it, you fail. Talk to someone who knows stuff. Repeat until it works or you work on a new one.
It’s really no different than how I taught myself to fix a chain or replace a spoke. Or know to use WD-40 to clean, but then apply an oil to keep stuff lubricated and protected.
With the internet, it’s a lot easier. I can look up spec sheets just googling component markings and see the sample circuits.
I’ve stared at the Linux kernel a ton. I messed with some stuff. I couldn’t write a kernel myself, but I program better from doing it and I can troubleshoot things easier knowing the components and topology.
Off the top of my head, I can fumble around and make a crappy amplifier from parts in my closet, or write a crappy FAT-like file system. I’d probably struggle a bit with a nice new bike. I think gear shifters and stuff are a lot fancier than an old 10 speed.
Looks cool, but how is the Kubernetes support? One of the major reasons we use Docker Desktop at work is to host a local Kubernetes cluster with services deployed there. We also support Rancher Desktop since it uses k3s, and k3s is arguably a nicer Kubernetes distribution than the one set up by Docker Desktop.
With that said, I have recently tried OrbStack, and it is able to start up near instantly, while its Kubernetes takes at most 2 seconds to start. The UI is minimal, but it offers just enough to inspect containers, pods, services, logs, etc. It is also very lightweight on memory usage and battery. I personally cannot return to either Docker or Rancher Desktop after having tried OrbStack.
OrbStack also allows using Kubernetes service domains directly on the host. So no need to use kubectl port-forward, and applications running on the host can use identical configuration to what's inside the Kubernetes cluster.
The battery savings, dynamic memory usage, fast startup time, and QOL of OrbStack is pretty much my standard for a Docker Desktop alternative. I am not sure if container-desktop satisfies all of these requirements. (Rancher Desktop certainly doesn't)
I'm a fan of k3s, mostly because of Rancher Desktop, but there are more useful features, like a full k3s distribution within a single Docker container. It includes some nice QoL features, like pre-loading images from a mounted folder. Great for CI (see the sketch below).
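A sketch of the single-container setup being described (tag omitted for brevity; pin one in CI). k3s auto-imports any image tarballs it finds in its images directory at startup:

```sh
# A full k3s "cluster" in one container
docker run -d --name k3s --privileged \
  -p 6443:6443 \
  -v "$PWD/images:/var/lib/rancher/k3s/agent/images" \
  rancher/k3s server

# Grab the kubeconfig the server generated inside the container
docker cp k3s:/etc/rancher/k3s/k3s.yaml ./kubeconfig.yaml
```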
k0s is especially easy to deploy thanks to k0sctl, whether it's single node clusters, or multi node clusters. I haven't looked back ever since I started using it.
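For reference, a minimal k0sctl config sketch for a single-node cluster (address and key path are placeholders):

```yaml
# k0sctl.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: demo
spec:
  hosts:
    - role: single          # controller+worker on one machine
      ssh:
        address: 192.168.1.50
        user: root
        keyPath: ~/.ssh/id_ed25519
```

`k0sctl apply --config k0sctl.yaml` handles the rest; adding hosts to the list turns it into a multi-node cluster.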
I mostly wanted to provide a software/hardware playground for my advisors who were working on their own thesis about algorithms for energy-aware IoT edge deployments.
The TLDR is that you can write algorithms to minimize various parameters within a Kubernetes cluster, like energy consumption.
I've been using Rancher Desktop as an alternative to Docker Desktop (https://rancherdesktop.io/) on macOS and Windows; it's pretty solid.
It has some kinks to work out, but I got it working with IDEs too (e.g. the IntelliJ IDEA Docker Compose integration).
What I also like is that existing scripts etc. that use the docker-compose CLI work with Rancher Desktop too, as it uses nerdctl: https://github.com/containerd/nerdctl
Rancher Desktop is great, because Kubernetes just works. Not only that, you can `docker build` an image and then immediately spin it up as a Kubernetes pod, without spending ten minutes googling the correct commands to "load" the image.
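With the containerd backend, the equivalent workflow is roughly this (image name is illustrative) - building into the `k8s.io` namespace makes the image immediately visible to the kubelet, with no separate "load" step:

```sh
nerdctl --namespace k8s.io build -t myapp:dev .
kubectl run myapp --image=myapp:dev --image-pull-policy=Never
```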
We just completed the switch to Rancher where I work. 1200-ish engineers, mostly on Macs. So far it's worked out pretty well - fewer hiccups than I expected.
My last job we ran very significant public workloads on windows containers. I don’t know the number of requests but it’s a multi million user application all around the world.
Interesting; I may be biased because I've been involved in helping teams containerize as part of a cloud migration, and in only one or two cases has there been a real 'need' - basically for running a Windows service that was eventually retired in favour of a lambda triggered by consuming a message from a queue.
We were waaaaay too big to fit in lambda layers. Our containers were 8GB when I left, and that was using all sorts of tricks on the host infra to share data between running containers.
The root of the problem was we had third party tools which were windows only.
> unless you run osx on a Linux kernel, it will always be so
Linux is not the only OS that has container like things. FreeBSD had jails years earlier, Solaris had something else which I don't remember any more, and for all I know macOS may have their own native equivalent as well.
Bear in mind that Apple introduced an official hypervisor framework a few releases ago, so they could be doing something similar for containers. It wouldn't be a bad idea. :)
I would love to use it but I loathe subscriptions, especially for something I’d need work to pay for. I would happily pay a one-time $50-100 and get a perpetual license so I don’t have to deal with the headache…
Of course Orbstack is fast, it uses LXD, not actual VMs. In fact, Orbstack on Mac is what made me switch to LXD (Incus) on Linux to replace Docker and virt-manager.
> OrbStack uses a lightweight Linux virtual machine with a shared kernel to minimize overhead and save resources, similar to WSL 2 (Windows Subsystem for Linux).
The VM you just referred to is a virtual machine, that’s what VM stands for.
I think you forgot how this thread got started:
> If you’re on macOS, then Orbstack is a nice alternative to Docker Desktop
We’re talking about running OCI (“Docker compatible”) images. The page you just linked to makes it apparent that you are talking about something orthogonal: OrbStack’s “machines” feature (https://docs.orbstack.dev/machines/).
The original topic is that OrbStack’s support for Docker containers is fast (implied: faster than Docker for Desktop), which cannot be explained by the lack of a VM, as both use a Linux VM to run one or more Docker containers.
Colima offers the best experience as a Docker alternative. Lima offers the equivalent of WSL, where both docker and podman are supported. I like Lima a lot as I deal with both, but Colima rocks for simplicity. I think Colima + Container Desktop are a perfect replacement on Mac for traditional Docker Desktop users.
Is there anything you can actually _do_ with the Apple GPUs outside of macOS? I know the Asahi Linux person was working on a driver for it, but is it in a useful state?
Yes. In fact it's accelerated and supports OpenGL 4.6 while macOS tops at OpenGL 4.1, and really mostly only supports Metal nowadays. With Asahi you can use OpenGL and Vulkan.
There are two approaches to using compose w/ podman:
Replace docker-compose with podman-compose -- somewhat limited capabilities, but works in a lot of cases.
Use docker-compose against podman with podman's system service, which provides a docker-compatible API endpoint (https://docs.podman.io/en/v5.2.1/markdown/podman-system-serv...). This basically has full docker-compose capabilities, but you do need to run the socket service as a specific user account, which ends up running all the pods.
I found the most stable to be a third option: 'podman compose' with the docker-compose-v2 CLI as "backend", connecting to the actual podman socket. This happens if you run 'podman compose' with 'docker-compose' in PATH and DOCKER_HOST set to your podman socket, since 'podman compose' just shims through to whichever command it finds available.
Both podman-compose (the Python project) and docker-compose-v1 have significant gaps in the compose spec.
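A sketch of the socket approach on a systemd-based system, rootless, using the standard user socket path:

```sh
# Enable podman's Docker-compatible API socket for the current user
systemctl --user enable --now podman.socket

# Point compose (docker-compose v2 or `podman compose`) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d   # now talks to podman instead of dockerd
```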
Orthogonal rant: Podman allows host mounts during image build, whereas docker does not. Ran into a big headache where a monorepo using podman leveraged this to create container images from source and the equivalent docker implementation had to copy the monorepo into the docker build context every time.
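The feature in question is podman's build-time volume flag; a sketch (paths and tag are illustrative):

```sh
# Mount the monorepo read-only at build time instead of COPYing it into
# the build context; `docker build` has no equivalent of -v.
podman build -v "$PWD:/src:ro,Z" -t monorepo-img .
```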
We needed to use Docker for M1 support (probably should've tried Colima, etc).
After some initial pains with colima, I tend to agree. Mostly, just needing to specify some VZ args[0] so I could run x86_64 docker images on my M-series.
Is there something in these desktop UIs that colima is completely missing?
"some initial pains" = Colima VM running out of resources running kind, so I had to raise the CPU and RAM, and then raise the fd's in the VM itself to get it to work. but now it works!
Could this be the answer I needed to run an SQL Server image that refused to run on my M3 MBP? I was about to, sadly, try Docker Desktop, because of that.
Honest question, what’s wrong with docker desktop? Looking at all the alternatives suggested it’s not clear to me why any other tools are better? I’m not using k8s locally, just docker compose. To connect to our remote k8s cluster, I use IntelliJ k8s extension (I just need to do some basic dev tasks, I’m not administrating the cluster)
One big difference is the licensing. Docker Engine itself is apache licensed (and hence free to use at a company of any scale), but Docker Desktop requires a paid plan if your company has more than 250 employees or more than $10 million in annual revenue [0].
Which like, seems entirely fair, but when there are suitable enough replacements that cost $0, why pay for it? Sure there are big picture reasons, but companies often don't think that long-term.
I have a hard time thinking of cases where you need support or priority support for developer tooling like Docker. It’s not like Docker Desktop is running in production.
Docker Desktop requires a paid licence for companies with over 250 employees. While that's totally fair, it can add red tape if you want to use it in a project.
I'm not completely sure about licensing for Container Desktop but the footer suggests MIT license.
Unfortunately I got this error upon opening the Mac app:
```
Uncaught Exception:
TypeError: Cannot read properties of null (reading 'setImage')
    at NativeTheme.<anonymous> (file:///Applications/Container%20Desktop.app/Contents/Resources/app.asar/build/main-5.2.3.mjs:22:537771)
    at NativeTheme.emit (node:events:519:28)
```
Nothing seems to be wrong, but that was surprising.
Also, it's not obvious from the site that Container Desktop does not bundle Podman, unlike Docker Desktop (which bundles its engine). The analogy with the latter and the subtitle "Podman Desktop Companion" on the site made me think it might include a bundled Podman installation.
That said I do like the idea, and I'm definitely looking forward to trying it. For context, I'm not a Kubernetes user, mostly just Compose and plain `docker run` for ad-hoc things.
Thanks, just released 5.2.4 to address the flatpak issue you mention above. I am sorry for that; it is extremely hard to support so many formats on Linux.
I am reading up as much as I can to be able to publish to Flathub, but there is a lot to learn to do it properly.
Unfortunately, FYI, the ARM64 app is now rejected by macOS Sonoma on my M1, saying it's "damaged" and can't be opened. However the x86 version seems to work, presumably under Rosetta.
Unfortunately this is because I can't afford to digitally sign Mac apps. There is a trick to make it work in the USAGE.md, but it is up to you - 200 euros per year for a pro bono project is ridiculous. You can also easily build your own dmg file from the sources, as it is an open source project in the end.
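For reference, the commonly cited workaround for Gatekeeper's "damaged" message on unsigned apps (use at your own discretion, and check the project's USAGE.md for its own instructions) is clearing the quarantine attribute:

```sh
# Clear the quarantine attribute Gatekeeper checks on downloaded apps
xattr -cr "/Applications/Container Desktop.app"
```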
Every time I tried Colima it stopped working after a few days. Not just for me either. Back to Docker Desktop which never gave me a single issue in many years.
Tangentially related side question: how do you make NFS mounts work in podman without running it as root (which makes running podman over docker kind of pointless)? Or what do you use to share a base filesystem from somewhere else on the network to a container that isn't NFS or Samba?
I don't think that's quite it. I have NFS mounts defined in my compose files, i.e. in the container /media is a volume Docker creates from an NFS mount defined in the compose file (see the sketch below). That didn't work without podman having root, last I checked a few years ago.
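For context, this is the kind of compose-defined NFS volume being described (server address and export path are placeholders, the image is illustrative):

```yaml
services:
  app:
    image: alpine
    volumes:
      - media:/media
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,ro"
      device: ":/export/media"
```

Rootless podman cannot perform the kernel NFS mount itself, which is why this pattern has historically needed root.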
Creating some tickets would help improve what you find problematic. I understand your frustration - you would want it to just work, but life isn't always how we want it to be. It is a free and open source project, no hidden goals, driven only by passion and love for tech.
I like TUIs a lot too; they work great for remote connections and just feel good. I even mention one in the README of the repo. Maybe someone could create an "awesome TUI tools" type of page for container management UIs.
I don't know if your comment was intended to imply that Docker was against the Unix philosophy in some way (a debatable point, but not really one I share), or if you mean that the tools don't follow a lot of common Unix conventions.
When Docker was only a few years old, I did keep running into lots of small things which implied that the people developing docker in fact did NOT have a Unix (or even Linux) background. Things like source code files having the wrong type of newlines (or a mix of types), and forgetting to add a newline to the last line in a file. (A correct Unix text file has a newline at the end of _every_ line, even the last one.) There were of course more giveaways than this, I just remember the newline stuff irritating me the most.
Unix's first job in life was as a documentation processing system. It was made to be very good at dealing with text. All of the tools which process text expect every line to end in a newline. The last line is not exempt from this. All classic Unix text editors will automatically append a trailing newline to any text file you create with them. Some modern tools may be _tolerant_ of omitted trailing newlines, but you shouldn't rely on that. A text file should always have a newline as its last character. Otherwise, it's not _really_ a text file.
POSIX defines it more succinctly than I do: A text file contains one or more "lines" and every "line" is terminated by a "newline."
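A quick way to see the convention in action: line-oriented tools count terminating newlines, so a file missing the final newline literally has fewer "lines":

```sh
printf 'one\ntwo\n' | wc -l   # -> 2: both lines terminated, both counted
printf 'one\ntwo'   | wc -l   # -> 1: the unterminated "two" is not a line
```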
Some old tools had bugs where they'd read a line (up to the new line) and then process it, so if the last line didn't end with a new line they'd never do the processing. So a manual workaround for bugs became the convention.
No, that is not a bug and there was never such a thing as "manually" adding a trailing newline. All tools add newlines automatically where they should be.
Unix has ALWAYS defined a line of text as being terminated by a newline. The last line in the file is not an exception.
Author: It isn't - if you are a power user, the podman client should provide all you need. It also supports Docker. The difference is that this is a GUI, like Docker Desktop, but unlike it, batteries are not included. I don't dislike DD, but to each their own.
Author of the project here - yes! Electron done well is amazing; try to write a cross-platform GUI app in a RAD way these days with other tech. I am experienced with Flutter too and would still stick to the current stack. Maybe one day, when the text engine is better in Flutter, I would port it. You can't be that cross-platform, cross-OS, cross-arch with anything better than Electron these days. Electron done well shouldn't create any problems.
Personally I just build all my software so it includes its dependencies and then you don't need docker or any complex image manager. Don't rely on a bunch of crap being installed in the system path! Much much simpler this way imho.
True. But Docker comes with a lot of complexity. And it comes with a meaningful performance hit on macOS and Windows. And it doesn't work at all on Android/iOS.
It's so sad that running software on Linux is so wildly complicated and unreliable than things like Docker had to be invented. :(
WSL2 runs under the Windows hypervisor as a VM, but so does Windows itself since Windows 11. So there should not be many performance issues running stuff in Windows vs WSL2. The major bottleneck is if you need to move files between the Windows VM and the Linux VM.
Nobody is forcing you to use anything; I just wanted to underline that the performance hit you mentioned is not really there. As we are in a public forum, there is value in keeping things factual.
As for why to do it: if you develop server apps, Linux is the standard (as an example, Redis does not have a native Windows version), and I say this as a developer of Windows-based microservices on the cloud; my company is actively looking to migrate to Linux due to the lack of tooling in the Windows space (and also the licence cost of Windows Server). Like it or not, that is the way it goes. If you don't need it, great for you, but for the rest of us those layers are a lifesaver.
In most scenarios it is definitely good enough, but even in just my own personal experience over a decade, I need to asterisk all three of your listed benefits.
I think that's the right way to do it from the software distributor's side, but most software distributors don't do it like you.
So, from a consumer's point of view, if you want to use their software, then docker is the lesser evil compared to all the others. Notably, it's much better than binaries with dynamic libraries that don't come included in the bundle itself.
As a user, I'd rather use a container than figure out how to run a binary. The onboarding process is typically so much easier, and most enterprise folks already have container infrastructure in place. For big customers, getting a Kubernetes namespace can have significantly less friction than a VM these days.
It should never be more complicated than "run the binary". Running programs shouldn't require infrastructure or VMs or Docker images. Deploying a program should be, and can be, as simple as sharing a zip file, extracting, and running.
> better than binaries with dynamic libraries that don't come included in the bundle itself.
Binaries should always include the dynamic libraries they require. Docker is one way to include them. But you can also just include them the vanilla way. Works great! Very easy and reliable.
I'm sorry, but this doesn't work. Over the last 10 years or so I was fucked over by countless pieces of "software that includes all its dependencies" that stopped working when I upgraded some other totally irrelevant software, because "well duh, it obviously uses system libc" or whatever. Examples: critical .AppImage binaries stopping working after random system upgrades. Nothing that runs on my computer is ever fully isolated, not even Docker. So any isolation guarantee I get is a guarantee I'll take. You claim today that your software is isolated, but I don't know if 3 years down the road I'll upgrade my freaking text editor and your program will stop working because that one library from 1987 has to be exactly version A.X but my text editor upgraded it to A.Y. Thanks but no thanks.
> your program will stop working because that one library from 1987 has to be exactly version A.X but my text editor upgraded it to A.Y.
Perhaps you misunderstand. This issue is fully solved by including dependencies and not relying on anything in the system path. Programs should not touch the system path. If a program requires library A.Y then it should include and use A.Y. But it should not touch the system path and thus should not impact any other program. Nor will it be impacted by other programs wanting A.Z.
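On Linux, one vanilla way to do this is to ship the shared libraries next to the binary and point the loader at them with an `$ORIGIN`-relative rpath - a sketch, with `libfoo` as a stand-in dependency:

```sh
# Link against the vendored copy and bake in an $ORIGIN-relative rpath,
# so the loader looks in ./lib next to the binary, not the system paths.
gcc -o myapp main.c -Lvendor -lfoo -Wl,-rpath,'$ORIGIN/lib'

mkdir -p dist/lib
cp myapp dist/
cp vendor/libfoo.so dist/lib/
# dist/ can now be zipped and run anywhere with a compatible libc.
```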
It's often literally not possible to ship everything. You wouldn't want to spin up a second X11 (or Wayland) server, for example, because you can't have two of them talk to the same video card device at the same time usefully.
The number of things that can't be shipped is extremely small. And I don't think that Docker is a silver bullet for Wayland vs X11 issues? Although I'm not sure about the fine details as I don't have a ton of experience there. Shouldn't you be using an abstraction that can automatically support which ever is available?
I tend to ship code that needs to run on Linux + macOS + Windows + Android. So Docker is a total non-option. And it's totally fine! Very easy in fact.
It's the same thing everywhere — there are some dependencies you can't ship. On Linux, you can't ship the window server (because you need to share it with all of the other apps also running). On mac, you can't ship Core Foundation. On Windows, kernel32.dll etc. I assume Android is similar — I haven't tried figuring out what a purely static app on Android would be, since I think the bootstrap is Dalvik…
It's literally impossible to _not_ depend on the system path.
Let me rephrase. If a dependency can be bundled then it should be bundled.
The "Linux Way" is to depend on a bunch of random garbage pooped by lord knows bullshit script into one of several global search paths. This is bad, stupid, and wrong. Programs should include as many of their dependencies as is possible.
The number of dependencies that a program can not deploy and must assume are provided by the system are extremely minimal and special case. It's a short and static list.
In general no script or program should add libraries into the global search paths. On Windows user programs do not add random crap to System32. On Linux the existence of /usr/lib is an abomination that should not exist.
Is that better? I'm fairly certain you understand what I'm trying to say.
I know that the Linux way is not perfect, but I don't see why companies can't do better than the distro maintainers. Most repository packages are driven by some kind of build script; I don't expect it would be that hard to create one for your software for the most popular distros. Anyone using an obscure distro is familiar enough with Linux to use containers or chroot environments. I like the fact that my environment is a complete one, not silos where the developer is more than happy to let the software linger. At least macOS forces developers to upgrade; Microsoft's backward-compatibility promise keeps so much cruft around in the system.