>The first one that came to my mind was Gatus.
Gatus's readme heavily implies it's meant to be run inside Docker, not to mention built inside Docker. The "Deployment" section only lists Docker-based methods, points to an examples tree that only has Docker-based methods, and has a "Using in production" section that redirects to the "Deployment" section. They clearly intended for it to be run in a Docker, and thus Linux, environment.
>Here’s another one, it’s called statping
The default Makefile target builds Linux binaries, but even from the README one can see that it supports OSX, and within the Makefile one can see OSX-specific targets. Sure there's no FreeBSD target, but the software clearly wants to support more than Linux.
>The developers of the New World Order will assume, always, you are running Linux, as Ubuntu, and you always have Docker.
The developers of the "New World" write software for the OSes they use, same as the old world. Linux users write Linux software. FreeBSD users write FreeBSD software. Only when they cooperate on the same software does that software work on both OSes.
Unix is less portable than the author thinks. Source code usually isn't 100% compatible between any of the BSDs, let alone between FreeBSD and Linux. If I were the author of statping and was asked to support FreeBSD, I would require not just code and makefile support but also CI support to ensure its quality. Why should I be expected to do that on my own instead of a FreeBSD user contributing it?
Skimmed quickly through the article, but I think it's worth pointing out that:
> While tweeting with anger, Daniel pointed out that I should tell them kindly and it’ll work out. I’m sure it will. Let’s hope I can make it work first. I don’t like just opening issues. I’d rather send a patch directly.
Sending patches directly is a non-collaborative approach to open source. Issues are made for discussion, and PRs for resolutions; as a matter of fact, some projects state this explicitly, precisely so that maintainers' time isn't wasted on unproductive PRs.
Disliking issues implies (in a way) disliking collaboration. Maintainers who don't respond to issues are unlikely to respond to PRs/patches anyway.
EDIT: Extra context: the maintainers are very responsive to PRs (see history of closed ones¹); writing a rant instead of opening issues and/or PRs is... hm.
I can only speak for myself, but I feel that the easiest way for me to communicate my issue is to show how I would fix it. If I can't propose a fix, I don't feel like I know enough to raise an issue.
> (...) but I feel that the easiest way for me to communicate my issue is to show how I would fix it.
I don't agree. A PR does not automatically mean that the changeset is good or well thought-out, or matches the architecture. Refusing to discuss a problem or exchange ideas regarding potential changes also raises a red flag regarding the nature of said contribution.
Keep this in mind: just because you believe your patch scratches your itch, that doesn't mean your itch is the problem.
Moreover, it wasn't that long ago that the Linux project had to remove PRs from the repo because a team was caught abusing the PR process.
Sure, but then you just put the discussion in the PR. The point is that having the code immediately says what you want done, shows that it's possible, and probably includes a new failing test case that demonstrates the issue.
> Sure but then you just put the discussion in the PR.
The PR is nothing more than a request to discuss the changeset. The point is that trying to start a discussion by skipping over the problem itself and jumping straight to a changeset proposed without considering either the problem or the constraints can be, and very often is, a complete waste of time for all parties involved.
Let's frame it the other way around: what is there to gain by not getting up to speed on the design constraints by refusing to onboard or even engage with the maintainers?
> The PR is nothing more than a request to discuss the changeset.
This makes no sense. Why does the subject of a PR have to be a specific version of a specific changeset? Why can't the discussion be the subject, with the supplied code changes considered a supporting "attachment" to the discussion? PRs allow for commentary and discussions.
In fact, PRs let you do everything an "issue" lets you do, but not vice versa. You can't "attach" code changes to issues. Not without creating a PR anyway.
----
Let's not forget what GH-style "issues" and "pull requests" were supposed to emulate and replace: email threads. While GH's UI elements are better at most things than emails, they're not better at everything. Emails are a lot more flexible and powerful, even if not as "pretty". PRs? They can't even let me suggest code changes (when reviewing someone's PR) using my IDE/editor. Or even suggest a code change that's not contiguous. Diff-of-diffs is a concept that GH has barely understood, but one that's just obvious with plain emails.
> what is there to gain by not getting up to speed on the design constraints by refusing to onboard or even engage with the maintainers?
Usually my only engagement with maintainers is when their library doesn’t do what I want it to do for whatever reason and is in the form of a bug fix (or less commonly an added feature).
I don’t want to ‘onboard’, I just want to use the software, and since I spent the time figuring out why it was failing me I ‘pay it forward’ with what I found -- which is usually a diff of my changes.
Absolutely no desire to attend committee meetings to discuss the social ramifications of my changes or whatever they get up to these days…
Fully agree. Why waste words describing a code change rather than showing the change itself? I'd rather use my words for describing the 'why', thank you very much, but not the 'how'.
In fact, I do this even when reviewing code: a branch that I can pull from & push to is a lot more welcome than a word vomit I have to interpret and translate to (or from) code.
Submit a patch with behavior B with the intent of delivering behavior A. The maintainer accepts it on the premise that behavior B is a good change. Since it is self-documenting, it correctly documents the wrong thing. You've now made a patch that you don't understand.
> Yes well thats vaguely the shape of the point I was making
So, your point is that something is bad because it's /possible/ everyone involved with it could be terrible at their jobs? If that's your point, then I can't imagine why you'd try to make it, because it applies to nearly everything in the universe of "doing things".
Similar to your original comment of "You've now made a patch that you don't understand.", you've now made a point that you don't understand.
As long as you write a PR description explaining why the behavior you're fixing is a bug and how to reproduce it, and your PR includes tests if the project has tests.
> Sending patches directly is a non-collaborative approach to open source.
This is the way I’ve always done it -- find a bug, figure out a fix and shoot off a patch attached to a bug report. Or, software doesn’t do what I want, hack it into submission then send a PR with a patch.
Worked well so far. Sometimes they don’t like my code for whatever reason but that’s OK, I tried.
The only real exception was when I contributed on a regular basis to one of the larger FLOSS projects where I’d usually run my changes by the module owner to get their feedback before committing since it was just good practice. Not always though.
Not every bug needs a drum circle though. If there is an obvious fix (e.g. an uncaught exception) I love it when community members skip straight to a PR proposal.
Next to all the other points, Dockerfiles force you to explicitly list all your dependencies. Makefiles, especially for smaller projects, have a tendency to leave you Googling header names. That alone is a win for portability in my book.
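Something like this, for instance (a hypothetical project; the package names are just illustrative Debian ones):

    # Build stage: every build-time dependency is spelled out, no Googling headers.
    FROM debian:bullseye AS build
    RUN apt-get update && apt-get install -y --no-install-recommends \
            build-essential pkg-config libssl-dev zlib1g-dev
    WORKDIR /src
    COPY . .
    RUN make

    # Runtime stage: only what the binary actually needs.
    FROM debian:bullseye-slim
    COPY --from=build /src/myapp /usr/local/bin/myapp
    ENTRYPOINT ["myapp"]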
> Like Rubenerd said, I am thankful that the mainstream-ness of Linux helped other Unix systems as well, but monocultures are destroying what people have spent years to improve.
The rant leaves out all the important parts. I am curious to see what those reasons are beyond personal pet peeves, and whether they actually outweigh the benefits of standardization, which lets people worry about bigger issues.
So /usr/local could be fair game for whacking stuff into your firm's base Docker image that you then layer your firm's product applications on top of. It'd be your Unix engineering person putting stuff in /usr/local, not your app dev.
/opt would be a better fit if we're sticking to the FHS. Think about what you'd usually install in /opt: maybe your Veritas binaries. Maybe your AV solution. Maybe your ERP software product. Etc. Opaque binaries you got from a vendor.
That's a better match for an application product you're shipping - which is what you're doing with the second docker image (the bit starting "from scratch") in the article's example dockerfile.
That said, the FHS evolves based on conventions, and /app for Docker containers has become a convention. I wouldn't oppose just using /app, but I do oppose chucking everything into /usr/local - it's a misguided attempt to preserve convention without grasping the convention.
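To be concrete, the final stage could just as easily follow the /opt convention (a sketch with a made-up binary name, not the article's actual Dockerfile):

    # Build stage (hypothetical Go service)
    FROM golang:1.17 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/myservice .

    # Final image: a self-contained, vendor-style tree under /opt
    FROM scratch
    COPY --from=build /out/myservice /opt/myservice/bin/myservice
    ENTRYPOINT ["/opt/myservice/bin/myservice"]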
> /opt would be a better fit if we're sticking to the FHS.
I agree on this, it's a better fit. But I think /usr/local is not a bad fit either as it's fundamentally a "local" installation of a program.
For me it always felt like /opt mainly exists for programs which don't follow the split required by the rest of the file system (bin, include, lib (, lib32, lib64), local, sbin, share, src), and I personally would prefer it if /opt simply weren't a thing.
Either way, for a single-application container image, placing that specific application into /app seems fine tbh. Though sadly, as far as I can tell, it's mainly that way because it's shorter to type in a Dockerfile than repeating /opt/<package> all the time.
I have very mixed feelings about this article. If you want to run a non-standard operating system (all the BSDs combined have about 0.3% market share according to https://w3techs.com/technologies/details/os-bsd) then more power to you, but it should be unsurprising that most OSS developers prioritize bugs or features that impact more of their user base. I applaud the author for their "I'll port it myself" mindset though.
At the same time it does feel like there ought to be a more standardized way of converting packages between operating systems. I look forward to the blog post for next week where they detail how to get it all working.
The real problem is that the Docker monoculture discourages good dependency management, making packaging very difficult.
The first victim is security.
Linux distributions like Debian do stable releases and painstakingly backport security fixes to give you both feature stability and security improvements *at the same time*.
The cultures of Docker and of static linking make this increasingly difficult every day. The result is a world where you can only choose between:
- update to the latest release of an application or a library used in a build: Lose feature stability and get newly introduced security issues.
- keep an old container or fat binary around, with all the known security issues unfixed.
Needless to say, this is completely untenable in many high-risk environments: banking, health care, industrial automation, avionics, trains, cars, the military, and so on.
I don't disagree, per se, but how many Linux servers are out there running outdated software on a system that hasn't been patched in years because it's working and stops working when updated?
I love docker because I can treat the security of the system separately from the application. As long as I can update Docker with the system, I can keep the root OS secure and work individually on each container service. I don't run the risk of updating the server and causing the app to puke. If I update the app and it pukes, I can roll back to the original container while I resolve the problem (see the sketch below).
In addition, all my services are slightly more isolated from the underlying OS, which limits exposure if an inherent security flaw is leveraged by a hacker. I guess I don't see why there is so much hate for docker.
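For what it's worth, the rollback I mentioned above boils down to pinned image tags (made-up image names):

    # run the current known-good release
    docker run -d --name myapp registry.example.com/myapp:1.4.2

    # upgrade by replacing the container with a newer tag
    docker stop myapp && docker rm myapp
    docker run -d --name myapp registry.example.com/myapp:1.5.0

    # if 1.5.0 pukes, roll back the same way -- the host OS never changed
    docker stop myapp && docker rm myapp
    docker run -d --name myapp registry.example.com/myapp:1.4.2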
> but how many Linux servers are out there running outdated software on a system that hasn't been patched in years because it's working and stops working when updated?
By the millions. If you work in security and do assessments for large companies you can find plenty. Then if you add non-Linux systems it's orders of magnitude more.
> I can treat the security of the system separately from the application
> all my services are slightly more isolated from the underlying OS, which limits exposure if an inherent security flaw
Not at all. Docker has a huge attack surface compared to all the sandboxing systems that run rootless and without any daemon.
Also, sandboxes/containers provide limited security and isolation anyway compared to VMs.
Furthermore, sandboxes are not alternatives to VMs. E.g. a lot of OS services/daemons are sandboxed by default, and at the same time the whole OS is deployed in a dedicated VM.
Finally, tons of critical infrastructure runs on bare metal, with one service/application per device. Running a container there is a net negative for security.
This is the second anti-Docker (by extension, anti-container) post I've seen this morning (the first was on Medium, no surprise).
The OP's lack of familiarity with Docker, and desire to re-invent the wheel into a different shape, is not anyone else's problem but that of the OP.
There is a reason that the Docker-on-macOS or -Ubuntu monoculture exists: it works (darwin-arm64 aside) without a significant amount of effort, and is generally well-supported by the larger OSS community.
Personally, I'd love to find a batteries-included FreeBSD distribution that is as simple to install as Ubuntu. Not everyone is a true believer, so the barriers to entry should be minimal.
Linux isn't dead. Containers, as RedHat loves to say in their marketing material, _are_ Linux. What is dying is the common ground upon which Linux and other UNIX-like operating systems used to exist, but that's been on the horizon forever with systemd choosing to use new Linux features over portability.
You are always picking an ecosystem alongside your operating system. If you don't like how constrained it is, maybe you need to either work on improving it or jump ship.
Moreover, I'd argue that containers are responsible for an explosion of open source activity. In the span of half an hour I can spin up a half dozen different open source approaches to a problem and check out their implementations to see which is closest to the solution I am looking for, and then tweak and submit pull requests from there. I don't have to spend hours getting each one to compile and solve compatibility errors just to find out I don't like how it actually works.
When I blow away the unwanted containers there is no crap left on my system, no script changing my network behaviors or cron tasks. It's like it was never there. I have no idea how people can have a problem with a system like that. What a time to be alive.
For a regular developer like myself, Docker is awesome and it's way better than finding the right incantation deep down in shell scripts and Makefiles. It does seem like it solves problems we shouldn't be having anymore in 2021. But I guess that's the solution for decades of fragmentation and no attention for us non-wizards. Same thing with systemd.
> Of course it requires apt! Because not only we all run Linux, but we all run a specific distribution of Linux with a specific package manager.
At least this code actually spells out the packages it requires and makes it easier to look up the equivalent for yum or apk. I've struggled so much trying to find the right dependency to install before a supposedly simple ./configure, make, make install...
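For example, a typical apt line translates to Alpine mostly by renaming packages (these are the usual equivalents, but double-check against your base image's repositories):

    # Debian/Ubuntu base image
    RUN apt-get update && apt-get install -y build-essential libssl-dev zlib1g-dev

    # Alpine equivalent of the same dependency list
    RUN apk add --no-cache build-base openssl-dev zlib-dev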
> First of all, let’s talk about the fact that this Makefile is used as a… script. There’s no dependencies in the targets!
As a sidenote, I actually like this style of Makefiles. No need to assume execution permission, and quite frankly it is easier to parse visually than a shell script with if's, case's and getopt.
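For anyone who hasn't seen it, the style in question is basically phony targets used as named entry points, with no dependency tracking at all (a minimal sketch; recipes indented with tabs, as make requires):

    .PHONY: build test docker

    # no prerequisites, no dependency graph -- each target is just a tiny script
    build:
    	go build -o bin/app .

    test:
    	go test ./...

    docker:
    	docker build -t app:latest .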
I mean when people speak about docker what they mean is often Linux containers used in a certain way.
But wrt. the implementation of using such containers, there are by now multiple competing implementations, on different abstraction levels (e.g. podman as a docker CLI replacement). To me it looks like docker specifically is currently quietly and (very) slowly being replaced/superseded by these alternatives.
Then wrt. docker images there is also not a monoculture, though it can't be denied that a lot of images are based on just a small handful of base images (does anyone have any statistics about it?).
Either way, that "small handful" is still enough images to say it's not a monopoly, nor very close to one.
I guess the place closest enough to a monopoly is the Docker Hub? But that's easy to replace if necessary.
> A while back Rubenerd wrote that he’s not sure that UNIX won
IMHO UNIX didn't win; somewhat-UNIX-compatible systems did (like Linux), and UNIX was one of many stepping stones/tools/gears for them to achieve what they did achieve, and only that. Though yes, especially at the beginning it was an important gear.
I would even argue that UNIX never had a chance to win; it always was only one gear of many in systems with increasingly more gears, which became increasingly less dependent on that gear (though it's still quite useful, so no reason to remove it).
> Why is docker for Mac a thing and docker for FreeBSD would not be possible? My understanding is that it's just VMs. Don't they have VMs in FreeBSD
It's not. Docker uses Linux kernel features such as namespaces and cgroups, not virtualisation. The only way to run Docker on macOS or FreeBSD is through a Linux-based VM that provides those Linux kernel features.
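You can see how thin the layer is with util-linux alone -- this is the kind of kernel API Docker builds on, and it simply isn't there in the macOS or FreeBSD kernels (rough sketch, needs root):

    # start a shell in its own PID and mount namespace
    sudo unshare --pid --fork --mount-proc /bin/sh
    # inside it, `ps aux` shows only this shell and its children (the shell is PID 1)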
Docker is not married to Linux kernel features. Docker on Windows uses a Linux VM to run Linux containers but also has a Hyper-V backend to run Windows containers. Microsoft even ships a first-party-supported version of Docker called Moby for on-prem container engine installs (eg as part of Azure IoT Edge that runs Docker containers on IoT devices) that only runs Windows containers (with the Hyper-V backend).
As you said yourself, Windows containers are a whole different thing that runs only on Windows, and is only for Windows (you can't run a Windows container on Linux, for instance). Hence it's more like *BSD jails or Solaris Zones: something that just calls itself the same as Linux containers and stays CLI-compatible with Docker, but is not in any meaningful way related to actual Docker and Linux containers.
PS: why would anyone use Windows for IoT stuff? Seems like a complete waste of resources. And in general, Windows containers are very rare in the wild
I found porting an app to a Windows container pretty hard because of IIS issues. Unless you have a straightforward app it is not nice.
We will wait until we are on .NET 5+ and then run it in a Linux container.
FreeBSD has jails, which are equivalent to containers. Using docker in a vm isn't the best solution due to performance of things like file system access.
The author is assuming that everyone absolutely always must support their <1% market share OS. You've got the sources; hack it yourself, then send patches.
He is talking about open source projects, where people donate their time and their code for public use.
But that is not enough! It seems they are obligated to write portable code and support all OSes and distributions people happen to use.
I just wrote a small lib to talk to a service I use. I am on macOS and use Python 3.9. I suppose it can run on Linux or even Windows, but I haven't touched Windows or Linux in 10 years.
Instead of keeping it in my hard drive, I posted it on GitHub. Now people can use it if they want.
If some random guy appears and rants about what a dork I am for not making it work on FreeBSD out of the box, I'm pretty sure I'll be calling names.
I feel that's all I'm asking from anyone who solves a problem: just share your discovery and work so far to save me duplicating your work. If someone has taken the effort to interpret an API and it isn't completely accurate for my use case I'm going to say thank you and share any steps I needed to take to set it up myself. If no steps are taken I'll open a PR on the Readme highlighting that it has been tested working on my OS to minimize ambiguity. This is collaborative open source development.
> I suppose it can run on Linux or even Windows, but I haven't touched Windows or Linux in 10 years.
Same for me, but being on Linux. It's awesome that for certain kinds of applications, modern programming languages and frameworks provide you with an API which is, most of the time, OS-independent.
This. Supporting software costs time. If not a single person is going to use it on a platform other than the one I'm using or promise to support, that time isn't worth spending. Usually, everyone is free to contribute to OSS if they need extra features that aren't present.
You did it right, as for those requests, just reply you are open to contributions (if at all) and ignore or block everyone that feels entitled to get their stuff for nothing.
>The developers of the New World Order will assume, always, you are running Linux, as Ubuntu, and you always have Docker.
It sure doesn't seem like these projects are *NIX compatible. What happened to traditional bash scripts, chroot profiles and Makefiles?
It reminds me a lot of the systemd debate. I wonder where the old school sysadmins are to lecture us on the evils of writing everything for Docker and Ubuntu only, like the old Debian neckbeards did against systemd
> What happened to traditional bash scripts, chroot profiles and Makefiles?
They're gone, and it's not for the worst. Have you ever tried editing a 1000+ line Makefile or debugging an insanely large bash build script with all these assumptions baked in? Bonus if the shebang requests sh, but it actually needs whatever shell the author has.
Docker is just static linking taken to the max. It's so ubiquitous because it works well. Yes, these Dockerfiles could have been written better, but the equivalent shell files would have been worse.
I’m a big fan of containers, but we oughtn’t put Dockerfiles on a pedestal. They kind of work, but you have to shoehorn a dependency tree into a linear list of layers in a futile attempt to make build caching work. Moreover, they aren’t reproducible, as they tend to rely on arbitrary network calls. To a lesser extent, the semantics are also “imperative commands” rather than “packages to be installed”, which is the more natural way to think of composing an image.
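The usual workaround for the caching problem is to hand-order the layers so the slow, rarely-changing steps come first (a Go-flavoured sketch, nothing project-specific):

    FROM golang:1.17 AS build
    WORKDIR /src
    # copy only the dependency manifests first, so this expensive layer
    # stays cached until go.mod/go.sum change...
    COPY go.mod go.sum ./
    RUN go mod download
    # ...then copy the sources; editing any file only invalidates from here down
    COPY . .
    RUN go build -o /out/app .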
I’m of the impression that the underlying BuildKit exposes the requisite dependency-tree structure, but it hasn’t found its way downstream into the Dockerfile representation (to my knowledge, anyway). There’s also Nix, which supports building arbitrary images with an elegant conceptual approach, but everything about Nix in practice seems unworkable for professional software development.
A standard optimization for reducing overhead is to use Alpine-based images.
Similarly, quite a lot of the official Docker images people base their images on are Debian-based, not Ubuntu-based.
And a lot of Linux server work in certain sectors is based around Red Hat.
Similarly, Docker is being replaced by podman in a lot of areas (though I guess for this discussion we could treat them as the same).
Lastly, outside of the server space (and ignoring embedded Linux), SteamOS (Arch-based) might have interesting effects in the future (due to the Steam Deck, which IMHO is very promising).
So I'm not really seeing everything written for `Ubuntu` only, though a lot of things will target Docker with whatever they like as a base.
Though then we would need to consider what we mean by "everything".
I mean, the situation around games, multiple server scenarios, embedded Linux and multiple desktop Linux scenarios differs. Sometimes quite massively.
E.g. Linux gaming is/will be eaten up by Proton, independent of whether it's Proton on SteamOS (an Arch derivative) or Proton on Ubuntu. Similarly, in a university context the large majority of people I met who ran Linux ran Ubuntu. But outside of a university context it was pretty much the opposite, and no one I met ran Ubuntu (anymore; many did in the past, e.g. at university, but switched and stayed away from it since then).
E.g. if the Steam Deck is not just successful but also stays successful, then in a few years more "desktop" Linux systems might run SteamOS (Arch derivative) than Ubuntu (partially because the university Ubuntu share seems to be getting eaten up by Windows and its subsystem to some degree).
Steam games that target Linux usually only target Ubuntu. They work on other distros because Steam bundles a ton of libs internally so that the games use those instead of the host OS's. It's essentially the same situation as if each game were a Docker container with an Ubuntu base image.
Also, given that the few host OS dependencies Steam needs are 32-bit libraries, I find it better not to install 32-bit packages and pollute my OS, and to just run Steam in an Ubuntu container anyway.
If you don't want portability, design exclusively for Docker and specific Linux distributions. Otherwise, see what happens when compiling on other OS families like BSD and Windows. Popular tools are broadly available on many different platforms. Why? Because of portability, keeping the number of dependencies as low as possible, and being optimized to be compiled by numerous compiler vendors. Sticking to open standards and time-proven RFCs improves the projected lifetime of tools like rsync, curl and ssh, to name some examples. Projects going for narrow audiences and assuming the availability of Docker, systemd and dbus limit the users of the software or tool at hand. And what's going on with the number of name resolvers on Linux? These truly make simple things more opaque and ever more confusing when compared to the origin of it all, the UNIX and BSD operating systems.
Microsoft's 'love' for Ubuntu will without doubt only benefit Microsoft in the long term. The telemetry and the constantly changing narrative on how things must go forward within Microsoft's vision will end up killing compatibility and the users' power to choose another platform and bring with them the applications they like to run. This freedom is not available when all software assumes one very specific type of MS-blessed Linux/win32 honeypot. The telemetry is the most important feature; all else comes second for Microsoft, Google and Amazon. Are you enjoying VSCode and GitHub now? Skip to 2030 and you will absolutely hate it. Freedoms will be taken away, code will be assimilated and terminated; see the YouTube downloader saga as an example of things to come.
I disagree. At most the author complains about packaging, and singles out a specific Linux distro as being very popular as a base image for Docker containers. For some reason the same author ignores the fact that Alpine-based images are highly favoured as base images because they allow the same software to be packaged and distributed without requiring hundreds of MB of bloat.
Also, it's irrelevant which base image was used to package an application. All that matters is whether running '$ docker run <image>' is enough to run the app. The answer to that question is an unequivocal "yes" regardless of which Linux distro you're using, and that's the exact opposite of Linux being dead. In fact, Linux is the OS of choice for deploying software, and that isn't really compatible with the notion of being dead.
Docker ecosystem itself is just pure bloat. I already have all the tools to build and use thirdparty software in my host distro. Why should I need another [untrusted] copy.
> Docker ecosystem itself is just pure bloat. I already have all the tools to build and use thirdparty software in my host distro. Why should I need another [untrusted] copy.
I don't find this argument reasonable at all. Just because someone uploaded their half-baked package into a registry, that does not mean you're expected to skip packaging and maintaining your own application. In fact, it's crazy to base your production system on whatever package any random person made available to the world.
More likely, you're exposing yourself to more random people when using Docker, because you may now be running Ubuntu on top of an Arch Linux host, or whatever.
Dockerfiles are easy to edit and validate. Building is just as easy. A container registry saves some time in terms of spinning up a new instance but if you've got a vital service running in a container it makes sense to take the time to validate any updates and test any changes before replacing the old one.