I am arguing about this on FreeBSD forums - from an end user perspective. I think the benefits of saving disk space by sharing libraries do not justify the inconvenience we desktop BSD and Linux users suffer from being forced to disrupt and upgrade hundreds of installed software packages, just because one single desired upgrade requires pulling in new dependencies. The whole ecosystem falls down like a house of cards - once in a while I have to say "screw it" and auto-update more than a thousand packages, praying my Python and other projects survive, just because I want a browser update or a security fix.
I wonder if there are others who share this opinion that desktop Unix has a very complicated future unless complex apps begin to bundle their own libraries.
> I think the benefits of saving disk space by sharing libraries do not justify the inconvenience we desktop BSD and Linux users suffer from being forced to disrupt and upgrade hundreds of installed software packages, just because one single desired upgrade requires pulling in new dependencies
It's funny, from your point of view having a centralized repository with a (usually) single (usually) latest version of a library is a bad thing that may be (and probably isn't) justified by the goal of saving space.
From my point of view having a centralized repository with a (usually) single (usually) latest version of a library is an awesome thing that I would leave any other ecosystem to get, and the space savings is just a bonus that doesn't much matter.
Most dependency maintainers don't provide updates for more than a few versions of their software. When one piece of software depends on -latest and another piece of software depends on -legacy, you can ship both with the central repository model. In the Linux distributions I've used this is a solved problem. Arch Linux has five different versions of the JRE that are separately installable.
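As a rough sketch of what that looks like in practice (package and environment names are from memory and may differ on a current system), the versions coexist and you pick the default explicitly:

    # Install two JRE generations side by side.
    sudo pacman -S jre8-openjdk jre11-openjdk

    # List every Java environment that is installed...
    archlinux-java status

    # ...and choose which one /usr/bin/java should point at.
    sudo archlinux-java set java-11-openjdk/jre

Anything that needs a specific version can still invoke it directly from under /usr/lib/jvm/.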
> From my point of view having a centralized repository with a (usually) single (usually) latest version of a library is an awesome thing
This reasoning assumes that future versions of software are always better. This is certainly not the case. I would like a working program not to spontaneously break because a third party "upgrades" something, thus introducing a new bug. I am willing to give up anything else to be assured that working programs continue to work no matter what.
I suppose I should clarify by "latest" that I mean a version of a piece of software that is actively supported and has all the security patches. Like it or not, the ecosystem is currently built around the assumption that you do want to have this, that nobody should still be running e.g. Firefox 34. The question is what the best way of shipping it is.
The conversation in the two posts you replied to isn't about whether you always want to be on the bleeding edge all the time, it's about whether it makes sense to have a centralized repository that contains the latest version on some supported channel, which isn't necessarily the same thing as the latest -nightly. I take the position that it is; that shipping a (potentially) different version of a library with every application is a bad approach which is often less secure and harder to maintain.
But didn't the OP show exactly a situation where this system broke down?
There is a new version of the Rust library. In theory, either anything that supports it should be updated selectively (i.e. Firefox) or all consumers should be updated in lockstep. Yet in practice the maintainer decided that neither option was feasible, so nothing was updated at all. What went wrong here?
What they should do is have a centralized repository of packages but allow multiple versions that function independently. That way nothing breaks when one program installs a newer version.
Sometimes it feels like I am living in an alternate reality. After I started using Arch Linux as my main OS I was fully prepared for it to break every few months. But it's been more than 2 years (maybe even 3 or 4?) and not one breakage. Also keep in mind that I update it every single day. From time to time I check the Arch Linux news site to see if anything needs manual intervention. So far I haven't needed to do anything.
> Also keep in mind that I update it every single day.
I can't remember what I was looking at, but within the past week I ran across a comment that said Arch Linux only supports constant upgrades such as that, and any delays that result in skipping a version are what risk causing breakages. The commenter was very surprised that they were even thinking about supporting a version jump on whatever the thread was about (pretty sure it was somewhere here on HN).
can confirm, went to arch linux due to constant breakage under Debian & Ubuntu, it's been so much more pleasant since then. No more everything exploding left and right as soon as I need a newer GCC or libav version.
Changing the system default GCC version, like it seems you did, is not good practice in environments with conservative package policies.
There's also no need to do that, since nobody prevents users from installing newer versions alongside old ones, and invoking them directly.
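For example (version numbers here are just illustrative; what's available depends on the release):

    # Leave the distro's default gcc alone and add a newer one next to it.
    sudo apt install gcc-10 g++-10

    # Call the newer compiler explicitly instead of changing the default.
    gcc-10 -O2 -o hello hello.c

    # Or point a build at it per invocation.
    make CC=gcc-10 CXX=g++-10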
Even setting aside the fact that different GCC versions can coexist, complaining about this breakage in absolute terms doesn't make any sense. Releases that change the default compiler version inevitably cause more breakage with packages that haven't been updated to be compatible with newer GCC versions. So ultimately it's a matter of choosing the distribution with the appropriate release model, not a matter of Ubuntu/Debian breaking stuff.
Libav has also been deprecated in Ubuntu long ago, so it's not clear what you refer to. If you happen to refer to the transition from and back to ffmpeg, that's very old history.
> Libav has also been deprecated in Ubuntu long ago, so it's not clear what you refer to
I'm referring to the libav* libraries which are part of the ffmpeg project (not the horribly-named libav fork) - and external Debian repos providing updated versions of those (due to better codec support in media players, etc.) such as debian-multimedia
That's funny, because I've been running Debian Sid for 15 years as my main OS doing weekly updates, and the only two cases of breakage I've seen were the glibc6 transition (which was announced and expected) and proprietary video card drivers. You must be thinking of Ubuntu specifically.
To keep the anecdotes going, I've been using a mix of Debian and Ubuntu, both stable/LTS and testing/biannual, for pretty much exactly the same 15 years as you've been running Sid and I've never had any breakage that wasn't caused by me fucking around with binary drivers.
ndiswrapper was the main cause back in the day; shockingly, giving Windows drivers access to the Linux kernel can cause problems. The most recent time was when VDPAU was new and I was trying to get HD video playback working on a mini-PC with an nVidia Ion GPU by running a version of the nVidia driver much newer than Ubuntu packaged. Now that I think about it, that must have been around a full decade ago.
I'm not old enough to be able to run a Linux distribution for 15 years. And I also never run Debian on any of my personal computers. Thus, my anecdotes are definitely way less convincing. In other words, you can stop reading here.
The only two major distributions that I used for a sufficient amount of time are Ubuntu and Arch. Arch "unstableness" is exactly what I want most of the time on my personal computer as it's my go-to Petri dish. "Stability" would mean that it's harder for me to break it apart, and make a Frankenstein out of it. That's exactly what I have been experiencing with Ubuntu LTS releases --- stability.
Most of the time, I want to have all the available LLVM versions alongside all the GCC versions, with all the available binutils (QEMU, Docker, Oracle VBox, etc.) versions on the latest kernel full of my monkey-patched printk's. When I finally break its back I dive into the wiki for a few hours to restore it.
I can imagine a non-office, hacking desktop OS that follows the Arch packaging strategy being highly successful.
I also maintain a few compute servers for 10-20 people. They are on Ubuntu LTS. The packages that I need there are always the ones that just work and don't let anyone do anything "cutting edge".
Arch is rolling release and you take the good with the bad. The ones who try to defend arch as some paragon of stability miss the point that Arch's model is inherently unstable, but it comes with other benefits.
I'm the one who kicked off this entire conversation pointing out that arch is unstable, and it cracks me up watching silly people scramble to try and defend Arch as being some paragon of stability.
Aside from patching the kernel I have done everything GP said. Just because the Arch wiki says it is unstable doesn't mean it always is. It just means Arch Linux can do breaking changes (systemd) without worrying about backwards compatibility. And FYI I wasn't defending Arch Linux. It just seems strange to me that everyone is having instability problems and I can't even reproduce it.
I also agree that you shouldn't run your production database on Arch Linux. It isn't made for workloads like that. But personally I find maintaining Arch Linux + "custom packages" (with AUR) easier than Debian + "latest packages" + "custom packages".
That is only if you use the Debian-specific definition of "stable", which is "does not change". The rest of the world thinks of "fewer bugs" when they think of stable software.
yes, because randomly declaring the other person as using a different definition somehow adds to the conversation and changes their point.
Back here in reality, rolling release is less stable because more bugs in the software get through. And this is a reasonable expectation, not some magical fairyland where bugs never get written and being right up against the dev branch is as stable as being on the stable branch.
> Back here in reality, rolling release is less stable because more bugs in the software get through.
We really live in two different software worlds. For every piece of software I use, the number of bugs is a purely decreasing function of time, especially in the "main" paths and use cases.
If that were true, there wouldn't be bugs in the first place, because they would never have been written. The very fact that the bugs got written implies new bugs can, and will, be introduced.
I've been running Debian testing since 4.0 beta and have never had an unbootable system or anything break due to normal updates. I've only reinstalled the system once, to migrate it to 64-bit, since there was no proven procedure to migrate a running 32-bit system to 64-bit.
There were rough patches along the way, but all distros were having the same problems in some way or another (vdpau, fglrx, multi-GPU support, ndiswrapper & wireless stuff, etc.), and it's been a set-it-and-forget-it affair for a very, very long time.
No, I only ran Ubuntu a few times. My worst breakages were on debian (generally testing). I remember the mysql packages completely killing my apt as well as grub updates wreaking havoc as two individual examples.
That said, just because you had good luck doesn't mean that it's stable.
Here's the most trivial way I can think to explain this:
Check out Arch News.[0] Ctrl-F (Find) 'manual intervention'.
Six years of results on the first page; 13 instances of 'manual intervention required'.
Reliability != stability. Stability usually implies a platform one can use and develop for without expecting frequent major changes, if ever.
As for the anecdote of using the [Testing] repositories, a user's experience with such things really depends on their use of new and currently developing software.
A simple computer with simple peripherals that is used to run emacs all day isn't likely to be broken by the Testing repositories.
Personally, my anecdote: the second someone starts using Arch for something new-fringe (hi-dpi, touch, tablets, SPDIF, SLI, NVRAM, new window managers, new X-windows replacements, prototype schedulers, filesystems, or kernels, PCI pass-through, exotic RAID configurations..) the [Testing] repository is an act of masochism. It's only a matter of time before something drops out.
I found that my best bet was chicken sacrifice and Opus Dei-style self-flagellation before each [Testing] 'pacman -Syu'. At least then I had a 50/50 chance of the next boot. But, granted, I use a lot of weird or fringe hardware.
All that said: other distributions don't have better [Testing] repositories. It's just that [Testing] is for.. testing. It's unstable by its very nature.
I used to run Arch, I don't anymore although not for stability reasons.
13 instances of "manual intervention required" over 6 years seems awesome.
I think I did 2 manual interventions during my time using Archlinux, and each time it took maybe 2 minutes, it was just a matter of copy-pasting the commands in Arch News.
That's not awesome at all, it means something happening every 6 months. In corporate environments this means all the ceremony around it: tickets, CAB, etc.
Meanwhile you can use RHEL or CentOS and basically leave the thing alone for 5 years.
RHEL and CentOS are not targeting the same users as Arch.
Distros targeting the same desktop needs are Ubuntu, Mint, Debian-flavors, Fedora, Gentoo, ...
In a corporate environment, for critical software you would get a contract with on-call duty and you would only have a limited choice of supported distros.
I probably wouldn't recommend Arch for corporate environments, but at this point I feel I should point out I wouldn't recommend Linux either if you're working in a windows shop.
I don't know about your experience, but if at most 13 little manual interventions in 6 years were the only problem of most "stable" distros, I would have never switched to Arch. I've had various non-rolling distros fall apart on all sides within 6 months.
With Arch I've had one manual intervention (aka one copy-pasted command) in over a year and one minor issue where I needed to restart a service. That's amazing and something I'll gladly take in exchange for painless and straightforward setup of pretty much anything. Not to mention that with regular backups (which should be done either way) and delaying updates in critical time periods you can easily minimize the risk.
I ran a stable Arch system for 4 years with only one breaking change introduced. Everything else was minor, with clear instructions on how to solve it on either the main page or the forums.
That's awfully nice for you. But my experience was horrible every single time I've tried Arch. The main problem is, every time you choose to update you have to brace yourself for possibly wasting another hour to repair the subtle breakage the new package version of some minor piece of software introduced.
I had Arch on a laptop I used for my freelance business once and took notes of the time spent on maintenance. So I actually have the numbers and know how much money I wasted compared to the time I used an Ubuntu LTS. Guess how many hours I wasted on repairing breakage on Ubuntu LTS. Exactly zero.
How long ago was your experience? That might be a factor. I have been using Arch for a long time, and if my memory serves, stability was a bigger issue years ago than it is now. These days, I have found Arch to be really solid.
That might be a factor, yes. Last time I tried Arch was around 2015.
However, I come from a business perspective here. You really don't need an operating system that introduces new versions on a whim in the middle of operations. There's a reason businesses run RHEL/Centos or macOS and whatever has LTS in Windows land: cost of maintenance and reliably wide windows of no possible breakage.
As a business, it's kind of a dodgy position to be at relying on "yeah, randomly upgrading versions worked for me so far. Fingers crossed, lol".
So I'm interested in what people use their Arch Linux for and what didn't work for them on any other stable platform. My personal observations point towards the Arch Linux users being the tinkerers who like to use cutting edge and hose their Debians trying to wedge some newer version into it. While the ones that ran screaming away from Arch pretty much never had the desire to change the underlying system and were happy with a security backport now and then.
You don't want your PHP version to endlessly get updated during your operation when you build your app at a specific version.
It would mean not only you have to take care of distro update gotchas but you have to go through all the breaking changes the language introduces.
Stick with LTS. Rolling is for enthusiasts, though Arch has put decent effort into their wiki; I did find errors on minor pages that didn't work as written.
> You don't want your PHP version to endlessly get updated during your operation when you build your app at a specific version.
I think we'll figure out how to upgrade in the 2-3 years of support PHP has for each version. Trying to compare this to arch's rolling release style is bananas.
100% my experience as well, down to the freelance. I made another comment pointing out that the ones who think it's just a minor issue with Arch get paid whether they're getting meaningful work done or not.
> It is the user who is ultimately responsible for the stability of their own rolling release system. The user decides when to upgrade, and merges necessary changes when required. If the user reaches out to the community for help, it is often provided in a timely manner. The difference between Arch and other distributions in this regard is that Arch is truly a 'do-it-yourself' distribution; __complaints of breakage are misguided and unproductive, since upstream changes are not the responsibility of Arch devs__.
Even their wiki is very clear to inform you that you're at the whims of the package upgrades.
But I also want to point out that "it hasn't broke in a year" is cute, but I ran arch for probably 10+ years until I finally got tired of it and moved to Ubuntu. The last straw was me sitting down to get paying work done and spending half my day trying to recover my work environment. In the 2-3 years since I moved to Ubuntu I've never once sat down at my PC and had something stop working that was working before.
This defense of your favorite distro is especially silly when you think about it logically. Of course a rolling release system is going to have more breakages. The sensible response isn't "it never breaks!", but is instead "that's the nature of rolling release, you opt into it when you choose Arch".
This is funny, I could tell exactly the same story the other way around. I came as a 10+ year Debian user to Archlinux because it did too much automagic under the hood that broke and took a lot of time to fix. No breakage on arch because no automagic behind your back.
For a long time before using arch I thought too that rolling release might be more unstable, but I have come to the conclusion that quite the opposite might be true.
> I came as a 10+ year Debian user to Archlinux because it did too much automagic under the hood that broke and took a lot of time to fix.
This assertion makes no sense. Debian Stable has always been the paragon of stability, to the point that it's actually criticised for it. The Debian Testing release is even famous for being more solid than other distros' stable releases.
Breaking changes in Debian are practically only remotely possible in Debian Unstable or if you purposely install packages from backports or PPA repositories that have no assurance of stability or even maintenance.
The problem isn't so much that Debian packages or the ecosystem "break" as such, but that you run into bugs (sometimes already-fixed ones), and then you're basically stuck with them unless you, the maintainer, or someone else decides to backport the fix, and fixes frequently don't get backported, especially not for "minor" bugs.
Well, yeah, but that's bypassing the package manager. You can say that in response to pretty much every argument about "I don't like package manager X because Y".
debhelper is a library of tools for building packages. Debian users should not need to be aware debhelper exists, or even have it installed.
Maybe you meant debconf? When I was a DD it was an RC bug if your package used debconf to do "magic", i.e. using it as a registry for an application, or not properly re-seeding answers from the configuration and assuming the debconf db was the source of truth.
Huh, when does anything break in Ubuntu LTS if you don't do a dist-upgrade? There's nothing to break if nothing changes. And the minor version upgrades along the years have not broken anything on any of my installs. Arch on the other hand ...
And that is precisely what you should not do. The dist-upgrade is something you can plan for and set time aside for. The risky rolling mini-upgrade every couple of weeks you can not. That is a huge difference!
And then I sit with a hopelessly old system I dare not to touch because I dread the day I have to do that kernel update or that one update that pulls in a new libc, leading to an upgrade orgy. I have seen that plenty of times.
This nonsense is why stable and LTS releases were introduced in the first place.
If I'm faced with a major upgrade of an LTS system, I usually choose to reinstall from scratch, shedding packages I haven't needed for a long time with it. But that happens every couple years or so.
If you're okay with hitting the forums now and then to find out why your desktop has been behaving oddly the last two weeks, rolling is your thing. You won't have to break in your new dist-upgraded or freshly installed system for like two weeks.
If you don't want to play lottery every time you hit enter for that update or if you're a business where you can't afford possibly breaking all your laptops for some security update, stable is your only option.
> If I'm faced with a major upgrade of an LTS system, I usually choose to reinstall from scratch, shedding packages I haven't needed for a long time with it. But that happens every couple years or so.
This goes back to my original point, which we've drifted a little from.
Ubuntu has more unexpected behavior when upgrading than Debian.
A working dist-upgrade should not be something that your OS struggles to provide. Asking people to reinstall their OS isn't an acceptable answer.
> This argument that you MUST dist-upgrade as soon as the next version is available, and completely ignore the entire idea around LTS, is silly.
This idea that you never want to upgrade from one LTS to the next is equally ridiculous. Upgrading the operating system is part of the package manager's job.
this reply 100% mirrors my thoughts on the matter, all the way down to preferring re-installs over dist-upgrades.
I like Linux, I don't dislike dealing with the OS. But I want to deal with it on my time. Expecting to get work done and then realizing something is broken and being forced to deal with it is a completely different proposition from picking out a weekend to reinstall your work PC with the newest LTS version.
The first time I tried Arch, I installed it, did a system update, and libc completely broke and my system was unusable. I went back to Gentoo.
I went through some hell when I ran Gentoo unstable(~) but for the past several years I've been on Gentoo stable and have very few issues. I've even used it on work laptops at three different companies.
It is definitely more of a do-it-yourself distro than Arch for sure, but I enjoy working with it.
I ran Gentoo until shortly after Daniel Robbins left the project. It just started to have problems. A good example is that installing KDE would break to the point of needing to throw everything away unless you installed a few other packages in a very specific order.
That never seemed acceptable to me and I ended up moving over to Arch Linux for a good 10+ years before the instability of it finally got to me and I moved to Ubuntu.
Having said all of that, I'm glad to hear that Gentoo is stable for you. I really enjoyed my time with Gentoo and I got the feeling that after he left there were mistakes made in terms of everything working reasonably.
Your description of Arch reminds me of an experience I had with linode and arch years back. Due to an internal hardware issue, they basically lost my server. When I went to recreate, I discovered that you couldn't just update the world because it would immediately result in a broken system. You instead had to reconfigure a few things before updating the world.
I no longer use Linode.
At some point the vendor has to take responsibility. That's a large part of why I like Ubuntu, they do. It works every day without me having to worry about it.
I used to run Gentoo. At one point, trying to resolve dependency conflicts, and not knowing what I was trying to do, I uninstalled libc. Obviously, my system immediately became unusable.
I actually appreciate that Gentoo is set up to allow me to run that command if I want to.
But I did move off of Gentoo due to constant dependency conflicts. They took forever to deal with and they were pretty well guaranteed to happen whenever you didn't hold to a strict and frequent update schedule.
> Of course a rolling release system is going to have more breakages.
Well, no. The point of a rolling release system is that you upgrade by little increments, so you never have a moment when a 'major upgrade' happens and breaks your system or forces a reinstall.
Obviously this is true for "Arch Linux users", but that is a self-selected set of individuals with correspondingly biased circumstances/behaviors etc... I wonder if your statement still holds when we change the domain to people who have used Arch Linux at some point in the past?
As I understand it, Arch Linux is like having a pet. It requires constant care and feeding to keep it alive but can be very rewarding (allegedly)
You misunderstand it, then. Arch is stable enough that (unless you're doing stupid things, like grabbing half of your system from the AUR) there will only be a breaking change every three years or so. Just set a cron job to update it for you and you have nothing to worry about. It was literally created for lazy system administrators, and it's only gotten better for them as time goes on.
I don't think that's true at all. If you use Ubuntu, broken packages are ultimately your responsibility unless you're paying for support from Canonical. In both Ubuntu and Arch Linux, changes will be handled carefully by the maintainers to try to avoid breakage where at all possible. The difference is basically that on Arch you're supposed to be able to handle basic sysadmin tasks yourself, which means that Arch Linux will ship breaking changes (with a mailing list warning) that require you to run a command or two yourself. Even with that qualifier, this is uncommon.
> on Arch you're supposed to be able to handle basic sysadmin tasks yourself
And the reason for this is because you realize the latest update killed virtualbox and you require it for your work so you need to be able to downgrade and pin the vbox application to a specific version.
Meanwhile, Ubuntu catches the problem and refuses to update the package until it's fixed. Therefore I never even realize there was a breaking update to vbox somewhere.
The difference being that Arch tells me when to tinker because it blew up again, Ubuntu allows me to choose when to tinker because it doesn't.
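For reference, the downgrade-and-pin itself is short, assuming the older package is still in pacman's cache (the file name below is made up):

    # Reinstall the last known-good build straight from pacman's cache.
    sudo pacman -U /var/cache/pacman/pkg/virtualbox-6.0.14-1-x86_64.pkg.tar.xz

    # Then hold it back in /etc/pacman.conf so -Syu doesn't pull the broken
    # version in again:
    #   IgnorePkg = virtualbox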
> and your view is clearly against the majority of Arch Linux users.
I just gave a link from their FAQ that clearly states the system will break when packages break and they aren't responsible for it. As another user pointed out, you can go do a search for 'manual intervention' and find years of results on their front page.
Arch Linux users are ok with having their system randomly stop working. I used to be ok with it, until the umpteenth day that I lost half my working day getting the system back up and running.
Unless Arch Linux broke for you because they ship multiple versions of some packages, that's not really relevant. This is a thing all the popular Linux distributions do as far as I know.
Also, personally I've stuck with Arch Linux for 6-7 years now because it's the first distribution I found that consistently works for me, so I don't know that anecdotes will get us very far either way.
If we're honest: Arch is annoying, but the reason people (including me) run it is because they don't want their OS to explode in a shitstorm every time a 'major upgrade' cycle happens. (Looking at you, Ubuntu.)
So, your comment is 180 degrees off the mark. Arch exists for people who don't want breakages to happen.
(On the other hand, if you're conditioned by Windows to reinstall the system every year, then Ubuntu might work fine for you.)
> Arch exists for people who don't want breakages to happen.
Or, for people who don't want breakages but also want/need current software versions.
Debian-based stuff can be pretty nice and stable as well, but on Arch installing and setting up anything usually "just works" without missing dependencies, renamed packages etc.
The difference is that I know the dist-upgrade is coming and can plan for it. With Arch it's on that one day that I really needed to get work done, but Arch decided it wasn't happening.
Strange, the story has been quite the opposite for me. Since I moved to Arch Linux I've only had one minor breakage (bluetooth) that was fixed in a day, whereas in Ubuntu I cannot count how many times an update or a PPA broke something.
It depends on whether your Ubuntu is LTS or not. I'd say LTS Ubuntu > Arch > non-LTS Ubuntu, as Arch tries to be stable on its only version but Ubuntu can be more aggressive between LTS releases.
Oh man. This seems to have spawned an anecdote war.
I think the main issue/benefit with Arch Linux is that it won't try to do anything for you, except give you extra instructions for when a package change requires "manual intervention".
I put that in quotes because I don't think of that as breakage, I think of that as normal upkeep. Well, as long as it's listed.
Anyway, the implication is that things can stay stable longer on Arch because there is no magic under the hood. But there will be times (though I haven't experienced it myself) where something unexpectedly breaks, and it'll be hard to recover unless you know what you're doing... though maybe it's actually easy with rolling back packages? The package manager does keep copies of old versions of packages; I've just never needed them.
As an example of this, my coworker and I have very similar ThinkPad laptops. His graphics started doing weird things that he has to go fix after every update (Fedora or Ubuntu, can't remember). It took me 4+ hours to get mine working how I want, with research and experimentation, but nothing has broken since.
Out-of-date software and related dependency hell is the breakage that rolling-release/Arch gets me away from.
More UX anecdote: I use Arch (btw) but am trying Manjaro just now for something, so I grabbed an image from osboxes.org; the system update fails after hitting enter on the default options https://i.imgur.com/OTNRMrH.png
'y' for the last would have continued things, but "ugh, I have to think about something" is breakage to some.
> But there will be times (though I haven't experienced it myself), where something unexpectedly breaks, and it'll be hard to recover unless you know what your doing...
The thing is, my experience with Ubuntu is that there will be times when something unexpectedly breaks, and because of the added complexity, it will be nearly impossible to recover even if you know what you're doing.
> auto-update more than a thousand packages, praying my Python and other projects survive, just because I want a browser update
You may want to check out Nix. It's a package manager which isolates each program's dependencies; so you can have multiple versions of the same package.
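A rough illustration (the attribute names are whatever nixpkgs happens to call them at the time, so treat them as placeholders):

    # A throwaway shell with one interpreter version...
    nix-shell -p python27

    # ...and another shell, or another user profile, with a different one.
    nix-shell -p python38

    # Each installed program keeps pointing at the exact dependency tree it
    # was built against under /nix/store/<hash>-..., so upgrading one
    # package doesn't rip libraries out from under the others.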
> or a security fix
having a centralized repository like this helps in case it's a library that needs a security fix, because you only need a single update to apply the fix to all applications. Applications that bundle their own deps (including Docker images) will need to publish their own update with the updated version of the lib.
> You may want to check out Nix. It's a package manager which isolates each program's dependencies; so you can have multiple versions of the same package.
In principle true, yes. But it comes with a cost. Even the nixpkgs Firefox maintainers are considering shipping only Firefox ESR starting with NixOS 20.03.
I won't blame them. Firefox is a huge piece of software, and is updating every month. It's a bit shitty of Mozilla to expect every single distribution (most of which are run by volunteers) to package all their new dependencies and check everything works every single month.
This is a problem the package maintainers and the distros have brought on themselves!
They put countless hours into stitching roadkill into a complex quilt and then scoff at users who complain about the rotting stench.
They split packages from the creators into multiple pieces to meet their own idiosyncratic aesthetics. Firefox and other pieces of complex software should reside entirely in their own hierarchies. I have been using Linux and FreeBSD since 1997, SunOS and OSF/1 before that; this is as good as it gets, folks.
The only hope is Nix, OSX, Qubes and folks that care about progress. Everyone else is building ships in bottles.
But their 'idiosyncratic aesthetic' helps with navigating. In the Mozilla world, they know their aesthetics ('not invented here') because it's their everyday job; they can always work hands-on and naturally can be faster. While in the distribution world, where a lot of software comes together, it's good that I can find everything in common/familiar places.
Not easy to find a good middle ground and still supporting progress.
/Applications/Firefox.app. Not that hard and not unfamiliar, really. (Not saying Linux distros should adopt /Applications. My point is it's not hard to have a consistent and self-contained experience, and AFAIK there are several competing bundle formats on Linux already.)
Although as far as package management in Qubes you’re still stuck with dnf and apt. But they’re pushing progress in other ways. (Are you able to run nix on qubes?)
but you can just use Mozilla's official binary packages, which are tested to work with a specific set of libraries, and I think are built with higher optimization levels (O3/PGO) than what distros conservatively use (O2) because of cargo-culting early-2000s Gentoo forums (that was the case a few years ago; I'm not 100% sure it is still the case today, as I could not easily find the current release flags).
Mozilla doesn't provide binary packages for all architectures and OSs supported by Nix/NixOS
Maintainers also sometimes need to patch software to conform to their distribution's norm. For example, Debian patches it to accept extensions installed with APT.
And I'm glad distribution maintainers always recompile software themselves, it helps making sure:
1. binaries are really made from the open source code (I know, it's not bullet-proof)
2. it's actually humanly possible to compile it from source (my pet peeve in this area is Kafka: it's used everywhere, but no one seems to know how to fully compile it from source anymore)
I fully agree. It should be said that nixpkgs also provides the firefox-bin derivation, which is just what the grandparent suggested. But there is a lot of value in checking that software builds are actually reproducible from the sources.
Nix is a fantastic concept, and I hope it takes over the world. But the NixOS packages are a mess. I tried it for a few months last year before giving up after several packages and even whole collections of packages became unusable even in the stable repository. The repository needs some serious reworking before Nix can really shine.
It's really trading one problem for a different one. The reason traditional distributions only package one version of a given library and make everything use that same version is so they only have to maintain one version of the library.
If you allow every application to choose its own version then they all choose a different one, which means someone then has to continue to maintain every different version of the library separately. Doing that isn't much if any of a reduction in workload compared with getting every application to use the version the distribution ships, and is much more likely to end up with a situation where dozens of versions of the same library exist and half of them are broken in some way.
The better solution is for libraries to only break compatibility between major versions. Then the packager can ship some suitably recent minor version for each major version of the library, at most two or three major versions are supported at any given time which keeps the maintenance overhead feasible, and every application can successfully link against one of those major versions or another.
> If you allow every application to choose its own version then they all choose a different one, which means someone then has to continue to maintain every different version of the library separately.
Well, no. Else software on macOS and Windows obviously wouldn't work, since those ship all their libraries along.
It does not matter to the users of my software that I use library versions X, Y, Z; what matters is that what they want to happen happens when they click the buttons. e.g. I generally use Qt - there are plenty of bugs, but what matters is that my use cases work, and it is my onus as the developer to ensure it.
And it is so much more painful to ensure that it will work correctly without any regression and with best performance for every version across e.g. Qt 5.6 to 5.14, than to just ship Qt 5.14 myself built with only the features I want with all my use cases duly tested.
> Well, no. Else software on macOS and Windows obviously wouldn't work, since those ship all their libraries along.
Not exactly. For the system libraries (e.g. the Windows crypto API) you effectively get one version for the whole system, and it works because the library maintainer (Microsoft) is diligent about maintaining backwards compatibility between versions. Whereas for applications that include their own copy of e.g. OpenSSL, that problem does occur there. When there is a vulnerability discovered in OpenSSL, every developer using it has to update their application instead of a library package maintainer doing it once, and if any application is no longer maintained (or just never updated to the latest version) it's now insecure indefinitely.
> And it is so much more painful to ensure that it will work correctly without any regression and with best performance for every version across e.g. Qt 5.6 to 5.14, than to just ship Qt 5.14 myself built with only the features I want with all my use cases duly tested.
You don't have to ensure it works with every version ever made, only the one the distribution ships.
This is made much easier if the library developer designates specific versions as stable and then stable distributions use those versions, because then working against the two or three actively maintained stable versions is all you need to work anywhere.
> and it works because the library maintainer (Microsoft) is diligent about maintaining backwards compatibility between versions.
yet people have to ship Microsoft DLLs left and right since behaviour sometimes still breaks. One breakage is enough to convince stakeholders to do that.
> When there is a vulnerability discovered in OpenSSL, every developer using it has to update their application instead of a library package maintainer doing it once, and if any application is no longer maintained (or just never updated to the latest version) it's now insecure indefinitely.
On the other hand, I've had software break, as in core features became inoperable, due to OpenSSL distro updates and changes of default policies. My users much prefer having working, potentially unsafe software in the exceptional case where the software connects to the network (e.g. reading a YouTube stream in VLC or something like that) to non-working software.
> You don't have to ensure it works with every version ever made, only the one the distribution ships.
Which is "every version ever made" because people are still rocking Debian Wheezy or Ubuntu 12.04 and want to use my software
> My users much prefer having working, potentially unsafe software in the exceptional case where the software connects to the network (e.g. reading a YouTube stream in VLC or something like that) to non-working software.
Do they, though? Or are they just unaware of what "insecure" means?
Sure, if you frame it as "[technobabble about TLS certificates], click 'continue' to make the software work?", they'll choose the insecure-but-working option.
But if you frame it as "your Internet Service Provider, government, and possible third-parties may be notified you are watching this video, do you want to proceed?", some of them might think twice, depending on what country they live in and what the video is.
It doesn't entail any of that. Traditional distributions like Debian allow you to install any number of library combinations, they just do so badly by requiring you to do so on different installations updated at different times.
Supporting the presence of multiple library combinations doesn't mean that you "support" all version combinations in the sense of supporting all combinations of their versions being used in the wild.
It just means when users upgrade they can do so gradually for some sets of packages at a time. You can emulate this on traditional distributions by having a single VM image you use for your browser, then cloning the image, updating packages, and using that one for your E-Mail etc.
Having a mechanism like this should mean less support is needed from the distribution, because system-wide breakages are less likely to occur.
You'll get the same bugs, but users can easily back out say an OpenSSL version with a security fix for the 5% of packages it breaks with, while retaining the fix for their browser & other high-risk packages.
For example, I currently can't upgrade my firefox version on Debian testing because it ">=" depends on a version of a couple of libraries that some 100-200 other packages on my system don't like.
In Debian testing this sort of thing happens occasionally and will get fixed sooner rather than later, but that it happens at all is just an emergent property of how the versions are managed. If I were to run the latest firefox with those new versions and not touch the other packages until they're recompiled, both myself and the package maintainers would be exposed to fewer bugs, not more.
I'd get a bugfixed firefox today, and they wouldn't need to deal with bug reports about a broken upgrade, or firefox bugs in an older version users like me can't upgrade past simply because firefox happens to share the use of a popular OS library.
> Supporting the presence of multiple library combinations doesn't mean that you "support" all version combinations in the sense of supporting all combinations of their versions being used in the wild.
It's not about combinations. It's about, there's a serious bug in the library and now there are 79 different actively used versions of it that need to be patched instead of 2, which means half of them never get fixed.
> You'll get the same bugs, but users can easily back out say an OpenSSL version with a security fix for the 5% of packages it breaks with, while retaining the fix for their browser & other high-risk packages.
You'll get the same bugs and then the 5% of packages a security update breaks are broken until they're fixed, which should happen promptly because broken packages should be very high priority. Is it really better in that short time for the other 5% of applications to "work" but be insecure and get your system compromised because the new vulnerability is being actively exploited?
The real solution is for sufficient testing to be done during the embargo period so that by the time the new version goes wide it doesn't actually break anything.
> For example, I currently can't upgrade my firefox version on Debian testing because it ">=" depends on a version of a couple of libraries that some 100-200 other packages on my system don't like.
Right, that's a problem. It's valid to need version >= 5.6 of some library, but something has gone wrong if some other package specifically needs version 5.5 -- version 5.6 should be able to satisfy that dependency. Or if it can't then "5.6" should be 6.0 and it should be possible to have 5.x and 6.x installed together, but that should happen only rarely so that at most two or three mutually incompatible versions are in active use at once.
In general the latest version should be able to satisfy applications expecting older versions and then you need only a suitably recent version and nothing else, and the lack of that is the real issue.
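That's more or less how the soname convention already works when distros follow it: the major version goes into the package name, two majors can be co-installed, and each binary links against exactly one of them. A sketch from a Debian-ish system (package names are examples from memory; what's co-installable depends on the release):

    # Two incompatible major versions of the same library, side by side.
    sudo apt install libssl1.0.2 libssl1.1

    # Each application resolves whichever soname it was built against.
    ldd /usr/bin/curl | grep libssl
    # prints something like: libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1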
> It's not about combinations. It's about, there's a serious bug in the library and now there are 79 different actively used versions of it that need to be patched instead of 2, which means half of them never get fixed.
Yes I agree that would be a royal pain in the ass, i.e. running something like a Debian where each package would be the equivalent of a docker image with an individual maintainer who'd need to decide when they upgrade OpenSSL.
I'm saying you could have something like a Debian where the build infrastructure and package maintenance works exactly the same, your package is always built against the latest library versions.
But users get a NixOS-like experience where they can partially update their system, whereas with traditional OS package management an update of a common library and a full system update are often in practice the same thing.
> Is it really better in that short time for the other 5% of applications to "work" but be insecure and get your system compromised because the new vulnerability is being actively exploited?
On the flip side is it really better that if you're using the other 95% of applications that your security fix is delayed because upstream is finding it a pain to rebuild the patched package for 5% of its users? That's realistically the alternative.
> [...]but that should happen only rarely so that at most two or three mutually incompatible versions are in active use at once[...]
The sum of differing versions in use among a distro's userbase is vastly larger than that, some of those users are deferring security updates because their distro makes upgrading an all-or-nothing experience.
Is this kind of like Ubuntu's "snap" thing? (I know snap involves cgroups, which aren't strictly necessary to give Firefox a different version of rust than everything else... but it seems widely deployed and easy to use, at least if you're on Ubuntu.)
Complex apps can bundle their own dependencies-- that's what the Flatpak and Snap package formats do. You can also run Firefox in a Docker or LXD container with its dependencies. By sharing the X11 or Wayland socket with the container, the apps can appear on your main desktop.
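A minimal sketch of the container route, assuming an X11 session (the image name is a placeholder; you'd typically build your own with Firefox and its libs inside):

    # Let local containers talk to the X server.
    xhost +local:

    # Share the X11 socket and DISPLAY with the container.
    docker run --rm \
        -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        my-firefox-image firefox

Wayland needs the compositor's socket (and usually XDG_RUNTIME_DIR) shared instead, plus whatever GPU devices you want passed through.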
Yes, and it sounds ideal but I've had a recurring problem where having snap installed on a system increases the boot time, and apps hang on load. They stop hanging when I install the native app, and use that instead.
IIRC that's because Snaps are stored compressed and have to be uncompressed when you first run them after boot.
I think that's because Snap is also meant to be used in IoT, where disk space is limited, but it definitely should be optional on work machines with lots of disk space.
It is not disk space you should be concerned with saving, but real memory usage. I wonder how much memory a statically linked Firefox uses under a heavy load. I checked with ESR (what I use) and it is a bit more than 1.3G excluding shared. So as Firefox creates threads, I would think memory could get tight with a statically linked FF.
I fully agree with what OpenBSD has decided; I think the only thing worse than compiling Firefox is GNOME 3 :)
Statically linked processes can’t share dependency pages with other, different processes. Multiple instances of the same process or multiple threads don’t have to incur that same penalty.
Seems to me that more aggressive memory deduplication at the kernel or hardware level should be able to mitigate memory explosion from using only statically linked binaries or letting "big", frequently updated projects manage their own dependencies.
I also think that large projects should just vendor their dependencies, including compilers, so you git clone the thing, type "bazel build", and have a working binary. (I am fine if the dependencies are not checked in to the repository, a bazel WORKSPACE file is fine with me. It records a checksum for all dependencies, so even if you reach out to the Internet to get it, you get the same bytes as the developers working on the code.)
Building and distributing software should be very simple. But it's not, because of an accumulation of legacy tools (shared libraries) and practices ("please install these 6000 dependencies to build our project").
Sure, that gives you working binaries. But also security problems, because you don't control which versions of libraries you are using. Suddenly, all individual software packages must do their own security updates to keep the system secure. So the whole thing is a balance.
You do control what version of libraries you're using. You include the exact SHA256 of every dependency of every dependency, down to the toolchain itself.
If you're saying "your distribution can't automatically update you if libc is vulnerable to something", that's true. More CPU time is required to react to major vulnerabilities, as everything has to be recompiled. However, it's not much CPU time, and the downsides of requiring more compute time are outweighed by the upsides of knowing exactly where your dependencies come from. And having your "getting started" instructions be "1) install bazel 2) bazel run //your:binary".
Yes. But if we can extract versions of libraries used to build packages then it would be easy to audit the system by cross-checking against CVEs. Those who prioritize security would remove the affected package(s) until a new version is available.
My point here is to be able to install a new package when it's out, without disrupting the whole environment. For FreeBSD, for example, the new Firefox is already available. I have installed it, and it wouldn't run. I had to auto-update 300 MB of other stuff, including LibreOffice, PyCharm and even TeXLive, to get the system up to date for the new Firefox to be able to run.
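On the cross-checking-against-CVEs point, FreeBSD's pkg already has a piece of this, and it works without touching any installed packages:

    # Fetch the vulnerability database and list installed packages that
    # have known advisories.
    pkg audit -F

Rough equivalents exist elsewhere (arch-audit on Arch, debsecan on Debian), all keyed off the installed package versions.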
Well, the reality is that some developers of this complex software complain about how difficult their software ported to alternative OSes is to maintain, even when the BSD devs offer to maintain it, which is why the BSDs are always on their own here in terms of porting, testing, packaging and updating the software in question.
For example, Chromium. Any BSD developer would know that attempting to upstream their port there is dead in the water. AFAICT, the Chromium devs only care about Win, Mac, Linux and nothing else. Likewise for Firefox, which is why the BSDs don't get official releases.
I feel some sympathy for OSes like the BSDs, which for "some" software are limited by the arrogance of other open-source maintainers who would rather maintain the convoluted distros and test against a huge, disintegrated software stack than test a single unified OS which the BSDs maintain themselves.
> For example, Chromium. Any BSD developer would know that attempting to upstream their port there is dead in the water. AFAICT, the Chromium devs only care about Win, Mac, Linux and nothing else. Likewise for Firefox, which is why the BSDs don't get official releases.
The BSDs qualify as "Tier 3" in Mozilla's build terminology, which means that the onus is on external contributors to identify problems and propose fixes, as there is no continuous integration support for these architectures.
> The BSDs qualify as "Tier 3" in Mozilla's build terminology, which means that the onus is on external contributors to identify problems and propose fixes, as there is no continuous integration support for these architectures.
In other words, Firefox doesn't maintain the port for OpenBSD, OpenBSD developers do.
> Likewise for Firefox, which is why the BSDs don't get official releases
We mostly don't want official releases, we prefer our package managers.
Firefox is absolutely not like Chromium, Mozilla is very good at accepting patches and the upstream repo pretty much always works on FreeBSD. Just git/hg clone and ./mach build and here it is.
RedHat’s “streams” model will certainly do a much better job of handling this than the other distributions do today. I hope that the need for having multiple parallel versions of a dependency coexist is incorporated into the other distros, because I’ve lost a lot of sanity this past two decades to the assumption that “one installed version should be enough for anybody” on Linux and BSD servers.
AppStream does not allow installation of multiple versions of the same app AFAIK. I believe this is what they refer to in the clumsy sentence "The one disadvantage of Application Streams from SCLs is that no two streams can be installed at the same time in to the same userspace. However, in our experience, this is not a common use case and is better served using containerization or virtualization to provide a second userspace." in the linked article in the sibling comment.
It's a good idea in general, and it would be cool to solve this problem, but modular "streams" as they currently exist have so many edge cases that I don't really suggest using them if you can avoid it.
You can check the status for the firefox package in Fedora at https://admin.fedoraproject.org/updates/ which shows firefox-72.0.1-1.fc30 as "testing" (that is, you can install through "yum --enablerepo=updates-testing update firefox"). The reason for it not being promoted to stable yet is, according to that page, "disabling automatic push to stable due to negative karma" - that is, the update was marked as broken by a couple of people.
The whole "centralized, trusted repository that has all your apps" system is wrong at a fundamental level.
The way shared libraries are used in Linux is built upon the assumption that package managers and centralized repositories are the right way to do things.
This! Our company's OpenSUSE Tumbleweeds were still vulnerable on Friday. On Arch, the package had already been available for two days by then.
The thing is, the distro maintainers need to decide whether they want a fast moving "latest and greatest" approach, which might break stuff by accident, or go a "slow but always dependable" route, which you can depend on as company, with painful losses if things go south (in which case there should be an express lane for important security fixes).
It's perhaps coincidental, but I don't remember ever having trouble with an update that I couldn't find a quick fix for on the Archlinux homepage.
In my opinion, the reason why Archlinux can be so dependable is that packaging is based on really simple ideas, like not starting or stopping services on installation like Ubuntu would, and that most if not all system files belong to a package. The package manager can do a whole OS installation just by installing the basic packages on a directory different from /. The package format is also really simple. Getting into the guts of Archlinux packaging is really approachable and that makes me feel at ease for whatever trouble I may find myself in.
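That whole-OS-into-a-directory trick is literally how an Arch install works; a sketch from memory:

    # Bootstrap a complete system into an empty mountpoint. pacstrap is a
    # thin wrapper around running pacman with an alternative root.
    pacstrap /mnt base linux linux-firmware

    # The new root carries its own package database, so it can be queried
    # (and updated) independently of the host.
    pacman --root /mnt -Q | head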
As I mentioned downthread, surprisingly little stuff breaks by accident nowadays on Arch (on my desktop, zero breakage in the last year with a fairly complex stack; on one server machine I had one strange interaction between a language package manager and pacman that ended up being my fault). The "slow but dependable" solution ends up breaking as soon as software becomes too complex for maintainers to be able to backport security fixes onto old versions, so they just ship the latest version of those packages, negating the dependability aspect (like Debian's treatment of Firefox and Chromium).
"brake" makes it sound like one would have to reinstall the whole thing. I don't think I've ever had that happen in the 10 years since I started using it for nearly everything.
I've had my drive fail, but that was the drive's fault, and even in that case I also didn't need to reinstall everything. I just copied the good files and reinstalled only the packages whose files got corrupted.
I see, when you said "break", I thought you meant because of a bug or other error, but now I guess the fact that it needs configuration and learning how to do that configuration is what you're calling broken.
Certainly, Arch is not an "it just works" OS. It's a tinkerer's OS. Different distros favor different types of users. By what you said, probably something like Ubuntu or Mint is better: something that "just works" with a minimal learning curve for someone who is not familiar with Linux in general and does not wish to invest the time in learning it. (Not saying that you're not at least familiar with it, but that's what these distros optimize for, I believe.)
Arch is not an easy distro to use even if you know what you're doing. Every Arch user essentially creates their own distribution which can break at any point depending on their particular environment. You need to take care of all the little things yourself.
Personally I didn't experience all that much breakage, but eventually got frustrated by kernel updates breaking hotplug kernel module loading until reboot because Arch removes kernel modules for the running kernel. This breaks random things like plugging in a USB drive unless you happened to load the module previously, so you're essentially forced to reboot every time you upgrade the kernel or take care to manually exclude it.
I'm grateful for Arch because their wiki is beyond excellent, but I wouldn't recommend it to anyone unless they just want to tinker.
Once you understand this, it seems to me to be a very simple step to align your kernel upgrades with reboots. I fail to see this as a usability issue.
Why respond in this dishonest manner? When I left Arch I had been using it for over 10 years. I probably still understand it better than you do.
Is your supposition that a "misconfiguration" just "happened" randomly over night?
No, it's a rolling release distribution; software on it will break randomly on updates. You need to read their FAQ: even their official documentation tells you this happens and that it's not their responsibility, due to the rolling release nature of the distribution.
What's dishonest about it? I'm trying to understand the issues that you mean, and the need to configure before getting to work is what I understood you to mean as broken. I'm not talking about misconfiguration; that's different. You emphasized that if you couldn't get straight to work then it was broken. When I can't get straight to work, it's because I need to configure something. I'm using "configure" to mean any sort of maintenance on the machine itself that doesn't involve "real work".
If that's not it, then I've no clue anymore about what you mean as broken.
My main computer was an arch box for about 7 years (I moved computers after 3 years, hence why my other comment says 4, I just remembered my old tower from around 2014 that also runs arch). In that time, the only breaking updates that occurred always had clear solutions on the homepage, with something you could paste in and fix. I stopped using it because systemd randomly hung on boot, for no clear reason (even after debugging it and trying to speed up boot), but I've had that with several other systemd-related distributions, too.
I believe they are suboptimal for the same reason that lexical binding won over dynamic binding in language design: it is easier to reason about immutable bindings, and to maintain as little global state as possible.
The work done in Nix and Guix is interesting in this regard.
I wouldn't say nix is anti-centralization, I'd say it's policy-agnostic. At the end of the day, not everything composes, only things whose interfaces are in some way compatible.
Traditional package managers conflate having something with composing something (everything is ambiently available and interacting). Nix separates those, so you can have every version of everything, and only try to compose things that fit.
I agree, and I think that people are too emotionally invested in the package manager concept to back out now. I mean, for years Linux proponents have been touting it as the key advantage over software distribution on Macs/PCs.
I think the empirical evidence is against you here, though. If package managers weren't a good way to do things, brew wouldn't exist. Neither would the Mac app store, or whatever the equivalent on Windows is these days.
As a Debian user, I appreciate that the stuff I install has at least gone through some minimal vetting first. And if I have to add a third-party repository or download something myself to run, I'm much more likely to view that as the possibly-dangerous action that it is, and try to actively assess the reputation and trustworthiness of what I'm installing.
That's certainly not an average, non-technical user thing, though. Vetted app stores are there to help average users avoid malware, assuming they're doing their job properly.
I agree with the OP a little. Central repository models have advantages but they've always seemed like short term benefits in exchange for long term costs.
Things like the app store, in for profit scenarios, seem like ways to slip in monopoly control. Brew is an attempt to circumvent it.
I don't want to come across as suggesting they're a bad idea or don't have advantages, just that on balance I've always had a sense there had to be a better way.
I mean, it is a key advantage, on ArchLinux. If you don't have a rolling release, it's pure pain: you're often just stuck with old versions of software which aren't any more stable than the current stable release.
Non-rolling releases might work with smaller software where your distro maintainers can backport security bugfixes easily, but as soon as your software gets too big and complicated the distro maintainers won't be able to backport fixes themselves and will have to just resort to packaging the latest version of certain packages (like Debian had to do with Chromium and Firefox).
Also, for the most part, the "Arch is unstable" thing hasn't been true for a long time. I find that it's far more stable than Debian or Ubuntu LTS on my machine because of newer kernels containing newer hardware drivers.
You can run backported kernels on Debian and Ubuntu LTS too. On the latter it's pretty much the default - they provide "hardware enablement" releases so that users can avoid issues on cutting-edge hardware.
The existence and popularity of homebrew, macports and fink (on mac) and ninite, chocolatey, cygwin, mingw and nuget (on windows) would suggest that the concept appeals to mac and windows users also.
Isn't the whole point of "stable" distros like Debian and RHEL that you can run apt upgrade / yum update and it'll never break your system? I thought that's the big reason they bend over backwards to backport patches and muck around with packages so that they retain compatibility.
Perhaps, but in which cases does this concept actually add value? Imagine you have a working environment (development, scientific research, etc.) carefully set up to do your job. You wish to upgrade, say, your web browser, but now have to upgrade the whole environment. Changing versions can be very disruptive in many situations.
Imagine you're rolling out *nix to 5000 endpoints; change is going to make the problem hard, so the idea is that a stable release's ABIs/APIs should stay the same, so it can act like a platform you can build against.
The problem is, browsers pretty much have to be the latest versions or websites won't work, and most hardware improvements are going to need a new kernel (which, to be fair, given Linus's "don't break user space" position, is probably safer than updating userspace), so it doesn't necessarily work for all use cases.
A common approach in webdev has been to keep current, but run a test suite so you know what breaks, and deal with it when it happens, but I don't think the average infrastructure person has kept up with web dev here, and unless you're running containers, you're probably still on a stable OS.
I have been interested in finding BSD users who are interested in https://nixos.org/nix/ or even a NixOS/kBSD. It solves all these problems and in my view is the continuation of the spirit of the port system.
(The problem with bundling isn't disk space, but composition. Individual applications can compose fine, but libraries can't if they link other libraries at different versions and use those libraries' types (ABI) in their own interface (ABI). To solve this problem you need to distinguish between public and private dependencies in your package manager.)
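A small illustration of that distinction in C terms (libfoo/libbar and the functions are hypothetical): if a library exposes another library's types in its own headers, that dependency is public and every consumer has to agree on one version of it; if it only uses the other library internally, the dependency is private and different versions can coexist.

/* foo.h - hypothetical libfoo with a PUBLIC dependency on zlib:
   z_stream appears in the interface, so libfoo and all of its
   consumers must be built against a compatible zlib ABI. */
#include <zlib.h>
void foo_compress(z_stream *strm);

/* bar.h - hypothetical libbar with a PRIVATE dependency on zlib:
   no zlib types leak into the interface, so libbar can link
   whatever zlib version it likes without constraining consumers. */
void bar_compress(const unsigned char *in, unsigned long in_len,
                  unsigned char **out, unsigned long *out_len);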
Oh good, I'm not the only one thinking about this:) My last idea was basically "run nix on top of a BSD (replacing the package manager), then make nix package(s) for the base system". Which sounds doable, actually.
> The whole ecosystem falls down like a card house [...]
> praying my Python and other projects survive [...]
And this is exactly why I never update. Every 10 years or so I go through the pain of installing a new version of the OS, mostly because the browser no longer works on most websites. But then, now that the web is becoming less and less interesting to me, I might as well keep the current version until the hardware breaks down.
Security? Sure, but not at the cost of ruining half of my installed software on a regular basis!
If there was a statically linked version of BSD, I would switch to it in the blink of an eye.
What type of system is this? I appreciate the sentiment but I’m surprised you manage to pull it off seemingly easily. You don’t run into software compatibility issues?
(I’m trying to do something similar by downgrading to an old version of macOS more-or-less permanently, because I don’t like where the platform has gone.)
Me too. Especially combined with fat binaries, as in the OS X Rosetta years. It'd make it a lot easier to package something that would run on arm, arm64 and amd64 in a way that's more accessible to users.
AppImage[1] kind of works that way. I'm surprised it hasn't caught on more, tbh. I don't think it supports multiple architectures like app bundles can (fat bundles) though.
One of my applications is distributed through Appimage. It works really well.
I really feel it should support containerisation in a similar way as Flatpak does. In my opinion, the idea that any application on your system can touch all your files and access all your other applications is really problematic. You're always just one broken (intentionally or not) application from having your entire system compromised.
As best as I can tell, the only solutions that even try to address this are Qubes OS and Flatpak.
Nowadays (for almost 20 years actually), Windows supports and encourages programs to deploy their dependencies in their own folder.
It is trivial to do for developers and works perfectly for users.
Some people claim security updates are a problem, but many apps do not deal with untrusted input for starters. For those that do, like browsers, servers or your office suite, you really should be using one that is supported and updated automatically.
Is the shared library issue actually related to this? In the OP they are talking about a Rust upgrade which is presumably only a build-time dependency.
As an end user I view it the exact opposite way. On windows I have dozens of auto updaters running in the background that may or may not fix security issues in any reasonable amount of time. I also have lots of different versions of .net, libraries etc. installed, some with security support some without. Also, an issue in a central library will require hundreds of updates, some of which may never come.
In contrast on linux I have exactly one updater and can be certain my software uses the fixed version. I don't care about disk space saving or RAM savings (though those are nice), I care that stuff gets fixed and is secure. If you don't enforce that you trade convenience (for the developers, not the user) for security.
Would it be possible to run Firefox in its own jail? All of my FreeBSD boxen are servers, so I don't have any experience with this but... put your crustier apps in containers so they don't poop in your main sandbox. Works for servers.
Is containerization any better than static binaries for compiled apps (not talking about python/ruby/js apps here, but instead compiled ones like firefox)?
That’s great on a personal level, but bigger picture, aren’t we going about this the wrong way?
If most people are using containers because “applications were never designed to be compiled entirely static”, developers should start designing their applications so they can be compiled entirely static.
Hm. For me this is a question of interface design (vs. implementation). IMHO the Unix programming philosophy is a lighthouse for how to solve this commendably. In an ideal world you should have to upgrade a base program and its dependent libraries only if their interfaces have changed.
EDIT: And of course for security reasons - but then you only upgrade the affected player and not huge parts of your installed base. What I want to say is: this is all about the ruling design philosophy.
> I think the benefits of saving disk space by sharing libraries do not justify the inconveniences we
This is really sad. It seems that we are moving from the complexity of shared libraries to the complexity of "docker images" and the like, where the whole OS is embedded in each program for convenience. The sweet spot of static executables has been shamefully bypassed.
This seems to me like a false dilemma between forced upgrades due to dependencies and not using shared libraries.
I think that there is in general no technical problem with having multiple versions of the same library installed (files in /usr/lib are versioned anyway, and for other data the libraries can be compiled with a prefix that includes the version).
Then, the package system could handle different major/minor versions of libraries as different entities, handle upgrades of major/minor versions through dependencies from installed/upgraded packages, and only upgrade a library directly for a patch-version change (i.e. a security bugfix).
In such a setup, there would be several instances of a popular library installed on a system, but not, say, hundreds of partial instances statically compiled into applications. And it would still allow simple updates of libraries for patch fixes.
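As a rough sketch of why this is technically feasible on ELF systems: each major version of a shared library carries its own soname, so the dynamic loader already treats them as distinct entities (libfoo here is a hypothetical name, and loading both in one process is only for illustration):

/* soname_demo.c - two major versions of a hypothetical library
   (libfoo.so.1 and libfoo.so.2) coexisting under /usr/lib.
   Build sketch: cc soname_demo.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* The loader keys on the soname, so these are separate entities
       to it - exactly how a package system could treat them. */
    void *v1 = dlopen("libfoo.so.1", RTLD_NOW);
    void *v2 = dlopen("libfoo.so.2", RTLD_NOW);

    printf("libfoo.so.1: %s\n", v1 ? "loaded" : dlerror());
    printf("libfoo.so.2: %s\n", v2 ? "loaded" : dlerror());

    if (v1) dlclose(v1);
    if (v2) dlclose(v2);
    return 0;
}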
I never experienced that catastrophic breakage and I'm using Ubuntu as my only desktop OS since January 2009. When did that happen to you?
I have no problems with the standard approach. Actually, it makes me feel safe: a maintainer fixes a vulnerability in a shared library and all apps are fixed. If all apps shipped their own libraries I'd have to download a new version of all apps, if they ever care to fix their code.
Outside packages should NOT be disrupted given semver. The whole point of using shared objects (dynamically-linked libraries) is so that when a problem arises you can update whatever pieces of code in a centralized, system-wide store and every single one of the projects you use can benefit from the new, up-to-date version. Using the latest version is just the right thing to do.
Semver is not a given in practice. Many libraries haven't adopted it, some of those that claim to follow it break things in practice, and the very definition of a "breaking change" can mean different things - sometimes depending on the consumer (this is especially true for cross-language interactions).
Distributions can make their own choices about versioning if upstreams "break" semver. Sometimes this results in weird version strings and you might not want to depend on these relabeled system packages for your own projects - that's the one case where "vendoring" a dependency version might be worthwhile. But it ought to be quite rare indeed.
This works if you have a stable ABI, but breaks when libraries are written in languages that don't have a stable ABI (in Firefox's case, Rust). Then even a simple recompilation of one package breaks dynamic linking.
Doesn't Rust use static linking between crates as a rule, precisely because dynamic linking would break given reliance on anything but a pure C-like ABI? (BTW, the cbindgen tool mentioned in the linked article is meant to address precisely that - provide a clean interface to a Rust crate that won't break with internal ABI changes.)
Does it actually work, though? It seems like there is a significant difference between the expectation that semantic versioning should result in no disruption and the experience of the parent to your comment. I'd be interested to know exactly what went wrong that led that poster to be pessimistic.
Yes it does actually work and has for decades. The poster is either confused or doing something different, I expect.
The specific notice mentions rust dependencies. Rust does not have shared libraries, so a Rust [security] update means all rust binaries must be completely rebuilt. That seems to be part of OpenBSD's concern, and perhaps this has "triggered" the poster.
Found a random forum post by user blackhaz:
> mariourk, you're definitely not alone. The same breakage happens to FreeBSD desktops as well. I have been vocal around the forum a few times about this. Not every desktop user wants to upgrade all their packages every X months. The way the default repositories are structured, the upgrades are forced onto users. I am a huge proponent of using application bundles, like on Mac OS, as quite often those upgrades break complex desktop apps too. I don't mind upgrades but sometimes I need to stick with a specific version of software, or roll back after something went wrong, and if it came as a bundle with its own dependencies, it wouldn't have to depend on other stuff that is being "force-upgraded" periodically.
So his complaint is about the way FBSD handles updates; he is generally incorrect about shared libraries; and his comment is completely OT for this thread.
Because rebuilding everything all the bloody time is expensive.
Virtually every Linux distribution and BSD share this concern. We've all complained about it to Rust upstream, but they don't care. To them, it's cheap to rebuild every Rust project every time the compiler is updated or the standard library needs a fix.
What's worse is that a lot of people are forgetting why we do it this way in the first place. While part of it was about saving disk space, the major reason we do this is for being able to fix things in a cheap way and have wide-ranging impact. Without this, things like security fixes to zlib, libvpx, or other important libraries would require finding all their reverse dependencies, patching them, and rebuilding them to incorporate the fixes.
It's incredibly important, but because nobody cares in Rust and Mozilla, we're all doomed...
Nonsense. Rust has shared libraries just as much as C and C++ do. Distributions can and do build individual Rust crates as shared libraries, and crate authors can go further and offer a shared library with a C ABI that remains stable across rustc versions. (The latter is what cbindgen is for.)
The problem here is that the maintainers don't want to keep multiple versions of a crate around, so when any single application uses a newer version it forces an update on all the rest. Which is exactly what Rust's tendency toward static linking avoids.
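To make the C-ABI option concrete, here is a minimal sketch of the consumer side. The library, header and function names are hypothetical; the point is only that the shared-library contract is a plain C signature (the kind of header cbindgen emits), so the Rust code behind it can be rebuilt with a newer rustc without breaking anything that links against it:

/* consumer.c - hypothetical C program using a Rust cdylib (libfastcalc.so).
   Build sketch: cc consumer.c -lfastcalc -o consumer */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for #include "fastcalc.h", the header cbindgen would generate. */
uint32_t fastcalc_add(uint32_t a, uint32_t b);

int main(void)
{
    /* Only this C-level signature is part of the ABI contract; the Rust
       internals behind it are free to change between compiler releases. */
    printf("%u\n", (unsigned)fastcalc_add(2, 3));
    return 0;
}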
It's not that simple. New versions of Firefox can require a higher minimum version of the Rust compiler, and upgrading that will break the ABI for all crates that don't stick to a pure C ABI, regardless of shared vs. static linking.
> praying my Python and other projects survive, just because I want a browser update or a security fix.
You mean you aren't familiar with Python's virtual environment system, which is intended exactly for isolating development dependencies from system ones, but you're blaming the distribution. Please.
Virtualenvs don't protect you completely. If any of the modules in the virtualenv depend upon libraries on the host system, and a lot of Python modules do either directly or indirectly, then there is a chance of breakage if any of the ABIs of these modules break or there's an incompatible ABI between any of the modules.
Even virtualenvs themselves can cause the breakage. Take "pylibtiff". It embeds a copy of libtiff. But, it's old and incompatible with other libraries on the base system. This can be libraries it depends upon, or it could be another copy of the same library. Either way, virtualenvs can cause more problems than they solve.
Virtualenv lets you use only a Python interpreter that's already installed on your system. Usually the choice is between the latest v2 and v3 interpreter, at least on Debian derivatives. Every virtualenv is autoupgraded each time the OS upgrades its Pythons.
If you want to use multiple versions of Python and pin them, you should use something like asdf [1] and its Python plugin [2]. It's a version manager with support for many languages and more. I used it for Elixir, Node, PHP, Ruby and even PostgreSQL (I need different database versions in different projects.) I never used it for Python because I'm OK with being on the latest version so far.
With asdf I would expect to do something like this
asdf install python 3.7.5
mkdir my375project
cd my375project
asdf local python 3.7.5 # pick the version to use here
python3 -m venv py375 # the asdf shim resolves python3 to 3.7.5 here
source py375/bin/activate
This virtualenv is going to stay on 3.7.5 forever.
The devs who complain about <1 GB of space for an important piece of software are still living in the early 2000s. They haven't figured out that a terabyte of disk is worth about $30, with an SSD terabyte at $100.
I do something like that. I have a separate minimal chroot directory with just Firefox installed and configured, and use systemd-nspawn to start a container. The container is using a snapshot (btrfs) of the chroot directory, which gets deleted when I close the Firefox instance.
I use this not as my main browser, but only when I can't avoid visiting "the wild web", meaning unknown or untrusted sites, or even sites that use too much javascript for me to feel safe allowing in my main browser.
Is that what's going on? I wonder how the Gentoo maintainers are dealing with it. They frequently unbundle libraries, but I've also seen a lot of packages with USE flags like system-jpeg or system-sdl to force the ebuild to use a system library instead of the built-in one.
Firefox, LibreOffice and other big applications typically take a long time to compile because they have so much stuff built in instead of depending on the system libs. They gain a lot of stability and predictability, but you can also end up with bundled libs that have security issues.
> I wonder how the Gentoo maintainers are dealing.
I spoke to the gentoo firefox package maintainer last night. He even has firefox building on aarch64 with musl, so things are pretty good. For more common targets, there's also the option of using the official binaries from Mozilla.
Gentoo policy is to attempt to provide a way to use system libraries where possible, except when those libraries are too heavily modified. If something breaks, they use the bundled package. You can see that's currently happening with harfbuzz in Firefox until they bring the patch up to date: https://github.com/gentoo/gentoo/blob/master/www-client/fire...
From experience, it's usually fine to use system libs when available unless those system libs are unstable or development releases. Then all bets are off.
The title on this story makes it sound like OpenBSD stopped updating Firefox altogether. That's not the case, it just stopped backporting non-ESR Firefox updates to its -stable branch.
> being too complicated to package (thanks to cbindgen and rust dependencies)
Can anyone explain what is behind this? Is it symptomatic of any program with those dependencies? I'm especially curious about Rust because it seems to be hyped very much lately (I have almost zero Rust experience and even less bias about it, just being curious).
They don't want to need to update Rust in order to do a presumably small security patch on Firefox.
Which honestly sounds like a totally awesome and legit reason to use -esr. Keep -current current with upstream, stable branch gets patches from firefox-esr.
Keep in mind stable patches to the ports tree are pretty rare on OpenBSD. They didn't do them as binary packages until fairly recently, either.
In a stable branch you want changes to be small and targeted, no matter how severe the issue is. You don't want to take on new bugs from patches that aren't related to the issues you want to see fixed.
If the change is so small, I don't see why they wouldn't be able to backport the fix? Don't upgrade firefox (and any dependencies), just fix the bug.
It would be a lot harder if the actual fix were a much more complicated diff. It's possible that master has diverged sufficiently from their version that backporting the fix would be unreasonable.
> If the change is so small, I don't see why they wouldn't be able to backport the fix? Don't upgrade firefox (and any dependencies), just fix the bug.
As far as I understand, you cannot call it Firefox anymore if you deviate from upstream (which is understandable, because upstream doesn't want to get bug reports for custom changes):
That's a good point. Perhaps they don't have deep Firefox expertise and resources to test. Perhaps it's better to let Mozilla decide what gets backported to esr and how to do it.
This is the old packaging design from when disk space and bandwidth were expensive, so you tried to have one version of each library or package on disk. This design leads to cascading complexity and breakage when many packages depend on the same library and some need different versions of it.
Modern packaging has changed the approach to bundling dependencies, using more disk space and bandwidth but isolating apps from each other and allowing independent upgrade cycles. Flatpak and Snap work like that, and the tidal wave of interest in containers on servers is related, as server containers are also used to isolate dependency stacks.
Flatpak and Snap are Linux-specific, though. If the BSDs had a comparable solution to package GUI apps along with their dependencies, I presume that Firefox would be one of the first apps to get that treatment.
Shared libraries aren't just about reducing disk space and bandwidth consumption; it's also about fixing bugs in one place fixing it for all consumers. It requires discipline to only fix bugs and not break consumers, though, and therein lies the devil.
Shared libraries just don't work anymore. That's one of the reasons behind Go and containers.
That way, the users always have working applications. Some applications may take more time to be up to date with their dependencies, but at least some of them can be updated asap without worrying about the others.
For example, I use Debian 10 stable on my laptop, on which I installed snapd. I then installed Firefox with Snap, and I can be confident that Firefox will have the latest security updates while keeping the system stable (some parts of the system are probably insecure, but at least everything still works; they will be updated later).
Although, I would like to point out that while the storage space issue is solved with current tech, bandwidth is still an issue when on mobile data.
On your personal system it probably works really well. In reality, tracking thousands of containers on top of operating systems that use different methodologies gets really complex and is rarely done well. At least in my experience. (Granted, Fortune 100/500 companies never do anything well. :))
What seems to be happening is that the developer's burden of maintaining dependencies has shifted to operations, who now maintain the automation that manages the various versions of containers, because the complexity is too great to handle any other way.
What I see happening in the application of this model is very similar to a train wreck. You see the crash coming for a long time but can't do anything to stop it.
That’s a really broad claim to toss around with no documentation. It wasn’t true at least as far back as the 1990s and I would especially want a citation for the belief that there was a unified single voice across so many different engineering teams.
The same has been proposed in NixOS [1]: basically updating nss, sqlite and other common dependencies for firefox requires recompiling tons of software, which in turn requires testing, especially in the case the update was a major one. NixOS is special in this regard, because the dependencies can be updated just for a specific package, by adding ad-hoc packages for multiple versions, and it's ultimately what has been done for Firefox.
It wouldn't necessarily. You could just update Rust, build Firefox and test that, and leave the others alone (with the older Rust).
Now, if the complaint is solely about Rust, not specifically FF, then that's a different story. It might indeed be that FF is the only package using Rust right now?
The BSD ports tree can be thought of as versioned as a whole, and packages are basically a snapshot of one particular state of it. If you update one port and it's a dev dependency for other ports, their packages will reflect that.
Right but you will also find versioned ports e.g. gcc4, gcc5, etc. It's not so unreasonable to assume the maintainers could / would do something similar with rust.
If you don't know about Pledge and Unveil, you can think of them as similar to Firejail sandboxing on Linux, but on steroids. They dynamically limit the types of kernel calls and filesystem paths a process can make use of, so if a rogue thread causes a process to perform an illegal operation under the Pledge rules, OpenBSD kills the process (with an uncatchable SIGABRT). What's more, Pledge and Unveil let the process execute whatever calls are needed while the program initializes itself, but it relinquishes the privilege to run those calls for the remainder of the program's runtime once it no longer needs them (after init).
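A minimal sketch of how a program typically uses the two calls (the paths and promise strings are just example choices):

/* pledge_demo.c - OpenBSD only: restrict the filesystem view with unveil(2),
   then drop privileges with pledge(2) once initialization is done. */
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Expose only /etc, read-only, then lock the unveil list. */
    if (unveil("/etc", "r") == -1)
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)
        err(1, "unveil");

    /* After init, keep only stdio and read-only path operations. */
    if (pledge("stdio rpath", NULL) == -1)
        err(1, "pledge");

    FILE *f = fopen("/etc/hostname", "r"); /* still allowed by "rpath" */
    if (f != NULL)
        fclose(f);

    /* Opening a file for writing, exec'ing another program or making a
       network connection from here on would kill the process. */
    return 0;
}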
- Firefox is hard to package so they won't package it (on stable).
- OpenBSD-current users are not affected as firefox has already been committed.
- Firefox-ESR is still maintained.
> (thanks to cbindgen and rust dependencies) on the stable branch (as this would require testing all rust consumers)
Can someone explain what they mean by this? What do other, non-Firefox Rust consumers have to do with Firefox being packaged?
The best guess I have is that to package Firefox they need to package a newer cbindgen and Rust. And if they package a newer cbindgen and Rust officially, all packages using them would now need to be tested, as (potentially) all of their dependencies have changed?? But this seems strange TBH.
EDIT: So I guess they will still package Firefox-ESR on stable? Which will have (or maybe already has) all those dependencies, too? So it's just about having to do that packaging less often?
It appears that they are unwilling to update to the latest Rust-stable release due to the testing burden on the OpenBSD-stable team; which then conflicts with the Firefox Rust update policy of, as I read it, ‘latest Firefox stable will use latest Rust stable’.
Ok thanks, I guess this is somewhat understandable.
Though I wonder a bit why they don't automate the tests (maybe the computation cost?). (E.g. Rust automatically runs the tests of all libraries/programs published on crates.io to find regressions, though that takes a day or so to complete.)
Okay, suppose someone backporting a Rust update runs a big batch of tests and finds, say, two dozen packages with regressions.
Now what?
Spend two weeks investigating all the test failures? Backport updates to these packages as well, all while users are patiently waiting for their Firefox to have its zero‐day fixed? Are the tests even correct? Were they failing before and nobody noticed?
And all this to only get automated tests passing. Any regressions in behavior not tested are not noticed (or more likely, noticed by users much later, requiring further investigation at that point in time to narrow down the Rust backport as the cause).
In the meantime, this packager’s work on other OpenBSD packages in -current (what most OpenBSD developers actually use day‐to‐day) completely stops.
That’s some insight into the mindset of a software packager. Non‐security backports to language runtimes are a serious maintenance burden.
Discard the outdated assumption that a single installed dependency version is sufficient for all packages, and then improve the packaging system to permit multiple releases of Rust, Python, etc. to coexist as dependencies so that packages can migrate gradually over time rather than forcibly whenever one crosses the line.
Homebrew does a fine job of this. Installing python@2 doesn’t necessarily mean it’ll be made “the default”, but it does make it available for dependencies without interfering with the default python 3 package.
Python 2/3 is not a representative example, because it's actually designed to be installed side-by-side by the authors. Many libraries and apps on Unix are not.
NixOS changes a lot of things to make it all work. If you're willing to pay that tax for the sake of package management, great! People who use BSDs generally aren't.
> Python 2/3 is not a representative example, because it's actually designed to be installed side-by-side by the authors. Many libraries and apps on Unix are not.
Rust is designed to support multiple toolchains with rustup. I've got the following installed on my desktop box:
If I remember correctly, the granularity for this mechanism is pretty much arbitrary - i.e. you can have 1.2.3 and 1.2.4 installed side by side if you wanted to. From a practical perspective, if this means that the apps can start depending on 1.2.3 specifically, it requires the distro to package every minor version separately, which is very time- and space-consuming. With gcc and clang it's easier, because usually only the major versions get packaged side by side, and minor versions are simple upgrades.
> Okay, suppose someone backporting a Rust update runs a big batch of tests and finds, say, two dozen packages with regressions.
> Now what?
> Spend two weeks investigating all the test failures? Backporting updates to these packages as well, all while users are patiently waiting for their Firefox to have its zero‐day fixed? Are the tests even correct? Were they failing before and nobody noticed?
Yes, people do exactly that. It's part of running a "rolling" distro, see e.g. the Debian Testing transition tracker at https://release.debian.org/transitions/ - These transitions are running essentially all the time; they're only "put on hold" as a first step in the process of making a new stable release. And even then, newer versions of packages such as rust can still enter stable as part of an "unrelated" security update.
* switch to a shorter stable release cycle (4 weeks, 6 weeks, etc.) with dynamic releases (e.g. if a zero day happens, fix current, and do a new stable release)
* switch to only supporting -current
* switch to something like Nix to allow stable users to "switch" some packages from stable to current, without having to bump their whole system (this would have allowed stable users to update their Firefox-stable to Firefox-current)
A good packaging process should not assume that downstream packages can be trusted to have meaningful processes. Relying on Firefox having -esr releases is a temporary workaround.
The OpenBSD port for Rust 1.39 - the long-awaited async/await release - wasn't available for several (6?) weeks. The reason for this was that the maintainer is a single person with a complex setup, trying to wrestle with LLVM issues and more esoteric things like Sparc64 support.
It's a small miracle things like Rust are supported on OpenBSD at all, let alone things outside of -current.
> We expect esr releases will stay on the same minimum Rust version, so backporting security fixes may require Rust compatibility work too.
The policy suggests that ESR will update infrequently to the latest Rust-stable at each major .0 release, so as long as 6.7-stable and ESR end up having the same Rust stable version, that will work out. That’s a coincidence-based success, not a certain one, though.
For anyone else that wasn't aware, ESR[0] stands for Extended Support Release, i.e. an older Firefox that still has security patches backported (like Ubuntu LTS, I guess).
Can't the required rust version be (temporarily) treated as part of firefox then, and not bother building everything else using rust with that version?
It seems likely the OpenBSD would consider such a patch if it were presented to them, but they might also reject it due to ideals they hold that outsiders can’t predict.
Is there a better alternative to Firefox? Of all the browsers it seems like the "least bad" choice (above Chromium, and other proprietary browsers) and I use it, but is there something safer, simpler, and more secure?
> is there something safer, simpler, and more secure?
No.
Building a good browser is hard. Typically you can get something "safer" (from a privacy/business model perspective) and/or "simpler" (from a development perspective) relatively easily - see KHTML and other niche efforts. But when it comes to "more secure" while supporting modern web features, you need a lot of skilled eyeballs on code and a lot of people trying to break things. You achieve that either with tons of visibility, or with tons of money. Those small projects have neither. Mozilla has both.
The downvotes in this section clearly show the real state of browser technologies and the few choices one will have if they use an alternative OS.
Brave and Vivaldi are still forks of Chromium, Waterfox is a fork of Firefox. Thus you are not going to find any updated alternatives like those on the BSDs anytime soon.
Meanwhile, WebKit-based browsers don't seem to suffer from the overuse of dependencies and multiple languages, nor do they have the packaging hell of Chromium and Firefox.
> Meanwhile, WebKit-based browsers don't seem to suffer from the overuse of dependencies and multiple languages, nor do they have the packaging hell of Chromium and Firefox.
In the case of Blink at least, they basically own almost all their dependencies. From my memory WebKit is the same. It makes sense that they would end up that way given their history - KHTML -> Safari -> Chrome all target multiple runtime environments and in the case of Safari especially they were targeting an environment without a bunch of existing unix/linux history behind it.
In comparison historically Firefox has been more aggressive about pulling in 3rd-party dependencies like cairo, ANGLE, etc.
(You could count ANGLE as a 3rd-party dependency for Blink, I suppose, but I don't because AFAIK it's a Google project originally)
I use Vivaldi. It provides a lot of customization for my taste, unlike others.
My biggest problem with most browsers is how they handle tab management and how they lack basic features that should be there without installing an extension.
Examples: stacking, auto-closing, filtering, finding, etc. Not hiding tabs after a certain number of them.
It's basically a lightweight interface to webkit. There's only so light one can go, however; a browser basically ships an entire rendering stack and big chunks of an OS, and with all the weird features and backwards compatibility issues that have accumulated, one can only go so small.
My understanding is that there will never be a consumer-facing browser called "Servo." Instead, pieces of Servo will get merged into Firefox as they become production-ready (this has already begun). Presumably, eventually everything will be merged in and Servo will stop existing as a separate project.
I'd still be seriously concerned about how few eyeballs they get with regards to security. Google has entire fleets of machines dedicated to fuzzing every line of Chromium code, and both Chrome and Firefox use advanced sandboxing so that a single exploit in the JavaScript engine can't result in user-level remote code execution. I'm sure they're both probably fine (if nothing else, from security by obscurity), but modern browsers are complicated pieces of software that require a lot of engineering to keep secure.
KDE also has Konqueror, which now allows you to use either KHTML or WebKit as the rendering engine. KHTML works fine on simple sites like Hacker News nowadays but chokes on newer ones often.
If security is something you’re looking for, “Firefox plus some ancient, unmaintained legacy code and patches jammed in by random third parties” is not substantially more appealing than just Firefox by itself.
Your description does not apply to the two examples given. It's obvious you don't even know what they are even if you knew what they were 5 years ago. Look again.
Pale Moon and the like aren't just ancient Firefox code. And without all the 'features' Firefox keeps adding that are irrelevant to a browser that renders HTML and executes JS, they're just as secure.
I ran into Pale Moon years ago when Let's Encrypt was started and that experience gives an adequate flavour of their approach to security.
The upshot is that Pale Moon's maintainers decided they don't trust Let's Encrypt, for spurious reasons. But if you use Pale Moon you've probably never noticed this, because Let's Encrypt is cross-signed, and so even though Pale Moon claims not to trust them, the cross-signature makes everything still work because they didn't intervene.
So that's a bad decision, combined with total incompetence to produce the appearance where everything looks fine.
That's exactly the sort of thing in a browser that should make people run away screaming.
In the years that followed, Pale Moon wanted to keep StartSSL (the outfit which straight up lied to us about issuing clearly bogus certificates for money; Pale Moon's maintainers apparently feel that this shows "integrity") and described Mozilla's distrust decision as a "foot gun". How did that work out? Oh right, it turns out that everybody else doesn't trust liars either, and so Pale Moon eventually went along with this because, as others explained, in practice they're mostly just applying patches blind to core Firefox subsystems like NSS.
If I'm to avoid Firefox, I'd like to avoid its forks as well - they don't really improve upon anything meaningful, and both of those have had more issues than Firefox in the past. I'm thinking smaller than Firefox.
From looking at the commit history of Pale Moon, it is maintained by essentially three people. Their maintenance strategy is to freeze at an old version of Firefox, and randomly backport patches purely to try to keep somewhat up-to-date on JS or DOM features. Given the sheer size of the codebase, its inherent complexity (a JIT compiler is going to be very ripe for potential security vulnerabilities), and the utter lack of any sign of trying to mitigate these problems (e.g., fuzzing, or even merely attempting to identify security fixes in Firefox that may warrant backporting), if you are worried about any security issues in Firefox, moving to such a fork should only worry you more.
>utter lack of any sign of trying to mitigate these problems (e.g., fuzzing, or even merely attempting to identify security fixes in Firefox that may warrant backporting),
Would you please leave personal swipes out of your HN comments, or edit them out if they make it in? They break the site guidelines, you've done it more than once in this thread, and I'm sure you can make your substantive points without them.
I don't read release notes, I read the commits and the patches themselves. Actually, I did check after posting, and they appear to do the bare minimum--port the posted CVEs, which won't even account for all the security bugs. There are definitely several commits I've seen them do where they specifically revert changes that rewrite functionality to be safer, but don't actually fix any specific known security flaw.
If I really wanted to scare you, I'd tell you about such security-friendly-sounding commits as "we don't want the newest release of the [crypto] library, let's freeze at an older version" or "re-enable support for RC4 in TLS." But that would be unfair, because you'd actually have to read the commit to judge for yourself if I'm quoting them in good context.
> Actually, I did check after posting, and they appear to do the bare minimum--port the posted CVEs, which won't even account for all the security bugs
If you actually read the release notes, you wouldn't say this.
It's hard for me to make sense of what you're saying.
1. You don't seem to think that Pale Moon has value, nor that it should be used.
2. You say you don't read their release notes--which makes sense if you don't think it's worthy.
However:
3. You say you do spend time reading their patches--which doesn't make sense if you don't think it's worthy.
4. Their release notes frequently mention "defense-in-depth" patches which are their own work, not, as you put it, to "randomly backport patches purely to try to keep somewhat up-to-date"--which conflicts with your claims about their work.
So you say you read their patches, yet you don't appear to read all of them. And you say you don't read their release notes, yet you speak as if you have comprehensive knowledge of their work.
Then you say something that's supposed to be scary, but then you say that it might not actually be scary, and you won't tell us whether it actually is.
So, regardless of whether Pale Moon is valuable or useful or secure to any degree, isn't your comment a textbook example of FUD? What's your purpose here?
Yes. Firefox forks that split off before Mozilla jumped the shark (v37, then multiprocess, then Rust) and that evolved into their own thing, without all the features/attack surfaces that aren't strictly required for a browser to just render HTML and execute JS.
Simpler, sure. Safer and more secure, how? There's been a lot of new security features in Firefox recently that you'd be missing out on if you used something that old. You can't put "just" in front of "execute JS" (or "render HTML" for that matter); that's a pretty complex task with a lot of security concerns. I can't imagine that the communities of these Firefox forks can keep up with backporting upstream security fixes, especially since the modern Firefox codebase has diverged so much.
The problem seems to be one of command line interface.
In the C/C++ world, you specify the language version using a flag passed to the compiler. e.g. -std=c++98
In most other programming languages, you specify language version by installing multiple copies of the compiler/interpreter and running the corresponding version.
The C/C++ way works fine if your language spec is updated once every 3 years. It does not work fine for anything much more frequent than that. Until recently C and C++ were popular enough that the other approach didn't need to be accommodated. Now it does.
I agree with the point you're making, but you're actually wrong about C/C++.
It's true that C/C++ compilers usually have a flag that gates language features. However, you're still using the same libc no matter which standard you've picked. For example, there were some changes made to format strings in C99, like parsing hexadecimal numbers with scanf. If you have a C99 libc, you'll always get that behavior, even if you specified "-std=c89".
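A minimal sketch of that effect, assuming a C99 libc such as glibc (C99 added hexadecimal floating-point input to the strtod()/scanf() family, and that parsing lives in libc, not in the compiler):

/* hexfloat.c - build sketch: cc -std=c89 hexfloat.c */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double a = 0.0, b;

    /* Even under -std=c89, a C99 libc is expected to accept the hex-float
       syntax below, because the parsing happens in libc at run time. */
    sscanf("0x1.8p1", "%lf", &a);
    b = strtod("0x1.8p1", NULL);

    printf("sscanf: %g, strtod: %g\n", a, b); /* expected: 3, 3 */
    return 0;
}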
Rust language editions are not similar to C++ standard versions.
Every new version of the Rust compiler brings changes to the language, so the language you're actually using depends on the tuple (compiler version, edition), not just on the edition.
Rust language editions are similar but not equivalent to C++ standard versions, because compiling code written for an old edition with a new edition is not guaranteed to work, due to breaking language changes like new keywords.
But you are correct to point out that old Rust editions will get new language features if they do not break existing code.
The release process claims that the changes in new Rust releases are purely additive and backward compatible; the only exception is soundness fixes, which I'd expect affect a negligible amount of code.
> compiling code written for an old edition with a new edition is not guaranteed to work, due to breaking language changes like new keywords.
This is not at all similar to C++ standards, and the specific case being discussed in this post demonstrates exactly why!
Barring bugs in implementations, developers could develop against whatever version of the compiler they like, and as long as it supports the same language version, distro or OS maintainers wouldn’t have to update theirs, and everything would work.
I worked on an open-source project that targeted C++11. It could build on basically any Linux distro you can imagine, even ones with very old versions of GCC. (it didn’t support BSD, but for unrelated reasons).
This is the point the Rust project seems to miss when harping on about backwards compatibility: forwards compatibility is just as important.
In practice, for C++ I've found it expedient to whitelist some subset of functionality from newer standards, because there are often useful features that already work in the 3 major compilers, but it'll be another 3 years until the last of the compilers adds the last obscure bits for full support of the new standard.
The whole modality of a single package in a single configuration at a single version is the broken, dead dependency hell of yesteryear. Still, ports and packages collections mechanically and unthinkingly continue this failed and broken modality of wasted effort. Packaging multiple versions and multiple configurations side-by-side and independently, similar to Habitat (hab) and Nix, is the only way to go. This approach supersedes vendored dependencies because it allows garbage collection and sharing of identical dependencies rather than duplicating them. Furthermore, it allows real choices without an either-or, and real flexibility that single recipes which always attempt to track a rolling version can never possibly achieve.
Also broken is maintaining multiple packages that package multiple versions of the same software in combination with external, frequently changing platform packages like rubygems that are packaged manually. Multiple versions of the same package should share common recipe declarations as much as possible and be managed more cleanly. Native extensions should be built automatically in CI/CD using polling or notifications, rather than with manual methods that too often lead to outdated, vulnerable dependencies and create too much pointless, repetitious busywork.
- it's much bigger and more resourceful than the OpenBSD maintainers.
- it decided to adopt this fancy update policy, and instead of making it easy/seamless, left it up to the whole open source community to play catch up.
I disagree. The compiler is normally something that receives major updates in every new distribution release. That's what language specs exist for.
If you are maintaining a large number of machines in an enterprise environment, you're not keen on updating half of the installed system just because you update your browser, simply because the necessary testing and fixing of regressions costs a lot of time and money for no real gain for both users and administrators.
You can have multiple Rust toolchains side by side. This is a non-issue in that case. If there's some other usecase that this breaks, I'd like to know about it and see if we can do something to improve the scenario.
The point is that, despite it being possible theoretically, the OpenBSD project does not want to take on the maintenance burden and additional clutter of having 40 (!) separate versions of the compiler toolchain in the ports tree with a new one landing every 6 weeks.