BeOS, AmigaOS, Solaris, most other 80s OSes - they’re effectively dead. Windows and macOS have effectively died once already. The BSDs can stall for years at times. Most Linux distributions (including Red Hat) are typically only as good as the fortunes of the commercial (or occasionally public) entity behind them. In all this, Debian endures, with its slow but inexorable progress, simply because its ideological foundations - not its technical ones - are eminently superior to all the others. Debian contributors don’t do it for the money, so they will be there when the money runs out; and they don’t do it to be cool either, so they will be there when OS work is not cool. People will come and go, but the ideal of the “democratic OS” will always be there - hence, Debian will be too.
How many of us would be happy working at a software company with a bug tracker from the 90s, artifact management done with FTP, little to no tooling to manage large changes and do code review, no standards around source control, etc.? Those are symptoms of a software development culture stuck in the past.
It would be pretty frustrating to go home from my day job, where we have a much better development workflow, and try to make a contribution to the Debian project using Debian's tools and processes. And I'm sure I'm not the only software engineer who would feel that way. That can become a real problem for the future of Debian if it's not addressed.
There is no need for complexity in order to transfer files; that is why the File Transfer Protocol works well. If you are referring to the public FTP services, those were deprecated 3 years ago. See https://www.debian.org/News/2017/20170425
How many of us would be happy deciding things in groups with processes defined 100 or 200 years ago? But that's what representative democracy effectively does, every day, in most of the West. People just find ways to cope and move on, since the process is just a means to an end.
> That can become a real problem for the future of Debian if it's not addressed.
Yes and no. Yes, processes should be improved all the time. No, it's not a real problem in the long run - I've been hearing more or less the same story basically since Debian started, but it's still arguably the biggest and most relevant Linux distribution in existence. People come and go, the ideal endures.
Thanks but no thanks
The thing that's wrong with the Debian project is that ideological stuff no longer attracts talented engineers who are interested in working for free on something dry like package management.
For me, working with packaging is a joy. Once I learned how it works, I see it as a thin wrapper around an upstream code base that, after being built with whatever upstream tooling, is copied into a package alongside its dependency information.
I moved from Ubuntu to Debian a few years back; if I need anything beyond the ordinary I can always set it up manually, and I am pretty content with that.
Most of the points listed by Michael derive from that. It's impossible to change anything. We can only do small things for which no cooperation is needed.
And yet the migration to systemd, which required lots of changes in disparate packages, happened. And the migration away from python2 will happen too, albeit perhaps not as fast as the people driving it would like. And the new source format happened. And for reproducible builds - Debian leads the world.
Methinks "we can only do small things for which no cooperation is needed" might be overstating the case a smidgen. Lots of things which aren't small happen in Debian on a regular basis.
Migration away from python2 is wanted by doko, the Python maintainer. If he didn't want that, nothing would move. We were stuck for a long time with Python 2.6 because he didn't want to migrate to Python 2.7. As he is also maintainer of gcc and Java, nobody wanted to vote him out.
I may have missed the headlines around the new source format. Ack for reproducible builds.
What about bikesheds/PPAs? There were many discussions a few years back, but it was mostly blocked because the FTP masters want it to be integrated into DAK, and under various other non-technical constraints.
This is a great sentence, probably one of the most important (and underrated) ideas in FOSS and engineering more generally. A lot of critical work is not lucrative or glamorous - does your project recognize and support the people who do that work?
Classic Mac OS died, and what they call "Mac OS" now is really NeXTSTEP.
Classic Mac OS is about as dead as software can get: it's no longer developed or developed for, there's no backwards compatibility in its successors, and they don't even make hardware that can run it anymore.
I went SLS, Slackware, Debian, then Ubuntu for something like 15 years, and now just switched to Fedora + RPM Fusion, and so far much happier with it.
However, I wouldn't dream of running it on a fleet of several machines. I currently run 6 nspawn containers, and it's not as easy to be confident an update won't break them.
Debian is great if you are running many servers. Its slow rate of change is due to the care taken not to break the world.
The community is great, but current package management techniques and processes are the equivalent of SVN, with modern approaches like Nix or Guix being the equivalent to Git. In Debian, the whole tree of packages has to be in sync. That works well for Arch, as it is a rolling release, but IMHO that slows down Debian as it doesn't use their manpower efficiently.
A long time ago, when Nix was not popular, there was a discussion on debian-devel about adopting Nix. It was probably premature. This discussion has resurfaced a number of times. I think currently they would benefit enormously from Nix or from rolling out their own tooling that implements equivalent ideas.
With such a big community and large package set, packages should be able to be decoupled from each other so that they can depend on different library versions and move at their own pace. Also, Nix-like tooling would make it possible to automate and test most package updates when upstream changes, or to find common vulnerabilities and exposures (CVEs) automatically. Currently, a lot of manual intervention is needed to do this.
This would also be advantageous for end users, as they could mix and match packages from different channels. PPAs are an inferior solution.
Despite some of the difficulties the author mentioned in the article, Debian has successfully spearheaded some ambitious project-wide initiatives, like reproducible builds. So I don't think it's out of the question that they could vastly improve the packaging experience for both users and developers with something like Nix or Guix.
Of course the biggest question is: how does one get there from here? For example-- can the Nix packaging approach coexist and play nice with the current Debian packaging system for years to come?
Yes, Nix or an equivalent implementation like Guix stores all packages in a separate tree (e.g. /nix). In fact, Nix can be used outside NixOS; it's quite popular on some other distros and on macOS.
Hence, rolling out Nix or an equivalent tool can be done smoothly. Both can co-exist nicely.
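As a rough illustration of that coexistence (a sketch, not an endorsement of a particular install method; the package name is arbitrary), on an existing Debian machine one can do roughly:

$ sh <(curl -L https://nixos.org/nix/install)   # installs Nix into /nix, leaves dpkg/apt alone
$ nix-env -iA nixpkgs.hello                     # installs into the per-user Nix profile
$ ~/.nix-profile/bin/hello                      # runs alongside everything apt manages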
Of course the devil is in the details-- graphics drivers, bootstrapping, etc.
What is Arch’s charter? How are its leaders elected? How are its processes defined?
But that isn't really true. Arch historically has always been a DIY distribution with an equally DIY contribution structure. Our leaders were BDFLs for close to two decades, until the process was formalized and we held our first project leader election this year.
There isn't any RFC process, just consensus-building on the mailing list among whoever wants to work on stuff.
The only other formalized structure is the Trusted Users, who are elected in a formalized process.
There are probably a lot of bad things about a less formalized process, but it allows Arch to move fairly rapidly and decide things without a lot of internal politics.
You're living in a world where OS/2 killed Windows 3.11 for Workgroups. Where Sega's Master System outsold the Super Nintendo.
Win 3.11 was more user friendly than OS/2.
Not sure about the Sega/Nintendo comparison: the Master System was comparable to the NES, and the SNES to the Genesis/Mega Drive.
The switch is disappointing for me, since I’d prefer Debian with a minimalist wm. However, manjaro + kde is good enough for light usage, and definitely easier for more mainstream users.
I can provision 100+ Debian servers in any configuration I want in under 15 minutes by utilizing the features of the OS itself, and forget about them after setting them up.
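The comment doesn't say which OS feature is meant; one common OS-native mechanism is debian-installer preseeding, sketched here with placeholder values:

# preseed.cfg, served over HTTP and passed to the installer with a boot
# parameter like: auto=true priority=critical url=http://deploy.example/preseed.cfg
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i partman-auto/method string regular
d-i pkgsel/include string openssh-server
# (a real file also needs partitioning confirmations, user/password setup, grub, etc.)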
We actually lost one Debian server in a system room (in an unlabeled rack of identical clustered servers), and it was working flawlessly when we re-found it months later.
I do still have a throwaway cheap VPS with Arch, but even then I can't recommend it because the security story is largely non-existent.
EDIT: My comment was ambiguous, I didn't mean that there are no Archlinux-specific patches; rather that there's more of an effort with Archlinux to let upstream be upstream.
But putting that aside, all distros need huge amounts of patching to make each package get along with the rest of the system. Without patching, many of them won't even build in the first place.
I've read this at least a dozen times, mostly on HN, and mostly from Archlinux advocates. Many people seem to ignore that Debian testing and Debian unstable are continuously updated (rolling releases). Please stop propagating false claims that taint Archlinux's community reputation.
Pretty picture: https://repology.org/repositories/graphs
Numbers for the X axis: https://repology.org/repositories/statistics/total
Numbers for the Y axis: https://repology.org/repositories/statistics/newest
Summary for people who like neither pictures nor tables:
* Debian Unstable (31k) has way more packages than Arch (9k without AUR), but the AUR (57k) has way more packages than Debian.
* The total numbers of packages that are at the latest upstream version are about equal for Debian (17k) and the AUR (15k). Arch (without the AUR) has far fewer up-to-date packages in total (7k).
* Arch has about the highest percentage of fully updated packages (85%), Debian is lower (72%), and the AUR is even lower (69%).
* NixOS rivals the AUR in total number of packages (53k), has a big margin in total latest upstream versions over everything else (24k, roughly 40% more than Debian and 60% more than the AUR), but does not have as high an update percentage (79%) as Arch.
The numbers are not perfect because of split packages and alternative packages (e.g. the AUR often has additional `-git` variants), but they give a rough idea.
Hope this helps!
In contrast, Arch has been both up-to-date and rock-solid - my current install has been carried over through three PCs since 2015.
I have also had update issues with Ubuntu. There was a bug with Ubuntu 20.04 where a server would lose its default route when it had multiple network interfaces. And another bug where, after an update, network interfaces were renamed on a reboot rendering the server inaccessible. Is having a server with more than one network interface that unusual?
I have yet to find a distribution where updates are not problematic.
I think the parent comment was silly, but let's not pretend that Sid is meant to be used as a daily driver.
Compare "outdated projects percentage":
Moreover, when comparing different distributions, it would make more sense to have a closer look at the release process rather than compare how they label their packages. Since Debian tests its packages for a longer period of time than Arch, Debian testing should be just as stable as Arch stable.
> Debian testing should be just as stable as Arch stable
Sure, but how up-to-date is Debian testing when compared to Arch?
Guarantee is a strong word. Can Arch guarantee this? Occasional breakage is bound to happen with bleeding-edge rolling releases.
> no security team
Weaker guarantees than stable, but that doesn't mean Debian doesn't handle security issues in unstable or testing. It'll be too late if they start dealing with security issues once a package enters stable.
> no support system
Actually, support is the same for any Debian release. https://www.debian.org/support
> Sid isn't meant to be used as a daily driver
That shouldn't matter much for people who're willing to use Arch as a daily driver.
> if your computer stops working that will be expected in Sid but a gigantic bug in Arch
A gigantic bug but still happens nonetheless.
> Sure, but how up-to-date is Debian testing when compared to Arch?
According to repology, Debian testing has twice as many packages at the latest version as Arch official. Considering that packages of higher importance tend to be more actively maintained, I'd assume that Debian won't be significantly behind the latest release for packages that exist in both Arch official and Debian.
Automatically upgrading every day is not smart, since then you're virtually guaranteed to catch every breaking change. See https://wiki.debian.org/DebianUnstable#What_are_some_best_pr...
> If security or stability are at all important for you: install stable. period. This is the most preferred way.
> If you are a new user installing to a desktop machine, start with stable. Some of the software is quite old, but it's the least buggy environment to work in.
> Testing has more up-to-date software than Stable, and it breaks less often than Unstable. But when it breaks, it might take a long time for things to get rectified. Sometimes this could be days and it could be months at times. It also does not have permanent security support.
> Unstable has the latest software and changes a lot. Consequently, it can break at any point. However, fixes get rectified in many occasions in a couple of days [...]
> The most important thing is to keep in mind that you are participating in the development of Debian when you are tracking Testing or Unstable.
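In practice, following that advice might look roughly like this (a sketch; apt-listbugs and apt-listchanges are optional helpers, not requirements):

$ sudo apt install apt-listbugs apt-listchanges   # warn about known release-critical bugs before upgrading
$ sudo apt update && sudo apt upgrade             # day to day: only upgrades that don't add/remove packages
$ sudo apt full-upgrade                           # occasionally, after reviewing what it wants to do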
Since we're comparing Debian with Arch, I'll add that Arch also has testing and staging repositories, in addition to the ones meant for normal usage.
Here are two specific examples which other distros might struggle with:
* repackaging a tarball into a proper system package which is tracked by the package manager
* building a proper system package from source (with one command! no `configure; make; make install` lunacy)
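A minimal sketch of what that looks like on Arch (package name, version, URL and checksum are placeholders):

# PKGBUILD
pkgname=example
pkgver=1.0
pkgrel=1
pkgdesc="Illustrative package"
arch=('x86_64')
url="https://example.org"
license=('MIT')
source=("https://example.org/example-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}

Then `makepkg -si` fetches the tarball, builds it, and installs the result as a package pacman tracks like any other - which covers both points above.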
I'm a Debian fan, but these two points are very true. Arch documentation is great, and writing PKGBUILD files is easier than packaging for distribution via Apt. I don't even use Arch, but I still release for it because it's easy.
He's now experimenting with a distro focussing on a fast package manager: https://michael.stapelberg.ch/posts/2019-08-17-introducing-d...
For clarification: I am referring to this benchmark https://michael.stapelberg.ch/posts/2019-08-17-linux-package...
For those who are unaware, Portage and Paludis are both orders of magnitude slower than the other package managers. However, the comparison is not completely fair, as they both were source-based package managers in the beginning, which have to work a bit differently. Nevertheless, Paludis is still a lot slower when being used with binary packages, as it doesn't take all the shortcuts the others take.
Nix fulfills every single requirement Michael has put forward (except the squash>tar thing, which I still don't understand).
That's all there is to this. I sympathize with Ericson's frustration. It's exhausting watching people re-invent inferior solutions to Nix, instead of just hopping in and fixing or using Nix. Of course, John Ericson is one of the few people motivated, qualified (and maybe has the buy-in) to make changes in Nix. I'm thankful for that on-going work.
Can you provide more info on that?
The new unstable CLI has an evaluation cache at least.
What most people do today is look for keys in the object and just evaluate what they need. (The Nix language is lazily evaluated so you can explore like this pretty well out of the box in the repl.)
Or, they just grep Nixpkgs :D.
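Concretely, that kind of exploration looks roughly like this (the version string is illustrative):

$ nix repl
nix-repl> pkgs = import <nixpkgs> {}
nix-repl> pkgs.hello.version      # only this attribute gets evaluated, thanks to laziness
"2.10"

$ grep -rn 'pname = "hello"' pkgs/    # ...or just grep a local nixpkgs checkout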
All of this, problems and solutions alike, is a weird situation to be in. I still stand by "just get rid of nix-env -i", but I want there to be better solutions too.
Nixpkgs seems overly complex, and is in some ways, but the fact that it's trying to herd a gazillion upstream packages that don't meaningfully coordinate makes this harder to fix than it should be.
$ time nix-env -qa > nixdb.txt
Executed in 470.40 secs fish external
usr time 25.11 secs 35.00 millis 25.07 secs
sys time 10.05 secs 27.00 millis 10.02 secs
$ ls -s nixdb.txt
Sure, Nix or Nixpkgs could have such a database (or a better one) natively, but I don't see a problem with the above. Maybe someone cares to explain?
Let me quote a thing starting at 26:49:
> The performance improvements distri provides, definitely some of them can apply to Nix. I think there are some low hanging fruit in Nix.
The other issues do reflect some persistent rhetorical problems we've had with explaining Nix:
> There was little differentiation with Nix vs NixOS. (Some of his philosophical difference could be resolved by just using Nix.)
> There was no recognition that the "nix language part of Nix" is cleanly layered away from the layers that actually do the work of running jobs and moving files around, and can be replaced like Guix does.
So I do want our materials to highlight this so experimenters realize Nix is less of an all-in proposition than it sounds (if one is already willing to do the extra work of blazing their own trail).
I'm disappointed that the overhead of trying new things here is higher than I think it needs to be, because making these things from scratch means doing lots of non-innovative work between the researchy bits.
I care because there are very few people who care this much about packaging, and if our efforts were less divided it would go a lot further.
I'm a little disappointed by what feels like a superficial take on Nix, but I am quite used to that now. And indeed, our documentation is bad at explaining the essence of the thing.
Here's the thing: Nix is too slow (even ignoring `nix-env`'s terrible search functionality, which should just be removed). What that requires is some good old boring profiling and optimization work. If he were to contribute that to a distro/package manager that basically shares his vision, this would be much more useful to the world.
Cause, at the end of the day, the work isn't so much maintaining the package manager as maintaining the packages. That's simply too much work for anyone to do alone.
http://blog.williammanley.net/2020/05/25/unlock-software-fre... is a good piece on why the ultimate issues with packaging are social, not technological. At this point, when the vast majority of devs don't seem to act as if there is a commons that even needs integration, I don't think any one-person technological solution is going to be so good as to upend the social situation.
It's true that the commons needs people willing to put in time and effort on boring things, but they have to be boring in the first place. If the author were to show up and say "Hey, Nix, if you rearchitect in this massive way it may or may not bring big improvements" and sent in a pull request, it would be rightly rejected. But it's still possible that a few days of rearchitecture can deliver the same results as a year of profiling and microoptimizations. The point of distri, as I understand it, is to have something to point to and say, this architectural change will actually work, and it's worth implementing in an actually-used distro.
I agree with this great summary! I’m glad I got my points across :)
The author has a history of delivering quality OSS projects: i3, Debian Code Search, RobustIRC, gokrazy.
I mean, the Windows kernel is going on 20 years old. The macOS kernel is based on Unix, which is even older. Linux is also based on Unix. As are Android and iOS.
The more we add to these operating systems, the harder it becomes to walk away from them because we have so much invested in them.
Does this mean that in 200 years we'll still be using the descendants of these early operating systems? Under what circumstances would someone decide to start something truly new? And what would it take to ever reach a feature parity with the existing options?
And to be clear, I'm not saying there's a reason to walk away from these. I'm not an operating system programmer, I don't even really know that much about it. I'm just wondering if it will always make sense to just keep adding to what we already have.
> When I was walking into NEC a couple months ago with my good friend at Red Hat, I asked him why he worked at a Linux company. He told me, "Because it will be the last OS". It took me a while for that to really sink in -- but I think it has a strong chance at becoming true. Any major advances in security, compartmentability, portability, etc. will wind up in Linux. Even if they are developed in some subbranch or separate OS (QNX, Embedded, BSD), the features and code concepts could (and most likely will) find their way into Linux.
I think it's mostly true, and for me at least, Debian is the last distribution, because it's so well put together, IME. Same goes for Emacs, 'the 100 year editor' and I've recently been getting into Common Lisp, 'the 100 year language'.
The Linux kernel has changed enough that you can't really say it's the same thing. Heck, even userland has changed substantially - who here remembers having to run MAKEDEV for userspace access to devices? But you can probably still find static binaries compiled in the late 1990's that will run on modern Linux. ABI wise, the Linux kernel is functionally backwards compatible, and that's nothing to idly dismiss. I think operating systems will keep morphing until someone makes a radical leap of progress that they can't adapt to.
That's not to say that there are no advances still to be made. But as many observed in the discussion on DevOps, much of the activity in the sphere of information technology looks like busy work and not progress.
 - https://sites.google.com/site/steveyegge2/tour-de-babel
 - http://www.paulgraham.com/hundred.html
 - https://news.ycombinator.com/item?id=25160461
This would be one of my big objections to systemd - I seem to have gone from a very decoupled kernel userland (eg I can boot almost any media and then chroot into my system) to one where the kernel version and systemd are pretty tied together making things more difficult.
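For reference, the decoupled workflow being described is roughly the classic rescue procedure (device and mount points are placeholders):

# boot any live medium, then:
$ mount /dev/sda2 /mnt              # the installed system's root filesystem
$ mount --bind /dev  /mnt/dev
$ mount --bind /proc /mnt/proc
$ mount --bind /sys  /mnt/sys
$ chroot /mnt /bin/bash             # fix packages, kernel, bootloader from inside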
On the other hand, it's random chance whether a glibc-dependent binary from a modern program compiled today will run on a Linux install that's only 5 years old. And given the rapid addition of new compiler features, it's getting to the point where GCC on a 5-year-old install can't even compile a quarter of the programs being written today.
Containers outside of their useful server-side context, containers for desktop applications, are the fever of this future shock.
As for Common Lisp, it's not even popular now, so I wouldn't expect it to suddenly become popular in 100 years.
macOS and Linux, at some level, are already very, very different from earlier Unix systems. The BSDs are more conservative, but there are still systems being reworked across all of them (HAMMER in DragonFly, pledge/unveil in OpenBSD, etc.). The ideas of Unix will probably be with us forever, but the precise details of implementation are transient.
New kernels and OSes that show up are at a massive disadvantage compared to Linux because of this. I think that an OS that prioritizes being able to borrow drivers from Linux or one of the BSDs would give itself an advantage.
The security model of today's desktop OSes is pretty lacking. There's no reason every application you run should have all of your authority right from the start, but generally right now they do. (Sandboxing can improve the situation but getting sandboxes right without restricting the user too much is difficult.)
1. App is sandboxed, and has to ask for access to every bit of your information (photos, contacts, etc)
2. App asks at a time when it seems reasonable (I’m taking a photo and need to save it)
3. Now the app has the ability to exfiltrate everything you just gave it access to (like all photos), and in the majority of cases it has that ability without on-device oversight
That is, on file save the app would be handed essentially an open, writable, and closeable file descriptor -- the app might not even know the name of the file it's writing to.
On file open, the app might be handed an open, read-only, file descriptor.
Making such a mechanism usable for both the end user and the app developer gets complicated for sure, but the idea of not just permanently allowing "read+write ~/Photos forever" is definitely out there.
- If you're just taking a photo and need to save it, why does the app need access to all your photos? Surely an append-only capability would be sufficient.
- This depends on the app, but if you're just taking a photo why does the app need internet access? If the app is a typical camera app, it sure doesn't - you might often want to pass the data to an app that does (via e.g. a share sheet) but in general the camera app itself has no need to reach out to the internet. (And if it does, does it really need to be unrestricted access to the internet?)
- Why is it so easy for apps to request access to everything and so hard for the user to say "no, actually you only get to see this"? (iOS has been improving this lately but it's still a pretty rare feature.)
But yes also as you allude to, it's not obvious to the user what access a program has after it's been granted.
Application inertia is hard to overcome. You really need a killer feature and developer adoption to have a chance.
We never really did.
The most successful project (internet + web) did a much better job.
But legacy wise, we have a breakdown of boundaries at so many levels of the stack.
Breakdown of boundaries meant squeezing more performance from both memory and CPU which were expensive and lacking.
Whereas Apple can release a new device on new OS with some custom hardware/drivers and as long as the internet + web parts are compatible with those interfaces there is no issue for customers.
Maybe you're right though, but also isn't that what POSIX was meant for?
Also for things like Wi-Fi, they define those standards but the underlying vendor implementation is totally custom and can get really ugly still, just like an OS.
Even "relatively" simple monolithic systems are nightmarishly complex to change.
Apple doesn't really give a shit about backwards compatibility, past the bare minimum to keep their platform devs from revolting.
POSIX, originally, was intended to provide a solution to every-vendor's-Unix problem. A nice side effect of it was absolutely that we got (mostly) standard interfaces.
> We never really did.
Sure we did. VMS, MS-DOS, CP/M, NetWare, etc.
OSX set them back in that sense (yes; and also ahead. Don’t @ me), but I personally think there are some current opportunities around making graphics-first (and task-first) operating systems.
You won't get another Windows / Linux / BSD. They do their jobs very well, and as WSL emulating Linux proves, they are nearly interchangeable. I have no doubt Linux could emulate the Windows kernel near perfectly too if we had the Windows source to see what is required. When you've already got a slew of interchangeable parts, why create another one?
If something new is to displace them, it's going to have to do something very different. The only thing I can see is not a replacement, but a security kernel that allows us to establish a trust chain from the hardware to some application. It wasn't necessary when the hardware sat on your lap or desk, but now that we tend to rent CPU cycles from some machine on the other side of the planet and yet still want some assurance of privacy, they are becoming kind of essential. There are a number of proprietary ones out there now, but I think that's doomed to fail. No one in their right mind would put their faith in a binary that could be in cahoots with anybody, from the US government to a ransomware gang.
Any credible attempt will have to be designed from the start to focus on compatibility with whatever it replaces, have huge investments in the ecosystem transition from the corporate owner, have a credible commitment to the phaseout of the superseded OS, and be prepared for a very long transition period.
Will be interesting to see what happens with Fuchsia. Its success might be a big loss to open computing though.
Android has been steering away from that position even though Google does still choose to publish AOSP as code drops. Being based on Linux has definitely helped to keep it open a while longer. It feels likely that Fuchsia would in the best case have a similar role to Darwin's.
The "new thing after Unix" is already here and has been here for some time. It's called Hurd. It depends on a microkernel who's only job is to pass messages from different components, but microkernels don't work well (at least not better than monolithic kernels) on x86 due to context switching overhead.
x86 CPUs maintain their architecture due to software compatibility. That's what we're really stuck on. A lot of improvements have been bolted onto x86, like caches, all the instruction stream acrobatics, SIMD, VMs, long mode, etc., but it's still "heavy"; things are done to make it look fundamentally the same to existing programs.
When CPUs start becoming cheap little tiny things set in a superfast fabric to talk and cooperate with potentially thousands of other cores (like GPUs) as fast as they can chew local instructions, and all the stuff with I/O is worked out, we're ready to take the next step. You can see a little bit of the future with things like the Cell BE, but it needs to be much higher scale to change what OS is dominant.
> the future
> Cell BE
Are you a time traveler from 2000?
NeXTSTEP implemented Unix atop a microkernel for security benefits.
Hurd splits all the subsystems of Unix into independent facilities that are connected via the microkernel. One of them can crash and not affect the rest of the system, but also I got the idea reading about Hurd that there's no particular requirement one subsystem lives on a specific CPU or even machine as long as they can communicate.
Maybe I'm thinking of Plan 9?
> but microkernels don't work well (at least not better than monolithic kernels)
Isn't "at least not better" (meaning, as good as) sufficient in terms of performance? Because the point of microkernels afaik isn't better performance, but security (edit - and robustness).
> on x86 due to context switching overhead
how high is the extra overhead on hurd over a more conventional OS?
Windows NT shipped in 1993, so the kernel is closer to 30 than 20 at this point!
They control the chips. They control the hardware. They could write the software in the middle.
It's called Darwin. The kernel they created is called XNU, and it was open sourced around 2000 as part of Darwin.
1) People announce leaving publicly and the FLOSS community takes notice of it. This is a sign of Debian's health. In many other projects few people would notice.
2) The number of Debian developers, projects, and packages has been increasing for decades.
3) For each person writing on mailing lists and blogs there are 10 people quietly contributing.
4) The same applies to the occasional flamewars. Vocal minorities are not representative of the thousands of DDs and contributors.
The piece was cogent, respectful, and constructive.
How about addressing his points?
The same goes for the rest of the things he lists. Yes, he might prefer to do them some other way, but the way they are done now has obviously been working very well for a long time.
As for Debian being incapable of making big changes - that's just rubbish. He's been there for 10 years, for pete's sake. systemd was a big change requiring many packages to be updated, spanning several years and several releases, while still delivering a working system as it happened. That's not big? How about altering the source package format, or moving away from SHA-1 for signing, or making everything build reproducibly, or moving all developers using its collaborative development platform from FusionForge to GitLab? Sorry, he's just plain wrong on the "can't make big changes" point. Debian regularly makes big changes every major release. In Bullseye they will have made big strides in migrating away from Python 2. Changes of this scale are things other projects regularly struggle with, but not Debian.
His post lists a whole pile of things about Debian he's discovered he no longer likes, which is fair enough. But as he says in his introduction, it's he who has changed: he's gone from a student with lots of time on his hands who was happy to be part of a loosely collaborating group to being a member of a very focused and highly directed team at work, and he's discovered he prefers the latter. Great, I get it, happens to all of us. But that doesn't mean Debian no longer works. It clearly works very well. It just means he's no longer a great fit for that way of doing things.
In particular, mentioning systemd - which I was very happy to discover on returning to Linux after some time away - made this answer very relatable.
Big Ben is a Victorian hand-wound clock. Its accuracy is maintained by moving a stack of old English pennies balanced on its pendulum. It must be wound by hand three times a week, and it takes one and a half hours to wind every time.
My guess is that the author of the piece we are discussing would find this to be an excellent analogy for the Debian processes he is complaining about.
So he has been told that this is a Gmail issue, but insists on using it. Meanwhile he complains that the rsync package maintainer is blocking his changes out of personal preference. Double standards much?
This is not some esoteric email client that a handful of developers refuse to let go.
That's simply not the case, and the OP is right that there were many Debian Developers in 2019 and there are many other Debian Developers now.
Not that saying this means it won't be noticed, or can just be brushed over, if a prolific member decides to step down.
> The piece was cogent, respectful, and constructive.
Just for clarity, I agree with that.
> How about addressing his points?
Who, the OP? Even his nickname suggests affiliation with Debian; normally the support and action of more people is required to bring about bigger change.
But yes, IMO Debian surely needs to continue to adapt or be doomed to frustrate more developers in the future.
Things like (from the blog post):
> I tried to contribute a threaded list archive, but our listmasters didn’t seem to care or want to support the project.
It just seems baffling to me: he proposed to do the actual work (and with his track record one could be certain that he'd follow through) on a feature where one can only win (i.e., don't like it? Just continue to use what you like).
Such resentment against unproblematic changes, bringing value to some group but not taking away value from others, is tedious and demotivating.
But who takes up the fight to change Debian? In the end it probably needs to come from within, i.e., a sizeable part of Debian Developers need to drive and push forward, or at least reduce the barriers for those who wish to do so respectfully, without breaking what is now.
No, damage control or image management in no way implies a company or PR department. Any group can engage in these activities. As you point out later, the OP appears to be affiliated with Debian.
> Who, the OP? Even his nick name suggests affiliation with Debian, there is normally support and action of more people required to bring bigger change.
Yes, the OP. I perhaps should have used the words ‘commenting on’, or ‘responding to’ instead of ‘addressing’.
I am not expecting the OP to solve the problems, but I am suggesting that it would be more constructive to comment on the substantive content of the original article than to write innuendo about how many people are just quietly contributing, or implying that the author may be part of a ‘vocal minority’.
1. I said it seems he is affiliated, but anybody can nickname himself a variant of "debian developer" in any forum.
2. It implies a formal body of the organisation (which can be a single person, like the DPL); otherwise it's not damage control by Debian as you suggest, but by a single person - which can hardly be framed as damage control in this case, since the blog clearly referred to Debian as a whole, not to a single person.
> I am not expecting the OP to solve the problems, but I am suggesting that it would be more constructive to comment [...]
That's what you say now, but not what you said originally. As said, change needs to come from Debian within, not some HN discussions - talk is cheap.
Thanks for the constructive down vote, though ;-)
Nowhere did I suggest Debian was doing damage control.
You are simply misrepresenting me.
“That’s what you say now, but not what you said originally”
Another misrepresentation. What I said originally, and my follow up comment are perfectly consistent.
I’m curious why you feel such a strong need to defend DebianDev and deny that there is any damage control happening.
Every time similar content is shared here there's a number of people making exaggerated claims around Debian being dead or in deep trouble.
The other comments accusing me of doing damage control are a good example.
I recommend attending Debian events in person (once COVID is gone) to see that 99% of interactions between people are very friendly.
Your opening line in no way makes that clear:
> A couple of healthy reminders to avoid drama and FUD:
If your intent was to "in no way respond to the blog," you should have instead written something like this:
"Unlike the article, it seems like a lot comments here are intent on spreading FUD about Debian..."
> I recommend attending Debian events in person (once COVID is gone) to see that 99% of interactions between people are very friendly.
In the meantime, I'd recommend reading the blog: in it a Debian developer mentions having very friendly interactions with other Debian friends before diving into a technical, respectful, and detailed critique of the developer UX in Debian.
Especially the word "avoid", knowing that the article is already written.
> In the meantime, I'd recommend reading the blog: in it a Debian developer mentions having very friendly interactions
And still, you keep assuming that I'm not talking in good faith and that I'm trying to subtly attack he article.
I didn't see it the first time around, and it caught my attention since I've had an interest in becoming a Debian Maintainer for about 15 years now, just never fully did it (I even have a GPG key signed by a few Debian Devs), and a lot of the frustrations resonated with me.
I had not realized that my blog template update would result in a repost to various feed consumers. Will try and figure out why that happened.
Maybe I can also add the year to the post header when the post is older than X days.
Anyway, bringing this post up again is not a bad thing :)
Is it better now? Stabilizing? Anything to actually set it apart that you'd call out specifically as being advantageous for Linux on the desktop?
There were no significant changes to the Fedora packaging model until three years ago, when Modularity was introduced and Pagure was deployed to ease contributions and support building modules. And the Modularity concept is primarily used for alternate software streams in Fedora, so the vast majority of Fedora packages don't use this feature.
Hobbyists could cobble together hardware and ship it to a data center, but if you couldn’t afford a serial console you’d have to unrack the machine every time you messed up a kernel upgrade.
Now I can just remotely blast a clean install onto bare metal, and then build containers or VMs on that, and it's so easy I can rebuild all infra every morning, from scratch, just for fun (ahem, to verify its idempotency).
Gone is the need for the high quality package management of Debian: The Universal Operating System. I lament this as much as I lament the decline in quality and commoditization of many many other things in life. Food, ISPs, journalism, education.
All great tooling becomes a victim of its success. Packaging is so good and simple now that you don't have sysadmins anymore, and many companies know little or nothing about what they are doing. So people build the golden build and clone away.
The top complaint, as it was for decades, was that packages were out of date. But I don't think there was as much of a cultural attitude that running a package from a few months or a year ago was the cardinal sin it is today. People here suggest switching away from projects if there hasn't been a commit in two months. Back then, it wasn't so painful to deal with unless you ran into the need for a very specific feature or bugfix. Then there was testing and unstable.
Too true. A lot of my interaction with sys admins at work is "hey can you install this package from the repo?" and only because I don't have administrator privileges. Not to downplay our sys admins, but it feels like they are overqualified sometimes.
Well, almost. What's inside your container, though? For anything of moderate complexity, my containers always end up having some apt-get or yum installation in them; it's not like I want to Dockerfile up the manual from-source install steps for every small package, nor do I expect to have perfect upstream Dockerfiles to FROM that include all the exact bits I need....
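For example, the typical compromise looks something like this (a sketch; the base image and package list are arbitrary):

FROM debian:buster-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates curl \
 && rm -rf /var/lib/apt/lists/*
COPY myapp /usr/local/bin/myapp    # only the bits we actually build from source ourselves
CMD ["myapp"]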
Feels very much like saying “all the stuff I don’t (think I) need, is useless”.
or a report, no you need to create a bogus product (which is not a single click either), move the report there, and then delete the whole product.
It's really a bit of a PITA to use; the search is IMO quite good though.
But the truth is probably more interesting.
A world where big important decisions made in person go completely undocumented, yet weeks of bickering and bikeshedding over nothing are kept forever.
I felt so busy in college, what a tragedy! If only I had known then what it would mean later in life.
The best part about college for me was that I could disappear for the summer, going somewhere and doing what I wanted, knowing that my life would wait for me until fall.