A functional approach & Scheme for all stuff is a big plus.
I'll bite.
Why?
Being "functional" and using "Scheme for all stuff" is just a means to an end. What value does it bring me as a user or administrator?
Because I can think of at least one obvious downside: Scheme has no mainstream adoption. So they've picked a niche language (yes Scheme is niche) that users now have to learn.
So the value it brings, in terms of features, maintainability, etc, better be significant in order to justify that pain.
And I'm talking real value, not the value of intellectual rigor or functional purity. I'm talking about actual, real, tangible features that can't be gotten anywhere else.
"Devuan can be adopted as a flawless upgrade path from both Debian
Wheezy and Jessie."
That is nice: a supported path of simply adding the sources and signing keys and running apt-get dist-upgrade. Impressive work.
I'm personally not convinced there's a viable near future for Linux without systemd (not counting the mess that is Android, which isn't a potential workstation / server distro) - I think Debian/kFreeBSD is a better path.
Is there an assumption that, once Debian decided to go with systemd and an increasing number of Debian's standard packages depended on it, the kFreeBSD port would have to be given up at some point?
Well, that's one way to look at it. Another way to look at it is that once "all of Linux" went with systemd, there'd be little real interest or help upstream in porting various packages to "Linux without systemd" - but as long as OpenSolaris (by any name), FreeBSD, NetBSD and OpenBSD exist, there's likely to be some help with porting to a world without systemd.
What makes it so hard to write init scripts or run scripts or unit files for a daemon?
Of course, when you irreversibly mess things up by adding a hard dbus dependency, integrating udev into the init system, putting systemd-resolved on 127.0.0.53 everywhere, plus tons of other mess, then it's hard to "port" everything, because you've messed everything up so badly that it only works with your init system now.
I never understand why OSS projects give themselves names that will never go mainstream. If the goal is at least semi-popularity, then "Devuan" will not work.
As I noted on another thread, the choice to call their initial release "Jessie" is also bizarre -- that's the same as the release name for Debian 8.0, which was released in 2015. Between that and the project name (which is a typo of "Debian"), it all makes the project look like it's intentionally trying to create confusion between itself and Debian, rather than trying to succeed on its own merits.
The name "devuan" has been selected cause it's a merge of "debian" and "VUA" where "VUA" stand for "Veteran Unix Admins", the name of the group that started the fork.
The first release was named "Jessie" because it's a 1:1 replacement for Debian Jessie, and luckily Jessie is also the name of a minor planet, so it matches our nomenclature and reflects that it's a very short path to switch from Debian Jessie, as Devuan considers itself the "real" continuation of Debian after Wheezy.
Well, their release is essentially Debian Jessie with a few defaults modified (the "init" meta-package depends on sysvinit instead of systemd), a few packages removed (systemd-sysv) and a few packages rebuilt not to depend on libsystemd (i.e. dropping the optional extra integration under systemd, for freedom reasons).
Keep in mind that they originally wanted to release their version of Jessie in early 2015...
Something close to Dev One, I'd reckon. Mind, apparently I and most of the people I know pronounce nginx incorrectly, and that's reasonably mainstream.
Since the effort started from some Italian developers/sysadmins, I would be surprised if dev-one wasn't a good approximation of the correct pronunciation. As an Italian, that would be how I'd pronounce it.
This is exactly the case, and people are being very US-centric about it, complaining about something which doesn't sound right to them, but which sounds perfectly natural to the original developers.
In this particular case I don't think it'll matter. All the big distros have (thankfully[1]) standardized on systemd. At least we now have something to point the systemd haters towards, I guess.
[1] It doesn't even matter too much to me _which_ solution we have, as long as it's reasonable (e.g. upstart wasn't) and a reasonably complete daemon supervision solution which can actually capture the output of the daemons it's running, etc.
I sort of understand the reason for using systemd on desktop computers, where things may be rather volatile, but in a stable environment such as a server or a mainframe, the value that this complex piece of software brings to the table is not obvious to me; besides, others have mentioned that it is a sysadmin's nightmare, so...
I'm not super interested in rehashing the flamewar, so this is anecdata: from the perspective of a sysadmin of thousands of physical machines, systemd has been a godsend. It gives me a standard way to supervise, gather log output from, and restart long-running processes with dependencies on other processes. It's replaced a number of "sleep 10"s and "sleep 60"s that our infrastructure had grown over the years in shell scripts at various points in init. Starting services in a consistent environment (not inheriting anything from the sysadmin's own session / environment) has also been useful in an environment with lots of sysadmins.
It's certainly not the best solution out there for any single thing it does, but the standardization - i.e., that it is integrated with my distro and also every other distro - is extremely useful. Most existing things I want to run tend to have systemd support, and most homegrown things are very easy to write systemd units for.
Also, there are a lot of reasons to want fast reboots for servers, and systemd provides this. (Again, so do other approaches, but the standardization means this actually happens.)
It's worth noting that there are sort of two debates: systemd vs. other similar systems (upstart, SMF, launchd, whatever), and systemd/similar systems vs. sysvinit. Among experienced, large-scale UNIX admins, I think the broad consensus is that the answer to the second question is "definitely anything other than sysvinit". See, for instance, that the Debian tech committee debate was about systemd vs. upstart vs. insserv or something else, and nobody voted for sysvinit.
This is just a personal anecdote, but so far systemd is the only init system ever to fail to reboot by first stopping all the services it could (including sshd!) and then getting infinitely stuck waiting on one problematic service. In the 12 years I've used Linux and OpenBSD, no other init system ever did something like that.
What a nightmare that was to debug and fix.
I'll be happy to go with a system with fewer moving parts, even if that means reboots take a minute longer.
While I don't think this was related to the init system, I have had a Linux box completely hang while trying to reboot it because an NFS server that it was connected to went away. I'm pretty sure this was an infinite hang as it sat like that for over five days before I could physically get to it to hit the power button.
Hanging during reboot is something that, IMO, should never happen.
In my anecdotal experience that's the _only_ case where systemd has failed to reboot, and it's not exactly systemd's fault that the kernel just hangs indefinitely.
Most servers boot really slowly anyway (memory tests, raid cards, etc.), so why would anyone care whether it takes one or ten seconds to boot Linux when the server takes 120 seconds to boot? Other init systems are capable of restarting things with dependencies as well, like BSD's rcng for instance. I am using runit to supervise my services since I can't rely on systemd.
My experience with sysvinit is that it definitely takes well over ten seconds to boot Linux and start all services, especially if you've been putting "sleep 60" into certain scripts to make boot ordering reliable.
Again, I'm comparing systemd to pre-systemd Linux distros (including Devuan, which explicitly continues that tradition), not to systems broadly comparable to systemd. If Devuan had a different fancy init system or service manager (which could just be bundling runit configuration for everything the OS ships with) instead of an old-school init system + old-school init scripts, this would be a very different comparison.
Startup order (again, with traditional runlevels and nothing like insserv) isn't enough to express "Once networking is up, start this job" or "Once this service is ready to accept connections, start this other service".
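For illustration, here's a rough sketch of how that kind of ordering is expressed in a unit file. The daemon name and path are made up, but the directives themselves (Wants=/After=network-online.target, Type=notify for "ready means the daemon said so") are standard systemd ones:

    [Unit]
    Description=Hypothetical example daemon
    # don't start until the network is actually up
    Wants=network-online.target
    After=network-online.target

    [Service]
    # Type=notify: "ready" means the daemon reported readiness via sd_notify(),
    # not merely that the process was forked
    Type=notify
    ExecStart=/usr/local/bin/mydaemon
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Another unit can then declare After= (and Requires= or Wants=) on this one, and ordering against it takes the readiness notification into account rather than just "the process exists".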
Or containers! There are definite advantages to having your container's pid 1 be an actual init process (rkt-style) instead of the application being contained (Docker-style), and there are often advantages in that init being a full-featured init instead of a dumb EWONTFIX-style init. (Although I have written and used the EWONTFIX-style init for other purposes, where that led to a simpler architecture.)
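For reference, a minimal reaper-style init in the spirit of the EWONTFIX one is only a handful of lines. This is just a sketch, with /etc/rc standing in for whatever actually launches the contained application:

    /* minimal pid-1 sketch: block signals, let a child do the real work,
       and spend the rest of its life reaping orphaned children */
    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        sigset_t set;
        int status;

        if (getpid() != 1) return 1;      /* only meaningful as pid 1 */

        sigfillset(&set);
        sigprocmask(SIG_BLOCK, &set, 0);  /* pid 1 shouldn't die on stray signals */

        if (fork())                       /* parent: reap zombies forever */
            for (;;) wait(&status);

        /* child: restore signals, start a new session, exec the real startup */
        sigprocmask(SIG_UNBLOCK, &set, 0);
        setsid();
        setpgid(0, 0);
        execve("/etc/rc", (char *[]){ "rc", 0 }, (char *[]){ 0 });
        return 127;                       /* exec failed */
    }

That's roughly all a "dumb" init does; the point above is that a full-featured init inside the container additionally buys you supervision and dependency handling.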
>"others have mentioned that it is a sysadmin's nightmare"
Any examples? If anything I'd expect systemd to make things easier for sysadmins, would be interested in finding out what challenges sysadmins face because of systemd.
Haven't heard of this particular issue, but my main point is that the reaction should then be: "Ok, so let's fix that in systemd, upstream systemd .service files or whatever, and it gets fixed for everybody all at once." The reaction should not be "systemd sucks!".
That is the real value of standardization.
EDIT: As far as I can tell as an outsider systemd is at least engineered well enough that these things can be solved, eventually. That wasn't the case before: Everybody had their own quarter-assed solutions that only worked for their specific case and would fail horribly in other cases, etc. etc.
There are things that are worth diversifying on. Basic subsystems that only serve to keep the OS running are not among them -- the benefits of standardization far outweigh the costs of lack-of-diversity. (Of course this is just my opinion, yours may differ.)
As to freedom: I'm not sure what you're driving at here. Can you expound?
I think freedom and diversity simply do not exist one without the other. Freedom of choice that has been essential in making Linux popular in the first place may be severely limited by imposing more and more "standards"; I would argue that the very spirit of Linux lies exactly in this freedom. This is why we have so many distributions; and the more we standardize, the less need there is for this diversity, which, I think, is bad for Linux and, ultimately, for its users.
We have all these 'words' like standardization, fragmentation, duplication that sound good but seem unworkable in open source without creating a monolith.
There is a real risk this approach ends up concentrating power and influence and destroying dynamism. I suspect Linux would not have got to where it is with 'gatekeepers'.
For proper standardization, the design and development have to be done openly, with the collaboration of major open source vendors and input from users, to prevent one entity from gaining control - rather than, as now, trying to push vendor-controlled projects as standards, which looks too self-serving.
If the project is already done the vendor should be willing to give up control to a 'standards body'. 'Standards' can't be controlled by a single vendor. It's not a standard then.
> I suspect Linux would not have got to where it is with 'gatekeepers'.
Uh, Linus? ... and distros, in general? There are very few people who decide to fork their own distro because, ultimately, good enough is good enough.
You're portraying this as some sort of cabal of nefarious shady characters trying to do back-handed deals... it really was just a case of the other non-RH distros realizing that systemd was actually good enough.
This is still all Free/Libre software, so if things go awry there's always the option of forking.
I was actually talking about standardization. If systemd seeks to be a standard then Red Hat would have to give up control to an 'independent standards body' where others can influence the design, development and direction of the 'standard' outside the influence of Red Hat and its employees.
That seems to be the only credible way to create workable standards. Open source ecosystems clearly do not yet have the thinking and infrastructure around standardization so it may take concerted effort and time to develop the right processes.
Ah, sorry. I was using standardization in the "de facto" sense.
I think the "forking threat" applies nonetheless as long as RH doesn't at least broadly go along with the "committee" of the (F)OSS community. Honestly, I don't really see how they would extract any value from not doing that, so I'm mildly mystified by all the "they're trying to take over the world" rhetoric.
EDIT: Aside: Honestly, I don't see much value in trying to "de jure" (as in: ISO or similar) standardize a thing like systemd. Do you see value in doing that?
EDIT#2: I used to work in the semi-embedded space, and I'm sure many of the people I was working with would kind of see a value prop, but I would point out that they were already working with 2-5 years out-of-date Linux distros anyway...
Forking is an extreme action, a last resort. Standards are about establishing ground rules to avoid escalations and disagreements. The whole idea of standards is collaboration and co-operation.
Software is not static and decisions need to be made by everyone the standard impacts. Which is why it is important that control rests in a collective that represents everyone's interests. I think that's the only way we could call it a 'standard' without taking liberties with the word. Wouldn't you agree?
(I think we've reached thread depth limits here, so I'll stop at this.)
Yes, it's a last resort and that does give the incumbent a bit of leverage. However, we've seen this type of thing before in the (F)OSS world: EGCS is an almost model example.
> Software is not static and decisions need to be made by everyone the standard impacts.
Sure, but systemd actually has very good reference documentation which you can hold them to.
> Which is why it is important that control rests in a collective that represents everyone's interests.
That may be the thing we disagree about, actually. I do think that the systemd maintainers have everyone's best interests at heart, not just RH's. That doesn't mean that they formally represent everyone's best interests, obviously. (I readily acknowledge that I may have this perception because I happen to agree with them on many issues, but I do try to be as impartial as possible when discussing it.)
>I think that's the only way we could call it a 'standard' without taking liberties with the word. Wouldn't you agree?
I don't. Off the top of my head, my best example would be AC vs. DC[1]: They were both 'standards', but were basically pioneered by two individuals. (For -- as it turns out -- absurd reasons.)
The point is that a 'standard' for me means something more along lines of a 'standard of service' or 'standard of care'. It's more about having a formalized set of parameters of X. It's not so much about the process itself.
I kind of prefer the current Linux community without a backwards standards body, where everyone can write their own stuff and use it. It's just like early flight, where anyone with a bike workshop could build a flying machine out of bicycle parts if they wanted to.
Well, it's not the name that makes a person, it's a person that makes the name... That's on the one hand. On the other hand, I agree with you - "Devuan", "Deviant"... Arrgh.
For those who forgot what Devuan was, like I did – it's basically a Debian fork without systemd.