IANAL, I can only cite the court decision: "And, the digitization of the books purchased in print form by Anthropic was also a fair use but not for the same reason as applies to the training copies. Instead, it was a fair use because all Anthropic did was replace the print copies it had purchased for its central library with more convenient space-saving and searchable digital copies for its central library — without adding new copies, creating new works, or redistributing existing copies."
Well, I made my predictions. Let's come back in a few years.
Netscape didn't attack Microsoft's business software, operating systems or other pieces of their offerings.
Google also didn't seriously attack Microsoft's business.
And neither had the capability to build large software very fast.
Google is both a software company and an infrastructure company, as is Microsoft today. Their software is going to become more of a commodity, but their data centers still have value (perhaps even more value, since all this new software needs a place to run). It's true that if you're in the business of hosting software and selling SaaS, you have an advantage over a competitor who does not host their own software.
> Netscape didn't attack Microsoft's business software, operating systems or other pieces of their offerings.
That's not how it was interpreted at the time: Netscape threatened to route around the desktop operating system (Win32) to deliver applications via the browser. "Over the top" as they say in television land.
Netscape itself didn't succeed, but that's precisely what ended up happening (along with the separate incursion of mobile platforms, spearheaded by Apple... and followed by Google, who realised they had to move very quickly).
> And neither had the capability to build large software very fast.
How do we determine that a specific instance of a filesystem mount is "remote", or even requires a "network"? Consider that the network endpoint might be localhost, a netlink/unix/other socket, or, say, an IP address of the virtual host (practically guaranteed to be there and not truly "remote").
systemd has .mount units which are way more configurable than /etc/fstab lines, so they'd let you, as the administrator, describe the network dependency for that specific instance.
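For example, a minimal sketch of such a unit with an explicit network dependency (the share name and paths here are made up):

    # /etc/systemd/system/mnt-data.mount (hypothetical example)
    [Unit]
    Description=Example CIFS share
    # Declare the network dependency for this specific mount explicitly:
    Wants=network-online.target
    After=network-online.target

    [Mount]
    What=//fileserver/data
    Where=/mnt/data
    Type=cifs
    Options=nofail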
But what if all we have is the filesystem type (e.g. if someone used mount or /etc/fstab)?
Linux doesn't tell us that the filesystem type is a network filesystem. Linux doesn't tell us that the specific mount request for that filesystem type will depend on the "network". Linux doesn't tell us that the specific mount request for that filesystem type will require true network connectivity beyond the machine itself.
So, before/without investing in a long-winded and potentially controversial improvement to Linux, we're stuck with heuristics. And systemd's chosen heuristic is pretty reasonable - match against a list of filesystem types that probably require network connectivity.
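In rough terms, the heuristic amounts to something like this (a minimal sketch in C, not systemd's actual source; the type list is illustrative):

    /* Sketch of a "network filesystem type" heuristic: classify a mount
     * as network-dependent purely by its filesystem type string. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    static bool fstype_is_network(const char *fstype) {
        /* Illustrative list; real implementations hard-code similar tables. */
        static const char *const network_fstypes[] = {
            "nfs", "nfs4", "cifs", "smb3", "ceph", "glusterfs", "fuse.sshfs",
        };
        for (size_t i = 0; i < sizeof(network_fstypes) / sizeof(*network_fstypes); i++)
            if (strcmp(fstype, network_fstypes[i]) == 0)
                return true;
        return false;
    }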
If you think that's stupid, how would you solve it?
> How do we determine that a specific instance of a filesystem mount is "remote", or even requires a "network"?
Like the systemd authors do! Hard-code the list of them in systemd itself, with special cases for fuse and sshfs. Everything else is pure blasphemy and should be avoided.
Me? I'd have an explicit setting in the mount unit file, with defaults inferred from the device type. I would also make sure not to randomly add landmines like systemd-update-done.service, which has unusual dependency requirements: it runs before the network filesystems but after the local filesystems.
I bet you didn't know about it. It's a service that runs _once_ after a system update, so the effect is that your system _sometimes_ fails to boot.
> systemd has .mount units which are way more configurable than /etc/fstab lines
It's literally the inverse. As in, /etc/fstab has _more_ options than native mount units. No, I'm not joking.
Sounds like your admin, distro, or the systemd team could pay some attention to systemd-update-done.service
The "can only be used in /etc/fstab" systemd settings are essentially workarounds to do those things via fstab (and workaround fstab related issues) rather than depend on other systemd facilities (c.f. systemd-gpt-auto-generator). From a "what can you do in /etc/fstab without knowing systemd is working behind the scenes" point of view, then yes, systemd units are vastly more configurable.
This service is a standard part of systemd. And my distro is bog-standard Fedora, with only iSCSI as a complication.
Are you surprised that such a service exists? I certainly was. And doubly so because it has unusual dependency requirements that can easily lead to deadlocks. And yes, this is known, there are open issues, and they are ignored.
> From a "what can you do in /etc/fstab without knowing systemd is working behind the scenes" point of view, then yes, systemd units are vastly more configurable.
No, they are not. In my case, I had to use fstab to be able to specify a retry policy for mount units (SMB shares) because it's intentionally not exposed.
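For anyone hitting the same wall, the fstab-side pattern looks roughly like this (the share name and timeout are made up; the x-systemd.* options are documented in systemd.mount(5)):

    # /etc/fstab (hypothetical): automount plus a mount timeout as a crude retry policy
    //server/share  /mnt/share  cifs  _netdev,nofail,x-systemd.automount,x-systemd.mount-timeout=30s  0  0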
> How do we determine that a specific instance of a filesystem mount is "remote", or even requires a "network"?
The '_netdev' option works a treat on sane systems. From mount(8):
    _netdev
        The filesystem resides on a device that requires network access
        (used to prevent the system from attempting to mount these
        filesystems until the network has been enabled on the system).
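A typical fstab line using it might look like this (the label and mount point are made up):

    # /etc/fstab (hypothetical): ext4 on an iSCSI LUN, flagged as network-dependent
    LABEL=iscsi-data  /mnt/iscsi  ext4  _netdev,nofail  0  2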
It should work on SystemD too, and systemd.mount(5) documents as much:
    Mount units referring to local and network file systems are distinguished by their file system type specification. In some cases this is not sufficient (for example network block device based mounts, such as iSCSI), in which case _netdev may be added to the mount option string of the unit, which forces systemd to consider the mount unit a network mount.
But, surprise surprise, it doesn't reliably work as documented, because SystemD is full of accidental complexity.
Sure, and systemd would translate that directly into a dependency on network startup, which is precisely equivalent to the approach I mentioned that depends on operator knowledge. It's configuration, not "just works" inference.
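For the record, my reading of systemd.mount(5) is that _netdev should make the generated unit acquire roughly these dependencies (a sketch of my understanding, not verified output; the exact target set is an assumption):

    # Ordering a _netdev mount is expected to pick up (sketch):
    [Unit]
    Wants=network-online.target
    After=network-online.target remote-fs-pre.target
    Before=remote-fs.target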
> Sure, and systemd would translate that directly into a dependency on network startup...
You'd think so, but the GitHub issue linked by GP shows that the machinery is unreliable:
In practice, adding `_netdev` does not always force systemd to [consider the mount unit a network mount], in some instances even showing *both* local and remote ordering. ... This can ultimately result in dependency cycles during shutdown which should not have been there - and were not there - when the units were first loaded.
> ...not "just works" inference.
Given that SystemD can't reliably handle explicit use of _netdev, I'd say it has no hope of reliably doing any sort of "just works" inference.
I saw many corner cases in systemd over the years. And to echo the other poster in this thread, they are typically known, have GitHub issues, and are either ignored or get a LOLNO resolution.
And I'm not a systemd hater. I very much prefer it to the sysv mess that existed before, and the core systemd project is solid. But there is no overall vision, and the scope creep has resulted in a Cthulhu-like mess that is collapsing under its own weight.
> "I found one bug in systemd which invalidates everything"
I'll refer back to the story of Van Halen's "no brown M&Ms" contract term and the reason for the existence of that term and ones like it.
"Documented features should be reasonably well-documented, work as documented, and deviations from the documentation should either be fixed or documented in detail." is my "no brown M&Ms" for critical infrastructure software. In my professional experience, the managers of SystemD are often disinterested in either documenting or fixing subtle bugs like the one GP linked to. I find that to be unacceptable for critical infrastructure software, and its presence to be indicative of large, systemic problems with that software and how work on it is managed.
I really wish SystemD was well-managed, but it simply isn't. It's a huge project that doesn't get anywhere near the level of care and giveashit it requires.
LFS (Linux From Scratch) never had academic, educational, or pedagogical merit. It was always sheer faith that by doing enough busywork (skipping the truly difficult stuff), something might rub off. Maybe you learn the names of some component parts. But components change.
Could you expand on this comment, please? (I don't think your viewpoint should be so rudely dismissed with a downvote and a move-along.) What do you mean?
Why do people even begin to believe that a large language model can usefully understand and interpret health data?
Sure, LLM companies and proponents bear responsibility for the positioning of LLM tools, and particularly their presentation as chat bots.
But from a systems point of view, it's hard to ignore the inequity and inconvenience of the US health system driving people to unrealistic alternatives.
(I wonder if anyone's gathering comparable stats on "Doctor LLM" interactions in different countries... there were some interesting ones that showed how "Doctor Google" was more of a problem in the US than elsewhere.)