It's interesting to me that he highlights 32 bit time_t as one important point of compatibility. Makes perfect sense if your goal is to keep your program working on many operating systems. OTOH 2038 is only 14 years away now, or, well, closer to today than curl's launch is. I wonder when nobody will think supporting 32 bit times is worth the trouble any more? My guess is about 3 months after the Y2K38 date, maybe even longer.
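For illustration, the wraparound itself is easy to demo in plain C (nothing curl-specific here, just a sketch; the increment goes through uint32_t because signed overflow is undefined behaviour in C):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        int32_t t = INT32_MAX;            /* 2038-01-19 03:14:07 UTC */
        time_t wide = (time_t)t;
        printf("last 32-bit second: %s", ctime(&wide));
        t = (int32_t)((uint32_t)t + 1u);  /* wraps to INT32_MIN */
        wide = (time_t)t;
        printf("one second later:   %s", ctime(&wide)); /* December 1901, on a 64-bit time_t host */
        return 0;
    }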
This is something a bunch of devs growing closer to us ops-guys have learned as well: dependents are a drag. Infrastructure and central systems stop being agile and nimble once enough stuff depends on them. Trivial changes become really hard.
Like, if you have your service with a backend and a frontend and want to throw out the API between them and deal with it, who cares? Do it.
If you want to throw out some weird edge case in a base template because "it's ugly", you need to look at 20 legacy services, 20 somewhat modernized services people don't want to touch, and some 50 more services to see whether they use it. And that's just the work necessary to understand how much work the change would mean; we're not even talking about executing the change, or reverse engineering arcane mysteries.
I'm not going to say it's glorious work, but it makes a roadie or an admin proud if you can keep the "wonderful star" afloat and change everything below it without anyone noticing.
There's a bunch of things on there that haven't had a release in over 25 years. Are they still doing current builds on SCO, Xenix, OS/2 and NeXTSTEP?
I've run these, I know some curl exists on them, but in my experience that curl is always like at least 15 years old.
If you say you don't want to break some support matrix and the last build on a bunch of that matrix was well over a decade ago, how do you know it's not broken?
I don’t think you can build for Xenix: the most modern GCC that can target it is 2.7, and that’s already ancient, not to mention the network code. It’s probably a case of “at some point it was running on those systems”.
Right. I'm not speaking poorly of the curl project (I'm actually one of their financial sponsors); I'd just want to see a bit more evidence for this claim. I've got OpenVMS/Alpha, NeXT/68k and HP-UX/PA-RISC machines in my closet. If we want to get modern software on there, let's roll, I'll fire them up.
It's a pretty cool machine but I assure you it's extremely useless these days.
The TIFF format it uses, for instance, is no longer supported by ImageMagick. It doesn't support modern DNS resolution, and if you want NFS you'll need a legacy v3 server.
The Ethernet is a blistering 10baseT and the SCSI drive doesn't crack 300MB.
The ssh server exists, but modern clients no longer support its archaic protocol.
You've got perl and zsh, that's nice. There's also some antique Apache for it, so you can use WebDAV, and you can get it to play MP3s in real time if they're mono channel and under 32 kHz (yes, that's what I meant).
To be honest, that’s par for the course for a machine of that era.
That doesn’t mean you cannot waste some time having fun with it: I’ve ported some software to Xenix (including Doom, sort-of-working: https://github.com/gattilorenz/Xenix-doom), and while completely useless it’s intellectually stimulating.
> Countless users and companies insist on sticking to ancient, niche or legacy platforms and there is nothing we can do about that.
Most people, even proprietary closed source developers, write software where one of the first sentences in its license is something like "This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
That is inexpensive.
Some people, myself included, develop software where the fitness for purpose is guaranteed and warranties are explicit and absolute.
That is very expensive. And slow.
It makes sense to stick with ancient and niche platforms that have been pored over for 30-40 years by very slow and deliberate developers when someone may die, the environment may be ruined, or millions of dollars in damage may occur if something breaks.
At least that's what I tell myself when staring at Ada on INTEGRITY on PowerPC.
> It makes sense to stick with ancient and niche platforms that have been pored over for 30-40 years by very slow and deliberate developers when someone may die, the environment may be ruined, or millions of dollars in damage may occur if something breaks.
Contradicted yourself in one sentence!
If a single person dying is going to cost millions of dollars in damage, it makes sense not to use software or platforms where the developer pool is about as big as the number of French jugglers with Wilson's disease.
One of the reasons for this popularity is the permissive license. I would guess that the license is MIT, but it seems to be a bit different [1]. Does anybody know (in simple words) what the main differences from MIT are? The copyright webpage does not elaborate.
The following seems to be the major addition compared to the MIT license:
"Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization of the copyright holder."
There doesn't seem to be a single canonical MIT licence, but rather several co-existing variants of it. The part you quote is a standard part of the X11 variant [1], while the Expat variant does not include it.
The SPDX license identifiers are the best thing we have for defining a canonical version (the SPDX "MIT" entry is the Expat variant): https://spdx.org/licenses/MIT.html
There are many MIT-derived licenses, some of which have identifiers prefixed with MIT- while others, like X11 and curl, have independent identifiers: https://spdx.org/licenses/
All the more reason to avoid calling any one licence ‘the MIT licence’, in my opinion. While I appreciate that SPDX provides a comprehensive list of unambiguous identifiers, I don’t really see why they would be best placed to decide which of the many variants the name has been used for is the canonical one.
That’s not to say they necessarily aren’t; I’d be interested to see if any rationale behind that choice has been published anywhere. But if the choice was made more or less arbitrarily, or based on what seemed more popular to the authors, I’d be inclined not to treat SPDX as an authority on the matter.
The existence of the advertising clause was always the main difference between the traditional BSD license and the MIT license. The above is interesting because it's also an advertising clause, but it does the opposite of what the BSD advertising clause did: BSD wanted the license and the Regents to be mentioned in advertising.
It appears to be a custom licence which, as stated on the page, is inspired by the MIT/X11 licence. The only difference from MIT/X11 appears to be the part before the warranty disclaimer, which has been shortened. SPDX has a separate entry for it [1].
Well yeah sure, but they count iOS, iPadOS, and watchOS as separate operating systems, so it’s really only like 98 operating systems, which is kinda like… pff… anyone can do that…
They are separate operating systems; try writing any large application that actually runs across all of them, from a single source tree, without any modification.
Try to write any large application that actually runs across all Linux variants and form factors. IMO that's a harder job than targeting the three iOS variants.
But you have to draw the line somewhere, and where curl drew it isn't unreasonable IMO.
Where do you (or curl) draw the line on what counts as a different OS, though? Because I can imagine that at a low enough level they may be the same, work the same, etc. Same kernel, CPU architectures, etc. That is, how much of the code in (in this case) curl is specific to iOS and watchOS?
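To illustrate the question: at the source level, the Apple platforms mostly show up as a handful of TargetConditionals.h macros, so "how much code is platform-specific" is roughly "how much sits behind these". A hypothetical sketch, assuming an Apple SDK (platform_name is made up for illustration):

    #include <stdio.h>
    #include <TargetConditionals.h>  /* ships with Apple SDKs; defines TARGET_OS_* */

    static const char *platform_name(void) {
    #if TARGET_OS_WATCH
        return "watchOS";
    #elif TARGET_OS_TV
        return "tvOS";
    #elif TARGET_OS_IPHONE           /* still set for iOS and iPadOS */
        return "iOS/iPadOS";
    #elif TARGET_OS_OSX
        return "macOS";
    #else
        return "something else";
    #endif
    }

    int main(void) {
        printf("built for %s\n", platform_name());
        return 0;
    }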
I haven't spent time researching this, but Network.framework was introduced in 2018; sockets are considered the old way now, and modern applications should use NWConnection instead.
See the WWDC 2018 session, "Introducing Network.framework: A modern alternative to sockets".
Curl is a library and CLI though, not a "large application". Makes a difference for things like this. I've written stuff like this that I only tested on Linux and macOS, and then it "just worked" on Android and iOS without modifications.
What is or isn't a separate "operating system" is always a bit fuzzy and depends on which layer you're looking at. Android is "just Linux" on some levels, but also clearly isn't on others. But also: Debian really is quite different from, say, Chimera Linux, or Alpine. But it's also very similar.
It also lists illumos and OmniOS for example, and it could be argued that it's really just one system. Same with Linux and ucLinux, and probably a few others.
In my personal experience, when you write code which caters to many legacy systems, your code will likely end up...
* adhering closely to well-regarded coding practices regarding naming, typing, modularity etc.
* being more standards-compliant (except in clearly-identified locations)
* having more robust abstractions
* exhibiting fewer bugs (particularly on modern platforms)
* having a more robust build system
* being automatically compatible with future platforms, or at the very least exceedingly easy to make compatible
which is a good thing.
(But this may not be true if you cater to just a few legacy systems, because then you might make do with an idiosyncratic combination of fudges and hacks.)
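A sketch of what that discipline tends to produce: feature tests instead of platform tests, with the fallback clearly fenced off. HAVE_CLOCK_GETTIME here is a stand-in for whatever macro your build system (autoconf/cmake style, in the spirit of curl's own HAVE_* checks) defines after actually probing for clock_gettime():

    #include <stdio.h>
    #include <time.h>

    static long long now_ms(void) {
    #ifdef HAVE_CLOCK_GETTIME
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    #else
        /* clearly-identified fallback: whole-second resolution only */
        return (long long)time(NULL) * 1000;
    #endif
    }

    int main(void) {
        printf("now: %lld ms since the epoch\n", now_ms());
        return 0;
    }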
Strange that they list FreeDOS, DR-DOS and MS-DOS as separate OSes even though they are very similar. They are effectively ABI-compatible, unlike the different Unix flavors.
Almost. As everyone knows, DR-DOS, MS-DOS and PC-DOS were like 99% compatible; the problem was when one was unlucky enough to trip over that 1%, because DR-DOS reverse engineered MS-DOS (even IBM's PC-DOS license did not provide access to everything Microsoft was doing in MS-DOS).
Considering that DR-DOS is a linear descendant of CP/M with MS/PC-DOS ABI compatibility, it's certainly a separate OS. Same for FreeDOS. There's enough difference "under the hood" for each of them, however.
While I might stand corrected on that detail, OpenStep, OPENSTEP and NeXTSTEP had enough differences among themselves, with OpenStep also running on Windows, and many of Sun's experiments with its own OpenStep frameworks being at the genesis of Java and Java EE.
> I don't believe in rewrites, no matter which language. I believe in replacing code and fixing components gradually over time. That could mean that we have a curl written mostly in rust in 10 years. Or in 20 years. Or not.
Could be, as curl comes preinstalled on Windows, macOS and most Linux distributions (non-minimal installations). But I don't think Android ships curl by default, and Android uses Java. Though I don't think there is an official number for this.
The curl tool comes installed in addition to the dreaded curl alias that has plagued PowerShell users, since that alias runs the Invoke-WebRequest command and therefore doesn't act much like curl at all.
A work-around is to invoke curl as "curl.exe", which prevents PowerShell from treating it as the alias.
101 to be exact. I run it on ENIAC but due to program size it takes me about 11 years to re-enter the machine code every time Daniel does an update. I’m currently on version 1.01 hoping to get to 1.01A around 2042.
It's an excellent response to the rather dismissive attitude that a subset of developers have about legacy anything. Too many times they don't care about portability and/or correctness, and are happy to dismiss whatever they don't use as irrelevant.
They did it when all the world was an i386 (a take on "all the world's a VAX"), and now they're treating even 32-bit x86 the way they used to treat everything non-x86. It's not a good look.
You can't take a binary compiled for Linux, plop it on a ChromeOS installation and expect it to run. It works currently because ChromeOS runs Linux applications in a lightweight VM with a Debian environment.
Given that, I wouldn't count ChromeOS as Linux, because if we do, by that logic, we'd have to count Windows as Linux due to WSL.
Huh? I do that all the time (run Alpine-compiled executables on other distros). I use an Alpine Docker image to build statically linked executables (using musl as the libc) which are distro-agnostic since they don't depend on glibc. Works just fine (for command-line tools at least).
I often compile things statically inside Alpine containers and run them on my local machine, and plenty of times I've downloaded deb packages, extracted them on a Raspberry Pi running Alpine, and they've run fine (assuming glibc compatibility is set up).
New syscalls may be unavailable on distributions that use old kernels. It is probably possible to tell whatever libc you use to avoid fancy syscalls, but there may still be problems. Note that the other direction should work (a binary built against an old kernel running on a new one).
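The usual defensive pattern is to try the newer syscall and fall back when the kernel answers ENOSYS. A sketch using getrandom(2), which only exists on Linux >= 3.17 (Linux-specific, and only a sketch; real code would cache the probe result instead of retrying on every call):

    #define _GNU_SOURCE          /* for syscall() on glibc */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* On an old kernel the syscall fails with ENOSYS and we fall back
       to /dev/urandom instead of crashing out. */
    static ssize_t get_random_bytes(void *buf, size_t len) {
    #ifdef SYS_getrandom
        ssize_t n = syscall(SYS_getrandom, buf, len, 0);
        if (n >= 0 || errno != ENOSYS)
            return n;
    #endif
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t got = read(fd, buf, len);
        close(fd);
        return got;
    }

    int main(void) {
        unsigned char buf[16];
        if (get_random_bytes(buf, sizeof buf) == (ssize_t)sizeof buf)
            printf("got 16 random bytes\n");
        return 0;
    }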
By that logic, why not count the BSDs as one? Linux is the kernel, but Debian also has a GNU/Hurd port.
Linux in that list is a flavour, but the same is true of BSD: BSD is the flavour, and Free, Net and Open all come from 4.3BSD. And if we really want to push it, that lineage also includes Mac OS X.
The BSDs have diverged significantly since then and not just in userland. Unlike Linux distros they do not all have the same kernel. There are of course common parts in their kernels, many of which date back to Unix, but there are also big differences between all of them.
I was also surprised to see Sailfish OS, MeeGo and Maemo listed separately from Linux, but my guess would be that the list comes from curl's build system: everything that is its own build target is listed there.