The other issue is that people seem to just copy configure/autotools scripts over from older or other projects, either because they are lazy or because they don't understand them well enough to write their own. The result is that even with relatively modern code bases that only target something like x86, ARM, and maybe MIPS, and only gcc/clang, you still get checks for the size of an int, or for which header is needed for printf, or for whether long long exists... And then the entire code base never checks the generated macros in a single place, uses int64_t, and never checks for stdint.h in the configure script...
I don't think it's fair to say "because they are lazy or don't understand". Who would want to understand that mess? It isn't a virtue.
A fairer criticism would be that they lack the sense to use a more sane build system. CMake is a mess but even that is faaaaar saner than autotools, and probably more popular at this point.
I took the trouble (and even spent the money) to get to grips with autotools in a structured and detailed way, by buying a book [1] about it and reading as much as possible. Yes, it's not trivial, but autotools isn't witchcraft either; as written elsewhere, it's a masterpiece of engineering. I approached it without prejudice, and since then I've been more of a fan of autotools than a hater. Anyway, I highly recommend the book, and yes, after reading it I think autotools is better than its reputation.
Autotools uses M4 to meta-program a shell script that meta-programs a bunch of C(++) sources, and generates C(++) sources that utilize meta-programming for different configurations; after which the meta-programmed script, again, meta-programs monolithic makefiles.
Yes, that sounds ridiculous, but it is that way so that the user can modify each intermediate step, which is the main selling point. As a user I really prefer that experience, which is why I, as a developer, put up with the nonsense of M4. (Which I think is due more to M4 being a macro language than to inherent language flaws.)
Oh, and it has significant whitespace. Make generally doesn't handle paths with spaces, so if you put the build or source directory somewhere whose absolute path contains a space, all bets are off.
autotools is the worst, except for all the others.
I'd like to think of myself as reasonable, so I'll just say that reasonable people may disagree with your assertion that cmake is in any way at all better than autotools.
My experience with cmake, though dated, is that it's simpler because it simply cannot do what autotools can do.
It really smelled of "oh, I can do this better", so you rewrite it, and as part of rewriting it you realise: oh, this is why the previous solution was complicated. It's because the problem is actually more complex than I thought.
And then of course there's the problem where you need to install on an old release. But the thing you want to install requires a newer cmake (autotools doesn't have this problem because it's self contained). But this is an old system that you cannot upgrade, because the vendor support contract for what the server runs would be invalidated. So now you're down a rabbit hole of trying to get a new version of cmake to build on an unsupported system. Sigh. It's less work to just try to construct `gcc` commands yourself, even for a medium sized project. Either way, this is now your whole day, or whole week.
No, CMake can do everything Autotools does, but a hell of a lot simpler and without checking a gazillion flags and files that you don't actually need but check anyway, because you copied the script from someone who copied the script from... all the way back to the 90s, when C compilers that didn't have stdint.h actually existed.
CMake is easy to upgrade. There are binary downloads. You can even install it with pip (although recently the Python people in their usual wisdom have broken that).
CMake can't do everything autotools does, but the stuff autotools does which CMake doesn't isn't relevant anymore in today's world.
The fundamental curse of build systems is that they are inherently complex beasts that hardly anybody has to work with full-time, and so hardly anybody learns them to the necessary level of detail.
The only way out of this is to simplify the problem space. Sometimes for real (by reducing the number of operating systems and CPU architectures that are relevant -- e.g. CMake vs. Autotools) and sometimes by somewhat artificially restricting yourself to a specific niche (e.g. Cargo).
It is still relevant, because sometimes you get a vendor system under a support contract that can't be upgraded as a whole.
If you only support x64 Linux, at least as new as the latest Debian stable, then I don't feel you should be talking about these things being too complex.
I don't laugh at plumbers for having a van full of obscure tools, when they just needed a wrench to fix my problem.
Binary downloads? Backward compatibility may allow you to run a 5-year-old binary on a system from today, but running a new binary on a 5-year-old system is not even a goal.
CMake is easy to upgrade on a modern system, maybe. But that defeats the point: on a modern system you could just upgrade it normally anyway.
Or maybe, maybe on an old Linux x86. But if that's all you were trying to support, then what was the point of cmake in the first place?
It was a few years ago now, so I don't remember the scenario, but no it was absolutely not easy to install/upgrade cmake.
You complain about support for '90s compilers, but it's really helpful when you're trying to install on something obscure. Almost always, autotools just works. CMake, if it's not Linux or a Mac, good luck.
Configure-make is easier for someone else to use; configuring a cmake-based project is slightly harder. In every other conceivable way I agree 100% (until someone can convince me otherwise).
Correct. I avoid autotools and cmake as much as I can; I'd rather write Makefiles by hand. But when I do need to deal with them, I prefer cmake. I can modify CMakeLists.txt in a meaningful way and get the results I want. I wouldn't touch an autotools build system, because I was never able to figure out which of the files is the configuration meant to be edited by hand, as opposed to generated by scripts from other files. I tried to dig through the documentation, but I never made it.
> CMake is a mess but even that is faaaaar saner than autotools, and probably more popular at this point.
Having done a deep dive into CMake I actually kinda like it (really modern cmake is actually very nice, except the DSL but that probably isn't changing any time soon), but that is also the problem: I had to do a deep dive into learning it.
Simple projects: just use plain C. This is dwm, the window manager that spawned a thousand forks. No ./configure in sight: <https://git.suckless.org/dwm/files.html>
If you run into platform-specific stuff, just write a ./configure in simple and plain shell: <https://git.suckless.org/utmp/file/configure.html>. Even if you keep adding more stuff, it shouldn't take more than 100ms.
If you're doing something really complex (like say, writing a compiler), take the approach from Plan 9 / Go. Make a conditionally included header file that takes care of platform differences for you. Check the $GOARCH/u.h files here:
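The same idea maps onto Go's build tags: one tiny file per architecture states the platform facts once, and nothing else in the tree ever probes for them. A minimal sketch, with a hypothetical platform package and WordBits constant (both made up for illustration):

    // wordsize_amd64.go (hypothetical file)
    //go:build amd64

    package platform

    const WordBits = 64 // stated once at compile time, never probed for

    // wordsize_386.go (hypothetical file)
    //go:build 386

    package platform

    const WordBits = 32

The compiler picks whichever file matches the target, which is all a hundred "how long is long" configure checks are really trying to accomplish.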
Why, again, does software in the Linux world have to be packaged for multiple distributions? On the Windows side, if you make an installer for Windows 7, it will still work on Windows 11. And to boot, you don't have to go through some Microsoft-approved package distribution platform and its approval process: you can, of course, but you don't have to; you can distribute your software by yourself.
> Why, again, does software in the Linux world have to be packaged for multiple distributions?
Because a different distribution is a different operating system. Of course, not all distributions are completely different and you don't necessarily need to make a package for any particular distribution at all. Loads of software runs just fine being extracted into a directory somewhere. That said, you absolutely can use packages for older versions of a distribution in later versions of the same distribution in many cases, same as with Windows.
> And to boot, you don't have to go through some Microsoft-approved package distribution platform and its approval process: you can, of course, but you don't have to; you can distribute your software by yourself.
This is the same with any Linux distribution I've ever used. It would be a lot of work for a Linux distribution to force you to use some approved distribution platform even if it wanted to.
As michaelmior has already noted, Linux is not an OS. Anyone is free to take the sources and do as they wish (modulo GPL), which is what a lot of people did. Those people owe you nothing.
But consider FreeBSD. Unlike Linux, it is a full, standalone operating system, just like Windows or macOS. It has pretty decent compatibility guarantees for each major release (~5 years of support). It also has an even more liberal license (it boils down to "do as you wish, but give us credit").
Consider macOS. Apple keeps supporting 7-year-old hardware with new releases, and even after that keeps the security patches flowing for a while. Yet they still regularly cull backwards compatibility to keep moving forward (e.g. ending support for 32-bit Intel executables to pave the way for Arm64).
Windows is the outlier here. Microsoft is putting insane amounts of effort into maintaining backwards compatibility, and they are able to do so only because of their unique market position.
Interesting that you would bring up Go. Go is probably the most head-desk language of all for writing portable code. Go will fight you the whole way.
Even plain C is easier.
You can end up needing a whole file just for OpenBSD, to work around the fact that some standard library types differ between platforms.
So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.
You need a file for Linux 32-bit ARM (int32/int32), one for Linux 64-bit ARM (int64/int64), one for OpenBSD 32-bit ARM (int64/int32), etc. Maybe you can group some of them, but this is just one difference, so in the end you'll end up with one file per combination of OS and arch. And all you wanted was a pluggable "what's a Timeval?", something that every build system solved a long time ago.
And then maybe the next release of OpenBSD they've changed it, so now you cannot use Go's way to write portable code at all.
So between autotools, cmake, and the Go method, the Go method is by far the worst option for writing portable code.
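Concretely, that file-per-target enumeration ends up looking something like this; a sketch with a hypothetical setUsec helper (the Usec field widths below are what the syscall package declares on those Linux targets):

    // timeval_64.go -- hypothetical file for targets where Usec is int64
    //go:build linux && (amd64 || arm64)

    package portable

    import "syscall"

    func setUsec(tv *syscall.Timeval, usec int64) {
        tv.Usec = usec // Usec is already int64 here
    }

    // timeval_32.go -- hypothetical file for targets where Usec is int32
    //go:build linux && (386 || arm)

    package portable

    import "syscall"

    func setUsec(tv *syscall.Timeval, usec int64) {
        tv.Usec = int32(usec) // Usec is int32 here; the narrowing is your problem
    }

And any OS/arch pairing you haven't listed in a build constraint simply doesn't compile, which is the enumeration problem above.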
I have specifically given an example of u.h defining types such as i32, u64, etc to avoid running a hundred silly tests like "how long is long", "how long is long long", etc.
> So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.
I assume you mean [syscall.Timeval]?
    $ go doc syscall
    [...]
    Package syscall contains an interface to the low-level operating system
    primitives. The details vary depending on the underlying system [...].
Do you have a specific use case for [syscall], where you cannot use [time]?
Yeah, I've had specific use cases where I needed to use syscall. I mean... if there weren't use cases for syscall, then it wouldn't exist.
But not only is syscall an example of an API doing portability wrong; as I said, it's also an example of it being implemented in a dumb way, causing needless work and breakage.
The syscall implementation leads by bad example, because its approach is the only method Go supports.
Checking GOOS+GOARCH tuples for equality in portable code is a known anti-pattern, for the reasons I've given and others, yet it's what Go still decided to go with.
But yeah, autotools scripts often check for way more things than actually matter. Often because people copy paste configure.ac from another project without trimming.
Maybe explain how you would have exposed the raw syscall interface in a high-level, GC'd language with userspace scheduling? Genuinely curious (I'm a bit of a PL nerd).
Well, for one I think it's completely unnecessary, or maybe I should say exceedingly lazy, to expose such a 1:1 mapping. Does syscall.Select need to take a struct that's exactly equal in member types to select(2)?
Who is that for? Someone fuzz-testing the kernel? You know what, if you're fuzz-testing the kernel then maybe you can implement this yourself, instead of forcing needless unportability onto everyone who is not fuzz-testing the kernel.
And when I say exceedingly lazy, I mean the comment in the offending file saying "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT".
Of course you could ask why I even need syscall.Select. One example is that I needed to check if a read() would block before reading. Shouldn't I instead use goroutines and a synchronous read? Maybe. Sometimes. But the file descriptor could have come from a library, and the read is in a callback, and leaving a pending read after returning from the callback could be undefined or a race condition.
Ok, so wrap it with os.NewFile, set a read deadline, try to read, then set it back. But "if the file descriptor is in non-blocking mode, NewFile will attempt to return a pollable File (one for which the SetDeadline methods work)". And it seems that NewFile "takes ownership" of the fd, closing it when the finalizer runs.
I guess I could Dup() it first, and handle all the edge cases to prevent fd leaks.
Dude, I just want to call select(), not rely on whether the fd happens to be in non-blocking mode, and not fight os.File.
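For reference, here's roughly what "just call select()" has to look like; a Linux/amd64-only sketch (readWouldBlock is a made-up helper), and the build tag is unavoidable because FdSet.Bits and even Select's signature differ on other targets:

    //go:build linux && amd64

    package main

    import (
        "fmt"
        "syscall"
    )

    // readWouldBlock reports whether a read(2) on fd would block right now,
    // by polling with select(2) and a zero timeout.
    func readWouldBlock(fd int) (bool, error) {
        var set syscall.FdSet
        set.Bits[fd/64] |= 1 << (uint(fd) % 64) // Bits is [16]int64 on linux/amd64
        tv := syscall.Timeval{}                 // zero timeout: poll, don't wait
        n, err := syscall.Select(fd+1, &set, nil, nil, &tv)
        if err != nil {
            return false, err
        }
        return n == 0, nil // nothing readable yet, so a read would block
    }

    func main() {
        wouldBlock, err := readWouldBlock(0) // fd 0 (stdin), just as a demo
        fmt.Println(wouldBlock, err)
    }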
There's a trending post right now about printf implemented on bare metal, and my first thought was "finally, all that autoconf code that checks for printf can handle the use case where it doesn't exist".
It's always wise to be specific about the sizes you want for your variables. You don't want your ancient 64-bit code to act differently on your grandkids' 128-bit laptops. Unless, of course, you want to let the compiler decide whether to leverage higher-precision types that become available after you retire.
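A trivial Go rendering of the same advice; plain int floats with the platform while int64 does not:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // int is 32 bits on some targets and 64 on others, so code that
        // leans on its width behaves differently across machines.
        fmt.Println("int is", strconv.IntSize, "bits on this machine")

        // Pinning the width keeps the behavior identical everywhere;
        // as a plain int this constant wouldn't even compile on 32-bit targets.
        var counter int64 = 1 << 40
        fmt.Println(counter)
    }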