Autotools started out much smaller and more reasonable. But the project is strongly committed to backwards compatibility and never seems to remove checks, even ones for long-obsolete systems. Someone might want to build modern tools for their IBM PC/AT! Better keep a check in every build of every program for all time! So the checks accumulate without end, the hacks to keep the whole thing running on every target accumulate too, and it becomes an indecipherable mess.
Could the checks be binned into categories or time strata, so that obsolete ones could be dropped? E.g., we could keep them all available under an --exhaustive flag for those 6 or 7 people who are really attached to the computer they got in the '90s, but then add options like --fast or --cutoff=2015 to pare the checks down to what is relevant today.
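A rough sketch of how a project could approximate that today with stock autoconf macros (the --enable-legacy-checks option name is invented for this example; no --exhaustive or --cutoff flag actually exists):

    dnl Opt-in gate for probes that only matter on long-obsolete systems.
    AC_ARG_ENABLE([legacy-checks],
      [AS_HELP_STRING([--enable-legacy-checks],
        [run configure probes for long-obsolete platforms])],
      [], [enable_legacy_checks=no])

    AS_IF([test "x$enable_legacy_checks" = xyes], [
      dnl Checks that are only meaningful on pre-ANSI/pre-POSIX systems.
      AC_CHECK_FUNCS([memcpy strchr])
      AC_CHECK_HEADERS([memory.h])
    ])

The catch is that every project would have to do this binning by hand; autoconf itself has no notion of a check's vintage.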
Oh, I'm not debating that. I'm taking issue with the line "If the xz project had not been using autotools, installing the backdoor would not have been possible. It’s as simple as that." It would have been possible, just more difficult. Probably much more difficult. Autotools is a flaming mess of bad ideas; I've migrated projects' build systems off of it before and will continue to try to do so every chance I get.
> GNU Autotools is too complicated, unnecessary, and stupid ... m4 is horrible ... Nobody has the time to review the mess these tools generate
These tools are generally much too complex. I don't think that e.g. CMake builds are any easier to understand and analyze. Beyond a certain size it's almost impossible to fully understand and check such a system.
One could argue that this is the case with any type of system. That's true, but bear in mind that important findings and principles from the last fifty years of software engineering seem to have passed build systems by without leaving a trace. For example, most of these systems still use dynamically typed scripting languages to specify the build, even though such languages are increasingly being replaced elsewhere because of their well-known disadvantages, in favor of languages that are amenable to static checking at compile time with suitable tools, and that do not leave the discovery of errors and backdoors to chance at runtime.
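A minimal CMake sketch of that failure mode (the variable names are made up): a typo in a variable reference is not an error, it just evaluates to an empty, falsy value, so the mistake surfaces late or never.

    set(ENABLE_HARDENING ON)

    # Typo: ENABLE_HARDNING was never set. CMake silently treats the
    # undefined variable as false and skips the block -- no warning.
    if(ENABLE_HARDNING)
      add_compile_options(-fstack-protector-strong)
    endif()

A statically checked specification language would reject the reference to an undeclared name before the build ever ran.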
Autotools, CMake, qmake, and even newer developments such as Meson or GN suffer from the same problems.
> There’s better build systems like CMake or meson (at least that’s what I’m told), but in fact plain Makefiles are superior.
There is no reason for this assumption. None of these tools supports static analysis, for example.
CMake, like Autotools, is a meta build system: it generates e.g. makefiles, which are essentially scripts, and CMake itself is essentially a VM with a scripting language. Both CMake and Make are Turing complete (and dynamically typed, as mentioned). And yes, not all build systems are the same; e.g. https://github.com/rochus-keller/BUSY has a statically typed specification language and intentionally avoids being Turing complete.
Anyway, such constraints are also available in GN, and in CMake or Autotools you can implement them yourself, but those tools still suffer from the restrictions I mentioned (not amenable to static analysis, limited verifiability at runtime due to the combinatorial explosion of possible system states, etc.).
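For instance, a hand-rolled constraint in CMake might look like this (the option name and allowed values are invented for the example); note that it is only enforced when configure actually runs, which is exactly the runtime-only verifiability being criticized:

    # Constrain a cache option to an allowed set of values.
    set(BUILD_MODE "release" CACHE STRING "Build mode (debug|release)")
    set_property(CACHE BUILD_MODE PROPERTY STRINGS debug release)

    # Checked only at configure time; nothing proves it statically.
    if(NOT BUILD_MODE MATCHES "^(debug|release)$")
      message(FATAL_ERROR "BUILD_MODE must be debug or release, got '${BUILD_MODE}'")
    endif()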
If your code is short, obvious, and working, then you are an expendable nobody.
If your code is 10M lines of incomprehensible shit, then you are an irreplaceable demigod.
Doesn't matter if both programmers' code has exactly the same functionality.