Autodafe: Tools for freeing your project from the clammy grip of autotools (gitlab.com/esr)
72 points by shrubble on April 7, 2024 | 70 comments


I've been building distros for years, and the most wasteful part of it is running zillions of autoconf processes that waste most of each project's build time looking for mostly the same things.

I always thought it was insane. Perhaps it was a good idea 30 years ago when we were actually building for dozens of 'unstable' variants of UNIX with a dozen compilers, but these days?

And yes, MOST projects can be compiled just fine with a two-page Makefile, more often than not with -j for parallel builds, and as a bonus, they won't leave a dozen turd files lying around.

Oh also, it is supposed to help 'portability', but MOST of my time is wasted trying to fix autotools configs when they invariably break in some new, interesting, and arcane way.


I doubt many new greenfield autotools-using projects are popping up. So the question is not really whether one should use autotools, but whether projects should spend effort migrating away from a working autotools setup, which is inevitably also a breaking change for downstream consumers. Framed like that, it is suddenly a much more difficult question, especially as the migration itself is often non-trivial.


A graceful migration path should be implemented over a long period to allow downstream enough time to process the changes.


maybe parallel process for a bit eh? but yeah, it’s going to be a lot of dev hours to remove it. Still, at least there is some motivation now.


> […] looking for mostly the same things.

In case folks don't know, caching exists:

> A cache file is a shell script that caches the results of configure tests run on one system so they can be shared between configure scripts and configure runs. It is not useful on other systems. If its contents are invalid for some reason, the user may delete or edit it, or override documented cache variables on the configure command line.

> By default, configure uses no cache file, to avoid problems caused by accidental use of stale cache files.

> To enable caching, configure accepts --config-cache (or -C) to cache results in the file config.cache. Alternatively, --cache-file=file specifies that file be the cache file. The cache file is created if it does not exist already. When configure calls configure scripts in subdirectories, it uses the --cache-file argument so that they share the same cache. See Configuring Other Packages in Subdirectories, for information on configuring subdirectories with the AC_CONFIG_SUBDIRS macro.

* https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/a...
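In practice that looks something like this (a sketch; paths and options are illustrative):

    # cache results in ./config.cache for this package
    ./configure -C
    # or point several configure runs at one shared cache file
    ./configure --cache-file=$HOME/builds/config.cache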


But it doesn’t work reliably. If you change the configuration options, you have to clear the cache. The cache contents are not reliably shareable between multiple projects or different versions of autotools.


Autotools was always horrible, but it had a purpose back in the 90s; since then it's entirely vestigial and does nothing useful. It standardizes flags to config scripts and make targets, yes, but you can follow that standard without actually using autotools. It enables cross-compilation, yes, but by far the biggest roadblock to successful cross-compilation is autotools itself; without it, it's pretty easy.

Meanwhile, these days, actually trying to build on a new platform is harder if the software is using autotools than if it's using plain makefiles. Because for all the noise about feature checking, nobody, including the projects using autotools, actually uses the defines from autotools; they check the OS or architecture instead.


I currently make my living porting software to non-Linux and non-x86_64 cross-built systems, and I can vouch with certainty from lived experience that your assertions are entirely untrue.


> using the defines from autotools

Is there a list of them somewhere?


You can decide what to test for, for example

AC_CHECK_FUNCS([mlockall])

AC_CHECK_HEADERS([cpuid.h])

will define HAVE_MLOCKALL and HAVE_CPUID_H, respectively.
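In code, those get consumed from the generated config.h roughly like this (a sketch; the helper function is hypothetical):

    #include "config.h"            /* generated by configure */
    #ifdef HAVE_MLOCKALL
    #include <sys/mman.h>
    #endif

    static void pin_memory(void)   /* hypothetical helper */
    {
    #ifdef HAVE_MLOCKALL
        mlockall(MCL_CURRENT | MCL_FUTURE);  /* use it if configure found it */
    #else
        /* no mlockall() on this platform: silently do nothing */
    #endif
    }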


Nowadays we have dozens of 'unstable' variants of Linux distributions with two compilers, and dozens of additional scripting and managed languages.

And a couple of BSD derived ones.

As per a DistroWatch statement from 2023:

"There are over 600 Linux distros and about 500 in active development"


And dozens of non-Linux systems. AIX still lives, I still see HP-UX running, QNX is hidden away quietly running many things you take for granted every day, and a number of embedded executives make the world run.

There's a lot more to the world than writing scripts to steal people's bandwidth by pushing ads to web browsers.


Definitely, I was only making the point that even reducing UNIX === GNU/Linux is kind of myopic.


Even reducing Linux == GNU/Linux is pretty myopic. After all, even Alpine is used in a lot of containers.


And how many are based on Debian / Ubuntu? lol.


During the heyday of the UNIX wars, except for special ones like Apollo (which used a Pascal dialect) or Tru64/QNX, they were also either based on AT&T UNIX or the BSD spinoff.

A few common bases hardly matter, when every distro is a special snowflake with its own set of incompatible changes.


Dealing with a zillion handwritten Makefiles sounds more hellish than dealing with autotools. These Makefiles will be more fickle, too. I'm not seeing a convenience win.


Are we talking about the same autotools? I'm getting flashbacks to running into some weird bash error in my configure file on line 3563. Where the configure script in question was generated by autoconf, automake, aclocal and all that jazz. Or autoconf mysteriously failing after automake works, or some similar nightmare. When autotools go wrong, trying to fix it feels like trying to breathe while you're held underwater.

I'm sure part of the problem is I don't understand autotools as well as I could. But when "build expertise in autotools" is on the table, stabbing my eyes out with a fork starts to seem like an appealing option.

Suffice it to say, I prefer to work with handwritten makefiles.


I'm not sure what the best solution is, but I agree on the pain dealing with autotools when they fail. Shotgun debugging where you start poking at things randomly can work for some stuff, but never works here.

After working on many different software libraries and frameworks, my firm belief is that you really need to understand the lower layers to use and troubleshoot them effectively.

Best case scenario there are excellent error messages and you can easily review logfiles to understand the root cause of the problem.

But that is rarely the case. Instead you have to have a detailed mental model of the library/framework you're using, and you must be able to quickly picture what it will do internally for the inputs that you give it.

Once you get to that point, many bugs don't appear in the first place because you immediately see that the input/usage doesn't make sense. And the bugs that do happen become much easier to figure out from the output and cryptic error messages you get.

All this to say that it is really unappealing to work on things like bugs in someone else's autotool scripts. I just want to compile the program to run it. I don't want to spend months of my life to understand the inner workings of autotools.


Yes. If you don't know what you're doing, things can seem difficult and you will have problems. This is not a property of the autotools but a property of life in general.


I seem to get on fine in life in general. I think the suckiness of autotools gets to me because it's so utterly unnecessary. Autotools is complex not because it solves a hard problem but because the design is bad. And everyone who interacts with it pays rent for those bad decisions.

Compare it to CMake or even Cargo, which fundamentally solve the same problem, faster and more reliably, on more operating systems. And at no cost at all; the opposite: they're easier to configure and use.

Makefiles can be quite elegant. But it really seems like a waste of human potential to put up with such crap software as autotools. How many neurons do you have devoted to it? You could have used the same time and effort learning something that matters or that brings you joy.


My personal experience with CMake is contrary to your claim.

CMake-based compilation breaks far more often for me than Autotools-based, and because of the crappy documentation I can rarely fix it myself.

Cargo and other modern package managers OTOH are bliss compared to both CMake and Autotools, but they are usually language-specific, and other modern non-language-specific build systems still lead a niche existence.


> Cargo and other modern package managers OTOH are bliss compared to both CMake and Autotools, but they are usually language-specific, and other modern non-language-specific build systems still lead a niche existence.

I hear you about CMake.

I think what kills me about all of this stuff is that there's no essential reason we can't have something as nice as cargo for C and C++ code. Compiling C isn't a fundamentally more complex problem than compiling rust or swift. But instead of solving the problem in a clean, generic, cross-platform way that understands package interfaces and boundaries, C/C++ instead accreted hacky, vendor specific macros and junk for decades. And then C build tools need to be horribly complicated to undo all of that damage.

Should your code export functions with __declspec(dllexport) (as VC++ demands) or __attribute__((dllexport)) for gcc? Can't choose? Maybe your library should define its own idiosyncratic DLL_PUBLIC macro which looks at whether _WIN32 or __GNUC__ is defined on the platform. And now the header file (and thus the exposed functions) can only be machine-interpreted if you know which platform / compiler you're building for. What does it do for clang? What should clang do? Aaahhhh nightmare o'clock.
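The hand-rolled result ends up looking roughly like this (a sketch, not any particular library's header; the exported function is hypothetical):

    /* every C library reinvents some variation of this */
    #if defined(_WIN32)
    #  define DLL_PUBLIC __declspec(dllexport)
    #elif defined(__GNUC__)
    #  define DLL_PUBLIC __attribute__((visibility("default")))
    #else
    #  define DLL_PUBLIC
    #endif

    DLL_PUBLIC int mylib_frobnicate(int x);  /* hypothetical exported function */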

All of that when C could just do what every other modern language does and have a nice machine readable public attribute, that gets interpreted differently based on whether the code is compiled into a dynamically linked library or compiled statically. I know hindsight is 20/20. And I'm hopeful that C++20's modules will eventually help. But there's so much stuff like this to unpick that it'll take decades, if it happens at all.

We can have nice things. It just takes some engineering and a willingness to change.


I never got the hate for Makefiles. Granted, I mostly use them for simple projects, but compiling a C project takes just a few lines (compile .c to .o, .o to executable, optionally provide some PHONY convenience utils) and is very readable and hackable. What's not to love?

I'm sure I'm missing something, and it's possible that they don't scale well, but I prefer them to any other build system for small C projects.
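For reference, the shape I mean is roughly this (a sketch; file names are placeholders, and recipe lines must be indented with a tab):

    CC      = cc
    CFLAGS  = -O2 -Wall
    OBJS    = main.o util.o

    prog: $(OBJS)
    	$(CC) $(CFLAGS) -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<

    .PHONY: clean
    clean:
    	rm -f prog $(OBJS)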


With autotools you get DESTDIR, --prefix and a bunch of other things that work the same across all projects. With Makefiles everybody is rolling their own thing and you never know what to expect, or frequently have to implement those things yourself.
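To be fair, a handwritten Makefile can follow the same convention in a handful of lines; a rough sketch (the program name 'prog' is a placeholder):

    PREFIX ?= /usr/local
    BINDIR ?= $(PREFIX)/bin

    install: prog
    	install -d $(DESTDIR)$(BINDIR)
    	install -m 755 prog $(DESTDIR)$(BINDIR)/prog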

That said, autotools, with its multiple layers of file generation, makes debugging rather annoying. And it's generally much easier to fix a broken Makefile than figuring out why autotools goes wrong.


Problem is that you have a wide range of makes with different syntax. It has gotten better nowadays where GNU make is available pretty much everywhere, but two decades ago you'd have a range of incompatible makes on different UNIX systems, plus a bunch of incompatible makes on Windows as each compiler would come with its own make.

Assuming you can ignore Windows, you'd typically end up with two makefiles (Makefile and GNUmakefile) and a bunch of includes sharing code that all make variants understand.


You're dealing with a zillion handwritten configuration files anyway, just hidden behind autotools in ways that most developers are even more clueless about than we are about makefiles.


> Perhaps it was a good idea 30 years ago when we were actually building for dozens of 'unstable' variants of UNIX with a dozen compilers

It wasn't. Back then what typically got in your way when porting a piece of software was autotools, and fixing it was usually significantly more complicated than adjusting a well-written Makefile would have been. My hate for that autocrap mainly comes from that period.


25 years ago autotools, and their predecessor the Cygnus tools, were a breath of fresh air. Porting stuff was a nightmare (imake, mkmk, many hand-rolled Makefiles that only supported the author's own system) and autotools made it easy, especially if you were running a non-homogeneous collection of Unix and Unix-like systems, including both libc5 and libc6 variants of Linux.


I was living through that era with a zoo of Unix variants to build for (including, but not limited to, AIX, HP-UX, Solaris, IRIX, Tru64 and various Linux flavours), and I was always excited about stuff that just had hand-rolled Makefiles, as that was way easier to fix than the average software using autotools.


I approve of the concept, but I don't think this is the answer.

For a start, it says if you want to install into non-standard directories, or enable/disable options, you need to "edit your makefile". That isn't going to work for distributions, and I don't want to be telling people to edit the makefile to change options in my program.

Also, in my experience most autotools scripts do something, maybe dealing with inconsistent linker flags on mac vs linux, or enabling/disabling some debug option, or saying where to find some library (and optionally allowing building without said library).

Coming up with some very simple format for configure scripts which is easy to read, and which does some simple search-and-replace substitution on a Makefile, seems very sensible; getting rid of awful autotools is a great goal, however.


> Coming up with some very simple format for configure scripts which is easy to read, and which does some simple search-and-replace substitution on a Makefile, seems very sensible; getting rid of awful autotools is a great goal, however.

That's CMake or Meson. Personally I prefer Meson because it gets rid of more cruft, at the cost of being more opinionated; but if it worked for projects as large as QEMU or PostgreSQL, it will probably work for you.

Also, even if you don't need to detect standard headers or features (and there are still differences between Linux, BSDs and Darwin even if they're smaller than they used to be) you will almost surely need pkg-config support, and that makes ESR's solution inadequate without manual work.


The problem with pkg-config is that it's really really broken for cross builds. Meson depends on pkg-config so it's inappropriate for cross builds. It doesn't even pretend to support Canadian crosses.

CMake works for cross builds, as long as the company has ported their proprietary CMake program to build for your target, or you supply your own hand-rolled toolchain file and trick it into using that.

Autotools works for cross building right out of the box almost every time.


This does not match my experience, pkg-config can be used for cross compilation. In Autoconf terms it's a "tool", not a "program".

If you're cross compiling to Arm from x86, you can either create a separate arm-linux-gnueabi-pkg-config installation that has a different search path for .pc files, or you can even just create a wrapper script that passes the right search path override to the system pkg-config. Either way, it's not a problem to cross compile with Meson, and most cross compilation environments are ready for it and have been for a decade or more. QEMU builds many cross compilation CI jobs and they handle Meson/pkg-config just fine.
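The wrapper approach is only a few lines; a sketch (the sysroot path is illustrative and depends on your toolchain layout):

    #!/bin/sh
    # arm-linux-gnueabi-pkg-config: point the build machine's pkg-config
    # at the target sysroot's .pc files
    SYSROOT=/usr/arm-linux-gnueabi
    export PKG_CONFIG_LIBDIR=$SYSROOT/lib/pkgconfig:$SYSROOT/share/pkgconfig
    export PKG_CONFIG_SYSROOT_DIR=$SYSROOT
    exec pkg-config "$@"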

Where pkg-config goes almost irremediably wrong is multilib compilation (such as using -m32 to compile for 32-bit x86 on a 64-bit host). In that case there's no alternative to passing overrides in the environment. Which is pretty bad; not entirely broken I guess, but pretty close.

As to lack of support for Canadian crosses, that's a very niche case (and I say that as a former build system maintainer for GCC/binutils) and even then no build system is perfect. Looking again at QEMU, we need to build testcases for a dozen different targets, and Autoconf would only support one. In QEMU we use Meson for the more complex and configurable part of the build, and wrap it with a handwritten and decently readable configure/Makefile pair to handle target builds and orchestrate everything. We can do enough in Meson that the configure script's tasks are very specialized and would benefit very little from using a build-specific DSL like Autoconf's m4 dialect or CMake. For our use case it hits a sweet spot in terms of maintainability and is even more flexible than Autotools.


Canadian crosses may be a niche case but they're my bread and butter as a host tool maintainer for embedded development systems.

CMake is almost on par with the autotools in terms of functional support but much more difficult to analyze when things go wrong. I can easily instrument a generated configure script or makefile to analyze what's happening and then trace it back to its original input to the autotools to make adjustments. The same thing with a CMake build often requires not only analyzing generated makefiles but CMake sources distributed with my project and across my build system where they may be installed by various packages, and sometimes even the C++ code of the tool itself. I will work with CMake if I have to and embrace autotools where I can.

Anything that depends on pkg-config is just plain broken for me. Sure, I could modify each and every project to use customized configurations set through writing wrapper scripts and modifying system tools and setting special environment variables to make it work for some simple cases, but that's not scalable in CI systems where I have no root access and it just doesn't work at all for any cross-build situations where build-host tools are used to generate target files. Tools are supposed to make it easier, not impossible.

It's great if you can focus on one example that works for you. It's not so great when you have to deal with dozens of different examples, all different, most of which don't work and require all kinds of manual intervention just to solve problems that have already been solved decades ago but for Chesterton's fence.


I agree that Canadian crosses are necessary, but they affect a really small minority of programs and they are either not using Autotools (LLVM/clang, QEMU) or not going to migrate away from it (GCC/binutils/gdb). All I wanted to say is that there are also cases that go beyond the Canadian cross and IMO it was not a mistake for Meson to focus on good support for build vs host and leave the target aside.

With respect to pkg-config, I don't know. I still find it a better tradeoff in terms of both ease of use and discoverability. I am not 100% sure it's a problem of the tool. In my experience doing cross compilation it's never gotten in the way, I can imagine it may introduce some headaches for those that work on the meta-buildsystem (e.g. buildroot or yocto) but overall it's hard to prefer either config-script-of-the-day (which almost always has exactly the same problem just multiplied by N) or cmake-macro-of-the-day.

Agreed 100% on the relative advantages/disadvantages of CMake vs autotools. As a developer Meson however is night and day when compared to both.


> you need to "edit your makefile". That isn't going to work for distributions

Is it not? [st] requires exactly that. And it works for distros, from what I can tell - debian/ubuntu, arch, almost everybody seems to ship it just fine.

[st] https://st.suckless.org/


OK, maybe "not work well for distributions"?

Looking at Debian, they are distributing a series of patches to the Makefile. Getting away from autotools by having every distro patch your makefile might still be better than autotools, but I'd hope we might be able to do even better?


> Debian (...) are distributing a series of patches to the Makefile.

Where do you see that, sorry? I'm looking at the "Download Source Package" section here:

https://packages.debian.org/sid/stterm

...and the only patch on there is debian/patches/0001-fix-buffer-overflow-when-handling-long-composed-inpu.patch, which doesn't touch the Makefile.


I wasn’t sure what the package was called, so I searched for ‘suckless’, and found this, with two Makefile patches.

https://sources.debian.org/patches/suckless-tools/46-1/


I agree that autotools is often overkill, but I don't think the answer for most projects is to force users to manually edit Makefiles.

Feature detection is a good thing, but tools like CMake or Meson do a better job at this, in my opinion.

I have done minimal GNU Make builds for large software projects before. In these cases, the main Makefile will depend on a file that is created after a one-time dependency check is done just to ensure that the build environment is sane. This file will exist until `make mrproper` is run, like any other build config file.

I'm not personally a fan of monorepos, so instead, project dependencies that aren't already installed on the build machine/vm/container or that aren't a supported version will be automatically downloaded, configured, and installed in the build directory as part of the build process.

Even with this sort of complexity, I adamantly believe that the average developer / user should just be able to type `make` and get either a useful error message or a working build. Likewise, I believe that the build scripts should be human readable, well, assuming that one has read through the GNU Make manual at least.

These days, I have more or less standardized on one-shot CMake scripts for my personal projects. They will only check that the current build configuration has any dependencies already installed (e.g. via `PKG_CONFIG_PATH`), which allows for more intelligent scripting by downstream maintainers. The scripts themselves don't have any significant surprises and largely avoid patterns that can't be searched for using a search engine. This makes it easier for me to maintain cross-platform builds, at least across *nix. I haven't really tested Win10 / Win11 builds recently, but should.


> but tools like CMake or Meson do a better job at this, in my opinion.

Not much experience with Meson, but as a package maintainer, I really hate CMake. It's got the most of my "why isn't it calling pkg-config properly" debugging hours. Maybe it's easier to write, but finding what a specific option does through the forest of macros and pages of "this can do 10 different things" documentation, and how it interacts with the rest of the system, is not my favourite thing...


After a long period of rejecting CMake for "simple" build.sh and Makefile projects, nowadays CMake is my go-to choice. Top features:

- supports all kinds of compilers, even Watcom for DOS and Windows <all versions> projects.

- cross-platform compilation incl. MSVC

- my #1 feature: wide IDE support + usage from the command line.


Autotools treats Makefiles like they are object code. We should promote Makefiles to being source code in a project and learn how to write them. The need for autotools died long, long ago, once there were no longer so many different platforms people were trying to support.


It's probably still the case a little; I've seen 21st-century projects shipping a dozen or so makefiles, because in Unix/Linux land there are so many little variations, like BSD vs Linux.

It seems to me that Autotools is like systemd: a relief for those who do that kind of task every day, and a bloated mess for others when you have to hack/fix it.


Yeah, sure, but you can just 'import' a platform-specific Makefile and make the rest generic.


But but but

"Checking whether compiler supports int... yes"


Very clever name, although I hope the tool isn’t as extreme as its namesake! See https://en.wikipedia.org/wiki/Auto-da-fé


Not sure the name is so clever. For me autodafe is akin to pogrom or, at a bigger level, holocaust. Not sure I want to use tools named like that.


Good point. I knew the phrase at a surface level, but as you say, the real thing was seriously nasty.


Anecdotal evidence, but I work on a large number of C repositories, and autotools is by far the most common cause of build-related issues. Hand-written makefiles are typically the easiest to debug. Note that this doesn't include makefile-based systems like Buildroot or Linux kernel makefiles. I don't consider them good examples due to the stateful config file management.

My gold standard for building a project is the "empty Makefile" containing only definitions for CFLAGS and LDFLAGS.


That's possible if you have one source for each executable. For slightly more complex projects don't you need at least an additional line? Something like

  App: main.o whatever.o ...


Of course, reality isn't always simple. But the core idea is that Make has a sane model with reasonable defaults and most faults of makefiles are introduced by those who add unnecessary complexity and disrespect the functional nature of Make.
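Right, and even that single line can lean on the built-in rules. A sketch of the "nearly empty" Makefile (names are placeholders; assumes app.c and whatever.c exist):

    CFLAGS = -O2 -Wall
    LDLIBS = -lm

    # GNU make's built-in rules compile the .c files to .o, and the
    # '%: %.o' rule links all listed objects into the executable
    app: app.o whatever.o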


Standards are good. Replacing standard autoconf with hand-edited Makefiles is going back to dark practices from the 20th century, when this tool actually wants to "throw away the 20th century".


Heh, wish I'd thought of that.


It’s what you oughtn’t to do, but you do anyway!


Feature creep will do this one too


How naive!

> every C99/POSIX system supports the entry points they are guarding

The most popular compiler, MSVC, still cannot support C99/POSIX. And some of my Makefile-only projects still probe for features stored in a config.h.

Good luck with destroying our infrastructure.


For a long time this was true but recently MS turned around and is now supposedly supporting even C17.

https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...


Meanwhile you still need to add a special compilation flag to get a preprocessor that conforms to C89 and later standards: https://learn.microsoft.com/en-us/cpp/build/reference/zc-pre...
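i.e. something along these lines (illustrative; exact flags depend on the MSVC version):

    cl /std:c17 /Zc:preprocessor /W4 hello.c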


Default flags of compilers tend to prefer backwards compatibility over correctness and modernity. Passing flags to make the compiler behave correctly or report all errors is nothing new.


Yes, but it shouldn't take 35+ years to change the default to the correct behavior.


> entry points

I was curious where this was from, so I looked at https://gitlab.com/esr/autodafe/-/blob/master/de-autoconfisc...

As someone who targets clang/gcc/msvc/icc, I don't see much value in these defines. Let's pick on HAVE_STRDUP: what's the point?

If it's always defined on your target platforms, just use the stdlib unconditionally - the define is pointless.

If it's never defined on your target platforms, you'll have to roll an alternative of your own no matter what - the define is pointless.

So the presumed theoretical use case for this is if you sometimes want to define your own, and sometimes want to use the standard library. But do I actually want that? Rarely. Very rarely. If I care to target old systems, I'd generally rather unconditionally define my own version that doesn't conflict with the standard library, that gets tested and used in all builds on all platforms - it'll be less code than adding a bunch of #ifdef soup, and it'll be less brittle - no "works on my machine but fails on the build server" nonsense because of a typo in sometimes-dead code.

That leaves one even narrower use case which isn't entirely theoretical: wanting to backport a modern codebase to an "ancient" toolchain/stdlib via polyfills, without touching the modern codebase or the toolchain/stdlib. That approach has its niches, but... it is worth emphasizing, niches.
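To make the "unconditionally define your own" option concrete, a sketch (xstrdup is a hypothetical name chosen to avoid colliding with the libc symbol):

    #include <stdlib.h>
    #include <string.h>

    /* built on every platform, so it is always compiled and always tested */
    char *xstrdup(const char *s)
    {
        size_t n = strlen(s) + 1;
        char *p = malloc(n);
        if (p != NULL)
            memcpy(p, s, n);
        return p;
    }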


There are some few cases where I'd probably prefer the "standard" version where available: memcpy etc., which are often extensively optimised and are a lot of work to write a non-naive version of that performs well. But I agree that most of the time you categorically should not use feature detection like that, and in the few cases where you're dealing with systems lacking things that basic, you have bigger problems...


Almost nobody writes POSIX make, the vast majority of makefiles use GNU make extensions.


Exactly. bmake is the only true make


Does one run ./configure on Windows to compile with MSVC?

(genuine question, not familiar with this)


No. One uses Linux plus mingw to cross build for a Windows host.

    ./configure --build=x86_64-pc-linux-gnu --host=x86_64-pc-mingw64
It usually magically just works, as long as the project author has a clue about portability. That's generally the limiting factor.


This. I have a project with the additional problem that some generated Windows binaries have to be executed during the build.

On Debian with wine-binfmt the build can call the cross-compiled intermediate binaries and have them executed with Wine. Pretty crazy, but it works.


No, you'd use cmake



