A Tutorial on Portable Makefiles (nullprogram.com)
324 points by signa11 on Aug 20, 2017 | 105 comments

An issue I have with make is that it cannot handle non-existence dependencies. DJB noted this in 2003 [1]. To quote myself on this [2]:

> Especially when using C or C++, often target files depend on nonexistent files as well, meaning that a target file should be rebuilt when a previously nonexistent file is created: If the preprocessor includes /usr/include/stdio.h because it could not find /usr/local/include/stdio.h, the creation of the latter file should trigger a rebuild.

I did some research on the topic using the repository of the game Liberation Circuit [3] and my own redo implementation [4] … it turns out that a typical project in C or C++ has lots of non-existence dependencies. How do make users handle non-existence dependencies – except for always calling “make clean”?

[1] http://cr.yp.to/redo/honest-nonfile.html

[2] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[3] https://github.com/linleyh/liberation-circuit

[4] http://news.dieweltistgarnichtso.net/bin/redo-sh.html (redo-dot gives a graph of dependencies and non-existence dependencies)

Make has no memory, so it can't remember things between runs. It simply compares the dates of files: if a dependency is newer than a target, the target is rebuilt.

If you want to keep some kind of memory you have to build and keep track of it yourself.

But the problem you point at is simply poor design. It is not a normal occurrence for system header files to move around as you state. If they do, a full rebuild is indeed required. It shouldn't happen often enough for that to be an issue.

If an apt-get upgrade fixed an issue in a system header or a library, but the date of the fix predates the last build (quite common: last build from yesterday, fix from two days ago but downloaded today), then make will do nothing (or some subset of the right things, but not all), while make clean; make will do the right thing.

Relying on time stamps is a design decision that was good for its time, but it is no longer robust (or sane) in a constantly connected, constantly updated, everything-networked world.

djb's redo takes it to one logical conclusion (use cryptographic hashes to verify freshness).

There are other ways in which make is lacking: operations are non-atomic (redo fixes that too); dependency granularity is file-level, so depending on compiler flags is very hard and depending on the makefile is too broad (redo fixes this too); and dependency specification is manual (redo doesn't fix this; AFAIK the only one that properly does is tup).
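For the compiler-flags case there is a well-known make idiom (a sketch, not something from this thread): write the current flag string to a stamp file only when it changes, and make objects depend on that file.

```make
CFLAGS = -O2 -Wall

# .cflags is rewritten (touching its timestamp) only when the flag
# string differs from what was recorded, so changing CFLAGS triggers
# recompilation without forcing a rebuild on every run.
.cflags: FORCE
	@echo '$(CC) $(CFLAGS)' | cmp -s - $@ || echo '$(CC) $(CFLAGS)' > $@

%.o: %.c .cflags
	$(CC) $(CFLAGS) -c $< -o $@

FORCE:
```

This sidesteps the file-level granularity, at the cost of rebuilding every object when any flag changes.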

> Relying on time stamps is a design decision that was good for its time, but it is no longer robust

I agree with the sentiment, but a small nitpick:

Relying on time stamps for older/newer comparisons is not robust.

Using time stamps (and perhaps file size) for equality checks is quite robust. And the combination with cryptographic hashes is even better (if a file is recreated but has the same contents afterwards, timestamp checks would trigger an unneeded rebuild, while a crypto hash check would recognize that there's nothing to rebuild).
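The distinction can be sketched in a few lines of shell (file names are illustrative; sha256sum is from GNU coreutils):

```shell
# Hash-based staleness check: rebuild only when the content actually changed.
printf 'int main(void){return 0;}\n' > main.c    # example input

stored=$(cat main.c.sha256 2>/dev/null)
current=$(sha256sum main.c | cut -d' ' -f1)
if [ "$current" != "$stored" ]; then
    echo rebuild                                 # content changed (or no record yet)
    printf '%s\n' "$current" > main.c.sha256     # remember for next time
else
    echo up-to-date                              # e.g. file recreated with identical contents
fi
```

Recreating main.c with identical contents bumps its timestamp but leaves its hash alone, so the second branch is taken and nothing is rebuilt.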

Typically if a system header has changed or been added due to upgrading a library package you'll need to rerun any configure script anyway (since it very likely checks for header features and those decisions will change). So unless your build system magically makes configure depend on every header file used in every configure test it runs, you'll need to redo a clean build anyway, pretty much.

Make has a whole pile of issues, but this one really isn't an aggravation in practice, I find.

apt-get upgrade does not usually upgrade a package, despite the name; 99.9% of the time it applies a bug or security fix, almost never changing any functionality or interface - and would thus produce the same configure results.

And that assumes you actually have a config script, which is also a nontrivial assumption.

Djb redo lets you track e.g. security fixes that change libc.a if you are linking statically, but that's not usually done.

The only build system I know that guarantees a rebuild whenever, and only when, it is needed is tup. (Assuming you have only file system inputs.)

"It simply compares the dates of files."

test(1) also compares the dates of files

   test file1 -nt file2
   test file1 -ot file2
Is there anything else that make does in addition to comparing dates of files?

(Besides running the shell.)

tsort(1) does topological sorting

tsort + sed + sort + join + nm = lorder(1)

lorder can determine interdependencies
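For example (a toy chain; each input pair tells tsort the left item must precede the right):

```shell
# tsort reads "a b" pairs from stdin and prints one valid ordering,
# one item per line.
printf 'compile link\nlink run\n' | tsort
```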

Is this a common problem? I can't think of any project that does this, and there's a simple solution as well: don't shadow system headers. That's just asking for pain, regardless of how well make handles it.

I don't think this problem is limited to system headers. Something as innocent as #include "foo/bar.h" can be affected by this if you pass -I with at least two unique paths to the compiler.

ok, sure, I revise my answer to don't shadow any header.

Easier said than done, especially integrating over the lifetime of a years-long project with many ever-changing dependencies :-)

Could you summarize how you handle this in redo? Also what about the case where a header file does exist but is out-of-date and because of that triggers an error (e.g., version compatibility check with #error) -- how do you handle that?

I, for one, handle it with a tool that mimics the compiler's preprocessing phase and emits both redo-ifchange information for all of the headers that are used, and redo-ifcreate information for all of the non-existent headers that are looked for during the process.

    JdeBP %cat test.cpp
    #include <cstddef>
    void f() {}
    JdeBP %/package/prog/cc/command/cpp test.cpp --iapplication . --icompiler-high /usr/local/lib/gcc5/include/c++ --icompiler-low /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3 --iplatform /usr/local/include --iplatform /usr/include  -MD -MF /dev/stderr 2>&1 > /dev/null|fgrep redo
    redo-ifcreate ./cstddef ./bits/c++config.h /usr/local/lib/gcc5/include/c++/bits/c++config.h /usr/local/include/bits/c++config.h /usr/include/bits/c++config.h ./bits/os_defines.h /usr/local/lib/gcc5/include/c++/bits/os_defines.h /usr/local/include/bits/os_defines.h /usr/include/bits/os_defines.h ./bits/cpu_defines.h /usr/local/lib/gcc5/include/c++/bits/cpu_defines.h /usr/local/include/bits/cpu_defines.h /usr/include/bits/cpu_defines.h ./stddef.h /usr/local/lib/gcc5/include/c++/stddef.h /usr/local/include/stddef.h ./sys/cdefs.h /usr/local/lib/gcc5/include/c++/sys/cdefs.h /usr/local/include/sys/cdefs.h ./sys/_null.h /usr/local/lib/gcc5/include/c++/sys/_null.h /usr/local/include/sys/_null.h ./sys/_types.h /usr/local/lib/gcc5/include/c++/sys/_types.h /usr/local/include/sys/_types.h ./machine/_types.h /usr/local/lib/gcc5/include/c++/machine/_types.h /usr/local/include/machine/_types.h ./x86/_types.h /usr/local/lib/gcc5/include/c++/x86/_types.h /usr/local/include/x86/_types.h
    redo-ifchange /usr/local/lib/gcc5/include/c++/cstddef /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3/bits/c++config.h /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3/bits/os_defines.h /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3/bits/cpu_defines.h /usr/include/stddef.h /usr/include/sys/cdefs.h /usr/include/sys/_null.h /usr/include/sys/_types.h /usr/include/machine/_types.h /usr/include/x86/_types.h
    JdeBP %/package/prog/cc/command/cpp test.cpp --iapplication . --icompiler-high /usr/local/lib/gcc5/include/c++ --icompiler-low /usr/local/lib/gcc5/include/c++/x86_64-portbld-freebsd10.3 --iplatform /usr/local/include --iplatform /usr/include  -MMD -MF /dev/stderr 2>&1 > /dev/null|fgrep redo
    redo-ifcreate ./cstddef ./bits/c++config.h ./bits/os_defines.h ./bits/cpu_defines.h ./stddef.h ./sys/cdefs.h ./sys/_null.h ./sys/_types.h ./machine/_types.h ./x86/_types.h
    JdeBP %
I also have a wrapper that takes arguments in the forms that one would use to invoke g++ -E and clang++ -E, tries to work out all of the platform and compiler include paths, and invokes this tool with them.

It's then a simple matter of invoking these redo-ifchange and redo-ifcreate commands from within the redo script that is invoking the compiler.

You can see this plumbed into redo in a real system in the source archives for the nosh toolset and djbwares.

* http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...

* https://news.ycombinator.com/item?id=15044438

I use strace(1) to look for stat(2) syscalls that fail with ENOENT. An advantage of this approach is that I do not have to imitate the C preprocessor, so parser differentials can never happen. The following default.o.do file from my blog post [1] handles the case:

  redo-ifchange $2.c
  strace -e stat,stat64,fstat,fstat64,lstat,lstat64 -f 2>&1 >/dev/null\
   gcc $2.c -o $3 -MD -MF $2.deps\
   |grep '1 ENOENT'\
   |grep '\.h'\
   |cut -d'"' -f2 >$2.deps_ne 2>/dev/null
  read d <$2.deps
  redo-ifchange ${d#*:}
  while read -r d_ne; do
   redo-ifcreate $d_ne
  done <$2.deps_ne
  chmod a+x $3
This approach is also used for building Liberation Circuit if strace is installed [2].

I think the compiler should output the necessary information. To quote Jonathan de Boyne Pollard [3]:

> As noted earlier, no C or C++ compiler currently generates any redo-ifcreate dependency information, only the redo-ifchange dependency information. This is a deficiency of the compilers rather than a deficiency of redo, though. That the introduction of a new higher-precedence header earlier on the include path will affect recompilation is a fact that almost all C/C++ build systems fail to account for.

> I have written, but not yet released, a C++ tool that is capable of generating both redo-ifchange information for included files and redo-ifcreate information for the places where included files were searched for but didn't exist, and thus where adding new (different) included files would change the output.

JdeBP, could you please release your tool under a free software license? I suspect it has fewer errors than the similar CMake approach [4].

[1] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[2] https://github.com/linleyh/liberation-circuit/blob/master/sr...

[3] http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...

[4] https://github.com/Kitware/CMake/blob/master/Source/cmDepend...

Just for the record: My personal preference is for Clang and GCC to be instrumented to emit the names of both found and non-existent header files.

I'm missing something.

Are you saying you want to be able to compile with either /usr/include/stdio.h or /usr/local/include/stdio.h, remember which one the last compilation used, know which header would be used in the next compilation, and, if it's different, mark the target as stale and perform the action?

I guess you'd need to keep a log of the build and test cpp invocations for diffs.

I've never run into this scenario.

An obvious case would be a developer supporting multiple versions of a 3rd party library.

This is where I saw the beauty of including dependencies with a project. Even on my own systems, as environments change, things break, and having a stable in-tree reference has paid off.

It's a tough situation, but I find myself leaning toward @tedunangst's position over the years - usually I try to adapt my machines (incl. software) to my needs, but in this case I need to take control/responsibility, and here be dragons. Does cmake actually solve this? Do other build systems?

I personally use scons instead of Makefiles. Its dependency analysis is amazing, I haven't seen it fail a single time.

Please elaborate: What do you find amazing about scons?

Also, how does scons handle non-existence dependencies?

What would be a scons dependency graph for this C code?

  #include <stdio.h>
  int main() {
   printf("hello, world\n");
   return 0;
  }
You can see a dependency graph I generated with redo here: http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

I love that I get to use python to write the dependency graph, it allows for some interesting stuff.

Other than that, it's mostly the ease of use. This is enough to compile a C++ project (that has all its .c and .cpp files in the same directory as the SConstruct file), and it'll pick up on all dependencies correctly:

    Program(target = 'a.out', source = Glob('*.c') + Glob('*.cpp'))
I also know for a fact that it's able to pick up on how the presence of a new file might trigger a rebuild of what could require it.

Regarding the last question, using --tree=all it prints:

      | +-main.o
      | | +-main.c
      | | +-/usr/bin/gcc
      | +-/usr/bin/gcc
I'm not sure if it's hiding dependencies on system headers or not. But I can force it to show them by adding /usr/include and /usr/local/include to CPPPATH (excuse the long code block):

      | +-main.o
      | | +-main.c
      | | +-/usr/include/stdio.h
      | | +-/usr/include/Availability.h
      | | +-/usr/include/_types.h
      | | +-/usr/include/secure/_stdio.h
      | | +-/usr/include/sys/_types/_null.h
      | | +-/usr/include/sys/_types/_off_t.h
      | | +-/usr/include/sys/_types/_size_t.h
      | | +-/usr/include/sys/_types/_ssize_t.h
      | | +-/usr/include/sys/_types/_va_list.h
      | | +-/usr/include/sys/cdefs.h
      | | +-/usr/include/sys/stdio.h
      | | +-/usr/include/xlocale/_stdio.h
      | | +-/usr/include/AvailabilityInternal.h
      | | +-/usr/include/sys/_types.h
      | | +-/usr/include/secure/_common.h
      | | +-/usr/include/sys/_posix_availability.h
      | | +-/usr/include/sys/_symbol_aliasing.h
      | | +-/usr/include/machine/_types.h
      | | +-/usr/include/sys/_pthread/_pthread_types.h
      | | +-/usr/include/i386/_types.h
      | | +-/usr/bin/gcc
      | +-/usr/bin/gcc
      +-main.o -- This part was removed to decrease comment size, it's the same as the main.o part above
The SConstruct for this last block is:

    Program(target = 'a.out', source = ['main.c'], CPPPATH = ['/usr/local/include', '/usr/include'])
Note that these were generated on macOS.

From what you are showing us, the answer to "How does scons handle non-existence dependencies?" is that it does not handle them at all.

Go and look at M. Moskopp's graph. It has a lot of dependencies for non-existent files that the compiler would have used in preference to the ones that it actually used, had they existed.

Did scons finally get a little less opinionated?

It used to be that scons really forced you to use subsidiary SConscript child files for anything more complicated than a couple files in a single directory instead of being able to lump it all into a single SConstruct.

I think that was quite long ago. Yes, `SCons` can fit into a single file if you so choose. But its behavior under recursive builds (with nested directory structure) is far more predictable than most build systems I have seen.

The problem comes in as soon as you need conditionals, which is likely when attempting to build something portably. There may be some gymnastics that can be done to work around their absence in standard make, but otherwise your options are:

- Supply multiple makefiles targeting different implementations

- Bring in autotools in all its glory (at this point you are depending on an external GNU package anyway)

- Or explicitly target GNU Make, which is the default make on Linux and macOS, is very commonly used on *BSD, and is almost certainly portable to every platform your software is going to be tested and run on. The downside being that BSD users need a heads up before typing "make" to build your software. But speaking as a former FreeBSD user, this is pretty easy to figure out after your first time seeing the flood of syntax errors.

> But speaking as a former FreeBSD user, this is pretty easy to figure out after your first time seeing the flood of syntax errors.

I seem to be about the only person who makes use of the following feature, but FreeBSD and GNU make will, in addition to looking for a file named Makefile, also look for a BSDmakefile or GNUmakefile respectively.

So when I write a makefile with GNU make specific contents, I name it GNUmakefile, and when I write one that is specific to FreeBSD make, I name it BSDmakefile.

The user has to do absolutely nothing different; they simply write

    make

and if their make is GNU make and my makefile is a GNUmakefile then it builds. Likewise with FreeBSD make and a file named BSDmakefile.

The big win is when someone then has the wrong make. Instead of beginning to build and then failing at some point kicking and screaming, they will simply be told

    make: no target to make.

    make: stopped in (path)
by FreeBSD make, or

    make: *** No targets specified and no makefile found.  Stop.
by GNU make.

And at that point they will consult the README I have written for the project in question and they will learn that they need the other make than what they are using if they want to build this software.

Nowadays I write a GNUmakefile, then add a Makefile with a message telling people to use gmake. Same idea.

i have this in a (hand-written) configure script:

    case "$(uname -s)" in


    if test "$MAKE" != "$GMAKE"; then
      case "$(uname -s)" in
        populate $rootdir/Makefile.in Makefile

    cat <<EOF

    to build the programs from their sources:


    to test their correctness:

      $MAKE check

$rootdir/Makefile.in contains

    all .DEFAULT:
      @@GMAKE@ --no-print-directory "$@"

POSIX Make supports conditionals, thanks to recursive macro expansion. And while POSIX doesn't yet support a macro construct to invoke and capture shell utilities, you can use both the GNU $(shell COMMAND) and Sun $(COMMAND:sh) syntax. All the BSDs and Solaris support the latter, and GNU Make the former. Both constructs are ignored with an empty string expansion where they're not supported, and no implementation supports both.
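A hedged sketch of that dual-syntax trick (untested here; it relies on the behavior described above, where each implementation expands the other's construct to an empty string):

```make
# GNU make evaluates $(shell uname -s); BSD/Sun make treat it as an
# undefined macro and expand it to nothing. The reverse holds for the
# $(uname -s:sh) form, so exactly one of the two contributes text.
OS = $(shell uname -s)$(uname -s:sh)

all:
	@echo "Detected OS: $(OS)"
```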

To see what I'm talking about, see my proof of concept library that I've put together.


Another possibility is conditional compilation at the C level. thefile.c #includes thefile_unix.c or thefile_windows.c as appropriate.

That doesn't really help situations like "iconv is in libiconv on some systems, and is in libc on others" though.

Or just use CMake -- it's not perfect, but it explicitly solves this.

> it's not perfect

I used to be so hopeful for cmake, but just grew to be sort of annoyed by it. Build systems are, however, perennial like the grass, so we'll see more.

I just wish it would work with relative paths. It shouldn't break your build to move a folder around.

AFAIK cmake works perfectly fine with relative paths; what breaks?

The author of that blog post wants something fundamentally different from what CMake is. The project files generated by it are just temporary build intermediates, and it makes no more sense to distribute them than it would the .d files generated by gcc -MD. Perhaps he is expecting it to be like autotools, where the generated Makefiles can be bundled in your tarballs so that people do not need autotools installed to build your project, but no meta-build-system other than autotools that I have encountered even tries to support that.

The author's claim that "a CMake-generated project cannot possibly be the final say" is so incorrect that I am not surprised that he found CMake frustrating. It is quite the opposite; if you ever find yourself manually editing the generated project files for any reason other than debugging build-system issues, you're doing it wrong and are going to have a very bad time. You must always find a way to express the thing you need within the CMakeLists.txt, which sometimes unfortunately requires awful hacks.

Amusingly the SO question he links to as evidence that editing the generated project file is required now has an answer saying how to do it properly...

> The project files generated by it are just temporary build intermediates, and it makes no more sense to distribute them than it would the .d files generated by gcc -MD

Nobody was talking about redistributing anything. I just said I should be able to move the folder from folder A to folder B.

Not sure what to even reply to the rest of your comment...

The author of the blog post you linked to talked a lot about redistributing generated files, and I assumed you linked to it because you agreed with the opinions he expressed.

After moving the project folder to a new location you just have to rerun the cmake command to update the project files. CMake attempts to do this automatically when you build.

[A] >>>>>> I just wish it would work with relative paths. It shouldn't break your build to move a folder around.

[B] >>>>> AFAIK cmake works perfectly fine with relative paths; what breaks?

[A] >>>> I don't think it works: https://ofekshilon.com/2016/08/30/cmake-rants/

[C] >>> it makes no more sense to distribute the project files

[A] >> Nobody was talking about redistributing anything. I just said I should be able to move the folder from folder A to folder B.

[C] > I assumed you linked to it because you agreed with the opinions he expressed.

You just randomly... "assumed"? and then changed the topic?

The most likely explanation is you didn't even read, given how off-topic the comment was.

Please... you wasted > 15 minutes of my time trying to figure out how to reply to your irrelevant comment without coming off as a total jerk. I know it's fun to lecture strangers about the Right (TM) way to use $technology, but maybe spend 10 seconds to read the discussion before doing that.

Actually, I think the reply was basically on target... The complaint in the section of the linked article that APPEARS to be relevant to throwaway613834 is "CMake’s treatment of paths", which is all about the fact that build artifacts of cmake have absolute paths.

Build artifacts of cmake, however, are build _output_ and thus shouldn't be relocated, distributed, or checked in (I would argue relocation is a form of redistribution, it's just the simplest possible case). These build artifacts are the output of a specific build, thus they are (as pointed out) a lot like .d files, which I would also expect to regenerate in the new build environment after moving/modifying. Just like with .o/.d, I would expect to move/distribute/check in only after cleaning these build artifacts, and then regenerate them afterwards.

Since this is pretty simple to do, I'm not sure it's worth such a long section in this blog. This is even pointed out (a couple of times) in the blog comments. The author also even references why it works this way as a default: "It is really hard to make everything work with relative paths, and you don’t get that much out of it, except lots of maintenance issues and corner cases that do not work." Unless you have spent a while deep-diving into the issues, wouldn't it be prudent to accept the word of the maintainers as pretty close to canonical?

I thought you meant giving it relative paths in your CMakeLists.txt -- which does work as expected.

> This makes CMake-generated projects almost useless: you cannot source control them or share them in any other way.

Yes, and this is a good thing. I'm surprised anyone would want to do this on a visual studio solution of all things -- managing solution and project file merges in source control is incredibly tedious.

A lot of things about CMake irritate me, but its treatment of out of source build files as purely intermediate is a major plus to me, not a negative.

I could see how this would mess up your use case; but you have to admit your use case is very narrow -- I guess you want to be able to just rename the build folder; if you moved it more than that even the relative path to the source dir would be off -- and the justification for the limitation seems sensible.

Whenever I start to write a moderately complex makefile, I realize yet again that the makefile programming language sucks, and that if I want to stay sane the only way to do it is to go meta (use a separate program to generate a makefile using the bare-minimum features).

Putting the logic into a more flexible configure script that generates definitions for your Makefile and imitates the autotools build process without actually using autotools is a good/obvious compromise. Even some large projects like FFmpeg and mplayer/mpv do this.

> Microsoft has an implementation of make called Nmake, which comes with Visual Studio. It’s nearly a POSIX-compatible make, but necessarily breaks [...] Windows also lacks a Bourne shell and the standard unix tools, so all of the commands will necessarily be different.

What I've been mulling over is an implementation of make that accepts only a restricted subset of the make syntax, eliding the extensions found in either BSD or GNU make, and disallowing use of non-standard extensions to the commands themselves (and maybe even further restricted still). In theory, a make that does this wouldn't even need to depend on a POSIX environment—it could treat the recipes not as commands but instead as a language. It wouldn't even take much to bootstrap this; you could use something like BusyBox as your interpreter. Call it `bake`.

Crucially, this is not another alternative to make: every bake script is a valid Makefile, which means it is make (albeit a restricted subset).

What you're mulling over is the premise behind the autotools toolchain... spare the next generation the same pain we bore :)

Autotools is the opposite of what I'm suggesting. But maybe I'm missing some nuance in your comment. I think, though, that you may have read what I wrote, interpreted it as "new build tool", and immediately applied XKCD 927[1]. See my closing remarks in my last comment.

1. https://xkcd.com/927/

carussell is suggesting the opposite of autotools in the sense of making a very restricted tool that only allows portable code (at least within the domain of that tool). By contrast, autotools tries to be expansive in many ways: (1) it provides all kinds of bits and pieces of the toolchain, (2) it tries to support all systems by auto-generating build scripts using sniffed system properties.

Particularly when it's used in combination with gnulib, which has an irritating tendency to build most of glibc if it doesn't recognize your build environment.

> What I've been mulling over is an implementation of make that accepts only a restricted subset of the make syntax, eliding the extensions found in either BSD and GNU make

> every bake script is a valid Makefile, which means it is make (albeit a restricted subset).

What do you think about writing a Makefile linter, like `shellcheck` is for shell scripts?

The naming is brilliant!

I hope you're being sarcastic because there are already a half-dozen build tools called "bake"... it's literally the first name that everyone thinks of.

Honestly, just use CMake. It is far easier to make it work cross-platform and, better yet, cross-compile. There's no good reason to write a Makefile by hand, and no large projects do it anyway.

Hm, you could argue the largest project did it that way... The Android platform used to be tens of thousands of GNU Make files (including the GNU Make Standard Library). For 5+ years. I think they are migrating or have migrated to a custom system now.

Also, the Linux kernel is pure GNU Make (no autotools, since that doesn't really make sense).

But yes those are platforms/OSes and not applications. Applications typically work on multiple Unix flavors and Windows, so something like cmake makes sense. Although honestly it's a shame that they made the same mistake as GNU make -- not paying enough attention to the programming language design and ending up with a crappy macro language.

> Also, the Linux kernel is pure GNU Make (no autotools, since that doesn't really make sense).

While that is true, they have a lot of wrappers and pseudo-meta-programming that makes it very simple to add a module to the kernel. It's unlike any other "pure GNU make" project I've seen.

The Android platform used to be tens of thousands of GNU Make[files]

Yes, and it was incredibly flaky and awkward to use! And slow.

I think they are migrating or have migrated to a custom system now

It used so many tricky macros that it already was a custom system, just one that happened to be built on top of Make.

CMake is rather crufty itself. There are many better options these days.

I would love for this to be true, but I'm afraid I don't have any evidence that it is. What are the better options you'd recommend?

More specifically, Google's Bazel [0] is the most serious attempt I know of to replace CMake/autotools, but it only just began to support Windows [1] and requires the JDK. Say what you will about CMake, but it's a quick-and-easy MSI to install on Windows and has supported Windows for probably longer than I've been alive. Google's other open source build system, GYP [0a], supports Windows because it builds Chromium, but GYP might as well be undocumented. Unless you like spelunking into the Chromium source, GYP is a non-starter. (Props to the Node folks for actually doing this for Node and Node extensions.)

I haven't used Gradle [2], but I've heard more than a few times it's slow (warning: might be FUD)... and it also requires a JRE.

Ninja [3] is blazing fast, but it's specifically designed not to be used directly, but rather through a build system generator like CMake.

I'll be the first person to ditch CMake if a feasible alternative presents itself, but if you want to build a cross-platform, moderately-complicated C++ project today, especially one with dependencies on other libraries that use arbitrary build systems, I don't know of any such alternative.

[0]: https://bazel.build

[0a]: https://gyp.gsrc.io

[1]: https://docs.bazel.build/versions/master/windows.html

[2]: https://gradle.org

[3]: https://ninja-build.org

Boost.Build and QMake are two relatively popular choices that are mature and stable.

I have particularly noticed the uptick in QMake usage in OSS projects lately, probably because it is so straightforward to use for the most common cases. I think it's not more popular than it is only because it's associated with Qt, and many people (wrongly) assume that it can only be used for Qt apps, or that it depends on Qt.

Meson (https://www.mesonbuild.com/) seems to have a lot of momentum; GNOME has just adopted it. It is designed around a non-Turing complete, object-oriented DSL, plus extension modules written in Python.

Beyond being adopted by GNOME, I have hardly heard of it being used in another context.

> ... I have hardly heard of it being used in another context...

dpdk seems to be moving in that direction i.e. using meson.

systemd, Wayland and Xorg are switching too.

No one wants to manually do dependency management in even a moderately sized project. I really haven't found an ideal way to have these -MM -MT flags integrated into Makefiles; I've tried having an awk script automatically modify the Makefile as the build is happening, but of course the updated dependencies will only work for later builds, so it's only good for updating the dependencies. Any other approaches HNers have used and really liked?

Simply add:

  -include $(OBJS:%.o=%.d)
in your makefile (with -MMD in CFLAGS).
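Put together, a minimal complete version of this setup might look like the following (program and object names are illustrative; -MP is an extra nicety not mentioned above):

```make
CC     = gcc
CFLAGS = -MMD -MP -Wall
OBJS   = main.o util.o

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# Pull in the .d files written by -MMD during the previous build, if any.
# The leading "-" keeps make from complaining on the first, clean build.
# -MP also emits phony targets for each header, so deleting a header
# doesn't make make fail on a missing prerequisite.
-include $(OBJS:%.o=%.d)

clean:
	rm -f prog $(OBJS) $(OBJS:%.o=%.d)
```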

Don't you need a pattern rule for %.d? GNU Make doesn't seem to come with this rule.

There is a profoundly ugly example here:


    %.d: %.c
        @set -e; rm -f $@; \
         $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \
         sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \
         rm -f $@.$$$$
I copied it into a running example here for anyone curious:


If you have -MD or -MMD in your CFLAGS, GCC (and Clang) will generate the .d files during compilation without you having to add anything else to the Makefile, and without requiring ugly sed magic.

I just tried this, and it seems wrong on two counts:

1) The sed magic is required for adding the dependencies of the .d file itself. For example, if the timestamp of your header changes, it may have acquired new dependencies, and a new .d file has to be generated BEFORE make determines if the .c file is out of date.



The purpose of the sed command is to translate (for example):

    main.o : main.c defs.h

into:

    main.o main.d : main.c defs.h

The first line isn't correct because then the .d file itself has no dependencies (and so would never be regenerated).

2) By the time you are compiling, it's too late to generate .d. The .d files are there to determine IF you need to compile.

EDIT: I am trying to generate a test case that shows this fails, but it seems to actually work.

Hm yes I'm convinced it works, but I have to think about why. I guess one way of saying it is that the previous .d file is always correct. Hm.

(2) is not quite correct. The old .d file from the previous compilation is actually all you need to determine whether the .c file needs to be recompiled. It works in all cases. If the .c file is new (or you're doing a clean rebuild of the whole project,) it will always be compiled, because there will be no corresponding .o. If the .c file, or any of the .h files in the old .d gain new header dependencies, they must have been modified, so their timestamps will be newer than the .o file from the last build, hence the .c file will be recompiled and a new up-to-date .d file will be generated (because a new .d file is always generated when the .c file is compiled.)

If (2) is not correct, then (1) is not needed either. The old .d files from the last compilation pass are sufficient to know which files need to be recompiled in the current compilation pass. Make does not need to know the dependencies of the .d files themselves, it just needs to load all the existing .d files at startup.

EDIT: Yep, I'm fairly confident this works :D. I don't know if whoever wrote that manual page knew about -MD, but I think it might be newer than -M, which would explain it.

The problematic case is with generated header files. Suppose foo.c includes foo.h, where foo.h is generated by a separate command. On a clean build, there's nothing telling Make that it needs to build foo.h before foo.c, so it may not happen (and worse, it may usually happen but sometimes not when doing parallel builds). A separate invocation of `gcc -MM` works for this, as when it generates the dependency information for foo.c it will see that it needs foo.h before you do the actual build.

Personally I've never found it too burdensome to just manually specify dependencies on generated files.
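Such a manual rule might look like this (foo.h.in and genheader are hypothetical stand-ins for whatever actually produces the header):

```make
foo.h: foo.h.in
	./genheader foo.h.in > foo.h

# Explicit prerequisite so foo.h is generated before foo.c is compiled,
# even on a clean build where no .d files exist yet.
foo.o: foo.h
```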

Wouldn't the header need to be explicitly listed as a dependency to prompt its generation anyway?

Hm yes I just figured that out the hard way -- <scratching head>.

This feels hacky, but yes it seems to work. I'll think about it a bit more. (I might clone this feature for a build tool -- since the gcc/Clang support is already there, it seems like any serious build tool needs this. Although some have their own #include scanners which is odd.)

Thanks for the information!

I guess a simple way of explaining it is that if there are any new header dependencies, one of the files that make already knows about must have been modified to add the #include statement, so make will correctly rebuild the .c file and generate the new .d file, even though it's working on outdated dependency information.

Though, I guess I wasn't quite correct either. See plorkyeran's sibling comment re: generated header files.

> Although some have their own #include scanners which is odd.

I once worked in a place that had its own #include scanner (partially because it used so many different compilers and other tools... suffice it to say that Watcom C++ was in the mix). To make it work, you had to install a custom filesystem driver that intercepted disk reads during compilation and logged them. A rather... bruteforce approach. But it had the advantage of working with everything.

There's a -MT flag which does what the sed line does. From the gcc man page:

  Change the target of the rule emitted by
  dependency generation... An -MT option will
  set the target to be exactly the string
  you specify.
So in your example one might do something like

  -MT "$@ $(basename $@).d"
which would output

  main.o main.d : main.c defs.h
for the main.o target.
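Put together, a pattern rule along those lines might look like this (a sketch assuming GNU Make, since $(basename ...) as a function is a GNU extension, plus GCC or Clang):

```make
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MT '$@ $(basename $@).d' -c -o $@ $<
```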

This is what I do too, and it seems perfect to me for adding set-and-forget dependency awareness to an ordinary Makefile. The first build with a new .c file will work. Its .d file won't exist, but the -include directive silently ignores this because of the - prefix, and it will be built anyway, because its corresponding .o doesn't exist either. Subsequent builds will use the dependency information in the .d.

Also consider adding -MP to CFLAGS to prevent errors if you delete a .h file.
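To see why -MP helps: it adds an empty rule for each header, so a .d file for a hypothetical main.c that includes defs.h would look roughly like:

```make
main.o: main.c defs.h

defs.h:
```

If defs.h is later deleted, the empty rule keeps make from failing with "No rule to make target 'defs.h'".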

'include' being one of those features that isn't in basic POSIX make, though.

There exists DJB's redo approach [0], which I implemented [1], where dependencies and non-existence dependencies are only recorded after the build. A typical dofile is a shell script, so you do not need to learn another language. Targets also automatically depend on their own build rules (I have seen such a thing only in makefiles authored by DJB).

I wrote a blog post to show how to integrate dependency output for both dependency and non-existence dependency generation [2]. The game “Liberation Circuit” [3] can be built with my redo implementation; you can output a dependency graph usable with Graphviz [4] using “redo-dot”.

There is only one other redo implementation that I would recommend, the one from Jonathan de Boyne Pollard [5], who rightly notices that compilers should output information about non-existence dependencies [6].

I would not recommend the redo implementation from Avery Pennarun [7], which is often referenced (and introduced me to the concept), mainly because it is not implemented well: It manages to be both larger and slower than my shell script implementation, yet the documentation says this about the sqlite dependency (classic case of premature optimization):

> I don't think we can reach the performance we want with dependency/build/lock information stored in plain text files

[0] http://cr.yp.to/redo.html

[1] http://news.dieweltistgarnichtso.net/bin/redo-sh.html

[2] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[3] https://github.com/linleyh/liberation-circuit

[4] https://en.wikipedia.org/wiki/Graphviz

[5] http://jdebp.eu./Softwares/redo/

[6] http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...

[7] https://github.com/apenwarr/redo

Use Rust.

Kinda sorta kidding, not kidding. Usually I'd wrangle CMake into submission since I tend to have diverse targets (Win32, Android, Linux, OSX, etc); however, Rust (and more so Cargo) makes this a pleasure to do.

Even native dependencies are straightforward unless there's some sort of build-fu going on (like LuaJIT). You can just pull in the GCC crate; it'll shell out to clang/gcc/msvc, and since you're using it in a build.rs you can configure it however you want, since it's just another Rust program.

Doesn't cmake take care of most of this? Is there any reason not to use cmake on middle to large scale projects?

I am genuinely curious. I've only recently started looking at cmake, and it seems like they should generate portable Makefiles, or at least have an option to generate them.

Yes it does, and no there isn't, in that order. CMake is pretty horrid in many respects - I won't deny that - but it gets a few key things right, and it has a good-size ecosystem. That's enough to make up for its notable deficiencies.

Yes, including nmake (windows-compatible).

> The bad news is that inference rules are not compatible with out-of-source builds. You’ll need to repeat the same commands for each rule as if inference rules didn’t exist. This is tedious for large projects, so you may want to have some sort of “configure” script, even if hand-written, to generate all this for you. This is essentially what CMake is all about. That, plus dependency management.

This isn't a case for CMake. It's a case against POSIX Make. The proposed "portability" and "robustness" of adherence to the POSIX standard are not worth hamstringing the tool. GNU Make is ubiquitous and is leaps and bounds ahead of pure Make.
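To make the quoted point concrete, here is the kind of repetition an out-of-source POSIX Makefile forces (the src/ and obj/ layout is illustrative); GNU Make's `obj/%.o: src/%.c` pattern rule would collapse all of these into one:

```make
obj/foo.o: src/foo.c
	$(CC) $(CFLAGS) -c -o $@ src/foo.c

obj/bar.o: src/bar.c
	$(CC) $(CFLAGS) -c -o $@ src/bar.c
```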

I love that their definition of "portable" is software that runs exclusively on UNIX.

This article barely addresses what really causes trouble in practice, namely non-portable tools. Sed, for example, has different switches on macOS and Linux. MinGW is another world.

Also check out https://github.com/c3d/build for a way to deal with several of the issues the author addresses (but not posix portability)

Wait... "%.o: %.c" is nonstandard?!?

Yes, base Make is almost useless for anything larger than a utility.

According to the POSIX make manpage [1], the following double-suffix rule is part of the defaults:

        .c.o:
                $(CC) $(CFLAGS) -c $<
[1]: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ma...

Not quite; I think '.c.o' is the equivalent in the original Make syntax?
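For the basic same-directory case, the two spellings do correspond (suffix rules require the suffixes to be declared):

```make
# POSIX suffix rule
.SUFFIXES: .c .o
.c.o:
	$(CC) $(CFLAGS) -c $<

# GNU Make pattern rule
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<
```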

That's nowhere near equivalent. For example, it cannot replace the following things:

  pkg/test/migrations/%.sql: pkg/db/migrations/%.sql

  build/%.cover.out: prepare-check FORCE
(In the second one, $* is then used in the build rule, which is why % does not need to show up in the prerequisites list.)

Examples from https://github.com/sapcc/limes/blob/62e07b430e2019a6c1891443...

It's been a while since I wrote a makefile, but as far as I remember it was very easy to create a full-featured CMake file if the project used the layout CMake assumed (easy for new projects).

However, porting existing projects from traditional makefiles to CMake could be next to impossible.

Where, except for Windows, is requiring GNU Make a problem?

It's not really a problem on Windows, to be honest. MSYS2 gives you that, gcc, and a bunch of POSIX utilities to boot. It also gives you packages for libraries, and a Unix-like filesystem structure where they are deployed (/usr etc). Yet you end up building clean Win32 binaries, with no emulation layers.

I've heard BSD and mac both use much older, non-GPLv3 versions of make. Portability means supporting older versions too.

/usr/bin/make on macOS is GNU Make 3.81, which is fairly ancient (2006) but still a much more featureful target than POSIX make.

Installing GNU make on BSDs is not difficult; for any project with a nonzero number of dependencies it's likely to be the most trivial one. If you're writing a C89-zero-non-posix-dependencies library then sure, you can't use GNU make, but you also probably only have a ten line makefile anyway.

That's the Ubuntu 14.04 version. Old, but make changes very little.

My first step on a non-GNU system back when I had to use those was to install the GNU utilities and add them to the beginning of my PATH. I suppose if I had to support them primarily I'd have gone the portability route, but fortunately I was always worrying about them secondarily instead.

I like Tom Duff's http://www.iq0.com/duffgram/com.html for experimental programs contained in a single file.

More nifty portable Make facts:

- For portable recursive make(1) calls, use $(MAKE). This has the added advantage that BSD systems, which can electively install GNU Make as gmake, can pass in the path to gmake to run GNU makefiles [1]

- BSDs don't include GNU Make in the base system. The BSDs' ports and build systems use make extensively, and their make has a different dialect [2]

- In addition to that, you will likely invoke system commands in your Makefile. These, too, may use GNU-specific features that won't work on BSDs. So keep your commands like find, ls, etc. POSIX-compliant [3]

- Part of the reason tools like CMake exist is to abstract not only library/header paths and compiler extensions, but also the fact that POSIX shell scripting and Makefiles are quite limited.

- Not only is there a necessity to use POSIX commands and a POSIX-compatible Make language, but the shell scripting must also avoid Bashisms and such, since there's no guarantee the system will have Bash.

- POSIX Makefiles have no conditionals as of 2017. Here's a ticket from the issue tracker suggesting it in 2013: http://austingroupbugs.net/view.php?id=805.

- You can do nifty tricks with portable Makefiles to get around limitations. For instance, major dialects can still store a command pipeline in a variable and splice it into recipes. You may not have double globs across all systems, but you can use POSIX find(1) to build the file list:

    FILES= find . -type f ! -path '*/\.*' | grep -i '.*[.]go$$' 2> /dev/null
Then access the variable:

    if command -v entr > /dev/null; then ${FILES} | entr -c $(MAKE) test; else $(MAKE) test entr_warn; fi
I cover this in detail in my book The Tao of tmux, available for free to read online. [4]

- macOS comes with Bash, and if I remember correctly, GNU Make comes with the developer CLI tools as make.

- For file watching across platforms (including with respect for kqueue), I use entr(1) [5]. This can plop right into a Makefile. I use it to automatically rerun test suites and rebuild docs/projects. For instance https://github.com/cihai/cihai/blob/cebc197/Makefile#L16 (feel free to copy/paste, it's permissively licensed).

[1] https://www.gnu.org/software/make/manual/html_node/MAKE-Vari...

[2] https://www.freebsd.org/cgi/man.cgi?query=make&apropos=0&sek...

[3] http://pubs.opengroup.org/onlinepubs/9699919799/utilities/fi...

[4] https://leanpub.com/the-tao-of-tmux/read#tips-and-tricks

[5] http://entrproject.org

But there's no way in POSIX make itself to assign a value to FILES dynamically - you have to assign the FILES environment variable or supply it via command-line arguments. POSIX make will expand ${FILES} to its replacement value in commands (and prerequisites, but not targets). It then merely happens to be interpreted as part of the command it's placed into.

Updated, thank you.

Assigning the results of a command to a variable is not POSIX Make. However, both FreeBSD's Make and GNU Make support it.

There's the proposal to add it to POSIX Make: http://austingroupbugs.net/view.php?id=337

Another thing I wanted to add to the main post: since POSIX Make doesn't have conditionals, GNU Make and bmake both have different ways of doing them. If you look through FreeBSD's Make files, you'll see that conditionals exist, but they begin with dots:
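A sketch of what those dot-prefixed conditionals look like in bmake (the variable names are illustrative):

```make
.if defined(DEBUG)
CFLAGS+= -g -O0
.else
CFLAGS+= -O2
.endif
```

GNU Make spells the same thing with ifdef/else/endif instead.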


There are going to be some compromises when writing portable scripts where it won't be POSIX, but could still be considered portable.

POSIX Make supports conditional constructs by way of recursive macro expansion. I've found it to be not much more clunky than GNU Make's conditional constructs. (Recently I've come to appreciate the simplicity and consistency of the BSD extensions, even though I don't use them. GNU Make syntax quickly becomes impenetrable.) And as you say, shell invocation from macros isn't yet supported by POSIX but _can_ be done portably. GNU Make supports $(shell COMMAND), everybody else supports $(COMMAND:sh), nobody supports both, and where not supported they expand to the empty string.[1] Here's a simplified example from my proof-of-concept library from https://github.com/wahern/autoguess/blob/config-guess/config...

  BOOL,true = true
  BOOL,1 = true
  BOOL,false = false
  BOOL,0 = false
  BOOL, = false

  OS.exec = uname -s | tr '[A-Z]' '[a-z]'
  OS = $(shell $(OS.exec))$(OS.exec:sh)
  OS.darwin.test,darwin = true
  OS.is.darwin = $(BOOL,$(OS.darwin.test,$(OS)))

  # Usage: $(SOFLAGS.shared) - for creating regular shared libraries
  SOFLAGS.shared.if.darwin.true = -dynamiclib
  SOFLAGS.shared.if.darwin.false = -shared
  SOFLAGS.shared = $(SOFLAGS.shared.if.darwin.$(OS.is.darwin))

  SOFLAGS = $(SOFLAGS.shared)

  	@echo "SOFLAGS=$(SOFLAGS)"
I try to keep my stuff portable across AIX, FreeBSD, Linux/glibc, Linux/musl, macOS, NetBSD, OpenBSD, and Solaris. I used to just require GNU Make, with makefiles that looked like:


          +gmake -f GNUmakefile all

          +gmake -f GNUmakefile $<
but now I'm moving my projects over to the more portable, dependency-less method. It makes things so much easier to be able to run `make test` without having to worry about first installing unnecessary dependencies, or often any dependencies whatsoever. When you're juggling a dozen VMs, trying to track the latest couple of releases of a platform, simplicity is key. Plus, more easily programmable environments encourage overly complex solutions which become maintenance burdens over months and years. With GNU Make I was always unable to resist the urge to turn every repetitious rule into a template, for example. Sticking to purely portable constructs means I'm forced to be smarter about how to approach something, including recognizing when something is best left un-automated.

[1] It's nice that POSIX will be formalizing the "!=" construct, but unfortunately it's not supported by Solaris make, and macOS will be forever stuck on GNU Make 3.81 which also lacks support.

    target [target...]: [prerequisite...]

We invoke shell commands in a Makefile, and if we are concerned about POSIX conformance in the Makefile syntax, we need to be equally concerned about POSIX conformance in the shell commands and shell scripts we invoke from the Makefile.

While I have not found a foolproof way to test for and prove POSIX conformance in shell scripts, I usually go through the POSIX.1-2001 documents to make sure I am limiting my code to features specified in POSIX. I test the scripts with bash, ksh, and zsh on Debian and Mac. Then I also test the scripts with dash, posh and yash on Debian. See https://github.com/susam/vimer/blob/master/Makefile for an example.
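A cheap first pass along those lines can be automated: parse-check a script under every POSIX-ish shell installed, using each shell's -n (parse only) mode. This sketch generates a trivial script just for illustration; in practice you'd point it at your own.

```shell
#!/bin/sh
# Smoke-test a script's syntax under several shells.
# mktemp is not strictly POSIX but is ubiquitous.
tmp=$(mktemp) || exit 1
printf 'x=1\n[ "$x" -eq 1 ] && echo ok\n' > "$tmp"

ok_count=0
for shell in sh dash bash ksh zsh posh yash; do
    command -v "$shell" >/dev/null 2>&1 || continue
    if "$shell" -n "$tmp" 2>/dev/null; then
        echo "$shell: parses cleanly"
        ok_count=$((ok_count + 1))
    else
        echo "$shell: parse error"
    fi
done
rm -f "$tmp"
```

Note that -n only catches syntax errors, not runtime Bashisms, so actually executing the test suite under each shell (as described above) is still necessary.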

Here are some resources:

* POSIX.1-2001 (2004 edition home): http://pubs.opengroup.org/onlinepubs/009695399/

* POSIX.1-2001 (Special Built-In Utilities): http://pubs.opengroup.org/onlinepubs/009695399/idx/sbi.html

* POSIX.1-2001 (Utilities): http://pubs.opengroup.org/onlinepubs/009695399/idx/utilities...

* POSIX.1-2008 (2016 edition home): http://pubs.opengroup.org/onlinepubs/9699919799/

* POSIX.1-2008 (Special Built-In Utilities): http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3...

* POSIX.1-2008 (Utilities): http://pubs.opengroup.org/onlinepubs/9699919799/idx/utilitie...

The editions mentioned in parentheses are the editions available at the mentioned URLs at the time of posting this comment.

Here is a list of the commands specified in POSIX:

Special Built-In Utilities: break, colon, continue, dot, eval, exec, exit, export, readonly, return, set, shift, times, trap, unset

Utilities: admin, alias, ar, asa, at, awk, basename, batch, bc, bg, c99, cal, cat, cd, cflow, chgrp, chmod, chown, cksum, cmp, comm, command, compress, cp, crontab, csplit, ctags, cut, cxref, date, dd, delta, df, diff, dirname, du, echo, ed, env, ex, expand, expr, false, fc, fg, file, find, fold, fort77, fuser, gencat, get, getconf, getopts, grep, hash, head, iconv, id, ipcrm, ipcs, jobs, join, kill, lex, link, ln, locale, localedef, logger, logname, lp, ls, m4, mailx, make, man, mesg, mkdir, mkfifo, more, mv, newgrp, nice, nl, nm, nohup, od, paste, patch, pathchk, pax, pr, printf, prs, ps, pwd, qalter, qdel, qhold, qmove, qmsg, qrerun, qrls, qselect, qsig, qstat, qsub, read, renice, rm, rmdel, rmdir, sact, sccs, sed, sh, sleep, sort, split, strings, strip, stty, tabs, tail, talk, tee, test, time, touch, tput, tr, true, tsort, tty, type, ulimit, umask, unalias, uname, uncompress, unexpand, unget, uniq, unlink, uucp, uudecode, uuencode, uustat, uux, val, vi, wait, wc, what, who, write, xargs, yacc, zcat
