A Tutorial on Portable Makefiles (2017) (nullprogram.com)
130 points by hardwaresofton on Aug 1, 2022 | 67 comments



2017 (when this is from) seems to me to have been around the time there was this weird resurgence of people claiming that “modern makefiles” (i.e. carefully handcrafted makefiles) were fine.

…but, it was a fad, and it faded, because make is not fine, in any non-trivial incarnation.

Yep, sure. A little script for your projects that only you care about and that runs on your machine… go for it. I’m fond of make for trivial purposes; it’s easy to remember, and it’s widely installed.

…but, make doesn’t scale; the only way to scale make is to generate your makefiles programmatically; once you do this, your makefiles are no longer makefiles, they’re arcane generated non-portable custom build scripts.

Various tools do this: autotools, cmake, etc.

Sure, those are tools which you can use if you want, but the elegance and joy of using make vanish once you use them; you are no longer using make, you are now using a different tool.

…so, I have mixed feelings about posts like this. On the one hand, that’s cool; but on the other, you’re really wasting your time learning this stuff because, realistically, you’ll never use it. By the time you need it, you’ll be using a meta tool that generates human-unreadable Makefiles for you.

Related, extensive thread: https://news.ycombinator.com/item?id=5275313


Like many others, I too rediscovered the power of pure GNU Make years ago. Turns out it has some rather Lisp-like metaprogramming capabilities. It's extremely hard to do even basic processing, but with enough effort some cool things can happen. I'm somewhat proud of my implementation of a simple version of the find utility as a pure GNU Make recursive function.

This rediscovery essentially derailed one of my side projects as I started writing a pure GNU Make build system and function library like this one:

https://gmsl.sourceforge.io/

https://www.cmcrossroads.com/article/gnu-make-standard-libra...

I even asked this author on Twitter and Stack Overflow about its implementation.

The end result was a reinvention of code generation tools, because that's what metaprogramming is. I didn't understand what autotools was doing before I tried doing it myself. My objective was to eliminate makefile regeneration whenever the project structure changed; I ended up regenerating code 100% of the time and evaluating the resulting code inside the makefile itself. If we must metaprogram, it's better to do it in a proper programming language. At least mine didn't have any dependencies other than GNU Make.


Hello! That’s me. Happy to answer further questions…


Hello! Huge fan of your work and I've also read your book. Are you still developing GMSL? I wonder if you're open to contributions.

My find function has been the single most useful thing to come out of my side project. The first thing I do in my makefiles is get a list of all source files, and it's always been annoying because I like organizing my projects in trees, which requires recursive file system walks. I managed to write a pure GNU Make function that simplifies that to:

  sources := $(call find,src,file?)
The find function recursively walks the src directory and returns a list of all paths satisfying the file? predicate function.

I think it would be a nice addition to GMSL; buried in my own project, I doubt it will ever help anyone.


Yes, I still work on GMSL. It's pretty stable and I mostly just fix incompatibilities or bugs that people report. However, I'm happy to look at additions.

One of these days I'll get it off SourceForge...

Is your find function a bit like my rwildcard? https://news.ycombinator.com/item?id=23270235
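
For reference, a widely circulated form of it looks like this (there are several variants floating around, so this may not be identical to the one in the linked comment):

    # list all files under directory $1 whose names match pattern $2,
    # e.g. $(call rwildcard,src,*.c) -- pure GNU Make, no shelling out
    rwildcard = $(foreach d,$(wildcard $(1:=/*)),$(call rwildcard,$d,$2) $(filter $(subst *,%,$2),$d))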


That's awesome. I didn't know about rwildcard until now. Is it part of GMSL? I searched for rwildcard on gmsl.sourceforge.io but didn't find it.

I think my function is needlessly complicated compared to rwildcard. Here's my code:

https://github.com/matheusmoreira/liblinux/blob/modular-buil...

https://github.com/matheusmoreira/liblinux/blob/modular-buil...

The file? and directory? functions were inspired by GMSL. I think I learned how to check whether a path is a directory from one of your posts.

I wrote a general recursion function. It takes a function to apply to lists and a function to compute whether an element is a base case.

The recursive file system traversal function applies a directory globbing function to the list of paths and has file? as base case. It returns all files without filtering.

The find function recursively traverses the file system while filtering out all items not matching the given predicate function. It was my intention to provide predicates like c_file? and header_file? but I stopped developing that project before that happened. Users can easily write their own predicates too.
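
For the curious, predicates in that style can be sketched with nothing but $(wildcard); this is illustrative rather than my exact code:

    # a path is a directory iff globbing "path/." matches something
    directory? = $(if $(wildcard $1/.),$1)
    # a path is a file iff it exists and is not a directory
    # (as usual in GNU Make, non-empty means true)
    file? = $(if $(and $(wildcard $1),$(if $(call directory?,$1),,$1)),$1)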

I think rwildcard is probably simpler and more efficient! If my code helps in any way though, feel free to use it.


"Like many others, I too rediscovered the power of pure GNU Make" ... wait a sec, GNU make is not "pure!" I like it but I recognize that it has features that a more portable make would not have.


Perhaps pure was not the right word. By pure GNU Make I meant writing makefiles myself without higher level makefile generation tools.


Honestly, I never quite understood the point of generating the makefiles: if you have a tool that knows where the inputs are, where the outputs should be, and tracks the dependencies changing from build to build (something that "make" famously struggles to do), then surely that tool can also run whatever compilers/linkers/etc. are needed?

I guess you can commit the generated makefile into your repo so that those who want to build your project from sources don't need your custom build tool installed, only make...


Historically, makefile generation is merely a cache of platform introspection, on systems that were orders of magnitude slower than what we have now. I suspect that any approach based on Makefiles alone would have implied higher memory consumption and/or slow rotating-disk I/O.


Actually on my 64-thread Threadripper that cache is more important than ever as the make step parallelizes nicely but the configure step usually is 100% serialized.


Take a look at FreeBSD. Here’s what a typical Makefile looks like:

    # $FreeBSD$

    PACKAGE=runtime
    PROG=   setfacl
    SRCS=   file.c mask.c merge.c remove.c setfacl.c util.c

    .include <bsd.prog.mk>

So, yeah, at least for code in C you can have something maintainable in make(1); it's just that you need to structure your project in a sensible way.


Does it support incremental builds? How does it detect include dependencies? Is it portable?


1. Sure (if you didn’t need incremental builds, why bother with make at all? Just use sh.)

2. It doesn’t; it’s the programmer’s job to append a file name to a single line in a single file.

3. It’s portable in the same way autotools are portable, which means you can make it work with relative ease.


Most builds are divided into two phases. The first phase "do we have libfoo installed, where is doxygen, are tests enabled" is run infrequently. The second phase "recompile everything that needs to be recompiled" is run over and over again. The second phase is more or less a solved problem now given that Ninja exists. The first phase is where a lot of people have different ideas and tastes, so we get cmake, meson, autotools, etc.

You can certainly do everything in one system. In practice this seems to lead to a build process that is much slower than it needs to be; see for instance SCons.


Or see cargo, go build, dune, etc.


These are all tools that specialise in building only a particular kind of project. They are useful, but they aren't really in the same category as general build tools like Ninja and Make.


There are many nuances. How to control parallelism, how to present output, checking freshness, etc. It is much easier to generate Ninja files and let Ninja do all the boring work.


What are the dependencies of Ninja? (I seem to remember it being a Python tool?)


On macOS at least, Ninja is just a 362 kByte native executable without dependencies except dynamically linking with the system-provided libc++ and libSystem DLLs (you're probably thinking of Meson?)


Ah, that might be it! Thanks!


Do you want to hand-roll parallelization in your little custom tool?


Well, cargo has it. So does go build. And dune. And MSBuild. And... basically, every language-specific (and non-specific) build tool out there handles it, so yes? Sure, it's not exactly according to the UNIX philosophy, but judging by other comments in this thread, if one wants a low-level parallelizable build tool, they should use Ninja instead of Make anyhow.



Yes, I can also write a big Makefile to hand-compile my project; I can also compile it using a shell script that explicitly executes the build steps.

…but I don’t, not because it’s impossible to do but because the effort of maintaining those is high.

Come on, are you serious? Hand-crafted makefiles? Have you actually tried to maintain one of those? That redis makefile has an explicitly coded set of object files. Can you imagine trying to maintain that for opencv?

I’m just too damn lazy to waste my time maintaining that kind of stuff; it’s a pointless, thankless task… but hey, I’ll give it to you; there are people out there who prefer to do it that way.

I posit that they spend more time maintaining their build system than they would prefer, but they keep make for ideological reasons, not practical ones.


Are you aware that the first step in building the Linux kernel is literally bootstrapping a custom build system that just happens to generate makefiles (if not just "mostly make compatible" files; I recently went cross-eyed trying to find out whether kbuild tries to run them itself as well)?

The Linux kernel provides a "makefile" as an interface for you, but it's mostly compiled from a custom KBuild system.


AFAIK, the makefile fragment generated by kconfig is nothing more than a set of CONFIG_FOO=y (or =m) statements; everything else is done directly in the makefiles, which have declarations similar to "obj-$(CONFIG_FOO) += foo.o" which become obj-y or obj-m (and so on) depending on the value set by that makefile fragment generated by kconfig. The set of object files to build can then be found in "$(obj-y)" (and so on), which is used as a dependency in a makefile rule to link everything together.

The same kconfig also creates a C header file with #define rules for these same CONFIG_ symbols, which is passed to the compiler through a "-include" argument. It's not "mostly compiled from a custom KBuild system"; IIRC kbuild is the name given to the whole system, including the makefile rules and macros which do the actual compilation.
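
A stripped-down illustration of that pattern (a sketch, not actual kernel code):

    # .config is the fragment kconfig generates, e.g.:
    #   CONFIG_FOO=y
    #   CONFIG_BAR=m
    include .config

    obj-$(CONFIG_FOO) += foo.o   # expands to: obj-y += foo.o (built in)
    obj-$(CONFIG_BAR) += bar.o   # expands to: obj-m += bar.o (module)

    # $(obj-y) now lists every built-in object, ready to be used as the
    # prerequisite list of the rule that links everything together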


I decided to go into the source again, and yes, it's mostly implemented in makefiles, but they are wildly different from the kind of makefile discussed in the article, with a considerable amount of machinery in macros and external C programs (the whole "features" thing), to the point that individual makefiles in subdirs probably wouldn't parse right if not executed through the kbuild wrapper.


You offer plenty of criticism for using and/or generating makefiles, but I can't find any place in the discussion where you suggest an alternative.

So what is your solution for building large projects?


Recovering an ancient project sent me back down the "which build tool should I be using?" rabbit hole. Getting a script to grave-dig 57 inconsistently styled K&R C files (!) really stresses the build cycle, more than writing new code does.

First approximation: No one's happy with any build tool, with good reason. Then I discovered Ninja. It's an assembly language for builds, advertised as a back end for higher level tools, but many people script Ninja themselves.

For me, 15 lines of Ruby code plus some heredoc boilerplate spawned a 138 line build.ninja file. Back in the day, this computer algebra system used to take an hour to build. Make still needs a frustrating number of seconds. Ninja on all my cores takes a fraction of a second.

Ninja is exceptionally fast, and handles parallelism extraordinarily well. It scales; they use it for Google Chrome. Various tools such as CMake can target Ninja. It is unique for its design goals.

Most importantly, looking at Ninja code is not a "Just kill me now!" experience.

https://ninja-build.org/
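
For anyone who hasn't seen one, a hand-written build.ninja for a couple of C files looks roughly like this (an illustrative sketch; file names invented):

    # variables are plain key = value bindings
    cflags = -O2 -Wall

    # a rule is a command template; $in and $out are filled in per build edge
    rule cc
      command = cc $cflags -MMD -MF $out.d -c $in -o $out
      depfile = $out.d
      deps = gcc

    rule link
      command = cc $in -o $out

    # build edges: output, rule, inputs
    build main.o: cc main.c
    build util.o: cc util.c
    build app: link main.o util.o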


That's funny, I wrote basically the same thing here a few hours ago. I "replaced" GNU Make with Python + Ninja and it's way better.

https://news.ycombinator.com/item?id=32303096

And my build is a lot bigger, with ~1000 lines of Python generating ~9000 lines of Ninja. I will probably write a blog post about it at some point, but it supports

    - build variants      
      - dbg, opt, asan, ubsan
      - code coverage with HTML output
      - uftrace instrumentation
      - malloc and tracing
      - clang and gcc (because clang coverage is better, ubsan is different, etc.)
    - test and benchmark binaries


I'd love to see your ninja files, to see how you handled such things.

Rather than complicating my build.ninja files, I handle build variants in my script. Easy enough to output a new variant, probably faster to use, and certainly easier for others to understand.

People are more likely to adopt ninja as we did if they see it first as the easiest way to manage a small project. Why bother with CMake or other tools, if one can script in one's favorite language? I'm sick of working with bad build description languages; ninja is going straight to the bare metal.


Sure, you can check it out here: https://github.com/oilshell/oil

    $ wc -l NINJA* */NINJA*
       64 NINJA_config.py
      ...
      260 cpp/NINJA-steps.sh
      361 cpp/NINJA_subgraph.py
      305 mycpp/NINJA-steps.sh
      605 mycpp/NINJA_subgraph.py
     1615 total
I borrowed the little Python library that Ninja itself uses to bootstrap its Ninja files.

There are a few missing things like some textual code generation, but overall it should make sense.

It's structured a bit like a lightweight Bazel -- each directory has a NINJA_subgraph.py file that specifies what to build. And then the "actions" are shell scripts in NINJA-steps.sh.

At the top of one file it shows the directory structure for variants, which is basically

    _bin/$compiler-$variant/foo_test
and

    _build/obj/$compiler-$variant/foo_test.o

The variants control the compile and link flags: https://github.com/oilshell/oil/blob/master/cpp/NINJA-steps....

Overall this is the best build system I've used :) My pet peeve is having to 'make clean' when changing the variant, and this avoids that, making everything available at once.

I'm sure it's reinventing some of the 120K lines of CMakeLists.txt that come with CMake, but this way I don't have to learn and debug a wonky shell-ish language!

One downside of the pattern is that you probably don't want to invoke shell on Windows (which doesn't affect my project). I find that very useful though, so I'm not sure what would replace it for Windows projects. (I guess batch files, but I don't think they're powerful enough)


+1 for ninja. I first encountered it using Apache TVM, which uses CMake. I was at first using the default make backend, and it was taking a while. Then the docs said to try adding the `-GNinja` flag to CMake to build with ninja. I was blown away by how much faster the compilation was, and now try to use it whenever possible.


One issue is that make will default to only using one job (i.e. one CPU), and you need to pass `-j NUMBER` to make it use more, while Ninja is parallel by default.

For my uses, I've not found `ninja` to be much faster than `make -j8` on an 8-core machine.

Not that the defaults don't matter, of course.
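
That said, if you control the makefile, reasonably recent GNU Make versions honor a -j set from within it; a small sketch:

    # opt into parallelism by default; callers can still override with -j
    MAKEFLAGS += -j$(shell nproc)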


I'm surely also in that range. It took us years to write this project, but 57 C files is a small project. Others find themselves in the range where the build decision-making itself can take seconds, with tools other than ninja. Ninja is pared down for speed; having to declare keywords before I used them made me feel young again.

I had to write these build files over again, and I just couldn't stomach the idea of writing another makefile.


> Then the docs said to try adding the `-GNinja` flag to CMake to build with ninja.

Note that nowadays you can just pass --parallel to cmake and it will do "the right thing" whenever possible; cf. https://cmake.org/cmake/help/latest/manual/cmake.1.html#buil...


Meson + ninja is the sweet spot for me

https://mesonbuild.com/index.html


I frequently see meson (and other tools like cmake) using build directories, which is something I don't like. Is there a reason for this that I am not thinking of (besides being able to "rm -rf" everything instead of doing something like "make clean")?


Other than it being cleaner in many ways (all build artefacts in one place, your base directory structure never getting touched, no chance of accidentally checking in build artefacts into your repo), you can have several different build configurations 'active' in parallel (build.release, build.debug, build.cuda etc.) and then activate the right build by just going to the relevant folder and running make.


> you can have several different build configurations 'active' in parallel (build.release, build.debug, build.cuda etc.) and then activate the right build by just going to the relevant folder and running make.

That is an interesting use-case I hadn't heard of before. But I guess if that is really needed, you could also just have multiple checkouts?


FWIW, you can easily have that (separate build directories) with make(1); it’s just that it’s not the default.
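
A minimal sketch of the idea, assuming GNU Make, a src/ directory of C files, and tab-indented recipe lines:

    BUILDDIR ?= build
    SRCS     := $(wildcard src/*.c)
    OBJS     := $(patsubst src/%.c,$(BUILDDIR)/%.o,$(SRCS))

    $(BUILDDIR)/app: $(OBJS)
            $(CC) $(LDFLAGS) -o $@ $^

    # order-only prerequisite: create the build dir without forcing rebuilds
    $(BUILDDIR)/%.o: src/%.c | $(BUILDDIR)
            $(CC) $(CFLAGS) -c -o $@ $<

    $(BUILDDIR):
            mkdir -p $@

Invoking make BUILDDIR=build.debug CFLAGS=-g and make BUILDDIR=build.release CFLAGS=-O2 then gives you the parallel configurations mentioned above.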


Of course, I use make most of the time, but I'm used to having a .o file right next to my .c file.


Try lndir to create a separate build directory that has symbolic links to the files in the source tree.


Any chance you can share that Ruby script? (It can be just the script if the source isn’t shareable.) I’ve wanted to explore something like this for a while and having a reference would be nice.


https://gist.github.com/Syzygies/fe8cf2096fcb91dd11b5c8cf763...

The source will be shareable once we get it running again. Other than historical curiosity, people are most likely to use it for assistance in converting old work to the successor program Macaulay2.

The script bin/ninja.rb creates two flavors of build.ninja: one that keeps going no matter what, writing error files for each exception, and another for use once everything works. One might be able to craft a single file that does both, but I'll want to ditch these complications once we're done with the conversion, and end users won't want to see them.

It would have helped me to see more examples like this when I was learning ninja.


I recently learned more about Makefiles by creating one to help me distribute a recent hobby project, which uses Lua embedded in C. I wish I had seen this when I was working on it! I learned (stole) from the musl libc configure script and makefile instead.

Makefiles aren't as easy to work with as the alternatives in other language ecosystems, but I found they let me do a lot without extra dependencies. It's easy to shoot yourself in the foot with a configure script, but the power is hard to beat.

Coincidentally the author of this blog post commented on one of my reddit posts and was kind enough to help me work out a number of kinks in the project.

https://jeskin.net/blog/clp/


Also related & interesting:

- https://frippery.org/make/ (inspired this post -- HN: https://news.ycombinator.com/item?id=32300356)

- https://github.com/casey/just (Make with fewer footguns, written in Rust)

[EDIT] A side-note -- I personally use Make with just about every stack I create (front/back/infra/etc). I think it's quite undervalued, but maybe that's because I've never dealt with an industrial-strength terrible Makefile setup that would turn me off to it.


My personal favorite: https://makepp.sourceforge.net

It's make, but works in the way you expect. Plus, with some extensions you can add custom functions to really make your life easier. We use it at $dayjob; at first I didn't like it, but eventually I learned to stop worrying and love it.


Thanks for sharing this! Evidently the HTTPS link is dead -- HTTP only:

http://makepp.sourceforge.net/

Straight to the tutorial for people ready to burn a couple hours figuring out if they should uproot:

http://makepp.sourceforge.net/2.1/makepp_tutorial.html


What is a makefile in 2022?

I use them as cross-platform task runners with commands like "make run-tests" and "make run-dev-server". I use them for Go, Rust and even JavaScript (in place of npm scripts).

"make" is an odd keyword for such tasks but there really isn't another simple cross-platform task runner I can use.


I use "just" for this purpose, and I'm very happy with it:

https://github.com/casey/just


This is great, thank you for the recommendation


Same here. `make test` beats `./gradlew clean test` in one project and `pytest -s ...` in another, etc.


Yep, we use them to run tasks on AWS CodeBuild and Lambda machines.

Make, like vi, has the advantage of just always being there already on whatever machine you need it on.


Me too. I usually have make "up", "services" and "test" targets that run "docker compose up --detach", "docker compose down --volumes" and "rm -rf" temp dirs, and run tests with whatever language I'm using. Saves a spare brain cycle when switching languages. Sometimes there's a "data" target for downloading raw data from the internet, for stuff like GIS projects.

I never work in C or other languages that actually require the dependency management that make supports.


https://taskfile.dev/ has been my go-to for this.


The article itself mentions scalability issues; but beyond this, there are often many complications in Makefile construction which result from idiosyncrasies of libraries, utilities, platform settings, tool versions, etc.

I used to write Makefiles, basically in the vein of this post's suggestions, but several years ago I switched to using CMake and have not had reason to look back. Why would I want to write Makefiles directly? What's the benefit?


I personally use Xmake and try to advertise it every time I get the chance: no DSL, it's just Lua; dead simple yet featureful; and it's Ninja-fast, or at least claims to be (I never bothered to check, it's fast enough for me).

https://xmake.io/#/


Also a convert to xmake. Documentation is a bit hairy, but solvable. The code is easy to read and contribute to.


I've been using this makefile to compile my C programs on Linux and Windows for several years: https://gist.github.com/orthoxerox/5ce97c72b03becfb433d923cf...

As long as the program is just a bunch of C (and header) files with no special treatment, it works great.


Check out babashka tasks. It's a take on Make for Clojure.

https://book.babashka.org/#tasks

Everything you do in there except shelling out will be portable, and if not, you can dispatch using the `fs/windows?` predicate.


If you like Make, but would like some additional functionality (tracing, debugger, profiling, etc.), check out remake: https://remake.readthedocs.io/en/latest/index.html

remake is a friendly fork of GNU Make that keeps up to date with recent releases but adds many niceties.


If you do cross-platform (win, mac, linux), cmake is a good choice.

If you do linux 98% of the time, then make is good enough.

Ninja does parallel builds about as well as 'make -j N' for me.

Meson and Bazel are also good, but I don't have huge projects that would leverage them.

All in all, make is what I use these days: solid, easy to read, gets the job done.


I feel like I'm in an alternate timeline where you include one .cpp file in a unity build and you're done, while other people in their timeline are generating massive systems to do the same thing but worse. How did this become the preferred way? It's bonkers.


Does Unity build for the 65C02S?

In all seriousness, some of us write code targeting several different microcontrollers and many vendors decided to roll their own toolchain instead of just adding a new GCC backend.

Build systems more elaborate than Make don't really help when your build comes down to invoking four different C compilers twelve different ways.


Of course my makefiles are portable, they fit on any kind of usb drive



