I recently decided it was time to get a better understanding of how makefiles work, and after reading a few tutorials, ended up just reading the manual. It's long, but it's very, very well written (a good example of one of the GNU project's biggest strengths), to the point where just starting at the top and reading gives an almost tutorial-like effect. Just read the manual!
FWIW I also read the GNU Make manual, and based some code for automatic deps off a profoundly ugly example it had. Then later people on HN showed me a better/simpler way to do it.
After reading the manual and writing 3 substantial Makefiles from scratch, I still think Make is ugly and, by modern standards, not very useful.
In my mind, the biggest problem is that it offers you virtually no help in writing correct parallel and incremental builds. I care about build speed because I want my collaborators to be productive, and those two properties are how you get fast builds. GNU Make makes it easy to write a makefile that works well for a clean serial build, but has bugs once you add -j or when your repository is in some intermediate state.
> In my mind, the biggest problem is that it offers you virtually no help in writing correct parallel and incremental builds.
That’s interesting, because in my mind, parallel and incremental builds are the main features of Make, the features that it is best at, and all other features are secondary.
It sounds like your problem is with the correctness part. Make gives you no tools to enforce that your build rules are actually correct. Very few build systems provide any help here. What you are looking for is hermeticity. Bazel does this by sandboxing execution of all the build rules, and only the specified rule inputs are included in the sandbox. I recommend Bazel.
Otherwise, it is up to you to get your build rules correct, and switching build systems won’t help in general (although they may help in specific cases).
I find it surprising that you talk about using Make for a clean serial build, because if you want a clean serial build, you might as well use a shell script. Make’s only real purposes are to give you tools for incremental and parallel builds. Nearly any other tool you replace Make with will either have you sacrifice incremental/parallel builds or will give you the same hermeticity problems you would encounter with Make. Replacing Make, the main paths I see are towards improved versions of the same thing (e.g. Ninja, tup), completely redesigned versions of the same thing (Ant, SCons), systems and languages which generate makefiles (e.g. autotools, CMake), and the new wave of build systems which provide hermeticity (Bazel, Buck, Pants, Please). This last group is a very recent addition.
Mind you, Make is old and not especially well-designed, and it has plenty of limitations, but it’s good enough at incremental/parallel builds that it has stuck around for so many decades. Make is good enough at what it provides that replacements like Ant, SCons, Tup, Ninja, etc. don’t seem like much of an improvement.
I used Bazel/Blaze for many years, and even contributed to it a long time ago, and I agree it has many nice properties (although I'm more interested in building open source projects with diverse dependencies, which it isn't great for AFAIK.)
Another potential advantage of a build generator (described in that comment) is having one mode to enforce correctness of the build description, and another mode to be fast. In other words, you could generate a Ninja file with a sandbox like Bazel uses. You could use a tree of symlinks or run in a chroot with a setuid helper.
> Yeah I think we're in agreement, except that Ninja is not a replacement for Make.
I would definitely disagree with this. Make's higher-level logic/graph description features are only incidental; as Ninja proved, you can remove those features and end up with a tool that is equally valuable, more or less. To abuse an analogy, if I'm shopping for notebooks, I don't need to buy a notebook and pen shrink-wrapped together. I already have plenty of pens.
You're right that Bazel isn't good for open-source projects with diverse dependencies, but I think this is a problem that can be solved by developing some more infrastructure for that and writing the appropriate Starlark code (to be called from your WORKSPACE file and create the appropriate repositories somehow). That code just isn't around yet.
All I'm saying is that Make was designed to be hand-written and often is, and Ninja was designed to be generated, and almost always is. See the manual:
It's not actually all that hard to write well-parallelized makefiles, provided you follow a few basic rules:
- Each build step has a unique artifact.
- That artifact is visible to Make as a file in the filesystem.
- That artifact is named $@ in the rule's recipe.
- Every time the recipe is executed, $@ is updated on success.
- If the recipe fails, it must return nonzero to Make.
- All of the dependencies of the artifact are represented in the Makefile.
For example, here is how the format checks are run in my current project for some C code. Its mission: To verify that those source files which are under the aegis of clang-format are correctly formatted. BUILD_DIRS is a list of directories containing source code. CFORMATTER is the name of the formatting program. Not everything is under clang-format control, so FORMATTED_SRCS is used to opt-in to it.
- A formatted source file for each repository source file
- An empty file in the filesystem for each formatted source file that is identical to the repository source file.
- A tree of directories for the above.
Each format check is run exactly once, and only when source files change. If anything fails, then `make` returns nonzero and the build fails. It's also fully parallelized, since there aren't any neck-down points in the dependency graph. Every one of our pre-commit checks is structured this way. Build verification is as parallel as possible for fresh builds. Engineers can resolve and verify their problems quickly and incrementally when they fail.
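For reference, a rough sketch of rules along these lines (not the actual project Makefile; it reuses FORMATTED_SRCS and CFORMATTER from the description above, but the .fmt/ output tree and the recipes are only illustrative, and recipe lines need a literal tab):

    FMT_COPIES := $(addprefix .fmt/,$(FORMATTED_SRCS))
    FMT_STAMPS := $(addsuffix .ok,$(FMT_COPIES))

    .PHONY: format-check
    format-check: $(FMT_STAMPS)

    # One formatted copy per repository source file.
    .fmt/%: %
        @mkdir -p $(dir $@)
        $(CFORMATTER) $< > $@

    # One empty stamp per formatted copy, touched only when the formatted
    # copy is identical to the repository source; otherwise diff returns
    # nonzero, the recipe fails, and $@ is not updated.
    .fmt/%.ok: .fmt/%
        diff -q $* $<
        touch $@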
You forgot the most important and most difficult part: ensuring transitive dependencies really make their way into the makefile without huge manual effort. For C-like languages, just adding an additional #include in a header file will break most makefiles. To solve it you need to generate dependency makefiles with the compiler's -MMD flag and include them from your main makefile, and this is not very obvious.
As make was mainly intended to build C projects, not getting these batteries included is why I think the parent, and many others, consider makefiles needlessly complex.
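The standard fix, sketched out (this is the common GNU Make idiom, not code from the parent's project): have the compiler emit a dependency fragment per object and include whatever fragments exist.

    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)

    # -MMD writes a .d file listing the headers each object actually
    # included; -MP adds dummy targets so a deleted header doesn't break make.
    %.o: %.c
        $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

    # Pull in the generated fragments; the leading '-' keeps a clean build
    # from failing when no .d files exist yet.
    -include $(OBJS:.o=.d)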
That's a nice set of rules, although I don't think they're all easy to follow or verify that 1000 lines of Make is following.
I considered 3 main use cases, and I wrote Makefiles from scratch for all of them. Make works to an extent for each case, but I still have problems.
1. Building mixed Python/C app bundles [1]
2. Building my website [2]. Notably people actually do complain about Jekyll build speed, to the point where they will use a different system like Hugo. So incremental/parallel builds are really useful in this domain!
3. Doing analytics on web log files (e.g. time series from .gz files)
One thing I didn't mention is that they all involve some sort of build parameterization or "metaprogramming". That requirement interacts with the problem of parallel and incremental builds.
For example, for #1, there is logic shared between different bundles. Pattern rules aren't really expressive enough, especially when you have two dimensions. Like (app1, app2, ...) x (debug, release, ASAN, ...)
A pet peeve of mine is having to "make clean" between a debug and a release build, and nearly all usages of Make have that problem, e.g. Python and bash's build systems. You could say they are violating your rules because each artifact doesn't have a unique name on the file system (i.e. debug and release versions of the same object file.)
Likewise, Make isn't exactly flexible about how the blog directory structure is laid out. I hit the multiple outputs problem -- I have Jekyll-style metadata at the front of each post (title, date, tags), so each .md file is split into 2 files. The index.html file depends on all the metadata, but not the data.
All of them have dynamic dependencies too:
1. I generate dependencies using the Python interpreter
2. I add new blog posts without adding Make rules
3. I add new web log files without adding Make rules
Make does handle this to an extent, but there are definitely some latent bugs. I have fixed some of them, but without a good way of testing, I haven't been motivated to fix them all.
I wrote up some more problems in [3], but this is by no means exhaustive. I'm itching to replace all of these makefiles with something that generates Ninja. It's possible I'll hit some unexpected problems, but we'll see.
My usage is maybe a bit out of the ordinary, but I don't see any reason why a single tool shouldn't handle all of these use cases.
> A pet peeve of mine is having to "make clean" between a debug and a release build, and nearly all usages of Make have that problem, e.g. Python and bash's build systems. You could say they are violating your rules because each artifact doesn't have a unique name on the file system (i.e. debug and release versions of the same object file.)
There are at least two ways that this problem can be addressed. One is to support out-of-tree builds, one side directory per configuration. Builds based on the autotools do this by default.
The other is to use a separate build directory per configuration within the build. My current project uses local in-tree directories named .host-release, .host-debug (including asan), .host-tsan, .cross-release, and .cross-debug. All of them are built in parallel with a single invocation of Make, and I use target-scoped variables to control the various toolchain options.
The engineer's incremental work factor to add another build configuration isn't quite constant time, since each top-level target needs to opt into each build configuration that is relevant for that target.
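The skeleton is roughly this (a sketch only -- the directory names follow what I described, but the toolchain and flags here are illustrative):

    CONFIGS := .host-release .host-debug

    all: $(addsuffix /app,$(CONFIGS))

    # Target-scoped (pattern-specific) variables pick the flags per configuration.
    .host-debug/%:   CFLAGS := -O0 -g -fsanitize=address
    .host-release/%: CFLAGS := -O2

    %/app: main.c
        @mkdir -p $(dir $@)
        $(CC) $(CFLAGS) $< -o $@

With that layout, a single `make -j` builds every configuration in parallel, and nothing ever needs a `make clean` when switching between debug and release.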
> I hit the multiple outputs problem
I wouldn't really classify that as a problem in GNU Make, as long as you can specify the rule as a pattern rule.
I hear you on the testing problem. Make certainly behaves as if the Makefile's correctness isn't decidable. Even if you levied the requirement that a Makefile's decidability was predicated on the recipes being well-behaved, I'm not sure that the correctness is decidable.
> offers you virtually no help in writing correct parallel and incremental builds.
The key problem there is that Make has no idea about the semantics of the shell code that appears in the build recipes. It has no idea how two build recipes interact with each other through side effects on objects that are not listed as targets or prerequisites.
I think ClearCase's clearmake (GNU-compatible) actually intercepts the file system calls (because the build happens on a ClearCase mounted VOB). So it is able to infer the real inputs and outputs of a build recipe at run-time. For instance, it would know that "yacc foogrammar.y" produced a "y.tab.h" even if the rules make no mention of this. So in principle it's possible to know that one rule is consuming "y.tab.h" (opens it for reading), that is produced by another rule (that wrote it), without there being any dependency tied to this data flow.
The interception could be done by injecting shared lib wrappers, I suppose. Our friend LD_PRELOAD and all that.
Of course, if we fix the parallel build with proper dependencies, a fixed incremental build also pops out of that.
> GNU Make makes it easy to write a makefile that works well for a clean serial build, but has bugs once you add -j or when your repository is in some intermediate state.
Thanks, I bought your GNU Make book a couple years ago and read pretty much the whole thing! Along with reading the GNU Make manual a few years before that, I made an attempt to "give Make a fair shake".
I liked it, but it's honestly weird to me that a book published in 2015 can improve the state of understanding of a tool with 40 years of heritage :) I also get this "groundhog day" effect after about 10 years of seeing new GNU make tutorials / best practices on Hacker News every couple of months. Everybody who learns Make has to go through this same thing.
-----
I read that you reimplemented GNU Make for Electric Cloud and I was impressed by that :) My project Oil [1] is a similar sort of project. It can run thousands of lines of unmodified shell/bash scripts found "in the wild". I got big distro scripts working last year, and I got thousands of lines of interactive completion scripts working recently, which I need to blog about.
For a while I thought it would be nice to replace GNU make too, although (1) that's a lot of effort, and following Oil's strategy isn't possible since Makefiles can't be statically parsed, and (2) I think it's happening anyway.
Major build systems are now split up into high-level and low-level parts, i.e. autoconf generating Makefiles, CMake generating Makefiles/ninja files. Android used to be 250K lines of pure GNU make (including GMSL), but now it's a high level Blueprint DSL generating Ninja too.
There aren't many pure Make projects anymore, at least in open source code. The exceptions I can think of are embedded ones like buildroot.
I like how Ninja focuses on build execution only, punting logic to a higher level (CMake, gyp, Blueprint). So my pet theory is that you can replace Make with a DSL that generates:
1) A ninja file for fast, parallel, incremental developer builds
2) A shell script for portable builds for distro packagers/end users. This is just a clean serial build, so it can be a shell script rather than a Makefile.
I plan to test this theory by throwing out the modest 700 lines of Make I've written from scratch and replacing it with a Ninja/shell generator :)
I've debugged and read enough Make to be able to identify and fix most problems. But it still feels like whack-a-mole to me. You can fix one problem and introduce another, since there's no real way to test for correctness (and efficiency). Problems can be reintroduced by seemingly innocuous changes.
> 1) A ninja file for fast, parallel, incremental developer builds
>
> 2) A shell script for portable builds for distro packagers/end users. This is just a clean serial build, so it can be a shell script rather than a Makefile.
If the developer isn't regularly using the same build that downstream users are, the build for downstream will be perpetually broken.
It's no different than CMake generating Makefiles or Visual Studio projects, or autoconf generating Makefiles, which is the state of the art in open source.
I'm not saying you write them by hand -- you generate them from the same build description, and the generator can preserve some invariants. It should basically do a topological sort ahead of time rather than at runtime.
The more likely source of breakage is that the user's environment is different, e.g. they don't have a particular library installed. So even if you choose pure GNU make, you still have that source of breakage, and you generally should test for it. I test my shell in a chroot with different versions of libc and without GNU readline. I need that even though I'm using pure GNU make at the moment.
Yes. It is highly recommended that you read that book (buy it to support the FSF). If you want more I wrote a book on GNU Make (https://nostarch.com/gnumake) which takes things further than the GNU Make manual. A large amount of the content of the book came from a sequence of blog posts on GNU Make by me: https://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-art...
Your book is awesome, thanks for writing it! I converted our whole build at $DAYJOB from recursive to non-recursive, and your book helped immensely. It was a lot of work, but in the end it was worth it.
I think the main point nowadays is how to write Makefiles for highly parallel builds. It's surely the thing that gave me the most headaches. In the end such a talk will probably mostly be about how to avoid calling Make recursively.
Another talk I'd like to hear, which would be more meta: does it still make sense to bother with Make at all, aside from maintaining legacy systems? Are there features or paradigms which still make it worthwhile, compared to other popular build systems?
Can you humor a dumb question and tell me how you got the manual in your terminal? I'm on Debian, and when I `man make`, I just get a typical GNU CLI util manpage that's five paragraphs of intro and a line-by-line explanation of each command line option. Same for `info make`.
GNU Make is an awesome task-runner. I use it all the time, for all sorts of things. It's my shell-script replacement, and I write one-off Makefiles frequently. It's got a nice macro-processor, some helpful string manipulation tools, and expresses task dependencies perfectly. The manual is also very well written.
But if you're writing a software package, consider using CMake, Meson, Autotools, or something similar. Unless you're a superhuman, any of them will handle the corner cases better than you can. Especially the cross-compilation corner-cases. It's extra work, but the people who need to build and package your software down the road will thank you. Your artisan Makefile might be an elegant work of art, but what does it do when somebody wants to build from a different sysroot? Does it handle DESTDIR correctly?
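For instance, the DESTDIR part boils down to following the standard GNU directory-variable convention in your install rule (the program name here is just illustrative):

    prefix ?= /usr/local
    bindir ?= $(prefix)/bin

    install: myprog
        install -d $(DESTDIR)$(bindir)
        install -m 0755 myprog $(DESTDIR)$(bindir)/myprog

Then `make DESTDIR=/tmp/stage install` stages the files without touching the live system, which is exactly what packagers need.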
I haven't yet found a reason not to just use Make when starting from green fields, including on some very big projects. In fact, I have often been annoyed at some projects picking something like CMake for no reason beyond it being "more advanced," but which ends up just being an extra dependency I have to fetch and install.
If I were to pick one of the above for building large projects on linux variants, which one would you pick?
For me, I mostly work in embedded Linux - that's Linux for routers, custom hardware, sometimes a Docker image, etc. That means lots of packaging work. Sometimes it means build machines that can't even run the compiled binary. I compile libraries for platforms that the original author would never have imagined.
In that space, there are lots and lots of details that need to be "just so". Lots of specifics about the cflags and linker flags. Requirements on where 'make install' puts files. Sometimes there are restrictions on where an output binary searches for config files (spoiler: use sysconfdir).
Out of all the libraries/programs that I've had to package, autotoolized projects are the easiest to handle by far, followed by CMake projects.
Hand-written Makefiles are usually painful to a packager in one way or another. The most common ones being hard-coded CC/CXX/CFLAGS/CXXFLAGS, non-standard target names, and non-standard/incomplete usage of directory variables.
I know that Autotools is crusty in a lot of ways, and that the learning curve is steep. It's a nasty mix of Perl and m4, and runs lots of checks that don't matter at all. It's also what I use for all of my own libraries and programs, because the result is worth the pain (for me).
So for any program that I expect more than two humans to ever compile, I'd recommend Autotools if you're a perfectionist and CMake if you're not. Within Autotools, I'd recommend Automake and Autoconf if the language is C/C++, and just Autoconf/Make otherwise. (But be sure to follow the Makefile conventions: https://www.gnu.org/software/make/manual/html_node/Makefile-...).
I wouldn't recommend a hand-rolled Makefile unless you're the only one using it, the build is really weird for some reason, or unless it's a wrapper around some other build tool.
I do this too. I have been trying to find examples of people doing something similar on the web but had no luck. I've replaced so many 5-line shell scripts in my user /bin by just having a single makefile in the root of my home directory and having aliases to the various commands defined in the makefile. I even have a rule that shows a rofi menu that allows me to run the rules easily, although it's a bit hackish at the moment. Maybe the way to go would be pulling out the parts of Make that are good for a task runner into a separate application and adding some nice extensions like .ONESHELL being implicit.
I’m currently managing data reduction for a bunch of space mission simulations using make. A simulation dumps a bunch of files as python pickles, which are reduced into csv, and then made into plots, and then into webpages. Each phase of this is in the makefile. None of the transformation rules are standard compilation macros like you expect to find in a makefile.
The setup makes it easy to update all the summary webpages when more sims are run. Just “make html”. It re-accumulates all the summary metrics, redoes graphics, remakes webpages.
This is the kind of processing pipeline that is often done in shell or with driver scripts, but then you always have to remake everything. I like the setup a lot.
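The whole thing is just pattern rules chained together; schematically it looks something like this (the file layout and helper scripts here are made up for illustration, not my actual setup):

    PICKLES := $(wildcard sims/*.pkl)
    CSVS    := $(PICKLES:sims/%.pkl=reduced/%.csv)
    PLOTS   := $(CSVS:reduced/%.csv=plots/%.png)

    html: $(PLOTS)
        python make_pages.py $(PLOTS)

    reduced/%.csv: sims/%.pkl
        @mkdir -p reduced
        python reduce.py $< > $@

    plots/%.png: reduced/%.csv
        @mkdir -p plots
        python plot.py $< $@

Make only redoes the reductions and plots whose inputs actually changed, and -j parallelizes across simulations for free.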
I do the exact same thing with oddball compilation tasks like this, such as compiling a Latex document or batch-processing pictures.
I also use it to run code formatters / static analysis tools while programming, sometimes with a secondary Makefile that has a name like 'maintainer.mk'.
I really wish blog posts like this would at least mention that they are going to use GNU Make extensions. There's some good stuff in here, but just calling it "make" and leaving it at that is misleading. It's not going to work on my minimal (non-GNU) Linux boxes that mostly run NetBSD make (bmake in many distros) or busybox-style tools. I know a very small minority of people do something like that (or maybe not -- what does Alpine run in containers?), but it seems worth at least a quick footnote to say that most things in here won't be portable.
/ # make --version
GNU Make 4.2.1
Built for x86_64-alpine-linux-musl
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Good to know; thanks! I'm kind of surprised; given how tiny Alpine is supposed to be and how it's musl based, I assumed they would have taken that philosophy into the userspace tools and used busybox or something. Not that I really know anything about Alpine except that it's nice when you just want something small.
It's pretty common knowledge that there are differences between the GNU package set and other Linux package sets.
I've been exposed to this multiple times: from downloading GNU packages using Homebrew on OSX, downloading different packages on Android, the variety of options available to ArchLinux users, etc.
Anyone with basic Linux knowledge should be well aware of this, if only from following tutorials and finding that basic command flags don't work on common terminal programs.
I don't agree that "basic Linux knowledge" requires knowing which commands have different versions. You can be intimately familiar with the Linux kernel and never step outside of the GNU environment.
I've never used OSX, and have no desire to compile things on my phone.
And I know that there are non-GNU versions of some tools, like grep, but whenever I've attempted to use them, I find them lacking some key feature that I always use. I can't be bothered to learn the entire set of GNU-improved tools, since I'm always just going to be using the GNU version anyway.
> Using the assignment operator = will instead override CC and LDD values from the environment; it means that we choose the default compiler and it cannot be changed without editing the Makefile.
People can use `make CC=clang` to override assignments. That's a pretty common use.
Came here to say this. The "strength" of user specified variables depends on where they come from: the environment is weaker than those specified as command line arguments - which can lead to confusion.
There are so many gotchas with makefiles, yet we still use them regularly. I'm considering moving on to something like Taskfile [1], though I haven't tried it yet.
I decided a long time ago that it's not worth having to install and learn a different tool for every ecosystem just to avoid make's quirks. Yes, make isn't perfect (it can even be quite annoying at times), but neither is anything else, so it's worth it to me (from a personal and business perspective) to just have my developers use one thing for all projects, especially since it's already installed on most of their systems (or easily installable if not -- there are packages for everything, which may not be true of all the less popular alternatives).
Not that I couldn't ever see making an exception to that, I'm sure there are some things out there that don't work with the make model and really do need their own build system, but in general make is "good enough" and it's not worth using anything else.
I pretty much went down the same path, though with all of Make's quirks, my generic Makefiles end up being nothing more than an index of commands that instead run bash scripts. But then the problem is shifted into bash's error prone syntax. So then for non-trivial logic, I end up having the bash script call a Python script, especially if dependencies are involved.
It would be nice to have something like EditorConfig for project commands, i.e., my Sublime or your VS Code could parse one common Makefile (or similar) and map its commands to Build, Run, Debug, etc in the UI.
In some sense, I think a different real solution is to Dockerize everything and then just docker build / docker run. Of course this can get complex for some projects, especially where the local setup is different than the production setup, e.g., Docker Compose vs Kubernetes for instance.
I've been using invoke [1] which has allowed me to give my projects nice UIs while remaining in the primary language. If I were working on Ruby I would stick with Rake, even though I think invoke is better.
I really like Invoke / Fabric for Python projects, though it feels weird to use in non-Python projects when it comes to dependencies. [Now I have to explain to non-Python people things like virtual envs, pipenv vs pip vs pipsi, pyenv, etc.]
Just curious if you install Invoke individually for each project or keep one global install?
I suppose I could always git exclude the tasks.py Invoke files.
I usually keep Invoke globally installed that way I can use it for setting up and interacting with pipenvs externally. Most of my calls beside the initial setup make use of 'pipenv run <command>' that way I never actually have to navigate inside of the virtualenv for most cases.
I'll second invoke. I use it for all of my python projects now and I love working with it. It makes it very clean to manage more complex tasks that have a lot of conditionals involved.
A particularly tricky task with GNU make is automatically adding target dependencies on header files and handling updates to them (gcc's -M and -MMD switches). It would be great if the article explained those best practices, too.
I use the patterns in @jgrahamc's article "Escaping: A Walk on the Wild Side" pretty often. Especially the \n definition, which works great to put on the end of a $(foreach) loop inside of a recipe.
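Roughly the idiom I mean, from memory (my approximation rather than a quote from the article): define a variable whose value is a single newline, then tack it onto each $(foreach) iteration so the generated commands land on separate lines for the shell instead of being jammed together.

    define newline


    endef

    # Each iteration ends with $(newline), so the shell sees one command per line.
    list-files:
        $(foreach f,foo.c bar.c baz.c,ls -l $(f)$(newline))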
@jgrahamc: Thanks for all you've written on Make over the years.
Here's all you need to track header file dependencies.
Basically, when compiling a .cpp file to generate a .o file, a corresponding .deps file is also generated.
And at make startup, we include all the .deps files we find in the bin (output) directory, if there's one.
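Roughly like this (a reconstruction from the description above -- the bin/ layout and .deps suffix follow it, the rest is illustrative):

    SRCS := $(wildcard src/*.cpp)
    OBJS := $(SRCS:src/%.cpp=bin/%.o)

    # Compiling a .cpp also writes a matching .deps file via -MMD/-MF.
    bin/%.o: src/%.cpp
        @mkdir -p bin
        $(CXX) $(CXXFLAGS) -MMD -MF bin/$*.deps -c $< -o $@

    # At make startup, include all the .deps files found in the output
    # directory, if there are any.
    -include $(wildcard bin/*.deps)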
I've been using makefiles for decades, and finally decided to try something more modern.
It turns out that CMake has come a LOOOONG way, and is almost nice to use, aside from the fact that it's horribly complicated. But with some basic templates to work from, it's actually pretty easy to get a project started.
Every time I start a little thing in C or C++ and I write a corresponding Makefile I get annoyed by it and write the thing in Rust. Cargo is really nice. I've considered using a Cargo.toml and build.rs file just to build C and C++ files before.
Make is a very underappreciated tool outside C/C++ circles.
I'm currently using it in an environment which uses Concourse as the CI tooling - Concourse takes care of version and dependency management at its level, and Make fills in the gaps for fetching the latest versions for local development environments. Because Make won't re-download files for already-made targets, local build cycles are fast after initial setup. Concourse can then re-use these Makefiles by symlinking the resources it manages before calling make, and Make won't try to re-download the resources. If you're not concerned with portability, you can even use make to fetch (and clean) Docker images since they live in predictable directories on Linux (but not OS X).
More people ought to approach the tool with an open mind.
But TFA says that the author is using := to prevent a recursive definition. It's unclear why the author doesn't just use += without doing = or := first (unless they want the recursive-to-simple conversion I talk about above). My guess is the author slightly misunderstood the handling of the environment inside a Makefile.
My guess is that the left hand side is a make variable while the right hand side is an environment variable. However, the make manual says “Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value.” so maybe it’s not necessary?...
This reminds me why I stopped visiting HN. People down vote you for trying to help! Screw whoever asshole that downvoted me even though I prefixed my answer with “My GUESS is...”.
I have never worked much with compiled languages but I recently started creating gnu makefiles in all my projects. I read someone's article, maybe from here, that gave me the idea.
I have been putting everything in it. From the terminal commands needed to spin up the cloud infrastructure to all the docker build commands and of course all the build tasks like gulp and composer.
Even my docker compose commands have been replaced with "make up" and "make down". It is really handy and has really streamlined a lot of processes when it comes to local dev environment issues with different developers.
I basically just use it as an easy way to organize shell scripts.
The next step is to convert my ci/cd pipeline to just use specific makefile commands from the same makefile so that everything is more portable and I can easily execute the exact same process locally when desired.
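As a concrete (if simplified) picture of what that kind of Makefile ends up looking like -- the image names and asset commands here are placeholders:

    .PHONY: up down build assets

    up:
        docker-compose up -d

    down:
        docker-compose down

    build:
        docker build -t myapp .

    assets:
        gulp build
        composer install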
Best practice is to keep it simple. For example, if the Makefile is written in such a way that lazy set or immediate set doesn't matter, then it is not complex at all. But these facilities are there to implement logic. Complex logic is complex, especially when it is entangled. For example, we often use build tools to generate Makefiles. Some of the logic is implemented in the build tool; some of the logic is implemented in the Makefile -- of course the result is complex. There is no rule of thumb on how to organize complex logic; that is what programmers are paid (so much) to do. Respect your job and treat complex logic carefully, then you won't make too much of a mess -- at least do not complain about it after making one.
If you're using GNU make, as the author appears to be doing, makefiles should be named GNUmakefile instead of Makefile so as to not confuse the user or the tools by conflating GNU make with BSD make or other make dialects.
For what you're describing I think the proper term as used by the GNU Manual is deferred. In other languages the phrase lazy evaluation is common.
The one thing I've always had trouble remembering with GNU Make is precedence. Variables defined as command-line arguments (make FOO=bar) override assignments, but environment variables (FOO=bar make) only override ?= assignments. != is both immediate (the shell expansion) and deferred (the result of the shell expansion).
I feel like there's some inconsistency when variables are inherited across recursive invocations, but in a simple test with make 3.81 (macOS) I couldn't find any. Maybe my suspicion is just a byproduct of my weird inability to remember the precedence rules.
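A tiny test makefile makes the precedence easier to remember than the prose does:

    FOO ?= default    # the environment or the command line can override this
    BAR  = default    # only the command line overrides this

    show:
        @echo FOO=$(FOO) BAR=$(BAR)

    # FOO=env BAR=env make show   ->  FOO=env BAR=default
    # make show FOO=cli BAR=cli   ->  FOO=cli BAR=cli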
I normally try to stick to portable make these days, anyhow. Between the ancient version of GNU Make on macOS, OpenBSD Make, and NetBSD Make, you're mostly stuck with POSIX syntax. If you add Solaris' default make, and especially if you add AIX' make, you can't rely on any extensions at all.
For me sticking with portable make is easier than installing and maintaining GNU Make on every flavor of operating system I test on, and definitely easier than installing autotools or installing and maintaining the most recent version of CMake.
OK, so I am as much of a hard-core Pythonista as you will find. But I don't think it makes any sense to build C projects (or many other languages) with Python instead of makefiles. C programmers know make, and the makefile idiom has been evolving for 40+ years. A seasoned C programmer is going to look at an idiomatic makefile and find it much more readable than some rando's one-off Python script to control a build. And getting build dependencies resolved correctly, so that you only rebuild what is needed, is not trivial.
What I truly hate is all the IDE's that think they can do a better job than make. This is the curse of the embedded world. Building an embedded project requires calling particular tools, with peculiar flags, and I hate with a passion IDE's that obscure all of that in some crappy XML file that is git-unfriendly. A well-structured makefile is your friend in many ways.
make is a powertool. Learn it and your life will be better.
My experiences with Makefiles are different. I can understand mine up to 3 months after I write them. And if I read someone else's Makefile, it's a binary blob, full of hacks, optimizations, workarounds, and compatibility quirks between GNU make and BSD make, with sometimes different Makefiles for FreeBSD, Linux, OpenBSD, and DOS, while Visual Studio uses its own project file in the win32 directory. It's a mess.
As a C developer I have to say there is no such thing as an idiomatic makefile -- configuring header file dependencies with -MMD and auto-rebuilding targets affected by a FLAGS change already makes your Makefile look like magic. As the project grows, eventually you will need some kind of flexibility that a makefile cannot provide well. In that case, explicit Python scripts become more useful than a bunch of possibly implicit makefile rules.
Despite the downvotes, it is reasonable to write the project build script/workflow in an easy-to-read, cross-platform language. Makefiles often devolve into shell scripts or use of autotools in very slow and not-cross-platform ways. This becomes more spaghetti as you build dependencies between/across projects.
Lately with projects that do several things on build, I include a build.go and ask users to just `go run build.go [command]`. I get many features built in, it's quite maintainable and cross platform, etc. Even if it calls out to other makefiles or build scripts, at least you can centralize it and do different things per OS or whatever.
I bet a Makefile is far more understandable and needs far fewer lines than Python for this purpose.
Its declarative syntax and "rules" are just suitable for task runners.
Makefiles, Best Practices: Don't write Makefiles, use a higher level language to describe your goal and some tool to execute it (either directly or through generating a Ninja file).
The Make language is the assembly language of build systems. Do you really enjoy writing assembly all the time?
What's simpler than "add_executable(foo bar.cpp)"?
It's concise, cross-platform and hides all the boilerplate for you.
Sure, not everything in CMake is super intuitive, but I've led workshops where people got the hang of it pretty quickly for simple to advanced cases (including multiple targets and some scripting). Keep it simple; most of the time, you don't need the advanced features at all!
Can't say the same for Make; apparently people still need to write about it today! And most Makefiles are just plain wrong and won't produce reproducible and reliable builds. That's a big no in my book for a build system!
It's here I look to KISS to guide me. A Makefile is simple and you can expect it to be on most systems, so it's a good place to start stringing things together. It helps you start building a CLI UI with relative ease, once you know you should mark all ephemeral targets as .PHONY.
As a project grows and gains users or requires more system support, the growth of Make usage to declare interdependence should always be discussed. At some point the team will need to start using a different tool, perhaps isolating Makefile usage to bootstrapping the bigger tool.
The reason I reach for Make is I know it, and it's rather intuitive for someone to glance at my Makefiles and get an overview of where to look next.
It is rather intuitive for someone who knows Make. Even then, which version of Make are you talking about? GNUmake? WHICH VERSION again? 3.82? 4.0? Some features in 4.0 aren't backward compatible, of course, so you have to be precise about exactly what you are targeting.
If you are using anything advanced, then it will become unreadable for beginners too. Try to explain delayed and immediate variable interpolation to beginners, that their "VAR = foo" doesn't always work the way they think it does.
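The classic beginner surprise, for anyone following along (a toy example, not taken from either build system being discussed here):

    A  = $(B)    # deferred: $(B) is expanded when A is used
    C := $(B)    # immediate: $(B) is expanded right now, while B is still empty
    B  = hello

    demo:
        @echo A=$(A)    # prints A=hello
        @echo C=$(C)    # prints C= (empty)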
On the other hand, yes, the CMake sample above is self explanatory, works on all platforms (more than Make itself) and all compilers without having you write low-level compiler invocations, and supports IDEs for easier debugging (sure, you can also tell your users they have to learn how to use GDB by hand, just... good luck). It will automatically rebuild the right files when their dependencies or command line change; you don't have to code that by hand for each compiler (and probably have many subtle bugs in your implementation too).
Make has a place for simple things, not for build systems, we have way better ways to do them now, even if you refuse to use them.
I don’t think I ever saw (GNU)make version problems. Most Makefiles I have seen in the wild were using pretty basic features.
On the other hand, trying to get Kitware’s stuff to compile was a real horror. I ended up adding Docker to the project just to get the right CMake version. And debugging the buildsystem was hell. I hope I never have to touch CMake again in my life.
I'd like to understand where in my comment I stated that I "refused" to use better tools. I even stated that it's important for the team to discuss tooling as a project gains users.
Learning new tools is something I enjoy doing, but as my career continues my focus has changed. I haven't once walked into a project where I've felt the need to re-tool the build system because it wasn't living up to the task. And for your information, my company has a monorepo and we build using Pants, which I'm starting to grow quite fond of.
I may have some bias towards learning new tools, likely why I like Go so much. All the tools (now that we have modules and MVS), none of the hassle.
That has nothing to do with the compilation of your program.
This is a specific option to use in a specific configuration and specific compilers.
So if you want a configuration with some flags like that, then you can have a separate file for it. Each compiler is different after all, some will need different options.
"GNUmake is required, but you are running FreeBSD".
Not saying it's not available there, not all platforms have all software available right away or up to date.
Even then, upgrading CMake is the easiest thing ever: "pip install cmake". Or fetch it from the Github releases page quickly, it's always available as static binaries for all major platforms.
I have yet to find a build system I prefer over Make. I use Maven at work and use a Makefile to run mvn. It lets me easily run a wide variety of commands locally as part of my build.
You joke, but I've personally never seen a hand-made Make build system, in any company I've worked at, that could handle "fuzzing" by deleting or modifying random files in the build output and still produce the expected result.
They all required a "clean" to get back in order.
For those who maintain a hand-made Make-based build system of reasonable complexity, I challenge you to "fuzz" it before each build. It will not be a comfortable experience.
On the other hand, I've seen many generated makefiles that behaved correctly that are totally unreadable.