Makefiles – Best Practices (danyspin97.org)
315 points by lelf on Feb 1, 2019 | 126 comments



I recently decided it was time to get a better understanding of how makefiles work, and after reading a few tutorials, ended up just reading the manual. It's long, but it's very, very well written (a good example of one of the GNU project's biggest strengths), to the point where just starting at the top and reading gives an almost tutorial-like effect. Just read the manual!


FWIW I also read the GNU Make manual, and based some code for automatic deps off a profoundly ugly example it had. Then later people on HN showed me a better/simpler way to do it.

https://news.ycombinator.com/item?id=15060149

https://www.gnu.org/software/make/manual/html_node/Automatic...

After reading the manual and writing 3 substantial Makefiles from scratch, I still think Make is ugly and, by modern standards, not very useful.

In my mind, the biggest problem is that it offers you virtually no help in writing correct parallel and incremental builds. I care about build speed because I want my collaborators to be productive, and those two properties are how you get fast builds. GNU Make makes it easy to write a makefile that works well for a clean serial build, but has bugs once you add -j or when your repository is in some intermediate state.


> In my mind, the biggest problem is that it offers you virtually no help in writing correct parallel and incremental builds.

That’s interesting, because in my mind, parallel and incremental builds are the main features of Make, the features that it is best at, and all other features are secondary.

It sounds like your problem is with the correctness part. Make gives you no tools to enforce that your build rules are actually correct. Very few build systems provide any help here. What you are looking for is hermeticity. Bazel does this by sandboxing execution of all the build rules, and only the specified rule inputs are included in the sandbox. I recommend Bazel.

Otherwise, it is up to you to get your build rules correct, and switching build systems won’t help in general (although they may help in specific cases).

I find it surprising that you talk about using Make for a clean serial build, because if you want a clean serial build, you might as well use a shell script. Make’s only real purposes are to give you tools for incremental and parallel builds. Nearly any other tool you replace Make with will either have you sacrifice incremental/parallel builds or will give you the same hermeticity problems you would encounter with Make. Replacing Make, the main paths I see are towards improved versions of the same thing (e.g. Ninja, tup), completely redesigned versions of the same thing (Ant, SCons), systems and languages which generate makefiles (e.g. autotools, CMake), and the new wave of build systems which provide hermeticity (Bazel, Buck, Pants, Please). This last group is a very recent addition.

Mind you, Make is old and not especially well-designed, and it has plenty of limitations, but it’s good enough at incremental/parallel builds that it has stuck around for so many decades. Make is good enough at what it provides that replacements like Ant, SCons, Tup, Ninja, etc. don’t seem like much of an improvement.


Yeah I think we're in agreement, except that Ninja is not a replacement for Make.

Make is both a high-level and a low-level build tool (i.e. logic/graph description vs. execution)

Ninja is only a low-level build tool -- it focuses on execution only, punting logic to a higher level, which I like.

See my comments here:

https://news.ycombinator.com/item?id=19057836

I used Bazel/Blaze for many years, and even contributed to it a long time ago, and I agree it has many nice properties (although I'm more interested in building open source projects with diverse dependencies, which it isn't great for AFAIK.)

Another potential advantage of a build generator (described in that comment) is having one mode to enforce correctness of the build description, and another mode to be fast. In other words, you could generate a Ninja file with a sandbox like Bazel uses. You could use a tree of symlinks or run in a chroot with a setuid helper.


> Yeah I think we're in agreement, except that Ninja is not a replacement for Make.

I would definitely disagree with this. Make's higher-level logic/graph description features are only incidental; as Ninja proved, you can remove those features and end up with a tool that is equally valuable, more or less. To abuse an analogy, if I'm shopping for notebooks, I don't need to buy a notebook and pen shrink-wrapped together. I already have plenty of pens.

You're right that Bazel isn't good for open-source projects with diverse dependencies, but I think this is a problem that can be solved by developing some more infrastructure for that and writing the appropriate Starlark code (to be called from your WORKSPACE file and create the appropriate repositories somehow). That code just isn't around yet.


All I'm saying is that Make was designed to be hand-written and often is, and Ninja was designed to be generated, and almost always is. See the manual:

https://ninja-build.org/manual.html#_philosophical_overview

Concretely, if you look at how Android or buildroot used GNU make, you can't do that with Ninja alone -- you need another tool.

GNU make is Turing complete, and Ninja isn't. IMO the Ninja design is better (which isn't surprising since it's not an accretion over decades).


I'd just like to point out that Ant, the only alternative to make you list that I've used extensively, is essentially a scripting system.


As I recall, the guy who developed Make wrote it in a weekend.


And never expected it to be so successful, and realized that he couldn't fix some of its problems because people were already relying on it.


That may be true, but GNU Make wasn't built in a weekend.

Just like a crappy Lisp interpreter can be built in a weekend, but an ANSI Common Lisp or Racket won't be built in a weekend.


It's not actually all that hard to make well-parallelized makefiles, provided you follow a few basic rules.

- Each build step has a unique artifact.

- That artifact is visible to Make as a file in the filesystem.

- That artifact is named $@ in the rule's recipe.

- Every time the recipe is executed, $@ is updated on success.

- If the recipe fails, it must return nonzero to Make.

- All of the dependencies of the artifact are represented in the Makefile.

For example, here is how the format checks are run in my current project for some C code. Its mission: To verify that those source files which are under the aegis of clang-format are correctly formatted. BUILD_DIRS is a list of directories containing source code. CFORMATTER is the name of the formatting program. Not everything is under clang-format control, so FORMATTED_SRCS is used to opt-in to it.

    BUILD_DIRS_FORMAT = $(addprefix .format-check/,$(BUILD_DIRS))

    $(BUILD_DIRS_FORMAT):
    	mkdir -p $@

    # There ought to be a better way to control the suffix without
    # becoming a match-anything rule...
    .format-check/%.c: %.c | $(BUILD_DIRS_FORMAT)
    	$(CFORMATTER) $< -style=file > $@

    .format-check/%.h: %.h | $(BUILD_DIRS_FORMAT)
    	$(CFORMATTER) $< -style=file > $@

    # Record the fact that each format check passed by touching a
    # uniquely-named file. Note the call to `false` on error, since
    # `echo` always succeeds.
    .format-check/%.diffed: .format-check/%
    	@(diff -u --color=always -- $* $< && touch $@) || \
    		(echo Formatting errors exist in $* ; false)

    check-formatting: $(addsuffix .diffed,$(addprefix .format-check/,$(FORMATTED_SRCS)))

Its artifacts are:

- A formatted source file for each repository source file

- An empty file in the filesystem for each formatted source file that is identical to the repository source file.

- A tree of directories for the above.

Each format check is run exactly once, and only when source files change. If anything fails, then `make` returns nonzero and the build fails. It's also fully parallelized, since there aren't any neck-down points in the dependency graph. Every one of our pre-commit checks is structured this way. Build verification is as parallel as possible for fresh builds. Engineers can resolve and verify their problems quickly and incrementally when they fail.


You forgot the most important and most difficult part: ensuring transitive dependencies really make their way into the Makefile without huge manual effort. For C-like languages, just adding an additional #include in a header file will break most Makefiles. To solve it, you need to generate dependency fragments with the -MMD flag and include them from your main Makefile, and this is not very obvious.

As Make was mainly intended to build C projects, not getting these batteries included is why I think the parent, and many others, consider Makefiles needlessly complex.
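
A minimal sketch of that pattern, assuming a GCC/Clang-style compiler and an OBJS variable listing the object files:

    # Emit a .d file next to each object as a side effect of compiling;
    # -MP adds phony targets for each header so deleting a header
    # doesn't break the build.
    DEPFLAGS = -MMD -MP

    %.o: %.c
    	$(CC) $(CFLAGS) $(DEPFLAGS) -c $< -o $@

    # Pull in the generated dependency fragments; the leading '-' keeps
    # a clean build from failing when they don't exist yet.
    -include $(OBJS:.o=.d)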


That's a nice set of rules, although I don't think they're all easy to follow, or easy to verify that 1000 lines of Make is following.

I considered 3 main use cases, and I wrote Makefiles from scratch for all of them. Make works to an extent for each case, but I still have problems.

1. Building mixed Python/C app bundles [1]

2. Building my website [2]. Notably people actually do complain about Jekyll build speed, to the point where they will use a different system like Hugo. So incremental/parallel builds are really useful in this domain!

3. Doing analytics on web log files (e.g. time series from .gz files)

One thing I didn't mention is that they all involve some sort of build parameterization or "metaprogramming". That requirement interacts with the problem of parallel and incremental builds.

For example, for #1, there is logic shared between different bundles. Pattern rules aren't really expressive enough, especially when you have two dimensions. Like (app1, app2, ...) x (debug, release, ASAN, ...)
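
(The usual GNU Make workaround for the two-dimensional case is $(foreach)/$(eval) metaprogramming; a hedged sketch, with APPS, MODES, and the per-app *_OBJS variables as hypothetical names:)

    # Pattern rules can't span (app) x (mode), but $(foreach)/$(eval)
    # can stamp out one link rule per combination.
    APPS  := app1 app2
    MODES := debug release

    define LINK_RULE
    out/$(2)/$(1): $$(addprefix out/$(2)/,$$($(1)_OBJS))
    	$$(CC) $$(LDFLAGS_$(2)) -o $$@ $$^
    endef
    # (the recipe line inside the define must be tab-indented)

    $(foreach app,$(APPS),$(foreach mode,$(MODES),\
        $(eval $(call LINK_RULE,$(app),$(mode)))))

It works, but it's exactly the kind of opaque metaprogramming being complained about here.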

A pet peeve of mine is having to "make clean" between a debug and a release build, and nearly all usages of Make have that problem, e.g. Python and bash's build systems. You could say they are violating your rules because each artifact doesn't have a unique name on the file system (i.e. debug and release versions of the same object file.)

Likewise, Make isn't exactly flexible about how the blog directory structure is laid out. I hit the multiple outputs problem -- I have Jekyll-style metadata at the front of each post (title, date, tags), so each .md file is split into 2 files. The index.html file depends on all the metadata, but not the data.

All of them have dynamic dependencies too:

1. I generate dependencies using the Python interpreter

2. I add new blog posts without adding Make rules

3. I add new web log files without adding Make rules

Make does handle this to an extent, but there are definitely some latent bugs. I have fixed some of them, but without a good way of testing, I haven't been motivated to fix them all.

I wrote up some more problems in [3], but this is by no means exhaustive. I'm itching to replace all of these makefiles with something that generates Ninja. It's possible I'll hit some unexpected problems, but we'll see.

My usage is maybe a bit out of the ordinary, but I don't see any reason why a single tool shouldn't handle all of these use cases.

[1] Rewriting Python's Build System From Scratch http://www.oilshell.org/blog/2017/05/05.html

[2] http://www.oilshell.org/site.html

[3] Build System Observations http://www.oilshell.org/blog/2017/05/31.html


> A pet peeve of mine is having to "make clean" between a debug and a release build, and nearly all usages of Make have that problem, e.g. Python and bash's build systems. You could say they are violating your rules because each artifact doesn't have a unique name on the file system (i.e. debug and release versions of the same object file.)

There are at least two ways that this problem can be addressed. One is to support out-of-tree builds, one side directory per configuration. Builds based on the autotools do this by default.

The other is to use a separate build directory per configuration within the build. My current project uses local in-tree directories named .host-release, .host-debug (including asan), .host-tsan, .cross-release, and .cross-debug. All of them are built in parallel with a single invocation of Make, and I use target-scoped variables to control the various toolchain options.

The engineer's incremental work factor to add another build configuration isn't quite constant time, since each top-level target needs to opt into each build configuration that is relevant for that target.
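
A minimal sketch of the second approach using pattern-specific variables (directory and flag names illustrative):

    # Each configuration gets its own object directory; flags are
    # scoped to the targets underneath it.
    .host-debug/%.o: CFLAGS += -O0 -g
    .host-debug/%.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@

    .host-release/%.o: CFLAGS += -O2
    .host-release/%.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@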

> I hit the multiple outputs problem

I wouldn't really classify that as a problem in GNU Make, as long as you can specify the rule as a pattern rule.

I hear you on the testing problem. Make certainly behaves as if the Makefile's correctness isn't decidable. Even if you levied the requirement that a Makefile's decidability was predicated on the recipes being well-behaved, I'm not sure that the correctness is decidable.


> offers you virtually no help in writing correct parallel and incremental builds.

The key problem there is that Make has no idea about the semantics of the shell code that appears in the build recipes. It has no idea how two build recipes interact with each other through side effects on objects that are not listed as targets or prerequisites.

I think ClearCase's clearmake (GNU-compatible) actually intercepts the file system calls (because the build happens on a ClearCase mounted VOB). So it is able to infer the real inputs and outputs of a build recipe at run-time. For instance, it would know that "yacc foogrammar.y" produced a "y.tab.h" even if the rules make no mention of this. So in principle it's possible to know that one rule is consuming "y.tab.h" (opens it for reading), that is produced by another rule (that wrote it), without there being any dependency tied to this data flow.

The interception could be done by injecting shared lib wrappers, I suppose. Our friend LD_PRELOAD and all that.

Of course, if we fix the parallel build with proper dependencies, a fixed incremental build also pops out of that.
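
For a rough manual approximation of that interception, ptrace-based tracing already shows the observed reads and writes (this is essentially what fabricate.py, linked below, builds on):

    # Observe which files a recipe actually opens (Linux):
    strace -f -e trace=open,openat -o build.trace make some-target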


There have been systems that take this approach such as fabricate.py (https://github.com/brushtechnology/fabricate) and tup (http://gittup.org/tup/).


> GNU Make makes it easy to write a makefile that works well for a clean serial build, but has bugs once you add -j or when your repository is in some intermediate state.

So true. I wrote about this and solutions here: https://www.cmcrossroads.com/article/pitfalls-and-benefits-g...


Thanks, I bought your GNU Make book a couple years ago and read pretty much the whole thing! Along with reading the GNU Make manual a few years before that, I made an attempt to "give Make a fair shake".

I liked it, but it's honestly weird to me that a book published in 2015 can improve on the state of the understanding of a tool with 40 years of heritage :) I also get this "groundhog day" effect after about 10 years of seeing new GNU make tutorials / best practices on Hacker News every couple months. Everybody who learns Make has to go through this same thing.

-----

I read that you reimplemented GNU Make for Electric Cloud and I was impressed by that :) My project Oil [1] is a similar sort of project. It can run thousands of lines of unmodified shell/bash scripts found "in the wild". I got big distro scripts working last year, and I got thousands of lines of interactive completion scripts working recently, which I need to blog about.

For a while I thought it would be nice to replace GNU make too, although (1) that's a lot of effort, and following Oil's strategy isn't possible since Makefiles can't be statically parsed, and (2) I think it's happening anyway.

Major build systems are now split up into high-level and low-level parts, i.e. autoconf generating Makefiles, CMake generating Makefiles/ninja files. Android used to be 250K lines of pure GNU make (including GMSL), but now it's a high level Blueprint DSL generating Ninja too.

There aren't many pure Make projects anymore, at least in open source code. The exceptions I can think of are embedded ones like buildroot.

I like how Ninja focuses on build execution only, punting logic to a higher level (CMake, gyp, Blueprint). So my pet theory is that you can replace Make with a DSL that generates:

1) A ninja file for fast, parallel, incremental developer builds

2) A shell script for portable builds for distro packagers/end users. This is just a clean serial build, so it can be a shell script rather than a Makefile.

I plan to test this theory by throwing out the modest 700 lines of Make I've written from scratch and replacing it with a Ninja/shell generator :)

I've debugged and read enough Make to be able to identify and fix most problems. But it still feels like whack-a-mole to me. You can fix one problem and introduce another, since there's no real way to test for correctness (and efficiency). Problems can be reintroduced by seemingly innocuous changes.

[1] http://www.oilshell.org/blog/2018/01/28.html


  > 1) A ninja file for fast, parallel, incremental developer
  > builds
  >
  > 2) A shell script for portable builds for distro
  > packagers/end users. This is just a clean serial build, so
  > it can be a shell script rather than a Makefile.
If the developer isn't regularly using the same build that downstream users are, the build for downstream will be perpetually broken.


It's no different than CMake generating Makefiles or Visual Studio projects, or autoconf generating Makefiles, which is the state of the art in open source.

I'm not saying you write them by hand -- you generate them from the same build description, and the generator can preserve some invariants. It should basically do a topological sort ahead of time rather than at runtime.

The more likely source of breakage is that the user's environment is different (i.e. they don't have a particular library installed). So even if you choose pure GNU make, you still have that source of breakage, and you generally should test for it. I test my shell in a chroot with different versions of libc and without GNU readline. I need that even though I'm using pure GNU make at the moment.


Yes. It is highly recommended that you read that book (buy it to support the FSF). If you want more I wrote a book on GNU Make (https://nostarch.com/gnumake) which takes things further than the GNU Make manual. A large amount of the content of the book came from a sequence of blog posts on GNU Make by me: https://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-art...


Your book is awesome, thanks for writing it! I converted our whole build at $DAYJOB from recursive to non-recursive, and your book helped immensely. It was a lot of work, but in the end it was worth it.


That's very kind. Weirdly, I have never done a conference presentation about GNU make. If I did what would people want to hear about?


I think the main point nowadays is how to write Makefiles for highly parallel builds. It's surely the thing that gave me the most headaches. In the end such a talk will probably mostly be about how to avoid calling Make recursively. Another talk I'd like to hear, which would be more meta: does it still make sense to bother with Make at all, aside from maintaining legacy systems? Are there features or paradigms which still make it worthwhile, compared to other popular build systems?


I don't think the FSF is currently selling physical copies of the Make manual.


Ah. I guess I am old and like books. Give them some money instead: https://my.fsf.org/donate/


Can you humor a dumb question and tell me how you got the manual in your terminal? I'm on Debian, and when I `man make`, I just get a typical GNU CLI util manpage that's five paragraphs of intro and a line-by-line explanation of each command line option. Same for `info make`.


Have you installed the make-doc (https://packages.debian.org/jessie/make-doc) package? It adds info documentation as well as pdf and html formats.


Brilliant, didn't even realize it was a separate package. Thanks!


I really like the GNU Make documentation, as well.

Links to the docs in various form:

https://www.gnu.org/software/make/manual/


What if I want to use makefiles on a Mac? Make on Macs is not GNU Make.

/edit: bugger, after a make --version on a Mac, I've got:

    GNU Make 3.81 Copyright (C) 2006 Free Software Foundation, Inc.

Back to the beginning, why is this bloody thing not working...


FYI, homebrew installs version 4.2.1. Invoke it as gmake.


GNU Make is an awesome task-runner. I use it all the time, for all sorts of things. It's my shell-script replacement, and I write one-off Makefiles frequently. It's got a nice macro-processor, some helpful string manipulation tools, and expresses task dependencies perfectly. The manual is also very well written.

But if you're writing a software package, consider using CMake, Meson, Autotools, or something similar. Unless you're a superhuman, any of them will handle the corner cases better than you can. Especially the cross-compilation corner-cases. It's extra work, but the people who need to build and package your software down the road will thank you. Your artisan Makefile might be an elegant work of art, but what does it do when somebody wants to build from a different sysroot? Does it handle DESTDIR correctly?
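
(For what it's worth, the DESTDIR part is short; a sketch following the GNU Makefile conventions, with prog as a stand-in target:)

    prefix ?= /usr/local
    bindir ?= $(prefix)/bin

    # DESTDIR is prepended at install time only, so packagers can stage
    # the install into a sysroot without rebuilding.
    install: prog
    	install -d $(DESTDIR)$(bindir)
    	install -m 0755 prog $(DESTDIR)$(bindir)/prog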


I haven't yet found a reason not to just use Make when starting from green fields, including on some very big projects. In fact, I have often been annoyed at projects picking something like CMake for no reason beyond it being "more advanced," which ends up just being an extra dependency I have to fetch and install.

If I were to pick one of the above for building large projects on linux variants, which one would you pick?


For me, I mostly work in embedded Linux - that's Linux for routers, custom hardware, sometimes a Docker image, etc. That means lots of packaging work. Sometimes it means build machines that can't even run the compiled binary. I compile libraries for platforms that the original author would never have imagined.

In that space, there are lots and lots of details that need to be "just so". Lots of specifics about the cflags and linker flags. Requirements on where 'make install' puts files. Sometimes restrictions on where an output binary searches for config files (spoiler: use sysconfdir).

Out of all the libraries/programs that I've had to package, autotoolized projects are the easiest to handle by far, followed by CMake projects.

Hand-written Makefiles are usually painful to a packager in one way or another. The most common ones being hard-coded CC/CXX/CFLAGS/CXXFLAGS, non-standard target names, and non-standard/incomplete usage of directory variables.

I know that Autotools is crusty in a lot of ways, and that the learning curve is steep. It's a nasty mix of Perl and m4, and runs lots of checks that don't matter at all. It's also what I use for all of my own libraries and programs, because the result is worth the pain (for me).

So for any program that I expect more than two humans to ever compile, I'd recommend Autotools if you're a perfectionist and CMake if you're not. Within Autotools, I'd recommend Automake and Autoconf if the language is C/C++, and just Autoconf/Make otherwise. (But be sure to follow the Makefile conventions: https://www.gnu.org/software/make/manual/html_node/Makefile-...).

I wouldn't recommend a hand-rolled Makefile unless you're the only one using it, the build is really weird for some reason, or unless it's a wrapper around some other build tool.


Thank you for the detailed response!


If simpler is better, then why not Redo over Make?

https://redo.readthedocs.io/en/latest/


> awesome task-runner

I do this too. I have been trying to find examples of people doing something similar on the web, but had no luck. I've replaced so many 5-line shell scripts in my user /bin with a single Makefile in the root of my home directory, plus aliases to the various commands defined in it. I even have a rule that shows a rofi menu for running the rules easily, although it's a bit hackish at the moment. Maybe the way to go would be pulling the parts of Make that are good for a task runner out into a separate application and adding some nice extensions, like making .ONESHELL implicit.


Thirded - in a slightly different context. (See also: https://news.ycombinator.com/item?id=5275313#5276744)

I’m currently managing data reduction for a bunch of space mission simulations using make. A simulation dumps a bunch of files as python pickles, which are reduced into csv, and then made into plots, and then into webpages. Each phase of this is in the makefile. None of the transformation rules are standard compilation macros like you expect to find in a makefile.

The setup makes it easy to update all the summary webpages when more sims are run. Just “make html”. It re-accumulates all the summary metrics, redoes graphics, remakes webpages.

This is the kind of processing pipeline that is often done in shell or with driver scripts, but then you always have to remake everything. I like the setup a lot.


I do the exact same thing with oddball compilation tasks like this, such as compiling a LaTeX document or batch-processing pictures.

I also use it to run code-formatters / static analysis tools while programming, sometimes with a secondary Makefile that has a name like 'maintainer.mk'.


I really wish blog posts like this would at least mention that they are going to use GNU Make extensions. There's some good stuff in here, but just calling it "make" and leaving it at that is misleading. It's not going to work on my minimal (non-GNU) Linux boxes that mostly run NetBSD make (bmake in many distros) or busybox-style tools. I know a very small minority of people do something like that (or maybe not, what does Alpine run in containers?), but it seems worth at least a quick footnote to say that most things in here won't be portable.


  / # make --version
  GNU Make 4.2.1
  Built for x86_64-alpine-linux-musl
  Copyright (C) 1988-2016 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software: you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.


Good to know; thanks! I'm kind of surprised; given how tiny Alpine is supposed to be and how it's musl-based, I assumed they would have carried that philosophy into the userspace tools and used busybox or something. Not that I really know anything about Alpine except that it's nice when you just want something small.


I doubt most people even know that there are different versions. (I didn't!)


Yah, this is probably true. Another good reason to mention it if you're writing a blog post! :)


It's pretty common knowledge that there are differences between the GNU package set and other Linux package sets.

I've been exposed to this multiple times: from downloading GNU packages using Homebrew on OSX, downloading different packages on Android, the variety of options available to ArchLinux users, etc.

Anyone with basic Linux knowledge should be well aware of this, at a minimum from following tutorials and finding that basic command flags don't work on common terminal programs.


I don't agree that "basic Linux knowledge" requires knowing which commands have different versions. You can be intimately familiar with the Linux kernel and never step outside of the GNU environment.

I've never used OSX, and have no desire to compile things on my phone.

And I know that there are non-GNU versions of some tools, like grep, but whenever I've attempted to use them, I find them lacking some key feature that I always use. I can't be bothered to learn the entire set of GNU-improved tools, since I'm always just going to be using the GNU version anyway.


For those interested, there's also this writeup of making portable makefiles on nullprogram[1]

[1]: https://nullprogram.com/blog/2017/08/20/


Please add to this list "If you use GNU Make specific syntax, use the name GNUmakefile".

https://www.gnu.org/software/make/manual/html_node/Makefile-...


> Using the assignment operator = will instead override CC and LDD values from the environment; it means that we choose the default compiler and it cannot be changed without editing the Makefile.

People can use `make CC=clang` to override assignments. That's a pretty common use.


Came here to say this. The "strength" of user specified variables depends on where they come from: the environment is weaker than those specified as command line arguments - which can lead to confusion.
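
A quick demonstration of that difference:

    $ cat Makefile
    CC = gcc
    all: ; @echo $(CC)
    $ make CC=clang        # the command line beats the Makefile
    clang
    $ CC=clang make        # the environment loses to it
    gcc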


There are so many gotchas with Makefiles, yet we still use them regularly. I'm considering moving on to something like Taskfile [1], though I haven't tried it yet.

[1]: https://taskfile.org/


I decided a long time ago that it's not worth having to install and learn a different tool for every ecosystem just to avoid make's quirks. Yes, make isn't perfect (it can even be quite annoying at times), but neither is everything else so it's worth it to me (from a personal and business perspective) to just make my developers use one thing for all projects, especially since it's already installed (or easily installable if not, there are packages for everything which may not be true of all the less popular alternatives) on most of their systems.

Not that I couldn't ever see making an exception to that, I'm sure there are some things out there that don't work with the make model and really do need their own build system, but in general make is "good enough" and it's not worth using anything else.


I pretty much went down the same path, though with all of Make's quirks, my generic Makefiles end up being nothing more than an index of commands that instead run bash scripts. But then the problem is shifted into bash's error prone syntax. So then for non-trivial logic, I end up having the bash script call a Python script, especially if dependencies are involved.

It would be nice to have something like EditorConfig for project commands, i.e., my Sublime or your VS Code could parse one common Makefile (or similar) and map its commands to Build, Run, Debug, etc in the UI.

In some sense, I think a different real solution is to Dockerize everything and then just docker build / docker run. Of course this can get complex for some projects, especially where the local setup is different than the production setup, e.g., Docker Compose vs Kubernetes for instance.


I've been using invoke [1] which has allowed me to give my projects nice UIs while remaining in the primary language. If I were working on Ruby I would stick with Rake, even though I think invoke is better.

[1] http://www.pyinvoke.org/


I really like Invoke / Fabric for Python projects, though it feels weird to use in non-Python projects when it comes to dependencies. [Now I have to explain to non-Python people things like virtual envs, pipenv vs pip vs pipsi, pyenv, etc.]

Just curious if you install Invoke individually for each project or keep one global install?

I suppose I could always git exclude the tasks.py Invoke files.


I usually keep Invoke globally installed that way I can use it for setting up and interacting with pipenvs externally. Most of my calls beside the initial setup make use of 'pipenv run <command>' that way I never actually have to navigate inside of the virtualenv for most cases.


I'll second invoke. I use it for all of my python projects now and I love working with it. It makes it very clean to manage more complex tasks that have a lot of conditionals involved.


A particularly tricky task with GNU make is automatically adding target dependencies on header files and handling updates to them (gcc's -M and -MMD switches). It would be great if the article explained those best practices, too.



Thanks John, your book is the most referred-to on my office shelf ;)

Edit: https://nostarch.com/gnumake (via https://blog.jgc.org/2015/04/the-gnu-make-book-probably-more...)


+1 for a great book, I bought that book too!

I use the patterns in @jgrahamc's article "Escaping: A Walk on the Wild Side" pretty often. Especially the \n definition, which works great to put on the end of a $(foreach) loop inside of a recipe.

@jgrahamc: Thanks for all you've written on Make over the years.


I also recommend this book. The series on “meta programming make” is also quite good: http://make.mad-scientist.net/category/metaprogramming/


Ooh. That's great. I had not seen that!


That's great. Glad it was helpful. If you have any of your own GNU Make tips and tricks please do blog about them!


Here's all you need to track header file dependencies. Basically, when compiling a .cpp file to generate a .o file, a corresponding .deps file is also generated. And at make startup, we include all the .deps files we find in the bin (output) directory, if there's one.

    $(BIN)/%.cpp.o: %.cpp
    	$(CXX)  -c $(CXXFLAGS) "$<" -o "$@"
    	@$(CXX) -c $(CXXFLAGS) "$<" -o "$@.deps" -MP -MM -MT "$@"

    -include $(shell test -d $(BIN) && find $(BIN) -name "*.deps")


This tiny Makefile should convey the idea.

    CFLAGS=-MD -MP

    prog: <list of .o files>
        $(CC) -o $@ $^
        
    -include *.d


I've been using makefiles for decades, and finally decided to try something more modern.

It turns out that CMake has come a LOOOONG way, and is almost nice to use, aside from the fact that it's horribly complicated. But with some basic templates to work from, it's actually pretty easy to get a project started.

https://github.com/kstenerud/modern-cmake-templates


Take a look at Meson, too.


Or just use Rust. :3

Cargo handles builds with very simple TOML manifests, and Rust is the more modern language, to boot.


I've been planning to, but unfortunately it still lacks 128 bit float and decimal types, which I need.


You may be interested in this crate which adds decimal types https://crates.io/crates/rust_decimal. Here is another that I have not used but appears to add a f128 type https://crates.io/crates/f128.


Every time I start a little thing in C or C++ and I write a corresponding Makefile I get annoyed by it and write the thing in Rust. Cargo is really nice. I've considered using a Cargo.toml and build.rs file just to build C and C++ files before.


Make is a very underappreciated tool outside C/C++ circles.

I'm currently using it in an environment which uses Concourse as the CI tooling - Concourse takes care of version and dependency management at its level, and Make fills in the gaps for fetching the latest versions for local development environments. Because Make won't re-download files for already-made targets, local build cycles are fast after initial setup. Concourse can then re-use these Makefiles by symlinking the resources it manages before calling make, and Make won't try to re-download the resources. If you're not concerned with portability, you can even use make to fetch (and clean) Docker images since they live in predictable directories on Linux (but not OS X).

More people ought to approach the tool with an open mind.
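
A minimal sketch of that fetch-once pattern (URL and path illustrative):

    # The artifact is a real file target with no prerequisites, so once
    # it exists make considers it up to date and never re-downloads it.
    downloads/tool.zip:
    	mkdir -p $(dir $@)
    	curl -fsSL -o $@ https://example.com/releases/tool.zip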


    CFLAGS := ${CFLAGS}
    CFLAGS += -ansi -std=99
What's the point of the first line? Why not just:

    CFLAGS += -ansi -std=99


It turns CFLAGS from a recursive variable into a simple one. Hence it will not be re-expanded every time it is used. This is a speed optimization.

    $ cat Makefile
    CFLAGS = -Wall
    $(info $(flavor CFLAGS))
    CFLAGS := $(CFLAGS)
    $(info $(flavor CFLAGS))
    $ make
    recursive
    simple
But TFA says that the author is using := to prevent a recursive definition. It's unclear why the author doesn't just use += without doing = or := (unless they want the recursive-to-simple conversion I talk about above). My guess is the author slightly misunderstood the handling of the environment inside a Makefile.


Speaking of, why both -ansi and -std=99 [sic[1]]?

[1] should be -std=c99


I've used these comments[1] as the source for writing the CFLAGS. Did I misunderstand?

[1] https://stackoverflow.com/a/2193647


Because CFLAGS may not be set in the environment, resulting in an error from concatenating to an undefined variable.


No such error occurs.

    $ cat Makefile
    FOO += foo

    all: ; @echo $(FOO)
    $ make
    foo


My bad, I felt like I've seen that error pop up before in my Makefiles, but I should have double-checked before commenting.


There's no such thing as an undefined variable in Make. The variable expands to text. Variables which are not defined by definition expand to no text.


My guess is that the left hand side is a make variable while the right hand side is an environment variable. However the make manual says “Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value.“ so maybe it’s not necessary?...


This reminds me why I stopped visiting HN. People downvote you for trying to help! Screw whichever asshole downvoted me even though I prefixed my answer with "My GUESS is...".


I have never worked much with compiled languages, but I recently started creating GNU Makefiles in all my projects. I read someone's article, maybe from here, that gave me the idea.

I have been putting everything in it. From the terminal commands needed to spin up the cloud infrastructure to all the docker build commands and of course all the build tasks like gulp and composer.

Even my docker compose commands have been replaced with "make up" and "make down". It is really handy and has really streamlined a lot of processes when it comes to local dev environment issues with different developers.

I basically just use it as an easy way to organize shell scripts.

The next step is to convert my ci/cd pipeline to just use specific makefile commands from the same makefile so that everything is more portable and I can easily execute the exact same process locally when desired.
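
A minimal sketch of such a task-runner Makefile (targets illustrative):

    # None of these targets produce files, so mark them .PHONY to keep
    # make from skipping them when a file with the same name exists.
    .PHONY: up down build

    up:
    	docker-compose up -d

    down:
    	docker-compose down

    build:
    	docker build -t myapp .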


Next up: Automake, best practices

Publication date 2025

5000 pages


Or a single page with a single word:

No


Best practice is to keep it simple. For example, if the Makefile is written in such a way that lazy versus immediate assignment doesn't matter, then it is not complex at all. But these facilities are there to implement logic, and complex logic is complex, especially when it is entangled. For example, we often use build tools to generate Makefiles. Some of the logic is implemented in the build tool; some of the logic is implemented in the Makefile -- of course the result is complex. There is no rule of thumb on how to organize complex logic; that is what programmers are paid (so much) to do. Respect your job and treat complex logic carefully, then you won't make too much of a mess -- at least do not complain about it after making one.


If you're using GNU make, as the author appears to be doing, makefiles should be named GNUmakefile instead of Makefile so as to not confuse the user or the tools by conflating GNU make with BSD make or other make dialects.


Does the "?=" operator behave like ":=" or like "=" ?


Both

    $ cat Makefile
    $(info FOO $(flavor FOO))
    FOO ?= foo
    $(info FOO $(flavor FOO))

    BAR :=
    $(info BAR $(flavor BAR))
    BAR ?= bar
    $(info BAR $(flavor BAR))

    BAZ =
    $(info BAZ $(flavor BAZ))
    BAZ ?= baz
    $(info BAZ $(flavor BAZ))

    $ make
    FOO undefined
    FOO recursive
    BAR simple
    BAR simple
    BAZ recursive
    BAZ recursive
If undefined, it becomes recursive; otherwise the flavour is preserved.


Make variable expansions are always recursive.

For what you're describing I think the proper term as used by the GNU Manual is deferred. In other languages the phrase lazy evaluation is common.

The one thing I've always had trouble remembering with GNU Make is precedence. Variables defined as command-line arguments (make FOO=bar) override assignments, but environment variables (FOO=bar make) only override ?= assignments. != is both immediate (the shell expansion) and deferred (the result of the shell expansion).

I feel like there's some inconsistency when variables are inherited across recursive invocations, but in a simple test with make 3.81 (macOS) I couldn't find any. Maybe my suspicion is just a byproduct of my weird inability to remember the precedence rules.

I normally try to stick to portable make these days, anyhow. Between the ancient version of GNU Make on macOS, OpenBSD Make, and NetBSD Make, you're mostly stuck with POSIX syntax. If you add Solaris' default make, and especially if you add AIX' make, you can't rely on any extensions at all.

For me sticking with portable make is easier than installing and maintaining GNU Make on every flavor of operating system I test on, and definitely easier than installing autotools or installing and maintaining the most recent version of CMake.


The GNU Make manual defines two variable types: recursively expanded and simple (see: https://www.gnu.org/software/make/manual/html_node/Flavors.h...).


This is a pretty cool summary. Thanks.


What about using Python (with system calls) to build the project? Much more understandable than Makefiles.


OK, so I am as much of a hard core Pythonista as you will find. But I don't think it makes any sense to build C projects (or many other languages) with Python instead of makefiles. C programmers know make, and the makefile idiom has been evolving for 40+ years. A seasoned C programmer is going to look at an idiomatic makefile and find it much more readable than some rando's one-off Python script controlling a build. And getting build dependencies resolved correctly, so that you only build what is needed, is not trivial.

What I truly hate is all the IDEs that think they can do a better job than make. This is the curse of the embedded world. Building an embedded project requires calling particular tools, with peculiar flags, and I hate with a passion IDEs that obscure all of that in some crappy XML file that is git-unfriendly. A well-structured makefile is your friend in many ways.

make is a powertool. Learn it and your life will be better.


My experiences with Makefiles are different. I can understand mine up to 3 months after I write them. But if I read someone else's Makefile, it's a binary blob, full of hacks, optimizations, workarounds, and compatibility quirks between GNU make and BSD make, sometimes with different Makefiles for FreeBSD, Linux, OpenBSD, and DOS, while Visual Studio uses its own project file in the win32 directory. It's a mess.


As a C developer I have to say there exists no idiomatic makefile - configuring header file dependencies with -MMD, or automatically rebuilding targets affected by a CFLAGS change, already makes your Makefile look like magic. As the project grows, eventually you will need some kind of flexibility that a makefile cannot provide well. In that case, explicit Python scripts become more useful than a bunch of possibly implicit makefile rules.


Perhaps we need some kind of make8 tool to check for best practices and prevent us from writing these magic make files.


Is this a joke? Why would anyone ever do that?

Except if the only language you know (and ever want to know) is python?


Users of SCons (https://scons.org/), and Waf (https://waf.io/) disagree with your point of view.


Incidentally, I have not ever met a single user of either of those that does not hate working with them.


Maybe they're simply not from your social circle?


Despite the downvotes, it is reasonable to write the project build script/workflow in an easy-to-read, cross-platform language. Makefiles often devolve into shell scripts or use of autotools in very slow and not-cross-platform ways. This becomes more spaghetti as you build dependencies between/across projects.

Lately with projects that do several things on build, I include a build.go and ask users to just `go run build.go [command]`. I get many features built in, it's quite maintainable and cross platform, etc. Even if it calls out to other makefiles or build scripts, at least you can centralize it and do different things per OS or whatever.


The answer is that Make only rebuilds what is necessary. Duplicating this logic in Python is nontrivial.


There is also Snakemake[1]. I haven't yet decided whether I prefer it over make.

[1] https://snakemake.readthedocs.io/en/stable/


I bet a Makefile is far more understandable and needs far fewer lines than Python for this purpose. Its declarative syntax and "rules" are well suited to task runners.


Makefiles, Best Practices: Don't write Makefiles, use a higher level language to describe your goal and some tool to execute it (either directly or through generating a Ninja file).

The Make language is the assembly language of build systems. Do you really enjoy writing assembly all the time?


Make is simple... it gets me started. I can do multiple targets with dependencies and incremental compilation.

Sure, you should be careful about scaling it too far. BUT, please enlighten me on a sane high-level build system?

cmake, bazel and the like all seem to rely on unintuitive macros. And let's not talk about autotools :)

What other mainstream build system will get you started quickly without unintuitive macros. (I won't claim that makefiles are often intuitive, hehe)


What's more simple than "add_executable(foo bar.cpp)"? It's concise, cross-platform and hides all the boilerplate for you.

Sure, everything in CMake is not super intuitive, but I've led workshops where people got the hang of it pretty quickly for simple to advanced cases (including multiple targets and some scripting). Keep it simple; most of the time, you don't need the advanced features at all!

Can't say the same for Make; apparently, people still need to write about it today! And most Makefiles are just plain wrong and won't produce reproducible, reliable builds. That's a big no in my book for a build system!


It's here that I look to KISS to guide me. A Makefile is simple and you can expect it to be on most systems, so it's a good place to start stringing things together. It helps you start building a CLI UI with relative ease, once you know you should mark all ephemeral targets as .PHONY.

As a project grows and gains users or requires more system support, the growth of Make usage to declare interdependence should always be discussed. At some point the team will need to start using a different tool, perhaps isolating Makefile usage to bootstrapping the bigger tool.

The reason I reach for Make is I know it, and it's rather intuitive for someone to glance at my Makefiles and get an overview of where to look next.


It is rather intuitive for someone who knows Make. Even then, which version of Make are you talking about? GNU Make? WHICH VERSION again? 3.82? 4.0? Some features in 4.0 aren't backward compatible, of course, so you have to be precise about what exactly you are targeting.

If you are using anything advanced, then it will become unreadable for beginners too. Try to explain delayed and immediate variable interpolation to beginners, that their "VAR = foo" doesn't always work the way they think it does.

On the other hand, yes, the CMake sample above is self explanatory, works on all platforms (more than Make itself) and all compilers without having you write low-level compiler invocations, and supports IDEs for easier debugging (sure, you can also tell your users they have to learn how to use GDB by hand, just... good luck). It will automatically rebuild the right files when their dependencies or command line change; you don't have to code that by hand for each compiler (and probably have many subtle bugs in your implementation too).

Make has a place for simple things, not for build systems, we have way better ways to do them now, even if you refuse to use them.


I don’t think I ever saw (GNU)make version problems. Most Makefiles I have seen in the wild were using pretty basic features.

On the other hand, trying to get Kitware’s stuff to compile was a real horror. I ended up adding Docker to the project just to get the right CMake version. And debugging the buildsystem was hell. I hope I never have to touch CMake again in my life.


I'd like to understand where in my comment I stated that I "refused" to use better tools. I even stated that it's important for the team to discuss tooling as a project gains users.

Learning new tools is something I enjoy doing, but as my career continues my focus is changed. I haven't once walked into a project where I've felt the need to re-tool the build system because it wasn't living up to the task. And for your information, my company has a monorepo and we build using pants, which I'm starting to grow quite fond of.

I may have some bias towards learning new tools, likely why I like go so much. All the tools(now that we have modules and MVS), none of the hassle.


You forgot -Wall. Also, does this enable C++11? And don’t forget to set that NDEBUG macro.

All of the above is still a 2-line Makefile. And it is intuitive, too - you just copy your command-line g++ invocation and paste it into the Makefile.
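
For instance, a sketch of such a two-line Makefile (flags illustrative):

    prog: main.cpp
    	g++ -Wall -std=c++11 -DNDEBUG -O2 -o prog main.cpp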


That has nothing to do with the compilation of your program. This is a specific option to use in a specific configuration with specific compilers.

So if you want a configuration with some flags like that, then you can have a separate file for it. Each compiler is different after all, some will need different options.


> What's more simple than

    CMake 3.6.0 or higher is required. You are running version 3.5.1


"GNUmake is required, but you are running FreeBSD".

Not saying it's not available there, not all platforms have all software available right away or up to date.

Even then, upgrading CMake is the easiest thing ever: "pip install cmake". Or fetch it from the Github releases page quickly, it's always available as static binaries for all major platforms.


If only it were that simple... there are packages out there which require a specific older version of CMake. So sometimes you have to downgrade, too.


Something like:

    include "rules.mk"

    foo: bar.cpp

But if the syntax for CMake were more sane, I would like it a lot more... Why does CMake use a DSL instead of letting me define targets as first-class objects?


I have yet to find a build system I prefer over Make. I use Maven at work and use a Makefile to run mvn. It lets me easily run a wide variety of commands locally as part of my build.


Sure, if you're running commands, but you're not building Java directly with it. That seems reasonable.


Any chance you can share that here?


"Makefile, Best Practices: Use CMake to generate them for you"


You joke, but I've personally never seen a hand-made Make build system, at any company I've worked at, that could handle "fuzzing" by deleting or modifying random files in the build tree and still produce the expected output.

They all required a "clean" to get back in order.

For those who maintain a handmade Make-based build system of reasonable complexity, I challenge you to "fuzz" it before each build. It will not be a comfortable experience.

On the other hand, I've seen many generated makefiles that behaved correctly that are totally unreadable.



