
An opinionated approach to GNU Make - DarkCrusader2
https://tech.davis-hansson.com/p/make/
======
jgrahamc
For one-line recipes you can use semicolon

    
    
        foo.o: foo.c ; $(CC) -c $<
    

If you find $(@D) hard to remember then do $(dir $@) instead.
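For example (file names invented; `cp` stands in for the real compile step, and the recipe is indented with real tabs):

```make
# $(@D) and $(dir $@) both give the target's directory part
# ($(dir $@) keeps a trailing slash); either works with mkdir -p.
out/foo.o: foo.c
	@mkdir -p $(@D)
	cp $< $@    # stand-in for: $(CC) -c -o $@ $<
```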

I wrote a book about GNU Make stuff
([https://nostarch.com/gnumake](https://nostarch.com/gnumake)). If (and only
if) you've read the GNU Make Manual from the FSF my book may help you. It's
not for newbies. Don't want the book? All the articles are here:
[https://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-articles.html](https://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-articles.html)

~~~
mbrock
With backslashes before newlines you can do “multiline oneliners” too.

~~~
jgrahamc
Yep. One of the reasons I recommend the FSF manual is that it's a great
introduction:
[https://www.gnu.org/software/make/manual/make.html#Splitting-Recipe-Lines](https://www.gnu.org/software/make/manual/make.html#Splitting-Recipe-Lines)

------
cies
Learned some good tricks here. There's one that I use but did not find in the
article. This one:

    
    
        # Borrowed from https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
        .PHONY: help
        help: ## Display this help section
            @awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {printf "\033[36m%-38s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
        .DEFAULT_GOAL := help
    
        .PHONY: load-terraform-output
        load-terraform-output: ## Request the output JSON from Terraform 
            some commands
            some more
    

It makes Makefiles a little more self-documenting.

Now I issue a `make` (or `make help`) to get a listing of the documented
tasks. Very helpful.

~~~
manwe150
On the subject of good tricks, I like to put this debugging snippet in all my
Makefiles. It supports typing `make print-VAR1 print-VAR2` and it'll dump the
values of VAR1 and VAR2 (this could be extended to also show `$(flavor $*)`
and `$(origin $*)` and other little details, but usually I just want the
final value).

    
    
        define newline # a literal \n
    
    
        endef
        # Makefile debugging trick:
        # call print-VARIABLE to see the runtime value of any variable
        # (hardened a bit against some special characters appearing in the output)
        print-%:
                @echo '$*=$(subst ','\'',$(subst $(newline),\n,$($*)))'
        .PHONY: print-*

~~~
oso2k
I like to use a `showconfig` rule so that `make showconfig` will display all
public vars and give their default values. It ends up being a little like
`./configure --help`. But I see now that I could tighten that up a bit with
the above rule and something like `showconfig: print-VAR1 print-VARX`. Thanks!
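Something like that could look like this (variable names are just examples, using a simplified `print-%` without the newline hardening above):

```make
CC     ?= cc
PREFIX ?= /usr/local

print-%:
	@echo '$*=$($*)'

.PHONY: showconfig
showconfig: print-CC print-PREFIX
```

`make showconfig` then prints one `NAME=value` line per listed variable.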

~~~
manwe150
That sounds pretty cool!

I eventually did something similar and built, by hand, something akin to
`make -p` that was also a bit like `./configure` or CMakeCache.txt. It could
then be read back into `make` to bypass a fair bit of configuration-testing
work (it's much faster on Cygwin, anyway). I got a bit too nervous about the
automagical complexity though and never made a pull request (plus I only just
fixed it to work on the old version of make that ships with macOS). But I
thought you might find the levels of make hackery it tries to untangle
fascinating:
[https://github.com/JuliaLang/julia/compare/master...vtjnash:jn/make.inc.cache](https://github.com/JuliaLang/julia/compare/master...vtjnash:jn/make.inc.cache)

------
sirn
If you're using GNU-specific Make features, please, please consider naming the
file GNUmakefile instead of Makefile and use $(MAKE) for recursion. GNU make
will happily pick up GNUmakefile before Makefile, and using $(MAKE) will
remove a lot of headaches when people try to build your project outside of
the author's $PREFERRED_PLATFORM.
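A minimal sketch of the $(MAKE) part (directory name invented):

```make
# GNUmakefile - GNU make picks this name up before Makefile.
# $(MAKE) re-invokes whichever make binary is already running,
# and lets flags like -j and -n propagate to the sub-make.
all:
	$(MAKE) -C sub
```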

~~~
swiley
Do you have a good guide for writing portable makefiles? I’m always using GNU
make so I don’t notice it with mine but I always feel a little bad.

~~~
sirn
I don't think there's a guide; the only resource I'm aware of is The Open
Group Base Specifications[1], which can be referenced when in doubt. Most of
the time I just install bmake alongside gmake and try to get things working
with both (that's still not perfectly conformant to POSIX, since GNU make
supports some of BSD make's features, and vice versa).

[1]:
[https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html)

~~~
gumby
I'm a big fan of "be strictly conformant" (e.g. use no C++ extensions) for
maximum portability but I always install Bash and GNU Make. The extra power is
worth it.

------
calpaterson
Terrible clickbaity title that he walks back throughout the entire post -
"this is not dogma". There are a couple of valid remedial notes in here (like
using files as targets), but the parts about changing the default shell and
inserting hacks that let you use spaces will just baffle anyone else and
probably make your Makefiles non-portable (e.g. Macs don't ship a recent
Bash).

~~~
OskarS
The "replace the tab with a magic character" thing is particularly baffling. I
agree that it was probably a dumb idea in the beginning for make to force you
to use a literal tab character, but using these kinds of hacks is a terrible
idea. It will just make it harder to read makefiles for normal people and it
will make them less portable. Every sensible programming editor knows that
make is particular about spaces and tabs and will make sure that it uses tabs
to indent the recipes, so it's not an issue any more. If anything, using some
other character will make it more confusing, because I'm guessing most editors
will indent/syntax highlight it incorrectly.

It's not dissimilar from early C programmers who wanted C to be more like
Pascal and put things like this in headers:

    
    
        #define BEGIN {
        #define END   }
    

and then never used curly braces. NO! DON'T DO IT!

~~~
panpanna
I really don't understand his justification for such a significant
modification:

> Make leans heavily on the shell, and in the shell spaces matter. Hence, the
> default behavior in Make of using tabs forces you to mix tabs and spaces,
> and that leads to readability issues.

I agree with you that replacing TAB with ">" makes things much less readable.

~~~
a-nikolaev
Yes, his justification is nonsense.

1) The original creator of Make plainly said that his original choice of tabs
over spaces was accidental, and honestly a mistake. He just learned about (at
the time) new tools lex and yacc for writing parsers and chose TAB as a line
prefix for shell commands. Not much thought given really.

2) Tab in the _very beginning_ of the line determines that the rest of the
line is a command. However, the rest of that line goes into shell pretty much
verbatim (modulo a few substitutions). Because of that, you may have tabs in
the middle of that line, no problem, and they will be passed into the shell to
execute!

3) To my knowledge, shells treat '\t' and ' ' the same way as whitespace. If
anyone knows of differences there, please let me know though, and I will
correct myself. On the other hand, ">" is an important shell operator.

~~~
wahern
The shell performs field splitting according to the characters in the IFS
environment variable; if unset it splits on <space>, <tab>, and <newline>.

There are many other places in Unix where tabs are special. The "<<-" heredoc
operator will strip leading <tab> characters, so if you want to nest embedded
content neatly in your script you have to use tabs. Similarly, programs like
cut(1) default to using <tab> as the delimiter.

The fact of the matter is that until very recently the vast majority of
programs and programmers on Unix used tab indentation, and to a lesser extent
tabs for intra-line alignment. So while the requirement of leading tabs in
Make may have been a mistake, it was a benign mistake that would have gone
mostly unnoticed. And the alternative surely would have been a more permissive
syntax allowing both <tab> and <space>.
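A small sketch of the `<<-` behaviour (the heredoc body is indented with real tabs):

```sh
if true; then
	cat <<-EOF
		hello tab world
	EOF
fi
# prints "hello tab world" flush left: <<- strips leading tabs
# (not spaces) from the body and the terminator.
```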

------
majewsky
> And you will never again pull your hair out because some editor swapped a
> tab for four spaces and made Make do insane things.

 _steps up to you and whispers in your ear_

Or you could, you know... use a real text editor.

~~~
donatj
I am really curious about how this would happen.

I have been working with Makefiles, with a team, for close to ten years and
have never encountered anyone accidentally using spaces. I guess someone
editing it via GitHub's web UI or something.

~~~
GordonS
Happened to me just the other day.

I added a makefile to a project that didn't yet have one, and I didn't have
makefile support added in my IDE.

Got some weird, unhelpful errors when trying to run make, and eventually
realised spaces were being used instead of tabs.

The solution was to configure my .editorconfig file to use tabs for makefiles,
and as a nice-to-have, to also install a makefile plugin in my IDE.

~~~
account42
> The solution was to configure my .editorconfig file to use tabs for
> makefiles

The real solution is to not have your editor silently convert tabs to spaces,
ever.

~~~
simias
The real solution for dealing with spaces vs tabs is that there's no real
solution that will work all the time. Or maybe we could go back in time and
murder both hitler and the person who decided that tabs should be a separate
character (which I wouldn't be surprised if they were the same person).

If you don't autoconvert you risk ending up with a file mixing tabs and spaces
which can range from a mild annoyance to generating very perplexing errors
that are a pain to debug if the file's format is sensitive to indentation
(such as python scripts or... Makefiles). I suppose you could argue that the
conversion should require user interaction but frankly that sounds like a
hassle since 99% of the time autoconverting is absolutely fine.

Furthermore it makes total sense to use an editor config to force a certain
standard and avoid problems in the future, especially if you work with many
third party libraries with different coding styles. At work I deal with Python
files indented with 4 spaces, the Linux kernel indented with 8-space tabs, and
a bunch of other projects and languages that may or may not be using other
variants. You'll have to pry my editorconfigs from my cold, dead hands.

~~~
bhaak
> Or maybe we could go back in time and murder both hitler and the person who
> decided that tabs should be a separate character (which I wouldn't be
> surprised if they were the same person).

You got the timeline almost right. A tab and a space are not only different
characters, they are not even in the same category of characters.

A space is a whitespace character whereas a tab is a control character that
moves the cursor to a specific position. Its name is an indicator for what it
was used in the beginning, to arrange data in a tabular form.

The real problem is that many editors don't show tabs differently from spaces.
In vim you can do a lot with the "listchars" option.

My settings there are: set list listchars=tab:>-,eol:$,nbsp:~,trail:X

------
clarry
Ok, since we're sharing controversial opinions...

If you're going to rely on a fancy shell anyway, why not just throw make out
of the loop altogether? That is, unless you're working on a big project where
incremental builds really make a difference. (But IME, with a few exceptions,
these are the projects that usually outgrow and abandon make anyway)

You can run cc via shell via make, or you can just run cc via shell. In the
latter case, there's one less program (with quirks) to fight with, and more
flexibility to do stuff that you can't easily bend make to do.

~~~
lloeki
Because the core value proposition of make is the dependency graph and
incremental rebuilds, which then trivially allow for parallelisation with
-j<n>; doing that with shell scripts is basically reimplementing that part
of make, only badly.

Instead of pushing makefiles to become shell scripts for the part you mention,
I usually implement such things as support scripts (in shell, Python,
whatever) - which often come in useful on their own - and call those from make
targets. IOW, using the best tool for each job, with make as the orchestrator.

~~~
clarry
> Because the core value proposition of make is the dependency graph, which
> then trivially allows for parallelisation with -j<n>

My core argument is that this rarely matters, and in the cases it does, it is
often the case that the project's needs surpass what make provides anyway. Or
they're using a very special version of make with very special supporting
infrastructure, like bsd.mk.

I'm compiling some 100k lines of legacy C (for a custom operating system) plus
compat shims that make the thing run on Linux and my very very trivial build
script (plain posix compatible shell) takes less than 5 seconds to run in
debug mode. Optimized release builds are a bit slower, but this hardly matters
in day to day development. It would be trivial to add some parallelism with
xargs.

Fwiw I can compile the same thing with CMake, and it's faster only sometimes.
It often happens that a header changes and everything needs to be built
anyway. And it often happens that the parallelism bites me back when one file
gives an error which scrolls out of sight as a dozen other files are still
being compiled in parallel.

I've wasted far more time fighting subtly broken makefiles (that don't pick up
some dependencies and thus fail to recompile the things that need to be
recompiled), and build systems with fancy declarative languages that don't
present an obvious way to do what you do in one line of shell when you don't
need to worry about a dependency graph.

~~~
MaxBarraclough
Personally I see CMake as the best way to work with portable C/C++ these days,
as it supports Linux/Mac/Windows, including library-detection and everything
else a traditional autotools "./configure" script does.

That said, I'm not a fan of the CMake scripting language, which is quirky and
error-prone. It should be easy to write a new 'Find' module, but it's not.
There should be officially documented design patterns to make the task
trivial. There _still_ aren't.

CMake gets the job done, but it's surprisingly difficult to work with.

</rant>

~~~
irishcoffee
I really want to like CMake, but having spent the past 5 years using qmake
daily, CMake seems very voodoo-ish and archaic.

~~~
pferde
Too bad, Qt project itself is moving to CMake as its main build tool:
[https://bugreports.qt.io/browse/QTBUG-73351](https://bugreports.qt.io/browse/QTBUG-73351)

~~~
irishcoffee
I'm aware.

------
bogwog
This is a good list, but I have to disagree on the tab thing.

> And you will never again pull your hair out because some editor swapped a
> tab for four spaces

How many editors out there in 2019 will automatically replace tabs with spaces
by default? Unless you're editing makefiles in Microsoft Word or something, I
don't see why a _working_ code editor would do something you didn't tell it to
do.

Vim, Emacs, Nano, Sublime Text, Kate/Kdevelop, Visual Studio/Code, Atom,
Brackets, Text Mate, Scintilla/SciTE/Geany/etc, Programmers Notepad,
CodeBlocks, Eclipse, and JetBrains all know not to mess with your tabs.

Rather than switch away from tabs so you can keep using a broken editor, the
correct solution is to switch to an editor that works. And if you configured
your editor to replace tabs with spaces, and it doesn't give you an option to
handle Makefiles differently, then that's a broken editor.

~~~
e12e
I'd add: knows not to mess with your tabs _in makefiles_.

I generally use expandtab in vim - but I still don't break makefiles. I
suspect it's quite possible to _force_ vim to expand tabs in makefiles... but
why would I want to?

------
wyldfire
> MAKEFLAGS += --no-builtin-rules

> This disables the bewildering array of built in rules ...

Oh, wow, I totally disagree. It's good to be opinionated -- that is, unless
you're wrong. ;)

In all seriousness I find it terribly useful to quickly create simple
Makefiles and follow idioms like CFLAGS/CXXFLAGS/LDFLAGS/LDLIBS/etc.

~~~
alxmdev
I've recently been lightly toasted by GNU Make's implicit "old-fashioned"
suffix rules, where it was looking for YACC files as prerequisites for my own
generated C code. The manual suggests a workaround that clears the implicit
suffixes and also lets you add back the ones you want. For me, clearing all of
them with an empty `.SUFFIXES:` rule was exactly what I needed.

[https://www.gnu.org/software/make/manual/html_node/Suffix-Rules.html](https://www.gnu.org/software/make/manual/html_node/Suffix-Rules.html)
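Roughly like this (the `.in`/`.out` suffixes are invented for illustration):

```make
# An empty .SUFFIXES clears the known-suffix list, disabling all
# old-fashioned suffix rules; a second .SUFFIXES re-adds only what you need.
.SUFFIXES:
.SUFFIXES: .in .out

.in.out:
	cp $< $@
```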

~~~
microtherion
If you think implicit yacc rules are "old-fashioned", let me introduce you to
GNU Make's… implicit SCCS rules!

------
rwmj
I wonder if anyone has ever generalized "make" so that it can operate on
general dependencies and tactics rather than always on files? The sort of
thing I mean is that you'd be able to write:

    
    
        url_exists(http://example.com/file): file
            rsync file example.com:/html
    

I've had a few attempts at this (most recently in 2013:
[https://people.redhat.com/~rjones/goaljobs/](https://people.redhat.com/~rjones/goaljobs/))
but never really arrived at anything satisfactory.

~~~
skore
Well, kind of.

I'm using make to build a content graph; part of that is http:// dependencies.
The underlying concept is that you can define protocols (https would be one)
and then teach my make pre-parser how to build them. An example would be:

    
    
        myfile.js: https://code.jquery.com/jquery-3.4.1.js local.js
            uglifyjs $^ -o $@
    

The 'https://' prefix is rewritten to map to a local cache directory (so it's
little more than a glorified variable, but still more intuitive, since
protocols are so familiar from our use of the web), so the above code is
actually something like

    
    
        myfile.js: cache/https/code.jquery.com/jquery-3.4.1.js local.js
            uglifyjs $^ -o $@
    

and that in turn invokes a rule that makes a HEAD request to the URL - if that
is outdated against the local copy, that gets renewed. So, in regular make
code, something like this:

    
    
        cache/https/%: cache/https/%.head
            curl -Ls https://$* > $@
    
        cache/https/%.head: cache/https/%.head.force
            rsync --checksum $< $@
    
        cache/https/%.head.force: FORCE
            curl -Ls -I https://$* > $@
    

I could see a way to turn this around, though. The concept of "protocols"
currently only covers GET in my setup, but I do plan on supporting the idea of
POST eventually. So that would be what you're asking for, I guess.

I must say, though, that in my now three years of using (or abusing) make,
"just make it another file" is surprisingly often the right next step. In the
POST situation, it would simply be a local file with direct effects or side
effects attached that do what you're expecting it to do.

~~~
lloeki
> "just make it another file" is surprisingly often the right next step

This way it also often turns out that the "fetch"/"cache" part gets naturally
separated from the rest, and that's super useful to track down any permanent
or transitory issue.

~~~
skore
Yes! It's basically a built-in stack trace! Permanently written straight to
your disk!

It does get unwieldy after a certain size, but that's what I'm building
tooling for.

------
bilekas
    
    
        $(CXX) -MM $(CPPFLAGS) $< | sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' > $@
    

My makefiles are beautiful... To me

And they're sure as sh!t not wrong, or they wouldn't work! :)

~~~
iforgotpassword
I ragequit as soon as he said not to use tabs. The article is invalid.

~~~
loriverkutya
He is completely right about that, sadly.

------
mvaliente2001
Another trick, create a `help` target:

    
    
        help:                      ## Show this help
            @fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##//'
        
        foo:         ## foo help
            # make help will print the names of the targets with 
            # the help message following ##
    

I don't remember the source, I think I saw it in a stackoverflow question.

------
anon9001
Experimental, but this exists:
[https://github.com/mrtazz/checkmake](https://github.com/mrtazz/checkmake)

------
KerrickStaley
My 2c on GNU Make is that you should not use it for new projects. There are
better build systems, like Bazel, that are faster and produce consistent,
reproducible results.

Make's lack of hermeticity means that it's easy to accidentally craft a
Makefile where there are dependencies between targets that are not explicitly
listed in the Makefile. When this happens, you can't be sure that when you
change an input file and run `make`, all the dependent targets will be
rebuilt. This means you often have to `make clean; make` to make sure that
things get correctly rebuilt.

Bazel also supports things like caching of artifacts so that when you build,
switch branches, build, switch back, and build, it can re-use built artifacts
from build #1 so that build #3 becomes a no-op. With remote caching, this can
happen across computers and users; when user 1 builds at SHA 123, and then
user 2 later checks out SHA 123 and does a build, user 2's build will simply
copy the cached artifacts over the network and do no local work. (There is
some operational overhead to maintaining this shared cache however).

That said, guides like this are really helpful for maintaining and extending
existing make-based build systems!

~~~
nrclark
Agreed that a pure Make approach is less than ideal for new software projects.
It's great as an orchestration tool though. I use Make for all kinds of
wrappers, because it gives built-in dependency ordering and tab-completion.
Make is also a good backend for systems like CMake or Autotools.

I manage a couple of Yocto Linux-based firmware builds for my employer. Each
project has a top-level Makefile that wraps the underlying build tools, so the
build process is as easy as cloning the repo and running "make".

Make is also great for all of the post-build orchestration steps like "grab
these files, rename this one, generate a little report, and make a tarball of
the result". Steps that are each a little snippet of shell-script, and have
some ordering dependency.

A 'build.sh' type shell-script would work too, but wouldn't include all of
Make's built in features and dependency management. And you'd need to write
stuff if you wanted it to parse specific target requests. At which point - why
not just use Make?

So I guess I really like Make for the role of "be the top-level glue around
other build systems". It includes dependency-management and file-generation
rules, works out parallel ordering where possible, and comes with tab-
completion on every desktop Linux.

Plus every developer who uses the command-line understands that a Makefile
means "stuff can be built from inside this folder". They also probably know
that "make" or "make all" is likely to run the build, "make install" will
probably install it, and "make clean" will probably clean the folder. These
conventions have deep staying power. And builds that follow them can be used
by a lot of engineers with no further training or documentation required.

~~~
alwillis
I’d never used Make in my life until a few weeks ago.

It’s surprisingly good for managing the build process for a web project - the
kind of thing various build tools like Gulp, Grunt, etc. were created to
handle.

I had ended up using npm scripts, and it was easy to convert them into a
Makefile.

------
morelisp
"Opinionated" sure is right.

I love make, but a lot of this advice is targeted at being able to write
"better" shell scripts in make. I don't recommend it. If you find yourself
writing a shell for loop, you probably want instead to build a list of
targets. If you find yourself wanting complex shell variable preparation, you
probably instead want target-specific variables. ONESHELL is a good way to
accidentally build some invisible dependencies into a recipe, and make it
difficult to use custom functions or canned recipes.

If you do find yourself really wanting a shell, you'll probably also want
"advanced" features like error traps, and you'll probably want to work with
tools like shellcheck (imo critical for any shell script longer than one
pipeline). Both are thwarted by baking the invocations into your make recipe.
And the recipe still looks great - probably better! - if you extract any logic
into a separate script (which then also opens up further possibilities, like
including that tool itself in a shell pipeline in a make recipe).

------
schnable
I liked Make for its perceived simplicity. Thanks to this article, I think
I'll just use Rake next time.

~~~
megous
For real simplicity I like ninjabuild.

I use it in cases where Makefiles would be a real mess to maintain, and in
general where I want to parallelize a set of file-transforming jobs and have a
nice progress indicator and the ability to cancel and continue.

I just have a nice PHP generator class for ninja files, and I build ninja
files programmatically in a real programming language based on whatever I
like.

With this approach it's possible to fully parallelize builds across however
many different toolchains and SDKs with relative ease, and without needing to
deal with the quirks or limitations of the languages of various build systems
or build tools that try to be semi-universal (like GNU make) but are not, or
have a lot of non-obvious magic integrated.

And I also love how ninja tracks changes in its own build rules and
automatically rebuilds targets that would be affected, which is what I always
disliked about makefiles. If you change a makefile rule you basically have to
do `make clean; make`.

~~~
sounds
I'll second you on ninjabuild.

I just wish
[https://github.com/ninja-build/ninja/pull/1578](https://github.com/ninja-build/ninja/pull/1578)
would get merged, so I can compose ninjabuild repos seamlessly.

------
anticristi
I hope I'm not getting carried away by the example, but I would definitely not
build Docker images using make. A multi-stage Dockerfile can take care of
building, testing, packaging, etc. source code a lot more predictably than any
amount of make, by using known images of the tools involved (e.g., node,
webpack, etc.).

Moreover, Docker's caching implicitly lets you declare dependencies with a lot
fewer headaches than make. On the downside, it is all too easy to write
something that busts Docker's cache early in the build process, rendering
`docker build` super-expensive.

So my opinionated tooling: docker, git (to extract project versions and inject
them into the image), bash (to glue everything together). Everything else
belongs _inside_ the Docker build.
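For instance, a hedged sketch of such a multi-stage build (base images and paths are assumptions, not from the article):

```dockerfile
# Stage 1: build with a pinned toolchain image
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the built artifacts
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```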

~~~
GordonS
I like to use a simple makefile that runs `docker build` and `docker push`
commands, automatically tagging images based on the current solution version
and git head. That means I can simply do `make build|rebuild|release`, and get
images all properly tagged.

But yeah, all the actual work is in the Dockerfile.

~~~
m_mueller
I do just the same, but replacing docker with docker-compose. I like its YAML
files for managing settings around docker and plugging multiple containers
together; you'd reinvent a lot of that by omitting it.

------
bobbyi_settv
The problem with this:

    
    
        out/image-id: $(shell find src -type f)
    

is that if you delete a file from src (without also modifying or adding a
file), make won't consider image-id to need rebuilding
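One hedged workaround (names invented): also depend on a stamp file that is rewritten only when the sorted file list changes, which catches deletions too.

```make
out/src.list: FORCE
	@mkdir -p $(@D)
	@find src -type f | sort > $@.tmp
	@cmp -s $@.tmp $@ || mv $@.tmp $@
	@rm -f $@.tmp

.PHONY: FORCE
FORCE:
```

`out/image-id: out/src.list $(shell find src -type f)` then goes stale when a file is deleted as well, because the list's content (and mtime) changes.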

~~~
microtherion
Another problem is that if one of the input files itself is generated, it
might not get rebuilt after a "make clean", because the ultimate target no
longer knows it's dependent on it.

------
chrismorgan
Concerning creating directories:

    
    
      out/foo:
      > mkdir -p $(@D)
      > …
    

I prefer to use this pattern, which lets you have many fewer `mkdir -p …`
lines in the make output when you use it a bunch of times:

    
    
      out/foo: | out
      > …
    
      out:
      > mkdir "$@"
    

(Here using the > recipe prefix from the article, though you won’t find me
doing that in real life—I like my tabs.)

~~~
jgrahamc
My take on making directories:
[https://www.cmcrossroads.com/article/making-directories-gnu-make](https://www.cmcrossroads.com/article/making-directories-gnu-make)

~~~
chrismorgan
That looks to be a good article, explaining the reasoning behind the different
alternatives. What I wrote matches your Solution 4—that pipe is indeed a
subtle beast! I’ve definitely combined this with fancier patterns and second
expansion as well.

------
knorker
So force GNU make and bash as shell?

No thanks.

------
Annatar
This is chock-full of bad advice like "use GNU make and bash" - so much so
that it's obvious to me it was written by someone who only knows GNU/Linux and
doesn't know UNIX (or much else). Blind leading the blind again. Don't write
GNU-specific constructs unless you have no other choice; use ksh instead of
bash, it's portable and a standard, and has been since long before the
GNU/Linux abomination. What nonsense I get to read...

~~~
nrclark
If somebody is targeting inter-Unix builds, Autotools is probably a better
pick than hand-rolling a Makefile.

But realistically, Linux has captured 75% or more of the entire *nix market. A
lot of developers spend their entire careers in Linux these days, and that
trend isn't likely to swing back towards other Unix flavors anytime soon.

As long as a developer understands what users will be using their Makefile, it
shouldn't matter if they target platform-specific features. The BSD ports tree
uses a bunch of BSD Make features that aren't in GNU Make. And that's totally
fine, because they know their target users.

Like let's say I'm writing a Makefile that wraps tools which don't run on
anything other than Linux. Why would I care about portability across Make
implementations?

If a dev doesn't know their users' environments (other than being POSIXy),
probably they shouldn't be writing raw Make of any flavor. And if they do know
their users, then it isn't a big deal to rely on some platform-specific
features.

------
Tyr42
I 120% recommend MAKEFLAGS += --no-builtin-rules

If I could rewrite history, I would make those builtin rules nicely,
explicitly, importable. No magic when I'm trying to teach make, please.

------
Porthos9K
I'm glad the author specified that this is mainly for GNU make, because I
suspect that if I tried most of these using OpenBSD make I would have a bad
time.

------
einpoklum
To be honest - I don't believe I've had to write a single Makefile since I
started using CMake.

At most I've had to tinker with existing ones - in which case I wasn't
motivated to make them more elegant, as opposed to replacing them with CMake
generation.

------
dima55
FYI, there's a patched Make providing (among other things) an interactive
debugger:

[https://github.com/rocky/remake](https://github.com/rocky/remake)

~~~
kevingadd
Remake is wonderful, but note that many bad makefiles invoke 'make' directly
instead of using '$(MAKE)', so remake won't work properly until those
makefiles are all corrected.

------
donpdonp
.ONESHELL is a huge help, and replacing the recipe tab is very interesting.
Great post!

------
juped
More derangements to undo when you want it to build on a different system.

------
jrockway
I think I have decided that I'm done with Makefiles. They are very tempting,
because they follow naturally from interactive exploration. You see yourself
writing a command a lot, and think "I'll just paste that into a Makefile". Now
you don't have to remember the command anymore.

But the problem is that building software is a lot more than just running a
bunch of commands. The commands represent solutions to problems, but if the
solutions aren't good enough, you just make more problems for yourself.

The biggest problems I've had with Makefile-based builds are getting everyone
using the repository onto the right versions of dependencies, and incrementality. A
project I did at work involved protos, and it was great when I was the only
person working on it. I had a Makefile that generated them for Go and
Typescript (gRPC-Web) and usually an incremental edit to a proto file and a
re-run of the Makefile resulted in an incremental update to the generated
protos. Perfect. Then other people started working on the project, and
sometimes a simple proto change would regenerate everything. Sometimes the
protos would compile, but not actually work. The problem was a hidden
dependency on the proto compiler, on protoc-gen-(go|ts), and on the
language-specific proto API version, all of which control the output of the
proto compilation process. Make has no real way to say "when I say protoc, I mean
protoc from this .tar.gz file with SHA256:abc123def456..." You just kind of
yolo it. Yolo-ing it works fine for one person; even if your dev machine gets
destroyed, you'll probably get it working again in a day or two. As soon as
you have four people working on it, every hidden dependency destroys a day of
productivity. I just don't think it's a good idea.
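
Make can't express this natively, but a partial workaround is to have the
Makefile fetch and checksum-pin the tool itself; a rough sketch, where the
URL, checksum, and paths are all placeholders:

    
    
        PROTOC_URL := https://example.com/protoc.tar.gz  # placeholder
        PROTOC_SHA := 0000000000000000000000000000000000000000000000000000000000000000  # placeholder
    
        # Download the pinned tarball and refuse to proceed if the
        # checksum doesn't match, so everyone builds with the same protoc.
        bin/protoc:
            curl -sSLo protoc.tar.gz $(PROTOC_URL)
            echo "$(PROTOC_SHA)  protoc.tar.gz" | sha256sum -c -
            mkdir -p bin && tar -xzf protoc.tar.gz -C bin protoc
    
        # Generated code now depends on the pinned binary, not $PATH luck.
        %.pb.go: %.proto bin/protoc
            bin/protoc --go_out=. $<
    

It's still yolo-adjacent (the plugins and runtime library are separate
pins), which is part of the point being made here.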

Meanwhile, you can see how well automated dependency management systems work.
Things like npm and go modules pretty much always deliver the right version of
dependencies to you. With go, the compiler even updates the project definition
for you, so you don't even have to manage files. It just works. This is what
we should be aiming for everywhere.

I have also not had much luck with incremental builds in make. Some projects
have a really good set of Makefiles that usually results in an edit resulting
in a changed binary. Some don't! You make a change, try out your binary, and
see that it decided to cache something that isn't cacheable. How do you debug
it? Blow away the cache and wait 20 minutes for a full build. Correctness or
speed, choose any 1. I had this problem all the time when I worked on a
buildroot project, probably because I never understood what the build system
was doing. "Oh yeah, just clean out those dep files." What even are the dep
files? I never understood how to make it work for me, even after asking
questions and getting little pieces of wisdom that seemed a lot like cargo-
culting or religion. Nobody could ever point to "here's the function that
computes the dependency graph" and "here's the function that schedules
commands to use all your CPUs". The reason is... because it lives in many
different modules that don't know about each other. (Some in make itself, some
in makefiles, some in the jobs make runs... it's a mess.)

Meanwhile, I've also worked on projects that use a full build system that
tracks every dependency required to build every input. You start it up, and it
uses 300M of RAM to build a full graph. When it's done it maxes out all your
CPUs until you have a binary. You change one file, and 100% of the time, it
just builds what depended on that file. You run it in your CI environment and
it builds and the tests pass, the first time.

I am really tired of not having that. I started using Bazel for all my
personal projects that involve protocol buffers or have files in more than one
language. The setup is intense, and watching your CPU stress the neighborhood
power grid as it builds the proto compiler from scratch is surprising, but
once it starts working, it keeps working. There are no magic incantations. The
SHA256 of everything you depend on is versioned in the repository. It works
with traditional go tools like goimports and gopls. Someone can join your
project and contribute code by only installing one piece of software and
cloning your repository. It's the way of the future. Makefiles got us far, but
I'm done. I am tired of debugging builds. I am tired of helping people install
software. "bazel build ..." and get your work done.

------
floor_
If you're doing anything other than a unity build (single compilation unit)
you're doing it wrong.

