
The Language Agnostic, All-Purpose, Incredible, Makefile - tannhaeuser
https://blog.mindlessness.life/makefile/2019/11/17/the-language-agnostic-all-purpose-incredible-makefile.html
======
teddyh
Make is great, and I wish more people would use it in place of whatever
monstrosity is en vogue this week.

However, there is _one_ thing which Make absolutely cannot handle, and that is
_file names with spaces_. If you have any risk of encountering these without
any possibility of renaming them, you’ll sadly have to give up on using Make;
it just won’t work.
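A minimal sketch of where it goes wrong (the filename is hypothetical): make splits prerequisite lists on whitespace, so one file on disk becomes two prerequisites.

```make
# "my file.txt" is ONE file on disk, but make parses the prerequisite
# list as TWO files, "my" and "file.txt":
out.txt: my file.txt
	cat 'my file.txt' > $@
```

GNU Make does accept `\ ` escapes in target and prerequisite names, but the escaping tends to fall apart in automatic variables like `$^` and functions like `$(wildcard)`, so in practice the parent's advice stands.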

~~~
enriquto
I agree, and the solution to this problem is to forbid filenames with spaces.
The convenience of make and similar tools is much more important than spaces
in filenames. File names with spaces should not be allowed in modern
filesystems. When the user types a filename with spaces, the GUI should encode
the space as a non-breaking space character, which does not cause havoc in
scripts.

~~~
colordrops
I can't tell if you are being sarcastic.

~~~
enriquto
I am dead serious about that. Allowing the separator character in filenames is
a nightmarish error.

~~~
pjmlp
Only for tools that are stupid for not being able to handle any kind of
filename.

~~~
enriquto
Yes, like filenames with multiple dots, for example.

I agree that filesystems must allow almost arbitrary filenames. But some
characters should definitely be forbidden, like '/', '\n' and ' '.

------
goblin89
Unlike a target, .PHONY can be populated incrementally. For example, this:

    
    
        .PHONY: serve live-reload
    
        serve: init deps compile db-setup db-migrate
            rails server
    
        live-reload: yarn
            ./bin/webpack-dev-server --host 127.0.0.1
    

can become this, making things a tiny bit easier to maintain when there are
many targets:

    
    
        .PHONY: serve
        serve: init deps compile db-setup db-migrate
            rails server
    
        .PHONY: live-reload
        live-reload: yarn
            ./bin/webpack-dev-server --host 127.0.0.1 
    

Recently I joined an environment that uses Makefiles as the facade in front of
pretty much everything, from git submodule update shortcuts to building code
and running local development servers.

Surprising myself, I’ve quickly grown to appreciate _working_ Makefiles. That
said, since the syntax somewhat encourages terseness, when I need to fix a
non-trivial target it tends to look like black magic—nothing reading a few man
pages can’t fix, but it takes extra time.

It’s not my first choice overall; I prefer to leave out the extra layer and
document direct command-line calls in a README. If a commonly used tool
changes its invocation in a new version, with a README it’s a documentation
issue, but with a Makefile it’s broken software.

~~~
unhammer
> If a commonly used tool changes its call in a new version, with README it’s
> a documentation issue, but with Makefile it’s broken software.

i.e., the Makefile will be kept up-to-date

~~~
goblin89
Perhaps I did not express that well. This is how things may go:

(1) I need to build this, but the Makefile is broken. (2) I invoke the build
directly by asking a colleague for help. (3) Can I be bothered to update the
docs? Maybe. Can I be bothered to fix Makefile targets? Much less likely.

------
thiht
What I love about Makefiles is that they just use the CLI tools. Full build
tools like Gradle or Bazel require installing specific plugins and learning a
new, inferior syntax, making them a nightmare to use if you need a feature of
the underlying tool that isn't implemented. The biggest pain point is that
they don't even bother to print the actual command being executed!

I recently used make in a side project[1] to implement a "full" continuous
delivery pipeline and it really was refreshing, despite the syntactic quirks.

[1]:
[https://github.com/Thiht/smocker/blob/master/Makefile](https://github.com/Thiht/smocker/blob/master/Makefile)

~~~
boris
make works well when you are targeting a single platform with a decent shell
and the project is not too complex (e.g., no auto-generated source code that
requires its own automatic dependency tracking). Once that no longer holds,
make becomes a real liability.

------
Jeff_Brown
I rely heavily on makefiles, but the gotchas lurking in make syntax are many
and severe:

[http://www.conifersystems.com/whitepapers/gnu-make/](http://www.conifersystems.com/whitepapers/gnu-make/)

~~~
zwegner
Note that the author of that paper (a friend of mine) wrote another build
system, the dead-simple-but-awesome make.py. I have a mirror/fork of it[0],
since it's been unmaintained for a while (but it mostly doesn't need any
maintenance).

The entire build system is a single Python script that's less than 500 lines
of code. Rather than trying to fit complicated rules into Make's arcane
syntax, rules are specified with a Python script, a rules.py file (see [1]).
But the script should be thought of more as a declarative specification: the
rules.py file is executed once at startup to create the dependency graph of
build outputs, and the commands to build them.

Yet, despite the small size, it's generally easier to specify the right
dependencies, do code generation steps, and get full CPU utilization across
many cores.

At some point I'd like to write more about make.py and try to get it used a
bit more by the public...

[0][https://github.com/zwegner/make.py](https://github.com/zwegner/make.py)
[1][https://github.com/zwegner/make.py/blob/master/example/rules...](https://github.com/zwegner/make.py/blob/master/example/rules.py)

~~~
wwright
If you haven’t seen Bazel, you should take a look.

It’s definitely not as minimalist, but it has a very, very similar model for
specifying the build. In my experience, it’s pretty easy to get going, and it
makes it pretty hard to screw up any of the important features of the build.

~~~
zwegner
Yeah, I know about Bazel, but only at a high level--I haven't used it.

I generally think the hermetic build concept is a very good one, but IMO Bazel
goes about it the wrong way, and is overengineered. Rather than needing
custom-built infrastructure for every type of language supported, I'd prefer
build systems to use lower level OS facilities for discovering dependencies
and controlling nondeterministic behavior. That is, build rules would use
something like the rules.py files of make.py, specifying any arbitrary
executables to run, but without needing to specify the input dependencies of
each rule. Each command run would get instrumented with strace (or the
equivalent for non-Linux OSes), and filesystem accesses detected. If a file is
opened by a build step, that path would be checked for other build rules. If
one exists, and it's out of date, the first build step gets paused while the
input file gets built, then resumed. All of this happens recursively for the
whole build graph, starting from the first requested build output. Other
potentially nondeterministic system calls (timestamps,
multi-threading/-processing, network access, etc.) would be
restricted/controlled in various ways yet to be determined.
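A crude, after-the-fact sketch of the discovery half of this idea (no pausing and resuming of build steps, which really does need ptrace): run a build step under strace, then harvest the paths it successfully opened from the log. The strace invocation and log format are standard; the function name and everything around it are illustrative.

```shell
# Run one build step with syscall tracing, e.g.:
#   strace -f -e trace=openat -o build.trace cc -c foo.c
# then recover its input dependencies from the log:
deps_from_trace() {
    # keep only successful openat() lines, then pull out the quoted path
    grep 'openat(' "$1" \
      | grep -v ' = -1 ' \
      | sed -n 's/.*openat([^,]*, "\([^"]*\)".*/\1/p' \
      | sort -u
}
```

Each path this prints could then be checked against the build graph for a rule that produces it.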

That said, I haven't actually built anything like that (or heard of anyone
else who has). Maybe there are some complicated issues that this couldn't deal
with but Bazel could. For example, there might be sources of nondeterminism
that don't involve syscalls, like the vDSO; I don't know for sure, though. Portability
between OSes would definitely be an issue. But overall I feel that, barring
any major unforeseen issues, something like this could be built in a fairly
minimalist fashion; maybe a few thousand lines of Python, possibly a small C
module.

~~~
robochat42
There are build systems that use strace to find dependencies, for instance tup
[1][2] and Fabricate [3]. Also see this post on the Waf blog, which discusses
some issues with this approach [4].

[1] [http://gittup.org/tup/](http://gittup.org/tup/) [2]
[https://news.ycombinator.com/item?id=12622671](https://news.ycombinator.com/item?id=12622671)
[3]
[https://github.com/brushtechnology/fabricate/wiki/HowItWorks](https://github.com/brushtechnology/fabricate/wiki/HowItWorks)
[4] [https://waf.io/blog/2015/02/using-strace-to-obtain-build.htm...](https://waf.io/blog/2015/02/using-strace-to-obtain-build.html)

~~~
zwegner
Ah, thanks for the references. Now that you mention them, I realize I
definitely knew about tup and fabricate before (and possibly waf?), but had
forgotten about them. I haven't really thought much about trace-based build
systems in years, until this subthread.

And looking through that waf blog post, I realize that I meant ptrace instead
of strace--I want full fine-grained syscall interception, not just a text
report afterwards. That gets around a lot of the overhead/parsing problems
mentioned, and is required for the "pause build command so its input file can
be built" case.

------
ur-whale
Make is great as a dependency resolution engine.

For everything else, it is absolutely horrible.

What I typically do is use make only for what it is good at: as a dependency
resolution back-end.

All the build logic for my projects is written in Python, in an executable
file stored in the project root directory and called "make" (I have "." in my
PATH).

The Python script, when it runs, generates on the fly a clean, lean, readable,
unrolled Makefile and feeds it directly to /usr/bin/make via a pipe.

Works like a charm:

Python (a sane and expressive programming language) to express the high-level
logic needed to build the project.

Make as a solid back-end to solve the "what needs to be rebuilt" problem
(especially the parallel version with -jXX)
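The plumbing of that pattern, sketched in shell for brevity (the real generator described above is Python, and the source files here are hypothetical): the script prints a plain, unrolled Makefile on stdout, and GNU make reads it from stdin via `-f -`.

```shell
# stand-in for the high-level build logic: emit one explicit,
# unrolled rule per object file, plus an aggregate target
gen_makefile() {
    for src in hello.c world.c; do
        obj=${src%.c}.o
        printf '%s: %s\n\tcc -c -o %s %s\n' "$obj" "$src" "$obj" "$src"
    done
    printf 'all: hello.o world.o\n'
}

# feed the generated rules straight to make, in parallel:
#   gen_makefile | make -f - -j8 all
```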

~~~
m_mueller
There's at least one more use case IMO: defining common development lifecycle
steps in a shared Makefile across services. At my current workplace, instead
of having a bunch of bash scripts in every service, I just give every service
repo a Makefile that is usually a one-liner including common.mk. This just
wraps docker-compose and gives us commands like make, make run, make stop,
make lint, make test, make help, etc.

This way we can e.g. have repos using completely different technology stacks,
but the interface to them is the same, whether it's our database, a node.js
webservice, a python data analytics tool, etc. And the definitions of these
lifecycle commands in common.mk are totally trivial; they're just .PHONY
one-liner rules.
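A sketch of what such a common.mk might contain; the `app` service name and the lint/test entry points are made up, only the shape matters:

```make
# common.mk -- shared across service repos; each repo's Makefile is
# little more than "include common.mk"
.PHONY: run stop lint test

run:
	docker-compose up -d

stop:
	docker-compose down

lint:
	docker-compose run --rm app lint

test:
	docker-compose run --rm app test
```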

~~~
williamdclt
That works, but isn't particularly a Make feature. That'd be just as easy to
do in bash, python, ruby, JS...

~~~
TOGoS
Yes, but a Makefile is a sensible place for documentation on how to build and
run everything (that happens to be executable). If there's a bunch of scripts
all over, I'm not going to know which one I'm supposed to run. And it's
probably not going to resolve dependencies for me.
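One common convention that leans into "executable documentation": annotate targets with `##` comments and add a `help` target that greps them out, so `make help` lists everything runnable. The target names and scripts below are illustrative.

```make
.PHONY: help build deploy

help:          ## list available targets
	@grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST) \
	  | awk 'BEGIN {FS = ":.*## "} {printf "%-12s %s\n", $$1, $$2}'

build:         ## compile everything
	./scripts/build.sh

deploy: build  ## ship it
	./scripts/deploy.sh
```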

------
peterwwillis
Makefiles are great entry points for ci/cd pipelines. It's easy to pass
arbitrary environment variables at runtime, targets to build, define basic
dependencies, and have clear steps to execute that can include some minimal
inline shell. And since it's pretty dependency-less, I can run the same make
commands locally to test the pipeline as I'd use in a remote CI system.

I often use them as a wrapper for Terraform weirdness, where you may want to
call an ADFS-enabled AWS login tool or not, depending on if `aws sts get-
caller-identity` returns. Or assume a role before running all targets. Or
extract values from a terraform.tfvars.json, to pass to the above two steps.
Or bootstrap a remote backend if it doesn't exist. Or remove stale module
symlinks. Or properly run init, get, and validate before running a plan or
apply. Or document weird _-target_ usage. The end result of just running _make
prep_ and _make apply_ with no further knowledge required is exactly the
experience I wanted out of Terraform initially.
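A trimmed-down sketch of that kind of wrapper; the login script and target names are illustrative, while the aws and terraform subcommands and flags are standard:

```make
.PHONY: prep plan apply

# log in only if the current credentials are dead, then do the
# init/get/validate dance before any plan or apply
prep:
	aws sts get-caller-identity >/dev/null 2>&1 || ./scripts/adfs-login.sh
	terraform init -input=false
	terraform get
	terraform validate

plan: prep
	terraform plan -out=tf.plan

apply: prep
	terraform apply tf.plan
```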

------
jstrong
I like make, but I had a lot more luck with just/justfile, which is similar to
make conceptually but with less idiosyncratic syntax/execution.

~~~
rhn_mk1
It doesn't look similar to me at all. It's a tool for running commands, while
Make is a build system. Make won't always execute all dependencies.

------
solidsnack9000
As a facade, Make is great. Anything more, and it's not.

You can express so much in Make, and so quickly; but the expression is
horrible and basically confusing.

~~~
blowski
This describes my Makefiles - they all run some other Bash or Python script.

~~~
solidsnack9000
Too bad shell does not have a “tasklet mode” where you can readily define some
commands to call.

------
bxparks
I sometimes use GNU Make to fire off custom code generators, before the files
are handed off to other parts of the toolchain which can have their own
complicated dependency management. This works quite well. The one annoying
problem that I often encounter is that Make does not handle rules with
multiple output files (i.e. the code generator generates multiple files, e.g.
'file1.h', 'file1.cpp', 'file2.h', 'file2.cpp', 'test.cpp'). I usually end up
inserting a bunch of .PHONY targets, which causes unnecessary evaluation of
the dependency graph, but at least it works instead of breaking in seemingly
random ways.

My other use of Makefiles is to capture small (~5-line) bash, python, or
similar scripts for doing certain things within a directory. I find that to be
more efficient than documenting that sort of info in a README.md file.

~~~
gumby
Not sure what you mean by "Make does not handle multiple targets".

You can definitely do `make foo bar` and it will run the recipes for both foo
and bar. You can also write a recipe with multiple prerequisites (which could
be the result of a variable expansion).

Curious about the limitation; could be there's a way around it or that I never
ran into it.

~~~
aidenn0
The poster is talking about a single command that generates multiple outputs.
e.g. a codegen tool that generates "foo.c" and "foo.h"

~~~
Ambroisie
Isn't the solution to that problem simply to use stamp files?

You declare foo-stamp as a dependency of both foo.c and foo.h. foo-stamp
itself depends on the file used to generate those foo.* files. The recipe for
foo-stamp invokes the code generator, which updates the files; then you
`touch foo-stamp` at the end of the recipe.
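Concretely, with a hypothetical `codegen` tool that reads foo.def and writes both foo.c and foo.h:

```make
# both generated files "depend" on the stamp; only the stamp's recipe
# actually runs the generator (the ";" gives foo.c/foo.h an explicit
# empty recipe so make won't go hunting for implicit rules)
foo.c foo.h: foo-stamp ;

foo-stamp: foo.def
	./codegen foo.def
	touch foo-stamp
```

GNU Make 4.3 later added grouped targets, written `foo.c foo.h &: foo.def`, which express this directly; the stamp trick works in older versions too.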

~~~
pstuart
> Isn't the solution to that problem simply to use stamp files?

TIL a new make trick. Thanks!

------
gumby
Make is very programmable; here's something from our code base:

    
    
      # Yes I am aware that this looks like TECO and Prolog had a baby.
      
      $(foreach prog,${3P-nonboost-packages},$(patsubst %,3P-build-%/${prog},${MAKE_CONFIGURATIONS})): 3P-src/$${@F} $(patsubst %,toolchain-%/_env,${CONFIGURATIONS})
       @echo Building ${@F} for $(subst 3P-build-,,${@D})
       @mkdir -p $@
       @if [ -f "$</CMakeLists.txt" ] ; then \
         ${env-$(subst 3P-build-,,${@D})} cd $@ ; \
           cmake -DCMAKE_MODULE_PATH='../../3P-$(subst 3P-build-,,${@D})/lib;../../toolchain-$(subst 3P-build-,,${@D})/lib' ${${@F}-cmake} -DCMAKE_INSTALL_PREFIX=../../3P-$(subst 3P-build-,,${@D}) ../../$< && cmake --build . ; fi
    

This is after extensive simplification.

~~~
saalweachter
Make was what really drove home the concept of "developer time" for me. After
spending a fair amount of time learning more Makefile tricks than I knew
before, and fixing the Makefile for our C/C++ code to handle header files
correctly so that I would stop having to run make clean constantly, I realized
that the Makefile had cost the company ~$1000 for me to write. I'd like to
think it saved the company more than that in future dev time, but it was still
startling to realize that by deciding to rewrite the Makefile I had
effectively decided that the company should spend $1000 on a build config.
------
kazinator
Using a plethora of disconnected, non-build targets in a Makefile to provide a
"make <command>" language sometimes seems like such an anti-pattern. Those
commands just want to be simple scripts, right?

Why does that pattern persist? I believe it is for these psycho-technical
reasons.

1\. The current directory "." is usually not in PATH for security reasons. But
make ignores that; it reads a Makefile from the current directory.

The psychological hypothesis here is that people somehow like typing

    
    
       make bundle
       make yarn
       make db-reset
    

compared to the no-Makefile alternative scripts:

    
    
       ./bundle
       ./yarn
       ./db-reset
    

Something always feels off about running a program as ./name.

2\. If there are any shared make variables between the non-build utility steps
like "make bundle" and actual build steps, then it's easier for those utility
steps to be in the Makefile so they can interpolate the make variables. The
scripted alternative would be to have shell variables in some "vars.sh" file
that is sourced by all the commands. But then somehow the Makefile would have
to pick those up also in some clean way, probably requiring a ./make wrapper:

    
    
       #!/bin/sh
       . ./vars.sh
       # propagate the needed subset of vars to make
       make FOO="$FOO" BAR="$BAR" "$@"
    

So I think these are some of the main sources of the "pressure" for various
project-related automated tasks to go into the Makefile.

Another source of the pressure is that the "<command> <subcommand>" pattern is
present elsewhere, like in version control tools "quilt push", "git blame",
...

It has the technical advantage of namespacing. If you have a make target
called "ls", then "make ls" doesn't clash in any way with /bin/ls.

------
MisterTea
There's also mk from Bell Labs, which has been ported to *nix and is available
in p9p (Plan 9 Port).
[http://doc.cat-v.org/bell_labs/mk/](http://doc.cat-v.org/bell_labs/mk/)

~~~
henesy
I generally prefer mk to make, I just find the way multiple mkfiles compose
and some of the syntax changes pleasant.

A small change, but being able to just do $foo instead of $(foo) is so nice.

------
kragen
It's surprising not to see more examples of implicit rules and dependency
resolution in this document. Here's a simple example extracted from
[http://canonical.org/~kragen/naturaleza/Makefile](http://canonical.org/~kragen/naturaleza/Makefile):

    
    
        frames=1-80x160.pgm 2-80x160.pgm 3-80x160.pgm 4-80x160.pgm 5-80x160.pgm 6-80x160.pgm 7-80x160.pgm 8-80x160.pgm 9-80x160.pgm 
    
        intercalated.pgm: $(frames)
                ./intercalate.py $(frames) > $@
    
        %.pgm: %.jpg
                convert $< $@
    
        %-80x160.jpg: %.jpg
                convert -geometry 80x160 -colorspace Gray $< $@
    

This uses ImageMagick commands to massage the various image files into the
desired form without me having to manually invoke the commands image by image.
Admittedly, on looking at it, I don't think I got a great deal of dependency-
tracking mileage out of make in this case, because the source images weren't
actually changing—only the build process was changing, and make doesn't track
that (although redo, for example, does.) But in cases where you're dynamically
adding new input files, make is super helpful for generating thumbnails or
whatever from them. As long as the filenames don't have spaces.

My most immediate work task for the morning is helping a colleague figure out
why SCons is failing to build the JNI binding for our project, although the
old makefile builds it fine. Sigh.

------
gravypod
Make has very serious problems with its design, in my opinion. Its builds are
not hermetic. There's no way to distribute/include another person's Makefile.
The language it uses is extremely complicated and focuses on being compact
instead of easy to understand.

I wish we all dropped Makefiles and decided on a single build system in the
Bazel lineage to lean on. The world would be a better place if everything came
with BUILD files.

------
wojciii
GNU make build systems can be horrible to debug when they get complicated.
CMake or some of the other build systems can generate Makefiles, in addition
to running checks prior to building the project that are useful for finding
all dependencies. I find it easier to work with CMake than with pure
Makefiles.

~~~
renox
That's funny: I (and several other co-workers) hate cmake but make is OK. Not
great, but much, much better than cmake.

~~~
wirrbel
IMHO make is not a great target for code generation. I.e., cmake should
probably compile to something like ninja (the way meson does; actually I
haven't checked if cmake could potentially compile to ninja).

Cmake is a weird tool, it's kind of ok, one can see why it was built, but it
is astonishing how little benefit it provides over autoconf+automake as a
makefile generator.

The thing about pure Makefiles is: they don't scale well. They usually start
simple, and as the project grows they accumulate parameters, features, and
tasks until they become an unbearable maintenance burden. I remember working on a
project where I deleted a Makefile, and rewrote it from scratch, just because
it contained the tinkering of about 10 devs, most of which were not actively
developing the project anymore. In the end it was an order of magnitude
smaller and faster. But I am pretty sure it has either grown massively again over time,
or it has fallen out of use.

------
ashton314
I use Makefiles when I'm learning the ropes of a new system build tool. E.g.
"I want to do <foo>, so I run `make <foo>`", and the make target named <foo>
has all the commands to build what I want. I did this when I was learning how
Docker worked. I put the incantation to build a new image into a Makefile, as
well as how to run the container and exec into it. Not the _best_ system, but
works for me as a kind of living notebook.

I run Makefiles in other places too. <3 Couldn't live without it.

I found this video helpful in learning how to (ab?)use Makefiles:
[https://www.youtube.com/watch?v=fkEz_oVh0B4](https://www.youtube.com/watch?v=fkEz_oVh0B4)

------
kstenerud
Make is like the Lisp of the build world. It's powerful and you can build
anything with it, but it won't be compatible with anyone else's stuff the way
it would be in a more opinionated system, so you can't leverage other peoples'
work much.

I used make for decades, then switched to CMake, got burned too many times,
and now I've moved on to Meson. There really isn't a good build system for
C/C++, which is a shame :/

------
crucialfelix
I've been using [https://taskfile.dev/](https://taskfile.dev/) lately.

It's sane and simple. Written in go.

------
contingencies
100 language-specific package managers / shell scripts / make / scons / custom
systems. What to use? AS LITTLE AS POSSIBLE.

------
gigatexal
I used to be daunted by the complexity of Make until I saw it used, and used
well, and now I love it!

------
arein2
You know what's simpler and incrediblerer? Shell functions.
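For the record, the shell-functions version of a task runner is about this much (task names hypothetical); invoke it as `./tasks.sh serve`, `./tasks.sh lint`, etc.:

```shell
#!/bin/sh
# tasks.sh -- each function is a task; the last line dispatches to
# whichever function was named on the command line
serve()    { echo "starting dev server"; }
lint()     { echo "running linter"; }
db_reset() { echo "resetting db"; }

"$@"
```

No dependency tracking, of course, which is the argument made elsewhere in the thread for using make instead.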

------
xwdv
I use make to build out my static site and it works fine.

------
beagle3
Obligatory mention: do / redo [0]

Super simple, uses your favorite language for specifying the build (usually
bash, but ... anything goes), much more robust than make, parallelizes builds,
and can be included in your project as an 800-line bash script so that your
users don't have to install it.

It's not blaze/bazel (et al.): no hermetic builds, for example. But it doesn't
put the '.o' files out there unless the compilation is successful, and it does
verify file contents rather than just timestamps by default; most build
systems fail on the last two.

[0]
[https://redo.readthedocs.io/en/latest/](https://redo.readthedocs.io/en/latest/)
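For a feel of it, here is a typical redo build script, `default.o.do`, which redo runs for any `*.o` target. Per the redo docs, `$1` is the target, `$2` the target minus its extension, and `$3` a temporary file that only replaces the real output if the script succeeds (which is exactly how the "no .o unless compilation succeeded" property above is achieved):

```sh
# default.o.do -- build any foo.o from foo.c
redo-ifchange "$2.c"
gcc -c -o "$3" "$2.c"
```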

------
0xDEEPFAC
Yeah, but according to Makefile syntax, whether you use tabs or spaces
matters...

Everything about it just seems half-baked
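The tab requirement did become optional eventually: GNU Make 3.82 added the `.RECIPEPREFIX` variable, which lets you pick a different character to introduce recipe lines:

```make
# use ">" instead of a leading tab for recipes (GNU Make 3.82+)
.RECIPEPREFIX = >

hello:
> echo no tabs were harmed
```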

