
Make: Theory and Practice - ploxiln
http://www.ploxiln.net/make.html
======
jgrahamc
Not a bad introduction. I spent years working on GNU make stuff (including a
complete emulation) at Electric Cloud and wrote an "Ask Mr Make" column.
Everything I wrote about GNU make can be found here:
[http://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-articles.html](http://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-articles.html)

[blatant advertising] And also I have a new book on GNU make coming out this
month: [http://www.nostarch.com/gnumake](http://www.nostarch.com/gnumake) if
you really want to get into make deeply.

~~~
teddyh
If anyone wants to get into GNU Make deeply, one would think that the obvious
thing to do would be to _read the official manual_ for GNU Make.

Anyone who writes their own, _not_ freely available, manual to a free software
program, is dividing the world into two halves: Those readers who _can_ afford
this new book, and will pay the author (not the writers of the software, mind
you) for a nice manual, and those readers who _can’t_ afford to pay for this
new manual, and will have to make do with the inferior¹ official manual. Those
readers who can afford the new manual will not be incentivised to improve the
official manual, since they have the new, better, one. Therefore, this act (of
creating new manuals which are not freely available) is reducing the number of
readers who can be expected to help improve the official manual to those
readers who cannot afford the new manual. These effects, i.e. reduced funding
_and_ reduced help to the original software authors, are not, to me, worth it
to have a new, even if better, manual.

¹ Presumably, by their own standards, since they wrote a replacement manual.

~~~
belorn
On the subject of unofficial manuals and guides, people have been sued under
copyright and trademark law for writing and selling them
([http://news.cnet.com/Warcraft-maker-sued-for-blocking-sales-of-unofficial-guide/2100-1043_3-6053716.html](http://news.cnet.com/Warcraft-maker-sued-for-blocking-sales-of-unofficial-guide/2100-1043_3-6053716.html)).

Since the title is "the GNU make book", a consumer could easily mistake the
book as being made by the author of _GNU make_ and/or published by the _GNU
project_.

Edit: Misread the article. It was Blizzard that got sued for filing the DMCA
takedown, not the other way around. A more relevant example is the Harry Potter
Lexicon case, where a guide to the Harry Potter works was deemed not protected
by fair use.
([http://writers.stackexchange.com/questions/8243/can-anybody-write-an-unauthorized-guide](http://writers.stackexchange.com/questions/8243/can-anybody-write-an-unauthorized-guide))

~~~
DrJosiah
Correction: Blizzard+Vivendi filed DMCA takedown notices to prevent sales. The
writer of the guides sued Blizzard+Vivendi for filing invalid DMCA takedown
notices and _won_:
[http://lawvibe.com/world-of-warcraft-ebook-seller-wins-case-against-blizzard/](http://lawvibe.com/world-of-warcraft-ebook-seller-wins-case-against-blizzard/)

Title-wise on jgrahamc's book, that's a different discussion, and completely
unrelated to the link you provided.

------
rspeer
I'm currently using Make as part of a data-building pipeline. It's nice that
it's a build system that doesn't assume my goal is to build a binary or a JS
file, and that it's remarkably easy to parallelize.

One deficiency I've come across, which seems to be well-known, is the M:N
problem -- where one step takes M input files and has N output files. Make
rules seem to expect to have only 1 output, and the workarounds like
.SECONDARY prevent some features of Make from working correctly.

I've also seen this limitation in many of the fancy new build systems that get
posted here on HN.

Is there a build system, a modification to Make, or anything out there that
_does_ keep track of builds with multiple outputs? Not an I/O-guzzling
MapReduce framework, please.
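
To make the M:N workarounds concrete, here is a sketch of the two tricks most often seen; neither comes from the comment above, and `split-data` is a hypothetical command that writes both outputs from one input:

```make
# 1. A *pattern* rule with several targets runs its recipe once per
#    stem, so make understands that one bison invocation produces
#    both the .c and the .h:
%.c %.h: %.y
	bison -d $< -o $*.c

# 2. For explicit (non-pattern) rules, a sentinel file stands in for
#    the whole output group ("split-data" is hypothetical):
outputs.stamp: input.dat
	split-data input.dat out1.dat out2.dat
	touch outputs.stamp

out1.dat out2.dat: outputs.stamp ;
```

(Later GNU Make releases, 4.3 and up, add grouped targets with `&:`, which address this directly.)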

~~~
evmar
FWIW, my tool Ninja (was just on HN again this week:
[https://news.ycombinator.com/item?id=9282539](https://news.ycombinator.com/item?id=9282539)
) was designed to be more or less the same idea as make with some semantic
fixes, and that includes supporting multiple outputs.

You can see the brief list of improvements here:

[https://martine.github.io/ninja/manual.html#_comparison_to_make](https://martine.github.io/ninja/manual.html#_comparison_to_make)

(Within Ninja this manifests concretely as needing to represent the build
graph as a bipartite graph between files and commands. Though I don't know
make's internal representation, I imagine the straightforward implementation
is a graph between files only, and I can see why it'd be difficult to make
multiple outputs work in that case.)

~~~
JoshTriplett
Is there some fundamental reason why the approach used in Ninja couldn't be
used to accelerate a substantial subset of Makefiles? If you dropped Make's
built-in rules, more complex functions, VPATH, conditionals, and similar items
the Ninja manual describes as slowing down Make (all of which are non-portable
features of GNU Make), while mostly keeping compatibility with the portable
subset of Make, is there some fundamental reason _that_ couldn't be
accelerated to the same degree as Ninja?

~~~
evmar
No, they should be equivalent. The Ninja manual goes so far as to encourage
you to continue using Make if Make is fast enough for you.

However, another way of writing your question is "if you removed the features
of make that ninja doesn't have and then optimized the remainder for
performance, wouldn't they match?" and that is maybe tautologically true --
that description is roughly what Ninja is, after all.

Because we cared more about performance than other things, we were maybe a
little crazier about shaving off milliseconds than other systems, so it might
be hard to make make fast enough. But (as the Ninja manual suggests) this may
only matter for very large (Chrome-sized) projects. You can read a chapter
about some of the performance work done on Ninja here:
[http://www.aosabook.org/en/posa/ninja.html](http://www.aosabook.org/en/posa/ninja.html)

~~~
JoshTriplett
> However, another way of writing your question is "if you removed the
> features of make that ninja doesn't have and then optimized the remainder
> for performance, wouldn't they match?" and that is maybe tautologically true
> -- that description is roughly what Ninja is, after all.

Not exactly. I'm asking if an implementation of the portable subset of Make
people actually use (pointedly excluding GNU Make extensions) could be made
nearly as fast as Ninja. That would have the advantage of running existing
makefiles, as long as those makefiles weren't written specifically for GNU
Make.

~~~
evmar
I think we are maybe talking in circles. :)

When you write "could be made nearly as fast as Ninja", I thought you meant
you would change the code of Make or write a new tool. If you don't write a
new tool and just use a subset of Make's functionality for your own Makefiles
I'd expect those to be faster than your standard Makefiles, sure.

But "existing makefiles" typically use a mixture of GNU make-isms and other
things that are not GNU-specific but still slow (e.g. from a skim of the BSD
make manual I see lazy variable expansions and control structures). So if
you're talking about implementing a subset of Make you're talking about
something that's unlikely to be compatible with Makefiles seen in the wild.
And then if you're not compatible with Makefiles seen in the wild you've
effectively written an incompatible faster subset of Make, which brings us
back to what Ninja is. (In case it's not clear, in Chrome's case the same code
was used to generate Makefiles and Ninja files with some relatively minor output
differences -- the tools are that close.)

Perhaps there's some "common" subset of Make that covers some high percentage
of build files seen in the wild that could be made faster. That could be
valuable to organizations who want faster builds without changing anything. I
vaguely recall seeing some commercial software that did this even -- as I
recall, their value add was that they had tooling that would figure out how to
run your Makefiles in parallel; though Make supports parallel execution,
apparently this business found enough customers by just targeting those with
underspecified Makefiles that weren't parallel safe and then fixing them!

Anyway, all of this is kinda moot because for 99% of projects Make is plenty
fast. I even use Make myself for all my personal projects! In Chrome's case
(what we wrote Ninja for) there's so much code that the build files themselves
are over ten megabytes of text, so to parse that quickly you're at the level
of worrying about avoiding memory allocations in the lexer, which is beyond
what most people would care about.

~~~
JoshTriplett
There are plenty of shell scripts out there that are written for /bin/sh
without using bash features; similarly, many makefiles just use the portable
subset of make, such as POSIX make. I agree that you'd likely need to write a
new tool rather than attempting to optimize GNU make or BSD make. But you
wouldn't need to introduce a new syntax; you could just use the POSIX subset
of make.

I was partly asking because a fast make-compatible build tool seems easier to
switch to, and partly because I wondered how much of Ninja's performance
depends on dropping the slow features of make versus adding new capabilities
or new syntax.

------
Animats
make is a good idea gone bad. It's supposed to be dependency-based, but it has
no mechanism for finding dependencies. It has to be told about them manually.
So kludges have been written to find dependencies, but they have to overwrite
makefiles. Its analysis of what has changed is purely based on time ordering,
so it can get confused. This leads to too much "make clean". Although it
wasn't intended as a macro processor, it's turned into one.
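
The dependency-finding kludges alluded to above usually look something like the following sketch: the compiler emits dependency fragments as a side effect of compiling, and later runs include them; file names here are illustrative:

```make
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD writes main.d etc., listing the headers main.c actually
# included; -MP adds phony targets so deleted headers don't break
# the build.
%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# The leading "-" suppresses the error on the first run, before any
# .d files exist.
-include $(OBJS:.o=.d)
```

With this arrangement no makefile is overwritten; the generated fragments live alongside the object files.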

Then, of course, there's "./configure".

The alternatives tend to be bundled with some giant IDE, or are language-
specific. The trend seems to be towards the latter; Go has "go", and rust has
"cargo".

------
osivertsson
This seems like an excellent and concise piece on how to best use Make, and
one I will share with other developers.

Thank you for writing it and posting it here!

------
jhallenworld
Should read this along with it:
[http://www.conifersystems.com/whitepapers/gnu-make/](http://www.conifersystems.com/whitepapers/gnu-make/)

In my current, very large project at work (it takes hours to build) there is a
lot of voodoo around how much parallelism you can use: "make -j 4" seems OK,
but "make -j 20" fails.

Of course, nobody wants to work on improving the build system.
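
The usual cause of builds that pass at "make -j 4" but fail at "make -j 20" is a missing prerequisite, so the outcome depends on scheduling luck; a minimal sketch (file names invented):

```make
all: prog

gen.h: gen.sh
	./gen.sh > gen.h

# BUG: gen.h is not listed as a prerequisite, so under high -j
# main.o may be compiled before gen.h exists. Low -j values often
# get lucky on ordering. The fix: "main.o: main.c gen.h".
main.o: main.c
	$(CC) -c main.c

prog: main.o
	$(CC) -o prog main.o
```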

~~~
jgrahamc
See also: The Pitfalls and Benefits of GNU Make Parallelization

[http://www.cmcrossroads.com/article/pitfalls-and-benefits-gnu-make-parallelization](http://www.cmcrossroads.com/article/pitfalls-and-benefits-gnu-make-parallelization)

~~~
hyc_symas
What's sad is that I wrote about this about 25 years ago when I first wrote
gmake's -j feature.

[http://highlandsun.com/hyc/#Make](http://highlandsun.com/hyc/#Make)

The Chrome/Ninja discussion reminds me of the old mess of X11 Imakefiles.
Apparently nothing has really improved since then.

------
thinkingkong
The reason I love make so much isn't because it's a super reasonable way to
build projects, it's because it has the best interface for building projects
in a way that can apply to all languages.

I wrap lots of other build tools in make, so that the way things happen always
follows

make setup && make build && make (deploy || install)

No matter the underlying tools. It just makes getting started with any of the
many dozens of repos we work with easier.
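
A minimal sketch of that wrapper pattern (the underlying npm commands and deploy script are illustrative, not from the comment):

```make
.PHONY: setup build deploy

setup:
	npm install

build:
	npm run build

deploy:
	./scripts/deploy.sh
```

The targets carry no dependencies between them here, matching the explicit `make setup && make build && ...` chaining above.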

------
q2
Thinking of make as an example of "declarative programming" helps in
appreciating its functionality.

This aspect needs to be highlighted to those who have knowledge of
imperative/object oriented programming paradigms only.

Otherwise, understanding how those makefiles actually work can become
confusing and painful.
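
A small example of that declarative reading (file names invented): each rule states only what depends on what and how to produce it, and make itself decides which recipes run, and in what order, from timestamps:

```make
report.pdf: report.tex figure.png
	pdflatex report.tex

figure.png: plot.py data.csv
	python plot.py data.csv figure.png
```

Touch data.csv and run make: figure.png is rebuilt, then report.pdf, and nothing else runs.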

------
amelius
Incremental and parallel computations, like building software in a development
environment, are better done using some kind of purely functional system, imho.

~~~
cafebeen
I'm curious--any pointers to example packages?

~~~
ambrop7
Nix & NixOS - [http://nixos.org/](http://nixos.org/) But in practice it is
used more at the level of packages and configuration than as a build system
within a package. I remember reading an article demonstrating its use for
building C code but I can't find it.

------
jayridge
makes my day

