
Why Use Make? (2013) - jeffreyrogers
http://bost.ocks.org/mike/make/
======
chimeracoder
Make is an absolutely wonderful, wonderful tool.

Most of the common criticisms of Make are actually criticisms of autoconf,
which I agree is a hideous tool. (On the other hand, autoconf is a tool
intended to address a hideous problem, so perhaps that's inevitable).

A lot of the more recent build tools I see are largely reinventions of Make.
Make is very widely supported (most basic projects can make do[0] with
portable Makefiles, though GNU Make is also available for most systems), and
its syntax is actually very easy to grasp and manipulate[1].

[0] no pun intended

[1] Most projects only need a very small subset of what Make has to offer,
anyway, and that can be learned in a matter of minutes.
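
To illustrate, the subset many projects need fits in a handful of lines (a sketch with made-up file names, not from any particular project):

```make
# Hypothetical three-file C project; just targets, prerequisites and recipes.
CC     = gcc
CFLAGS = -Wall -O2

hello: hello.o util.o
	$(CC) $(CFLAGS) -o $@ hello.o util.o

# Implicit rule: build any .o from the matching .c
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f hello *.o

.PHONY: clean
```

Targets, prerequisites, recipes and a couple of automatic variables ($@ for the target, $< for the first prerequisite) cover most day-to-day use.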

~~~
lifthrasiir
Make is a great tool for its intended uses. Unfortunately, many uses of Make-
like tools also require the unintended uses; my favorite example is automatic
dependency generation, which most compilation tasks require and which is still
tedious to get right [1]. Make also (mostly) lacks modern programmable
interfaces, which leads to Makefiles laden with hacks and mumbo-jumbo. At
least for compilation tasks, I see Make as clearly becoming outdated.

[1] Something like [http://mad-scientist.net/make/autodep.html](http://mad-scientist.net/make/autodep.html)

~~~
haberman
The auto-dependency problem is a perfect example of how there is room for
improvement in make-land. When something that basically everyone wants to do
takes a long web page to describe a complicated method that sort of works,
that is a compelling indicator that there must be a better way.

Incidentally, I am working on a make-replacement that is intended to address
this and similar needs. It's still in very early stages, but I think I'm onto
something.

The idea is to write a very low-level tool that leaves out most of Make's
higher-level abstractions. In my tool there are no recipes, no implicit rules,
no variables or variable substitutions, no conditionals, etc. The input to my
tool is just the precise specifications of the commands you need to run, their
inputs and outputs (so a precise dependency graph can be calculated), and with
everything fully-expanded already.

The idea, then, is that whatever higher-level abstractions you want (if any)
you build into a higher-level tool. The higher-level tool just spits out a
file describing the list of tasks. Then the higher-level tool can worry about
the higher-level structure, policy, configuration, etc. of your project. So
instead of writing things like implicit rules in Makefiles, you just write
some code that explicitly generates tasks.

For example, with Make, you might write an implicit rule like this:

    
    
         %.o : %.c
                 $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@
    

Then Make magically decides which output files match this implicit rule. My
idea is that, instead of this, you explicitly apply your rules to your inputs.
Your build system could instead be a Ruby/Python/etc. script that looks
something like this:

    
    
    for c_file, o_file in files:
      tasks.append(Task(
        target=o_file,
        source=c_file,
        command="gcc -c %s -o %s" % (c_file, o_file)
      ))
    
        print(tasks)
    

I think this is much more convenient than having to program in Make and learn
its quirky abstractions.

I have an elegant solution for the auto-dependency problem (unproven, but I
think it's promising). The idea is that your dependency-calculating tasks are
still just tasks, but the tool knows how to integrate the calculated
dependencies back into the overall dependency graph. These dependency-
generating tasks depend on the files they are generating dependencies for, so
it is all part of the unified dependency graph.
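To make the "unified dependency graph" idea concrete, here is a minimal sketch in Python (the `Task` shape and `build_order` helper are hypothetical illustrations, not taken from the actual project):

```python
from collections import namedtuple

# A hypothetical task record: explicit target, sources and command,
# fully expanded up front -- no variables, no implicit rules.
Task = namedtuple("Task", ["target", "sources", "command"])

def build_order(tasks):
    """Topologically order tasks so each target is built after its sources."""
    by_target = {t.target: t for t in tasks}
    order, seen = [], set()

    def visit(task):
        if task.target in seen:
            return
        seen.add(task.target)
        for src in task.sources:
            if src in by_target:   # this source is itself a built target
                visit(by_target[src])
        order.append(task)

    for t in tasks:
        visit(t)
    return order

tasks = [
    Task("app", ["main.o", "util.o"], "gcc main.o util.o -o app"),
    Task("main.o", ["main.c"], "gcc -c main.c -o main.o"),
    Task("util.o", ["util.c"], "gcc -c util.c -o util.o"),
]
print([t.target for t in build_order(tasks)])
```

A dependency-generating task would appear in the same list, with a .d-style file as its target, and the tool would fold that task's output back into the graph before ordering.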

If you're interested, star my project and follow my blog (where I will make
any announcements about it):

[https://github.com/haberman/taskforce](https://github.com/haberman/taskforce)

[http://blog.reverberate.org/](http://blog.reverberate.org/)

~~~
ihnorton
Are you familiar with Ninja? (which is absolutely excellent IMHO)
[http://martine.github.io/ninja/](http://martine.github.io/ninja/)

> Ninja is a small build system with a focus on speed. It differs from other
> build systems in two major respects: it is designed to have its input files
> generated by a higher-level build system, and it is designed to run builds
> as fast as possible.

Obligatory: [https://xkcd.com/927/](https://xkcd.com/927/)

~~~
e12e
I recently played a bit with ninja on a tiny C project -- as I was already
using cmake, the transition was seamless, and even for such a tiny project the
speedup was tangible (though not actually relevant in any way; everything was
building fast enough :).

Also played a bit with tup, and it's quite nice too.

------
noir_lord
I write my own automation/build tools in Python

Envoy, pynotify and fnmatch pretty much do the heavy lifting.

I do it this way because

    
    
        * Since I know Python, no extra syntax knowledge is required
    
        * Python is going to be readable to me in 6 months when I've 
          forgotten the syntax of whatever build tool I just used
    
        * *All* my machines, desktop and servers, already have Python 
          installed
        
        * It's fast; merging 20-30 largish text files takes around 
          10-15 milliseconds
    
        * It's very, very flexible (it's Python)
    

I used make extensively when I was at Uni and, quite frankly, if I never have
to touch it again I will be a happy bunny.

It's an incredibly powerful and clever tool hiding behind an interface that
(second only to Git's) is the worst I've ever had to use.

EDIT: I've actually been toying with the idea of writing a python library to
simplify things as lots of the code I end up writing is very similar across
projects, it's been one of my "think about in shower" projects for quite a
while.

~~~
TillE
> a python library to simplify things

That's basically scons. Give it a try, I've been using it for years to manage
a fairly complex, unusual build process with minimal effort (less than 200
lines of Python code).

[http://www.scons.org/](http://www.scons.org/)

~~~
noir_lord
Thanks for the link :).

Scons is awesome but for what I use Python for (minification of web assets,
automatically running image optimisations) it's taking a sledgehammer to a
walnut.

~~~
TillE
Fair enough, though I've found scons is extremely flexible and lightweight, in
the sense that it's not very insistent about imposing a certain style or
process on you. Unlike the vast majority of other build tools.

------
agwa
Make is a very under-appreciated tool. I think it gets a bad rap because many
people's only exposure to it is in large projects (where Make has some issues)
or when coupled with autotools (which is rather ugly).

If that has been your only exposure to Make, you should take another look at
it, and consider using it for your next small- or medium-sized project.

------
rout39574
If you read down the comment threads here, you'll notice something: many,
many folks get frustrated with some aspect of Make, and then turn around and
say "I can do better myself in [language]". And then they go off and do 30% of
Make in their favorite idiom. Sorry, but as an old fart, these all read as
"Wah, make is haaard, and inconvenient".

This isn't to say that I think Make hung the moon in radiant perfection, but I
haven't seen any tool with a clearly superior set of compromises and
tradeoffs. Gorgeous example: automatic dependency generation. Does anyone
really think an identical dep-generation codebase will work on 20-year-old C
and Node? It's not hard to glom whatever dependency mapping you want into your
make workflow and drop it into separate included makefiles.

It feels easier to write your own. But what that really leads to is re-
learning all the shit the make maintainers have learned in the last (look it
up.. DAMN.) nearly 40 years.

Or maybe _not_ learning it, and duplicating mistakes which have been solved
for decades.

~~~
vinkelhake
The reason for all these alternate build tools isn't that Make is hard. The
problem is that it's too generic and low-level. It comes with a very limited
set of built-in rules; after that, you're on your own.

For example: if I'm working on a C++ project, then I'd like my build tool to
be aware of concepts like header files, shared libraries, include paths and
compiler options. For me, that tool is CMake.

Can I do that in Make? I'm sure I could, but I'd end up with the same problem
you're complaining about. I'd essentially be writing my own build system, just
implemented in Make. Life is too short for that.

~~~
ihnorton
Yes: CMake is a little weird, but it's less bad than all the other options for
cross-platform projects, and CMake+Ninja is wonderful. (that said, I do hope
something Lua-based like Premake catches on).

------
scrollaway
> The ugly side of Make is its syntax and complexity; the full manual is a
> whopping 183 pages. Fortunately, you can ignore most of this

I cannot take this seriously.

I like the concept of make but it doesn't make up for its own warts.
Unfortunately, there is no single build tool I can blindly recommend to people
without being extremely familiar with their project and how it builds. No
solid and lightweight build tool that pleases more or less everyone without
having 183 pages worth of manual and a repugnant syntax.

CMake/Lua gives me hope but we're not quite there yet. Make is pretty decent
for small projects, though... I see it as the HTTP of build tools: it has
serious issues, but when new tools come up they are built on top of make
because it's ubiquitous.

~~~
Rusky
djb redo captures the concept of Make but is a lot simpler and adds no extra
syntax (it just uses shell).

------
georgef
POSIX make really does make auto dependencies hard, but GNU make actually
solves this fairly cleanly by adding an include directive. As long as you have
a script or program to process a file and spit out its dependencies in make
style (e.g., GCC does this for C and C++ with its -M flags), you can just
include the outputs of that process and make will do the rest.

For example, here's how I handle a simple C project.

    
    
        CSRC := [list .c files here]
        DEPS := $(CSRC:.c=.d)
        
        [standard statements for building go here]
        
        %.d: %.c
                gcc -MM -MF $@ $<
        
        -include $(DEPS)
    

Poof. Automatic dependency handling. You can see how any sort of dependency
that can be detected by some sort of preprocessor can be plugged in here.

~~~
itayperl
Some problems I have encountered with this approach:

1. The dash before include: it causes make to ignore any errors when
generating the .d files, and it can be pretty difficult to figure out what
went wrong. Removing the dash will show a (harmless) error message for every
.d file generated, which is pretty annoying.

2. Suppose a.c includes b.h and you run make: a.d is created with "a.o:
a.c b.h". If at this point you add an include of c.h to b.h, the new
dependency of a.o on c.h will never be recorded in a.d unless you manually
delete the file. This can cause some pretty nasty bugs! This can be solved by
adding

    
    
        -MT $*.o -MT $*.d
    

to the gcc command line, which causes the dep file to regenerate when one of
the dependencies changes (in fact, the make manual suggests a similar solution
using sed). However, this creates another problem: if you remove a header a.h
included by a.c (along with the #include line for it), a.d still depends on
a.h, so make will fail looking for it until you _manually delete_ a.d (and any
other dep file depending on the removed header).

tl;dr: this solution leaves much to be desired, and can be dangerous in some
conditions.
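
For what it's worth, GCC has a flag aimed at the deleted-header case: -MP
emits a phony target for every header mentioned in the dep file, so make no
longer errors out when one of them disappears. A sketch of the combined rule
(untested here; adjust to taste):

```make
%.d: %.c
	gcc -MM -MP -MT $*.o -MT $*.d -MF $@ $<
```

This doesn't remove the stale entry from a.d, but the build no longer fails
while the dep file regenerates.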

------
gyepi
I completely agree with the sentiment of the article. Makefiles are a form of
documentation. However, I no longer use make in new projects and instead use
redux [1]: my implementation of djb redo. It manages the dependencies and you
write your scripts in shell. Simple, straightforward and easy. No more 'make
-B', no more make contortions.

[1] [https://github.com/gyepisam/redux](https://github.com/gyepisam/redux)

~~~
andreypopp
I thought with make you also write scripts in shell.

~~~
gyepi
Yes, you're right. However, in redo, you don't have a Makefile equivalent and,
instead, specify the dependencies in the shell script that generates your
target. Basically, the configuration language is shell (or any other program
that can be invoked from a shell script).
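
For example, the script that builds hello.o might look like this (an
illustrative sketch of the redo conventions; in a .do script, $3 is the
temporary file that redo atomically renames to the target):

```sh
# hello.o.do -- hypothetical redo build script
redo-ifchange hello.c   # record the dependency on the source file
gcc -c hello.c -o "$3"  # write the result to redo's temp output file
```

Run `redo hello.o` and the dependency is recorded as a side effect of
building.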

------
hyp0
This _specific_ problem would be more simply solved with a script listing the
commands: repeatable, testable, and it documents the process.

It doesn't require differentiating between identically rendered tabs and
spaces. It doesn't need dependency hand-holding, with explicit _touch_ or
checking _Last-Modified_. The simplest script just builds it all from scratch,
like _make clean_.

------
uptownhr
If the syntax is bad, why not build something with better syntax? Why are we
sticking with it?

------
roeme
I don't quite see the benefit of using make for arbitrary workflows (as
opposed to building software) over plain old shell scripts.

But it isn't that much more complex, nor does it require a lot more of your
system, so I'd say it's more a matter of preference.

~~~
sheetjs
Make has more natural built-in dependency management.

To write your workflow purely in shell scripts, you will invariably end up
writing lots of tests.
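
The kind of test you end up writing by hand is essentially Make's timestamp
comparison. A minimal sketch in Python (the file names in the comment are
made up):

```python
import os

def is_stale(target, sources):
    """Make-style freshness check: rebuild if the target is missing
    or older than any of its sources."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)

# Hypothetical usage:
#   if is_stale("app.o", ["app.c", "app.h"]):
#       recompile()
```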

Also note that they are not necessarily mutually exclusive: you can write
Makefiles that call shell scripts (and shell scripts that call make).

~~~
epistasis
I use a ton of inline shell in make for data munging. I often put

SHELL=/bin/bash

at the top of the Makefile when I use bashisms to construct complex pipelines.

And once, in a fit of mad genius, I made a particularly complex and useful
Makefile executable by putting #!/usr/bin/make -f at the top. (It was a bad
idea, obviously, and I won't be doing that again.)

For large file based datasets, the shell is just a REPL.

------
acqq
I never understood why make has to use tabs. Even if that had sense in 1977,
why it remained so until now. Why not let any whitespace, even more spaces,
have the same effect of starting the command line.

------
je42
Make wouldn't be the first build tool I would try; I would go with something
like Gradle first. Even though, in Mike's case, make looks like a good match
for his requirements.

~~~
fidotron
Gradle looks to me like it's full of good intentions, of exactly the sort that
in a few years will have led them straight to hell, at which point someone
will reinvent ant or make with different syntax and we'll be back where we
started.

I worked on a proprietary build system remarkably similar to gradle for many
years. The cleverer a build system tries to be the more likely it will become
a self sustaining beast that consumes ever more of your time and destroys
productivity. They look great with relatively simple systems, but a few years
into production with multiple deployment target configurations and you will
want to murder people. Build systems should be stupid, simple, and trivially
predictable.

~~~
twic
It's been going for five years so far, and has yet to get noticeably near the
infernal realms.

I know Gradle fairly well, and I wouldn't describe it as particularly clever.
The basic ideas in it are quite simple (and actually fairly similar to make!).
Most of the complexity comes from specific build tasks for specific purposes,
which doesn't complicate the core.

------
sigmonsays
I often convert makefiles to shell scripts (bash) when I outgrow my knowledge
of make. However, make is where I always start, and the makefile never goes
away; just the complex pieces get rewritten in bash!

------
PythonicAlpha
I am glad that make exists. Guess what would happen if every open source tool
used its own build environment: open source software would be much more
difficult to build from source (and sometimes I can't do without that).

So it is rather straightforward: type make, sometimes autoconf first, and most
of the time you are done.

Also in my own projects I prefer make to many other build tools. In software
development, command line still rules (if you want to be really productive)!

------
Mister_Snuggles
I've used make as the basis for a system that works out the dependencies for
nightly batch processing in an ERP system and coordinates the execution of the
required processes across multiple applications that make up the system.

There's some shell script to glue the pieces together, and a Python script for
invoking processes within the ERP applications, but make is the secret sauce
that figures out what order to run everything in.
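
The shape of such a Makefile is roughly this (an invented sketch; the step
names and the run-step wrapper are hypothetical, not from the actual system):

```make
# Nightly batch run: prerequisites encode the required order, and
# `make -j` runs independent steps in parallel.
all: post-gl

extract-ar extract-ap:
	./run-step $@

post-gl: extract-ar extract-ap
	./run-step $@

.PHONY: all extract-ar extract-ap post-gl
```

Since the steps produce no files on disk, everything is marked .PHONY; the
value is purely in the ordering and the free parallelism.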

------
LukeHoersten
[https://github.com/ndmitchell/shake](https://github.com/ndmitchell/shake)
Shake is an awesome alternative to Make written in Haskell. It fixes a lot of
the weird dependency tracking issues Make has ("my build broke and I don't
know why!" "did you make clean?")

------
rstep
Surprised that there is not a single comment here about other options like
Ant and Gradle. I personally hate anything where a tab has a different meaning
from 4 spaces, but maybe it's just me. Anyway, my personal choice for any kind
of automation would not involve make if I could use the tools I mentioned.

------
CraigJPerry
I've been using Ansible for complex build environment setup: inline
documentation, and it can factor in software dependencies or environmental
setup as required.

It's working pretty well so far. It wasn't an intentional thing; I had a
fairly complex stack for a particular project that I needed to share.

~~~
maxerickson
I was looking at Ansible in the context of this thread:

[https://news.ycombinator.com/item?id=7487202](https://news.ycombinator.com/item?id=7487202)

exactly because I like the idea of capturing the setup (compared to a more
haphazard approach). My initial reaction was that, for the computer-setup use
case, it did too much to hide away complexity.

I guess I don't have a deeper point, but I wonder if it is overkill for
capturing a simple workflow like the ones discussed here.

------
fla
You might be interested in Drake, a kind of ‘make for data’.

[http://blog.factual.com/introducing-drake-a-kind-of-make-for-data](http://blog.factual.com/introducing-drake-a-kind-of-make-for-data)

------
billwilliams
I used to do this until I found drake. Drake is the truth and the light. Use
it.

------
nawitus
Make uses disk I/O, which is really slow, and for complex projects it becomes
difficult to maintain. Also, why not use a proper programming language to
define the build process? Well, a modern tool like Gulp[1] does that: it's
streaming/asynchronous and its scripts are written in JavaScript. I don't see
why one would prefer Make over it.

1\. [https://github.com/gulpjs/gulp](https://github.com/gulpjs/gulp)

~~~
davexunit
There are many complex projects that use Make to build source code. Grab just
about any software release tarball and there will almost certainly be a
Makefile in it.

This is yet another example of the JavaScript community being completely
ignorant of what came before them.

~~~
cwmma
I wish I could use make to build my JavaScript code, but I need it to also
work on Windows (easily).

If there were a version of make I could point to that worked easily on
Windows, preferably as a non-global variable, I would use it; until then I
will probably have to use something else, though likely just bash scripting
with pipes (which does work on Windows).

~~~
maxerickson
What do you mean when you say "as a non global variable"?

The one here:

[http://gnuwin32.sourceforge.net/packages/make.htm](http://gnuwin32.sourceforge.net/packages/make.htm)

works fine if you do "set path=" and then call it using a full path into
Program Files (I've just checked to make sure, but I used whatever version I
had installed; I'm not sure it's the latest one there).

