One line you should add to every makefile (jgc.org)
322 points by jgrahamc on Apr 10, 2015 | 96 comments

I firmly believe that there is only one production-quality makefile that has ever been written from scratch. Every other makefile has been copied from an earlier makefile, dating back to the one Ur-makefile, and modified to suit the author's purpose.

When I started with make, I copied working files to a safe place, copied from the safe place to a scratch place, tweaked the scratch copy, got it working, etc., etc. Managing the makefile was itself part of managing the project.

Then I read and thoroughly grokked the O'Reilly book on make (which some douche stole from my desk; twenty plus years on, I am still bitter about that one... ...don't miss the Tanenbaum OS book or the Stevens networking books, but the make book, it burns, it burns....)

After that I wrote all of my makefiles from scratch. Doing so made sure I understood my project, its dependencies, etc., and made sure I wasn't going to accidentally corrupt a build with some edge case from another one.

make is super straightforward once you get how it works. Sort of like git rebase, I suppose (still working on that one... :->).

git rebase I pretty much understand. Make will forever be a mystery to me. Mostly just how after the third or fourth version of it they didn't take a step back and think, "Hey, wait, maybe we should just make a language that spits out a shell file and ditch having it sort of kind of look like a shell file itself."

Because a shell file would be an inefficient and imprecise way of describing the state and logic that Make uses.

Really? I mean, it pretty much just runs commands in sequence and/or parallel, right?

Sorry, like I said, I don't think I really understand make.

I mean, I can understand that it's doing some checks to determine what commands to emit, but doesn't it all end up falling out as pretty much a straight "Yes? Do this. No? Do that. Do this and then this and then this. Do all these at the same time."

That's my mental model at the moment anyway.

Ultimately, make represents a dependency graph; something that a shell language is not well prepared to represent. Yes, it's doing X, Y, and Z at the same time, but when X is done it can launch A and B, when Z finishes then C can also be started, but you don't want to run them all at the same time because of a max subprocesses limit. You don't know if X or Z will finish first, so you can't just launch C as soon as Z is done, nor do you want to waste time waiting for A and B to finish to launch C if Z is done. How do you express that in pure shell?

    x: y
        z
x depends on y (meaning, if x is missing, or is a file that is older than y, where y is also a file, it needs to be recreated).

z is how to recreate x.

Makefiles are programs that construct a dependency graph.

The rest of makefile syntax is mostly about ways of generating variants of the above without being overly verbose, or reducing gruntwork when e.g. adding new header includes to existing source files. But this is the heart of it.

Root the dependency graph somewhere, and as long as the dependencies are complete and correct, the minimal set of commands, and potential parallelism, can be calculated.
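That two-line rule shape scales directly into a graph. A minimal sketch with hypothetical file names — `make -j4` is free to build the two objects in parallel, because neither depends on the other, but will only link once both exist:

```make
# The graph: app <- {main.o, util.o}; each .o <- its .c plus util.h
app: main.o util.o
	cc -o app main.o util.o

main.o: main.c util.h
	cc -c main.c

util.o: util.c util.h
	cc -c util.c
```

Touching only util.c rebuilds util.o and relinks app; touching util.h rebuilds both objects, because both rules list it as a prerequisite.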

The point of make is to avoid doing what does not need to be done. Don't recompile a binary whose sources didn't change, don't copy a file that is already there, and so on. Those operations add up to hours as the project grows.

If we didn't mind waiting for hours each time we build, then of course, a shell script that does `rm -rf target` then assembles everything from scratch would be significantly more straightforward and readable.


In my EE/Physics department, lab reports from previous years were known as "newtons" (as in: "do you have a newton for the preservation-of-momentum experiment?"), the explanation being that Isaac Newton wrote a lab report from scratch, and everyone else has used an older lab report as reference to make sure they got the right result - the transitive closure of which has Newton as a boundary.

I guess the equivalent Makefile term would be a Feldman[0], as in: "Do you have a feldman that can combine ghc and dmd output? I need that for a project"

[0] http://en.wikipedia.org/wiki/Stuart_Feldman

To be honest, a simple Makefile is very simple to write and efficient once you know the syntax fairly well. Then you add more and more things as time passes, and it starts to morph into a giant huge monster that eats puppies.

> a giant huge monster that eats puppies

Not puppies, but brains of the poor programmer who is confronted with a problem of building your bloatware on a non-supported platform (typically Windows).

Most of the time I've spent wrestling with other people's makefiles has been on ostensibly supported platforms.

And now I have this visual of programmers carrying skull-encased puppies on top of their necks.

Based on our ANT file, that seems to be a common feature of build systems, more so than an issue with Make.

If I had more time, it would be interesting to run a phylogenetic analysis of makefiles. Could we find the most recent common ancestor? Can we add spatial information to track their geographic dispersal?

Not exactly true. I tend to start with a new makefile for a good amount of projects. Know your tools, know your code.

I wrote a god-awful Perl script to gen (and update) Makefiles for me: https://github.com/wspeirs/makemake

It seems every competent programmer makes that mistake at least once in their career :)

Hey, I wrote my own build system, and then used it to build a 300kloc compiler. (It's at http://primemover.sourceforge.net/ should anyone care.) It was an astonishingly good learning experience and I'm never doing it again because it sucked.

The main thing I learned was that build systems are really, really hard. Simply coming up with an intuitive way of letting the user write down what problem they're trying to solve is painfully hard, let alone actually solving the problem. I have yet to see a build system that wasn't horrible.

GNU make is a hideous pile of recursive terribleness, but it is at least a ubiquitous hideous pile of recursive terribleness. That doesn't make it any more pleasant to use, but it does at least make it more likely that people will be able to use the result. But for any non-trivial project make requires you to implement a build system in make, and as there are no debugging tools and the language is cryptic and inconsistent beyond words, the results are always buggy and hard to understand.

I dunno. I think what I'd really like is a reinvented GNU make that doesn't have all the many, many bad design decisions from the past in it. ninja, maybe? I've yet to actually try to use that...

The mistake of writing your own makefile generator, or of using perl? My first perl script read in all of the files in its directory and output to another file in the same directory; it managed to read in both itself and the output. I accidentally implemented `rm *`.

Heh, I'm currently in the positive phase of my oscillating love-hate relationship with Perl, and acknowledge its right to exist.

No, I was referring to implementing a replacement/frontend to Make.

In my case it wasn't a mistake, it was a university assignment. That was the assignment where I learned (by omission) how important it is to have source control...

I said much the same thing when Google released Sawzall 4 years ago: https://news.ycombinator.com/item?id=1866364

That said, I have written 2-5 Makefiles from scratch (move companies, need to create a new product, don't have access to old Makefiles). The rest are definitely copy/paste/modify.

The simplest Makefile is ... no Makefile at all!

e.g. if you create a C program your-program.c with a main(), all you have to do is invoke

    make your-program
and it is done.

Not exactly production quality (no clean target) but can't beat it for simplicity :-)
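That works because of make's built-in implicit rules; you can even see what it would run without executing anything. A small sketch (assumes make and a C toolchain are installed):

```shell
printf 'int main(void){return 0;}\n' > your-program.c
# -n is a dry run: make prints the implicit-rule command instead of running it
make -n your-program
```

On GNU make this prints something along the lines of `cc your-program.c -o your-program`, assembled from the built-in `%: %.c` rule and the default CC/CFLAGS variables.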

No, Makefiles are quite often written from scratch.

But the linker script for ARM Cortex and the GNU linker is usually copied verbatim. You can find it everywhere. Open source projects, commercial development. Everywhere. Because STM ships it with their standard peripheral library.

Nobody cares that this linker script is (c) Atollic and only licensed for use with their IDE:


Someone should combine Make with SQL so we can get the best of both worlds.

autoconf yes, but Makefiles, no. I'll write a new Makefile from scratch maybe once a year or so.

If you use autoconf, why not use also automake?

If you've already shot yourself in the foot, why not also shoot yourself in the face?

Because if you write your own Makefile instead of using automake, you'll get shot in the face by a distro package maintainer when you forget to obey DESTDIR, CC, CFLAGS, etc. like every packaging tool expects to use.
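A sketch of what obeying those conventions looks like — a hand-written Makefile (hypothetical project name) that defers to whatever the packaging environment passes in:

```make
# ?= only assigns if unset, so a package build can override CC, CFLAGS, PREFIX;
# DESTDIR is left entirely to the caller, as packaging tools expect.
CC      ?= cc
CFLAGS  ?= -O2
PREFIX  ?= /usr/local

hello: hello.c
	$(CC) $(CFLAGS) -o hello hello.c

install: hello
	install -D hello $(DESTDIR)$(PREFIX)/bin/hello
```

With that in place, `make CC=clang CFLAGS=-Os` and `make install DESTDIR=/tmp/pkgroot` both behave the way a distro maintainer assumes they will.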

New keyboard etc etc


I don't use autoconf myself. If I did, I'd definitely copy it from an existing project.

You're wrong. I wrote at least 4 Makefiles from scratch in the last 10 years. :)

i couldn't agree more, that's why i started collecting snippets in https://github.com/andreineculau/util.mk

We wrote several Makefiles from scratch at work.

And a generator that turns the dependency graph of the things we work with into a Makefile, similar to what gcc -M produces.

In fact it's a whole build system using GNU make, written from scratch.

You can avoid editing the Makefile to add the rule with GNU make 3.81 or older, which does not support --eval. Create a new file in your home named ~/Makefile.debug with the contents:

  print-%: ; @echo $*=$($*)
  include Makefile
Now you can use it the following way from any source directory:

  $ make -f ~/Makefile.debug print-SOURCE_FILES

You're missing the final tail call which includes Makefile.debug again. With this, we can make a REPL:

    $ cat Makefile
    FOO := bar


    $ cat Makefile.debug
    ifeq ($(MAKEFILE_INCLUDED),)
    MAKEFILE_INCLUDED := 1
    -include Makefile
    endif

    REPL_COMMAND := $(shell read line; printf "%s\n" $$line)

    $(eval $(REPL_COMMAND))

    -include Makefile.debug
Now, here we go:

    $ make -f Makefile.debug
    $(warning $(FOO))             <--- typed by me
    Makefile.debug:8: bar
    ABC := xyz                    <---
    $(warning $(ABC))             <---
    Makefile.debug:8: xyz

This is the kind of idea for which the phrase 'genius crazy' was invented.

I might actually use this some time, god help me. I won't thank you, but I will remember you and curse a little every time I do.

I prefer to use memoize.py whenever possible instead of Make. I think it is much simpler to let it manage the dependencies rather than having to code them explicitly. Almost every other make system I have ever used eventually devolves to the point where 'make clean; make' is the only reliable way to use it.


Very nice.

tup[0] is another implementation of the same idea, that works on Windows (unlike memoize) and has a few other goodies.

I would also recommend having a look at djb's "redo", implemented by apenwarr - it is much easier to get right than a makefile, but unfortunately you still have to get the dependencies right yourself (which tup and memoize do for you automatically).

[0] http://gittup.org/tup/

Neat. Any reason it doesn't appear to be on pypi? I didn't see it at first glance.

Or, if you're using BSD Make:


Oh man I've missed -V! I've done something like this in gmake, basically using the : in sh and -n of make to print whatever I wanted:

  $ foo() { echo 'foo: ; @ : ${'"$1"'}'; echo include Makefile; }
  $ foo SRCS
  foo: ; @ : ${SRCS}
  include Makefile
  $ foo OBJS | make -n -f- foo
  : foo.o bar.o baz.o

I think this article is a great reference for others wanting to write a small informational or how-to post. It's concise and easy to follow for those who are familiar with the subject while including enough details for beginners.

Back to the subject, I would also be interested to know a way to print out the value of a variable every time it changed in the course of a make system's execution.

Ohh, that is a great trick. I've added it.

I go back and forth on Make. It was where I started, so there is some bias there, but generally it has always been possible to do what I want with it. The places where it bites me are the features gmake added - features that can be cleverly exploited but make things much more complicated and error prone. I flirted with SCons and other build systems, and I was amazed at how flexible Google's was (and had to be, given the complexity embodied in it), but for small projects (where small is perhaps a couple of hundred source files and a half dozen libraries) it is still my go-to build tool of choice.

  print-%: ; @echo $*=$($*)
  .PHONY: print-%
This way the rule continues to work even if such a file suddenly exists in your repo.

No, implicit rules cannot be marked as .PHONY this way. You need to declare an explicit pseudo-target to achieve this.

  print-%: FORCE ; @echo $*=$($*)
  FORCE:

I touch make files rarely enough that whenever I do, I have to re-learn most of what I need to know - and when I see phrases like 'explicit pseudo-target', I ask myself if it really has to be this way (but never have the time to answer myself.)

I was copy pasting a lot myself and then stumbled into this guy's modular setup - I built something similar from then on. it's pretty sufficient for almost 90-95% of my needs...


There is the remake[1] project, allowing you to debug a Makefile like a usual scripting language.

[1] https://github.com/rocky/remake

This heading looked like some buzzfeed type headline.

So I'm not the only one...

I'd say that most people should not write makefiles by hand. You should use some higher level build system, that knows how to deal with source files in your language, track their dependencies, etc. A "hello world" makefile with one source file looks simple, but once the project gets more complicated, you quickly end up building such a high level build system yourself, with the additional disadvantage of restricting your build system to only work on systems with GNU make, even though there might be other native build systems available for the platform.

Makefiles, once you have learned the syntax, are very simple to write, as they can be naturally decomposed into individual steps, and rules are easily generalized for similar file types.

Their reputation has been tarnished by autotools, but in that case you're using autotools as your build system, not make.

The problem is that C/C++ files have internal dependencies which make needs to know about. Yes, for gcc you can use a simple trick to extract the dependencies as makefile rules, but if you use a different compiler, you need to do it yourself. If you are compiling a different language, you need to do it yourself again. Using something like CMake or even automake, which can track the dependencies automatically, saves you a lot of work.

How do they track the dependencies automatically?

With gcc/clang it's a set of flags starting with -M, and with MSVC it's /showIncludes. They're discussed a bit here:


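With gcc or clang, the usual pattern is to have each compilation also emit a .d makefile fragment and then include those fragments — a sketch, with hypothetical source names:

```make
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

# -MMD writes foo.d (a make rule listing foo.o's headers) next to foo.o;
# -MP adds phony targets per header so deleting one doesn't break the build
%.o: %.c
	cc -MMD -MP -c $< -o $@

app: $(OBJS)
	cc -o app $(OBJS)

# Leading - : on the first build no .d files exist yet, so ignore the miss
-include $(OBJS:.o=.d)
```

After the first build, editing any header listed in a .d file correctly triggers recompilation of exactly the objects that include it.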
What if we're using a different compiler, then?

The comment that I was replying to sounds like CMake or something has more magic, and thus more power, for a future unknown compiler.

The point I was trying to make is that when you use CMake on any not-completely-unknown compiler, it will do this for you. If you are writing your own makefile, you either need to take the different compilers into account, implement it just for gcc ("screw the other guys"), or simply resort to "make clean && make".

Back in the (not so) good old days we used makedepend.


Once you start to need loops, conditions, functions to factor common code, switches for production/staging/development builds etc. I don't find Make elegant or easy to maintain even when knowing the syntax personally.

It may be my inexperience speaking, but every time I come across code that was built not by hand I visibly cringe and search for something else. It's not that it's bad, it's just that if I don't understand it then there is nothing for me to fix or play around with.

I am old enough to remember when assembly programmers used to use that complaint as a reason not to program in C. And when C programmers would use that complaint as a reason not to program with templates in C++. And when C++ programmers used that complaint as a reason not to program in languages with garbage collection.

The truth is that no matter what level you program at, you are depending on a long toolchain that you don't understand the details of. You are explicitly aware of not understanding autogenerated stuff at your level. But are ignoring how little you understand of what is underneath the level you are used to programming at.

Get used to it. A few years back I remember an article that started by diving into what actually happens between pressing a key on the keyboard and a letter appearing on the screen. I wish I could find it for you. It was a long article. And repeatedly got into too much detail, then narrowed down the scope and got into more detail. Over and over again.

You don't actually understand how your computer or code works. Instead you create a useful working model and proceed with that. Said working models can include both lower levels than your usual, or you can build up higher levels. Get used to it, take advantage of good tools, and you will accomplish more. The alternative is to get stuck in what you know, refuse to go outside of that boundary, and be less productive. The choice is yours.

I interpreted the comment a bit more generously. The admonition wasn't to avoid using higher level tools, but not to try editing the low level output of a high level tool. I prefer writing in C to writing in assembler, but I would never want to modify the assembler produced by a compiler. Similarly, I prefer scheme to C, but I would prefer handwritten C to editing the output of the Chicken-to-C compiler.

That comes back to the complaint about the auto-build tools. Using the tools, by themselves, is perfectly reasonable. On the other hand, writing Makefiles by hand isn't particularly onerous. What is frustrating, however, is when I'm expected to edit a Makefile that was generated by a tool without having access to the script that generated said Makefile.

I've seen cases where the low level output is perfectly reasonable. And other cases where the low level output contains a section explicitly meant so that people can insert into it. And other cases where it is unreasonable.

That said, if you find autogenerated "stuff", that is not evidence you can't work on the project. It is evidence that you need to understand something before doing so.

I'm sure that, when those skilled engineers cringed at the next higher layer of the stack, that higher layer wasn't reliable yet, it probably had some stability issues and serious performance issues. When it was fully solid they moved on.

I have a degree in electrical engineering, though these days I work on server / infrastructure software. Yeah I have a pretty good idea how stuff works at the c/c++ level and below.

I am not sure of that at all. My experience is that early negative opinions of the next level up don't readily get updated for a very long time, if ever. And the longer you spend holding on to your opinion, the harder it becomes to learn better.

Here's the post you were thinking of, be sure to bookmark it this time! :)


Thank you, that is the link. :-)

It's best to treat it the same way you treat assembler when compiling a C++ program. Normally, you wouldn't inspect the generated assembler. If you do, you will find it very hard to follow, but that's the disadvantage that comes with the benefit that you don't have to write it all by hand. I consider generated makefiles the same. I don't commit them to my repository, most of the times I don't look inside them, it's just an intermediate product of my build system.

Everyone seems to talk about makefiles for generating complex C projects. What about makefiles used to store bash commands of medium complexity? In my python projects I have make clean-pyc, make test, for example. In my music folder I can make find-big-dirs, make find-non-mp3, etc.

Advantages are it is easy to edit, cat, and the locality of the commands: I don't want to pollute my bashrc with commands that are useful only in a given context.
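A sketch of that task-runner style, using the hypothetical targets mentioned above — marked .PHONY since they name tasks, not files:

```make
.PHONY: clean-pyc test find-big-dirs

clean-pyc:
	find . -name '*.pyc' -delete

test: clean-pyc
	python -m pytest

find-big-dirs:
	du -sh */ | sort -rh | head -20
```

The dependency mechanism still earns its keep even here: `make test` runs clean-pyc first, automatically.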

I often use make as a task-runner too, which is super useful. In fact, I currently have a makefile that will build and test both the backend (PHP/Hack) and the front end (Browserify, less, etc.). The best part about make is that it handles new languages without issues: if it outputs files, make can handle it.

It seems Make is a popular topic on HN lately, so it's timely to mention "How to write vaguely acceptable makefiles"


jgc is a great resource when you've got a sticky issue to resolve. For my money, the suckless guys are the ones to emulate when you start fresh with a Makefile [0][1].

[0] http://git.suckless.org/sbase/tree/Makefile

[1] http://git.suckless.org/ubase/tree/Makefile

Better article from years ago, that this blogger is probably rehashing anyway: http://blog.melski.net/2010/11/30/makefile-hacks-print-the-v...

> that this blogger is probably rehashing anyway


Eric (who wrote the blog you are referring to) is a friend, but I wrote this little trick up years before him: http://www.cmcrossroads.com/article/printing-value-makefile-... and now I'm rehashing my own writing 10 years later.

I shouldn't be surprised that you published that before I did -- to be fair, it's quite a challenge to find _any_ topic related to make that you _haven't_ written about. :)

The real question is who wrote the original line? My bet's on it being Usman.

I'll take any excuse for a little code spelunking -- according to Perforce it was Scott, in October 2002.

Nice. Thanks for looking that up.

Your supposition about rehashing proved to be wrong, but the subsequent trick in the article you linked about injecting the extra line without modifying the original Makefile is an interesting twist that I had not seen.

Aww..Happily RIP $(warning)

Debuggers won't like you :)

never use makefiles. it's not 1975 and we have learned a lot since then


(not to mention the shameful use of timestamps to detect modification in make - leading to years and years of wasted time as people get confused by it when working with others)

cmake, visual studio, eclipse, xcode, scons... basically every modern tool chain.

The mere fact that such a thing is necessary disqualifies make as a build tool for me.

That being said, this trick could be handy for existing projects.

> The mere fact that such a thing is necessary disqualifies make as a build tool for me.

It's a very simple debugging addition. Hardly necessary, just convenient. The "built-in" echo works just fine as well.

That's one huge advantage with Rake, just add `p VARIABLE_NAME` anywhere in your Rakefile, and you'll see the textual value of it at that point.

That misses the point; you can already echo a variable wherever you want in a makefile similarly. This gives you a way to do it without editing the makefile, and without knowing beforehand which variable you want to see.

Echo and p aren't exactly the same, there's a difference during debugging between:

    1 2 3 4 (echo)

    ["1", "2 3", "4"] (p)

that's a strawman though. what about

  printf ' "%s"' $(numbers)
to output

  "1" "2 3" "4"

You can add echo commands to Makefiles just as easily. Rake is cool for Ruby programs, but that's about it.

Personally I find Rake awful for Ruby work too, because you often need to read through a Ruby program to figure out what a task does, and so many tools include custom Rake integration that makes you hunt down documentation elsewhere.

I usually use Makefiles for my Ruby projects.

