
Absolutely do not use make for any new project. If you love make, it's a big, red, burning flag that you're not demanding enough of your tools and that you're not keeping up with changes in your ecosystem.

There are many, many alternatives that are way better than make. Which one is best depends on the platform you're on. Most of them throw in automatic dependency management for free.

Yes, I know that the essence of the post is "use a build system". I agree completely. In fact, script everything. Then script your scripts. Then refactor your scripts because they are getting messy. But don't give impressionable souls the idea that "make" is anywhere near an acceptable (generalised, default) choice today.

I could not disagree with you more strongly. Make is powerful, ubiquitous, and extensible. There's a reason it's stood the test of time. If you must, use something that will generate makefiles for you, like CMake or GNU autotools, but even with these tools, you'll still be using make, and if you understand how make works, you'll be far better equipped to understand the actual operation of your build system.

To me, blatant avoidance of make is a big, red, burning flag that whoever made that decision values novelty over value and that he's likely to be a bandwagon-jumper or a NIHer in other aspects of his professional life too, and such people are best avoided.

>GNU autotools

Autoconf is similarly an abomination that should be put out of our misery. The syntax is so opaque that 99% of people copy-and-paste the configuration file into their project, leading to 10 minute ./configure runs that check for 200 things the project doesn't use in addition to the one that it does.

Not to mention that 99% of what it's checking for is OBE: A simple check for whether you're trying to build on a modern Linux, one of 2-3 Windows build flavors, or OS X, is sufficient to set the 5-10 typical flags most projects need.

Look at LuaJIT's build process, for example. It builds practically everywhere and digs into OS internals, and yet doesn't need anything complicated to build.

I agree that Make should be retired, and yet I'm typically the "Makefile expert" wherever I work. I've worked with a lot of smart people, and most don't know anything beyond the basics of Make. I have to suspect that most people who are defending Make haven't had to REALLY use it to do anything complex, because when you do, it sucks.

> I agree that Make should be retired, and yet I'm typically the "Makefile expert" wherever I work.

I'm the Makefile expert in my software shop. I still use GNU Make simply because I haven't found anything "better enough" to justify switching a toolchain. CMake was probably the closest--and that mostly because I know that it works for KDE, so I should be able to learn by example.

What sort of complexity do you think shows Make's problems? It's just a big dependency graph, so IMHO the hardest part is defining the dependencies in a way that's accurate (for parallelism and incremental builds) without being redundant. Multi-versioned builds shouldn't really add any complexity beyond a single variable per input dimension. Complex serial sub-processes can easily be factored out into scripts. Platform detection, likewise, can easily be factored out into a combination of scripts and multi-versioned builds.

So where have you seen it suck the most? I like to think I've done reasonably complex things with it, but maybe not.

>So where have you seen it suck the most?

When people (myself included) start taking advantage of the fact that Make is Turing-complete and writing arbitrary "programs" in their Makefiles.

It typically starts simple; you want to do something like build ALL the files in a folder, so you use a wildcard. Then you want to add dependency checking, so you use pattern substitution to convert between .o and .d files, keeping the same folder structure.

And I don't want the .o and .d files to be generated where the .c files live, so I need to add this code here that converts the paths.
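
In sketch form, that stage looks something like this (the src/ and build/ layout here is hypothetical):

```make
# Collect sources with a wildcard, then rewrite their paths so objects
# and dependency files land in build/ instead of next to the sources.
SRCS := $(wildcard src/*.c)
OBJS := $(patsubst src/%.c,build/%.o,$(SRCS))
DEPS := $(OBJS:.o=.d)

build/%.o: src/%.c
	@mkdir -p $(dir $@)
	$(CC) -MMD -c $< -o $@
```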

OOPS, this project uses a slightly different folder structure, and so I need to add a case where it looks in DIFFERENT relative paths.

Oh dear; I just realized that I need this code to work ALMOST the same in a different project that needs to be built with the same Makefile; that means I need to include it twice, using different options each time.

And it turns out that it DOESN'T work the way I expect, so now I have to use $(eval), meaning some of my Make variables are referenced with $(VAR), and some with $$(VAR), depending on whether I want them to grab the CURRENT version of the variable or the calculated version.

But now, now I have all of my code to create my project in one convenient place, and creating a new project Makefile is quite trivial! It's all very clean and nice. But the next person to try to change the folder structure, or to otherwise try to get this now-crazy-complicated house of cards to do something that I didn't anticipate has to become just as adept at the subtleties of $(eval ...) and Makefile functions (define ...); error messages when you get things wrong tend to make early C and C++ compiler errors look straightforward and useful by comparison.

For a far more complicated example, take a look at the Android NDK Makefile build system. 5430 lines of .mk files that make your life very easy...right up until you want to do something they didn't anticipate, or until they ship a version with a bug (which they've done several times now) that screws up your build.

Here's one small excerpt for your viewing pleasure, just to get the flavor:


> some of my Make variables are referenced with $(VAR), and some with $$(VAR), depending on whether I want them to grab the CURRENT version of the variable or the calculated version.

Hah, my latest Makefile work has been a set of functions which generate Make-syntax output, which then gets $(eval)ed. I hear you on the debugging nightmare that this can be: does a given variable get resolved when the function is first $(call)ed, when the block gets $(eval)ed, or when the recipe is invoked? But IMHO it's not too bad to do printf-style debugging. Replace $(eval $(call ...)) with $(error $(call ...)), then work backwards from there.
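
Concretely, the swap looks like this (with a hypothetical template function `gen-rules`):

```make
# Normal operation: parse the generated text.
$(eval $(call gen-rules,mylib))

# Debugging: dump the generated text and stop, instead of parsing it.
$(error $(call gen-rules,mylib))
```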

It also helps to be very disciplined about immediate assignment (`var := stmt`) and to always use recipe-local variables, rather than global variables.

I do feel like all of this aspect would be cleaner in Python or Lua... but the problem is, the _rest_ of the build, which more people interact with on a daily basis, gets more complex when that happens. Because there are always the ancillary targets and recipes where normal Makefile syntax works just fine.

Thanks for the NDK reference, I'm interested in seeing other "ugly" Makefile support infrastructure for comparison :)

I also use "printf debugging"; I have to.

The worst problem I had, though, was REALLY annoying; I was getting an inscrutable error in the middle of a function, and I could delete large parts of the code to get the error to go away, but putting ANY of the code back brought the error back -- it didn't matter which parts I put back.

It turned out that git had changed LF to CRLF in the file, and some end-of-line character was screwing up the spacing. Tweaking .gitattributes and fixing the files made everything work.

I SO hate significant white-space. I never really forgave Python for that "feature" either. But I could totally get behind Lua for the logic. :)

Actually, if it were my job, I would use LuaJIT to write a make replacement; the dependencies could all be specified in tables or extended strings, and any more complicated logic could be explicitly outside of the "rules".

>but the problem is, the _rest_ of the build, which more people interact with on a daily basis, gets more complex when that happens

I think a good design would NOT have that problem. You could have it say "these files get built by default rules" separately from "these rules trigger this bit of Lua code, which can spit out warnings, add dependencies dynamically (oh wouldn't THAT be nice!), or do this other bit of complicated build processing that doesn't fit well into the rule-based system".

If you're doing it in Makefiles, then yes, you could make everything more complicated that way. But I think a fresh design could really do a good job in killing make. I'm just so busy with other things right now, though...

Another reason I would STRONGLY choose Lua over any other scripting system is that the entire tool can embed Lua trivially, while Python or Ruby or Perl would each bring an entire ecosystem with it. You can have a dozen different Lua installs on your system without requiring a separate infrastructure for managing Lua installs.

> Another reason I would STRONGLY choose Lua over any other scripting system is that the entire tool can embed Lua trivially, while Python or Ruby or Perl would each bring an entire ecosystem with it.

Oh yeah, I like that idea. I'm just not so thrilled when I hear about modern build systems when they require me to install recent versions of relatively bulky scripting languages. I'm not a big fan of Lua in general, but this sounds like a perfect application.

So what? You are doing it wrong.

there are lots of people who use PHP for data crunching, bash for GUIs, and C without caring about managing memory properly.

should we retire all languages that can be abused?

also, your idea of how $ and $$ work is wrong. but it could be that you were messing up = and := before that point :) so i guess your point stands. but again, all languages can be abused.

blame the bad coder, not the tool.

> also, your idea of how $ and $$ is wrong

No, he's spot-on about that. If you are using a function from within a Makefile to generate Make code which then gets $(eval)ed, then the inner function must output $${variable} so that the outer function sees ${variable} and does not immediately resolve it.
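
Here's a toy illustration of the two expansion passes (everything here is hypothetical; `.RECIPEPREFIX` just avoids literal tabs and needs GNU make 3.82+):

```shell
cat > eval-demo.mk <<'EOF'
.RECIPEPREFIX := >
# The template is expanded twice: once by $(call ...), once by $(eval ...).
# $(1) resolves at call time; $$(MSG) survives the first pass as $(MSG),
# so the recipe picks up the variable's value when the recipe finally runs.
define echo-rule
$(1):
>@echo name=$(1) msg=$$(MSG)
endef

$(eval $(call echo-rule,greet))
MSG := hello
EOF
make -f eval-demo.mk greet   # prints: name=greet msg=hello
```

Write `$(MSG)` instead of `$$(MSG)` in the template and the recipe bakes in whatever `MSG` held at $(eval) time (empty, in this sketch), which is exactly the subtle difference being described.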

It's hairy. Hairier than macros in C. But like any other specialization, it can potentially save an immense amount of time for the rest of the team.

> messing up with = and :=

Sorry, but I wasn't messing up those two. That's Makefile 101 knowledge; I'm talking about crazy advanced stuff, where := doesn't work the way you expect.

Even := doesn't do what you want, because recipe lines are expanded only when a recipe runs, which is AFTER the whole Makefile has been loaded. So if you've assigned the same variable with := more than once, every recipe sees the last assignment. Here's an example:

    FOO := 1

    rule1 :
        echo $(FOO)

    FOO := 2

    rule2 :
        echo $(FOO)

make rule1 and make rule2 both echo 2. $(FOO) is evaluated in both cases AFTER the Makefile is loaded.

Target specific variables do this job:

    rule1 : FOO:=1
    rule1 :
        echo $(FOO)

    rule2 : FOO:=2
    rule2 :
        echo $(FOO)

Interesting. Didn't know this trick.

Turns out it wouldn't work for the usage pattern I needed (my example was simplified; typically the variable settings would all happen in another file, and they couldn't go on a target line because there wouldn't be a single target to attach them to, besides being ugly for that use), but it's good to know.

> I still use GNU Make simply because I haven't found anything "better enough" to justify switching a toolchain.

For what it's worth, as much as I rag on Make, I also find myself using it most of the time. To paraphrase Churchill, it's the worst build system imaginable, except for all the others.

I would just really, really love a system like Make crossed with an imperative language for everything that doesn't fit well with the auto-dependency-tracking model. There's a product in there somewhere.

Can't mod this up enough.

Microsoft's crazy-ass XML build mechanism is a giant pain in the ass, even compared to their old nmake (I think) tool.

The new system is a great example of a system where someone presumably said "Make is garbage, a big red flag, let's build something better."

Yep: the parts of the Windows tree still built with nmake are much more pleasant to hack on than the parts that use msbuild.

I'm confused. Are we implicitly talking about C(++) here, or would you accuse, say, a Clojure developer using Leiningen of valuing novelty over value?

Lots of "Don't do this" without suggesting alternatives doesn't do anyone much good.

If you're on a *nix system, building C/C++ programs (or automating one-off builds like the example), what would you recommend in make's place?

Assuming you're building your own code and are willing to arrange things in a compatible manner, then I have a solution to offer. It figures out the dependencies by reading the source code. You can create objects and binaries just by asking it to build a certain target.

I recorded an example of installing and using it here:


You can get a copy for experimentation here:


It isn't for everyone, but only you can know if it'll meet your needs. Hopefully this helps.

Whoa, your terminal playback thing is pretty neat. Did you use GNU Screen to record the session?

Thank you! I used script. Certain implementations have an option to emit timing and byte count data on stderr. Then it's just a matter of saving it and building something to honor those delays during playback.

That's awesome! I had no idea about `script -t`; I'd written my own a few years back. Thank you!
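
For anyone else who missed it, the util-linux incarnation of this workflow looks like the following (flag spellings differ between script implementations, so treat this as a sketch):

```shell
# Record a session: the typescript file captures the bytes, and -t
# sends timing data (delay + byte count per chunk) to stderr.
script -q -t -c 'echo hello' demo.typescript 2> demo.timing

# Replay it later, honoring the original delays.
scriptreplay demo.timing demo.typescript
```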

You can rip it off like this: https://github.com/ysangkok/terminal_web_player

Pretty majorly uncool to do that and not give credit or even a link to rachelbythebay.

> willing to arrange things in a compatible manner, then I have a solution to offer.

So you're proposing that people give up screwdrivers because figuring out what bit to use is hard, and that they should instead use your potato peeler, never mind that it doesn't actually drive screws?

Nope. Not at all. But I like that you think that.

That is very cool, and it's something that will only get better if the Apple-proposed modules get widespread.

Thank you! I'm not sure what you mean by "Apple-proposed modules", though. Please let me know more and I'll see what I can do.

Is the source available?

If you mean the source to my replay stuff, well, it seems you have already established a way to snag that. The only original part of that was my wrapper to the terminal to fetch byte streams and replay them while honoring the timing data.

Regarding the build tool source, I haven't decided what to do about that just yet. It could be particularly valuable in a corporate environment.

I debated whether to start namedropping build systems, but I decided it would be counterproductive and devolve into a debate over the relative merits of the systems I'd picked. In the end, what makes sense for your project depends heavily on the environment you're in.

In other words: if you're not sure, use make.

No, really not. If you pick a build system at random it's probably better than make. If it's a big project it's worth taking 30 minutes to actually look at what's available and pick one.

I'd suggest tup[1] and Shake[2].

[1]: http://gittup.org/tup/ [2]: http://community.haskell.org/~ndm/shake/

Thanks for the links! I will definitely try tup for a future project. From time to time I try different make-like systems, but I always come back to GNU Make. I also don't fear the GNU Make Reference. But tup looks, again, quite promising. It even has those little context-sensitive one-special-char one-letter variables :) But they do some things better by design, I see: http://gittup.org/tup/make_vs_tup.html

> This page compares make to tup. This page is a little biased because tup is so fast. How fast? This one time a beam of light was flying through the vacuum of space at the speed of light and then tup went by and was like "Yo beam of light, you need a lift?" cuz tup was going so fast it thought the beam of light had a flat tire and was stuck. True story. Anyway, feel free to run your own comparisons if you don't believe me and my (true) story.

The completely unprofessional tone here really turns me off to the entire system. If you write like a typical teenager, you probably code like a typical teenager, and I don't want a typical teenager writing my goddamn build system.

Besides: who the hell is bottlenecked on the build system? The compiler and linker (or the equivalent for your favorite language) do all the work. Anyone who believes this article makes a difference is completely ignorant of Amdahl's Law.

You might find my paper more informative and less unprofessional: http://gittup.org/tup/build_system_rules_and_algorithms.pdf

Many projects are bottlenecked on the build system. You can benchmark this by timing a null build (running 'time make' after building everything). Some examples from my machine are the Linux kernel (28 seconds), and Firefox (1 minute 23 seconds). Some of this time is from unnecessarily recompiling things, but that is a separate issue from the inherent lack of scalability in make.

Suppose I want to change a single C/C++ file in one of these projects - the total turnaround time from when I type 'make' to when the build finishes can be described as:

T(total) = T(build system) + T(sub-processes)

Ideally T(total) would be zero, meaning we get an instant response from when we change the file to when we can test the result. Here, T(build system) is the null build time, and T(sub-processes) is the time it takes to run the compiler and such. Using the Linux kernel as an example again, compiling fs/ext3/balloc.c takes 0.478 seconds. In comparison to the null build of 28 seconds, there are significant gains to be had by optimizing T(build system).

Amdahl's Law is a little tricky to apply since tup is not parallelizing T(build system), but rather changing it from a linear-time algorithm to a logarithmic-time algorithm. So you can set P easily based on the relative values of T(build system) and T(sub-processes), but S is not a simple "count-the-cores" metric. The speedup is effectively N/log(N), where N is the number of files. This is much better than simple parallelization - T(build system) for tup with these projects is only about 4ms. The total turnaround time for the balloc.c file in the Linux kernel is 1.1 seconds (which includes compilation and all the linking steps afterward), in comparison to make's total turnaround time of 29.5 seconds.
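
The null-build measurement is easy to reproduce on any project; a self-contained toy version (a hypothetical one-rule project) looks like:

```shell
mkdir -p nulldemo && cd nulldemo
printf 'out: in\n\tcp in out\n' > Makefile
echo data > in

make          # first run does real work (copies the file)
time make     # null build: make only stats files and reports
              # "up to date", so this time is pure build-system overhead
```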

For very large projects the build system can quite easily become the bottleneck when just changing a single file, which also happens to be the most important use-case for developers. In extreme cases a no-op build with make can easily get to 15+ seconds.

> In extreme cases a no-op build with make can easily get to 15+ seconds.

I have never seen cases so extreme, but my opinion on the matter is that this is a "build smell". If the Makefile has to resolve a DAG this large, that means that developers have to worry about compile- or link-time interactions this large, as well. 100k source files all linked into a single executable is more complex than 10k source files split across 10 executables, and a handful (say <100) of headers which represent "public" APIs. Because if you have 100k source files and your developers haven't all killed themselves already, then there are some firewalls separating various modules already. Formalize it at an API level and split apart the builds, so that it's _impossible_ for anything outside of the API itself to trigger a full rebuild.

Typically this shows up in recursive make projects with lots of sub projects—it doesn't take that much time to stat every file in question but reinvoking make sixteen times can be quite slow.

I don't deal with this by not using make, I deal with this by not writing recursive makefiles.
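
The non-recursive pattern is simple enough to sketch (file names hypothetical): the top-level Makefile includes a fragment from each directory, so a single make instance sees the entire dependency graph.

```make
# Top-level Makefile
MODULES := src lib
OBJS    :=                  # each module.mk appends its objects

include $(patsubst %,%/module.mk,$(MODULES))

app: $(OBJS)
	$(CC) -o $@ $(OBJS)
```

A `src/module.mk` is then just a couple of lines like `OBJS += src/main.o`.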

> it doesn't take that much time to stat every file in question but reinvoking make sixteen times can be quite slow.

Yes, reinvoking Make repeatedly tends to force redundant `stat` calls. But I have worked in environments where heavily-templated code was hosted over a remote filesystem, and every `stat` call was something like 10msec. That adds up _extremely_ fast, even with non-recursive make. Ugh.

> In extreme cases a no-op build with make can easily get to 15+ seconds.

Most developers will never see such a system. Optimizing for that kind of scale at an early stage has all the problems of any other premature optimization. It's most important to just get the build system out of the way so you can get your real work done, and you do that by writing makefiles, since makefiles are universally understood.

Now, when a project does grow to the proportions you mention, you can start looking at alternatives --- but I'd argue that these alternatives should amount to more efficient ways to load and evaluate existing makefile rules, not entirely different build paradigms. Make's simplicity is too important to give up.

You dislike his humor, that's fine. Calling it unprofessional is subjective. Some professional environments with great professional output appreciate humor.

Also, I am bottlenecked on my build system at my workplace, which takes ~45 seconds to realize nothing needs to be done (It isn't "make", because "make" does not support our build process).

I do enjoy reading through tup's site

I like how he uses Sauron's All-Seeing Eye as a drop-in replacement for God's algorithm.

also excellently documented... not like shake, at first


Personally I'd suggest Premake because it uses Lua instead of rolling its own scripting language. The world would be a better place if we could all agree on one scripting language for stuff like this so no one has to look up the syntax for things like creating arrays for every individual tool.

Feature-wise premake isn't entirely caught up to CMake but it has everything important. Also the premake files look a lot cleaner and more readable compared to cmake files.

> The world would be a better place if we could all agree on one scripting language for stuff like this

It's called bloody "make". If Prolog is a language, so is make. Make is still a scripting language even if it's not boneheadedly sequential and imperative the way, say, Python is.

Make is simply not up to the job. It makes writing correct build systems very difficult (it's hard not to under-specify dependencies). It does not support auto-generating code and then scanning it for extra build dependencies. It is a crappy tool and we should standardize on something better.

If what you want is a standardized language, SCons has that and is a bit more established.

CMake is pretty ugly in its own right, though I will grant it's hard to be worse than GNU make.

CMake is an abomination

Mind pointing out any constructive arguments against it? I recently switched over from hand-built makefiles to cmake for one of my projects, and it's been a breeze.

I prefer autotools to CMake. Maybe I didn't really give CMake a fair shot, but in a few hours of trying, I couldn't figure out the CMake equivalent of a bit of custom glue code in configure.ac that checked for a Lisp compiler.

I get the feeling CMake works fine as long as you color within the lines, but that it's much harder to extend than autotools is.

Yes, this is something I have thought through, though I may have been lucky/unlucky with my pick of projects. Generally speaking, projects based on GNU autotools seem to install with less fuss than those using CMake. That, and the fact that there is full, free documentation of GNU Autotools (even books out there), has made my personal choice easy.

For me the overall factor is the ease for any of my users, installing something I have written.

Using a macro language was already bad in the '90s (autoconf/automake use m4), but introducing one in 2000 is just tragic.

It is also impossible to debug, partly because of the abomination that the cmake language is: even understanding where a variable is defined is hard, and because the language is so unexpressive, the Find*.cmake modules often run to thousands of lines.

For all its suckiness, I take autoconf/automake over cmake.

I'm sorry, but your comment does not make much sense to me. autotools is mostly written in m4, with some shell snippets. Those are macro languages. If you don't like macro languages, logically you should not like autotools.

CMake is not "impossible to debug." In fact, it is a lot easier to debug than autotools-- partly because you end up writing so much less code. Also you don't have the three levels of "generated files generating other file generators" that you do in autotools.

For all its suckiness, I take autoconf/automake over cmake.

Well, we agree on one thing. autotools does suck.

My point was that autotools had the excuse of being written in the early '90s; cmake doesn't.

Your experience with debugging autotools vs cmake does not match mine: cmake is not an improvement over autotools (if only because at least with autotools, there is some decent doc out there and google knows a lot about autoconf insanity). It took me hours to debug trivial issues with cmake, because you can't easily trace where variables are defined.

I have never been able to figure out how to cross-compile a project that uses CMake. GNU Make lets you just set some environment flags and drop in a different compiler.

CMake honors the same environment variables as Make. Just set them before running CMake.

CMake also supports cross-compiling. www.vtk.org/Wiki/CMake_Cross_Compiling
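
For reference, cross-compiling with CMake usually means writing a small toolchain file and passing it with `-DCMAKE_TOOLCHAIN_FILE=...`; a sketch for a hypothetical ARM target (paths and triplet are examples):

```cmake
# arm-toolchain.cmake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

# Search the target sysroot for headers/libs, but the host for programs.
set(CMAKE_FIND_ROOT_PATH /usr/arm-linux-gnueabihf)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```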


You probably don't want to write ninja files yourself. CMake + Ninja is a nice combo.

If you are on a unix system, "redo" (designed by djb and implemented by apenwarr) is excellent. It's refreshingly simple and robust.

And, there's a minimalist version called "do" which is a hundred-or-so lines of shell, that does a complete rebuild (no dependency tracking) - so you can package that with your project, and not have to worry about your users having to install yet another build system.

Other alternatives I'd recommend, with some degree of success:

SCons - cross platform, but slow (every big enough project eventually abandons it)

waf - a unix-only (AFAIK) SCons derivative that's a bit more limited, but much faster

CMake - cross platform, very complete, on par with Make on every level (including ugliness and complexity)

Premake - cross platform, makes IDE files but can also write makefiles for you.

I'm also writing a redo-inspired build tool, but less devoted to djb's vision, and more intent on taking advantage of some nice features of Inferno/Plan 9. For example, the /env filesystem allows dependencies on environment variables. The <{} syntax allows parallelism while grouping output.

In a nutshell, credo tries to advance the build-tool state of the art, by replacing *ake files with shell-scriptable commands to build a system from bits of library code. More at http://github.com/catenate/credo See especially the first literate test, which has a more complete introduction.

waf is cross-platform (Python) https://code.google.com/p/waf/

I don't know if waf is really faster than other alternatives, but having used waf in production, I can safely say that deploying waf is at best hellish. A single-file script with a zipimported package that extracts itself to a world-writable hidden directory? Crazy.

For Unix-only and smaller projects another great option is fabricate.py

Another vote for Fabricate; here's a comment of mine from a while back explaining in detail why I prefer it (or Tup) to Make.


I use it on Windows too with an strace replacement I wrote which isn't online but if anyone's interested then just ask.

Thanks for the tip on fabricate; it appears to be just the tool I was looking for: a lightweight, dependency-based "build" system in Python that is not focused on distutils or making Python packages. We shall see how well it works for data processing.

'Drake' for that use was on here too, not too long ago. http://blog.factual.com/introducing-drake-a-kind-of-make-for...

I agree that make's syntax is hard to pick up and often unintuitive, but in my experience, if you reject make, it's a big, red burning flag that you don't fully understand the problems that it solves. The ability to define arbitrary dependencies and actions (including overriding built-in ones) is the defining aspect of a build system. All of the build systems I've used besides "make" (mostly scons, waf, ant, and gyp) have attempted to first-class certain types of dependencies, and the better ones allow you to define some of your own, but they all make assumptions about what one might want to do that make things I do all the time impossible (or at best, way off the beaten path). Examples include post-processing object files before linking, post-processing a binary after linking, or generating object files from something other than the compiler.

Syntax is hard? Somewhat, but that's not the problem.

Syntax is not intuitive? It's not bad once you've read the docs.

Complex Makefiles with huge included boilerplate libraries (e.g., the Android build system) are completely impossible to debug in any reasonable way? THAT is the problem.

As soon as you do anything non-trivial in your Makefile, you've created something that, when it fails, the reason will be COMPLETELY opaque to anyone who isn't intimately familiar with the ENTIRE program.

The implicit connections that a Makefile makes for building are great for JUST that part. As soon as you try to use them to write a program, you have to commit all manner of write-only-code abominations. I know because I've both DONE this and tried to debug OTHER people's code.

There is NO good way to debug Makefiles. And that alone means that they should be consigned to historic projects only, to be replaced by SOME kind of a better system.

> There are many, many way better alternatives to make. Which one is better depends on the platform you're on.

There's your answer: there are many alternatives, but only one make. OK, maybe 2 or 4. :) But a build system should for the most part be platform-agnostic. I have such a hard time understanding why so many programming languages seem to need their own Make alternative (Rake, Cake, Fake, ...?)

Sure, make is a generic tool, and you can do anything with it; from parsing json files, to compiling C code, to formatting documents.

...but so are bash scripts.

I wouldn't advocate actually using either of these though, unless the situation is appropriate.

Every system and platform has different requirements, and it's a bit of a big ask to want make to be 'the right tool' for all of them.

Certainly, I'd never use make to look after a Ruby project, or a C project. That doesn't mean I think make is horrible and I'd never use it for anything, but you can see, perhaps, how something as low level as make might not provide all the tools (dependency management, downloading resources from GitHub, automatically tracking system information, a templating system for generating system-dependent headers) you might need for some things.

You could think about it this way: if make did all of these things and was easy to use, there wouldn't be all of these other clones and different attempts at the same thing.

...but there are. So there's certainly something about it that isn't making people happy.

I just want to inform you that GNU make is indeed very good for C projects, as it can read the dependency files made by gcc (and clang), so you get very small, very readable makefiles that take care of tracking the dependencies for you. http://wiki.osdev.org/Makefile
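
A minimal version of that pattern (a sketch; it assumes a compiler with GCC/Clang-style -MMD support):

```make
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD writes a .d file listing each object's header dependencies;
# -MP adds phony targets so deleted headers don't break the build.
%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# Pull in whatever dependency files exist from previous runs.
-include $(OBJS:.o=.d)
```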


There are a lot of people who use makefiles for C projects, but most of those people don't write them by hand.

They use a frontend that generates a vastly complex and arcane makefile using either automake, cmake, qmake, etc.

These makefiles are utterly unmaintainable and deserve a place next to 'goto' in the section labeled 'considered harmful'... but they serve a purpose; correctly collecting build settings, templates and metadata and using those to construct the correct makefile.

That doesn't count as 'using make'.

There are vanishingly few projects that actually use make directly: Google, for its NDK builds (and a few other things; but these are massive recursive makefile monsters that give you a tiny safe API to work with), and Lua, with its 15 makefiles, one for each platform. There are a couple of other examples, but not many. I can't think of any big ones off the top of my head.

I think we can safely say that writing a Makefile to build your C code is a bad idea.

Linux kernel? Uses its own automation. See also *BSD make, which is the basis for ports among other things (unless that's changed since last I looked); entirely in-make. Plan 9 and the userspace port use mk, which is a slightly cleaned-up variant of the BSD make.

Fact is, a makefile for building C code is typically smaller than the autoconf files required to get autotools to work on the same code. Most horrible makefiles are written by terrible build-automation software (autotools being among the worst offenders here) or by people who don't understand the dependency-graph model. A ten-line Makefile that automates something repetitive is fantastically important, even if it just writes something down in an executable fashion so that you don't have to remember it later. Almost every build-automation tool out there either doesn't scale up (too simple) or is too hard for small work, or occasionally both, like Ant.
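To make the dependency-graph point concrete, here's a hypothetical Makefile of about that size; the file names and the plotting script are made up, but each rule just states a target, its inputs, and the command that produces it, and make reruns only the steps whose inputs changed:

```make
# Hypothetical: regenerate a report only when its inputs change.
report.pdf: report.md figures/plot.png
	pandoc report.md -o report.pdf

# plot.py is an assumed helper script, not a real tool.
figures/plot.png: data.csv plot.py
	python plot.py data.csv figures/plot.png
```

Typing `make` after editing `data.csv` redoes both steps; after editing only `report.md`, just the last one.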

If I need to do something simple, a Makefile is only a little more complex than a command line; I frequently crib the command I just ran to start off the Makefile. If I needed to do nontrivial logic in a Makefile and couldn't avoid it, I wouldn't use jam or tup or redo or SCons; I couldn't, because they're less useful than make! I would probably end up using Rake, which is the only build-automation tool I've seen so far that isn't a make clone and can do implicit dependency generation.

> I think we can safely say that writing a Makefile to build your C code is a bad idea.

I don't think that's a safe assumption at all, I think that's a dangerous overgeneralization. You're only considering open-source projects... and as you point out, both NDK and Lua (projects in the embedded space) use Make. I would not be surprised to find hordes of non-OSS embedded developers using Make natively precisely _because_ it is the "assembler" of build systems.

"There are many, many way better alternatives to make"

Can you give an example with the ubiquity of make and better expressiveness?

No, probably not. But I think the metrics you've chosen are irrelevant. Make was not only the best, it was the only player in the game until quite recently, so of course it's ubiquitous. As for expressiveness, yes, it certainly is expressive. It's also very hard to learn and very difficult to maintain. But build systems such as Gradle or SBT are also incredibly expressive, by virtue of being configurable in actual programming languages, with the added benefit of not having to learn a new language.

"But I think the metrics you've chosen are irrelevant."

Nothing is more frustrating than discovering that you need to install a new component or build system just to build a specific component. It gets worse when dealing with multiple external components in different build systems.

"It's also very hard to learn and very difficult to maintain"

I agree insofar as most people go into make without trying to learn it properly. There's a manpage and pretty good documentation for the GNU extensions. But in my experience, with custom build setups using stuff like Maven, a lot of time is spent fighting the build system when a simple Makefile would suffice.

"with the added benefit of not having to learn a new language."

You've shifted the burden from "learning a new but very simple language" to "learning a framework atop your language", which (based on my reading of Gradle's docs) is not very concise.

> Nothing is more frustrating than discovering that you need to install a new component or build system just to build a specific component.

If you're talking about a FOSS component that random people will have to rebuild: Yes, absolutely. (Go listen to the Linux From Scratch community complain about CMake!)

If you're talking about a proprietary project that will only be distributed as binaries, or maybe even not distributed externally at all, why not pick the best tool for the job?

Indeed, mseebach's answer is really saying that those metrics are irrelevant to him.

No, what I'm really saying is that if you get to cherry-pick the metrics, you can win any debate.

Of course there are tradeoffs in anything, and if the expressiveness of your build script is the deal-breaker for your project, by all means use make. On the whole, I will still argue that more modern alternatives provide a better total experience. Also, note that the OP is directed at beginners.

"Ubiquity" and "expressiveness" are hardly cherry picking in my book

For OP's particular use case... bash? More expressive and a standard on pretty much all *nix systems.

Indeed. The metric is exactly wrong; you don't want a more expressive alternative to make, you want a less expressive alternative, one in which builds are more constrained, so that a newcomer to your project has a chance of figuring out what's actually going on.

Use the best tool for the job. Your advice is probably correct for whichever apps you've written in the past, but it's definitely not correct in the general case. Appropriate tools are highly dependent on what you're building. In my area of experience, make has been more than adequate (cross-platform games, and Erlang servers with rebar + make). Yes, the syntax is arcane and ugly, but it does a fine job.

While my advice might not be directly applicable in the general sense, I maintain that the headline advice of the article is wrong in the general sense.

Also, I quite specifically didn't claim that make is never the right choice.

> If you love make, it's a big, red, burning flag that you're not demanding enough of your tools and that you're not keeping up with changes in your ecosystem.

I wouldn't say that I love make, but it is my tool of choice. I have tried out SCons and CMake, and both seemed to require more hoop-jumping for customizations. That is, Makefiles are very close to being raw shell scripts. That, and SCons had performance issues (IIRC due to the md5sum it was doing on every input file). I have casually perused bjam files and even _building_ boost seemed awkward to me, with the weirdo string-parsing within individual flags. I am happy with autoconf as a package user, but I've never developed any software packages of my own with it.

I typically work with C and C++ on Linux server environments. I'm quite happy with non-recursive Makefiles, timestamp-based change detection, and GCC-generated dependency graphs. This setup does incremental builds correctly for me, and parallelizes linearly (and occasionally superlinearly due to I/O). I'd genuinely like to know what magic sauce you would recommend for build automation on this sort of a platform.

This is akin to saying that you should not be using libc, on account of there being many newer, more suitable libraries for string manipulation, I/O, and so on.

There's a clear downside to having dependencies on relatively obscure, cutting-edge technologies that may not necessarily work or be available elsewhere. And since build systems usually don't really need to be fancy, as much as they need to work, being conservative in this area usually doesn't hurt.

Such a broad statement needs support and you offer only vague assertions. I would be the first to suggest alternatives for make on a large, complex project but for a simple project make is quite useful, particularly since it's so broadly available.

I have a few websites with very short makefiles to package static files. That's 5 simple lines of mostly patterns, very easy to understand, and there's no tooling-related overhead on any system we use. I wouldn't say that make is better than all of the alternatives, but you're in serious diminishing-returns territory trying to make further improvements.
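Such a makefile might look roughly like this (a sketch; the directory layout and the choice of UglifyJS as minifier are stand-ins, not the actual sites' setup):

```make
# Sketch: minify every src/*.js into dist/*.min.js using pattern rules.
SRC := $(wildcard src/*.js)
OUT := $(patsubst src/%.js,dist/%.min.js,$(SRC))

all: $(OUT)

dist/%.min.js: src/%.js
	mkdir -p dist
	uglifyjs $< -o $@

.PHONY: all
```

Only the files whose sources changed get re-minified, and adding a new source file requires no makefile edits at all.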

The real advice I would offer is the bottom-line observation that your build system is supposed to save you time. Don't start looking for a cool-kid-approved new one until you're spending time on the tool rather than on the complexity specific to your project.

"Which one is better depends on the platform you're on"

That works against a portable build. That still matters to some of us.

By 'platform' I meant programming language and execution environment (JVM/Ruby/Python), not OS.

I'm fond of not having the number of build systems I'm responsible for maintaining grow in step with the number of programming languages I use in a project.

All build systems I've come across are quite happy to build code in other languages, just like make. The reason I'd recommend the dominant system in each ecosystem is, well, that: it's dominant, and thus likely to be better supported for the issues you're likely to face.

I've gone back and re-read your comments in this post and tried to find something concrete in them. You use strong, imperative language, yet I have no idea what you are recommending.

Which one is better depends on the platform you're on.

We are not stuck on any particular platform, we target numerous platforms new and old. Using make, the target may determine which of those wonderful tools in our ever changing (improving, failing and obsoleting) ecosystem does the actual build.

	xcodebuild -target MyApp -configuration Release clean
	xcodebuild -target MyApp-universal -configuration Release-universal clean
But all I need to do is type make
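The wrapper implied here could be as small as this sketch (the targets and configurations are copied from the commands above; the `release` goal name is an assumption):

```make
# Sketch: one `make` entry point dispatching to the platform's own tool.
release:
	xcodebuild -target MyApp -configuration Release

clean:
	xcodebuild -target MyApp -configuration Release clean
	xcodebuild -target MyApp-universal -configuration Release-universal clean

.PHONY: release clean
```

On another platform, the same goal names can dispatch to whatever tool that target requires, so the entry point never changes.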
