Afraid of Makefiles? Don't be (matthias-endler.de)
502 points by tdurden 116 days ago | 272 comments



Make's underlying design is great (it builds a DAG of dependencies, which allows the graph to be walked in parallel), but there are a number of practical problems that make it a royal pain to use as a generic build system:

1. Using make in a CI system doesn't really work, because of the way it handles conditional building based on mtime. Sometimes you just don't want the condition to be based on mtime, but rather a deterministic hash, or something else entirely.

2. Make is _really_ hard to use to try to compose a large build system from small re-usable steps. If you try to break it up into multiple Makefiles, you lose all of the benefits of a single connected graph. Read the article about why recursive make is harmful: http://aegis.sourceforge.net/auug97.pdf

3. Let's be honest, nobody really wants to learn Makefile syntax.

As a shameless plug, I built a tool similar to Make and redo that lets you describe everything as a set of executables. It still builds a DAG of the dependencies, and allows you to compose massive build systems from smaller components: https://github.com/ejholmes/walk. You can use this to build anything your heart desires, as long as you can describe it as a graph of dependencies.
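Roughly, a Walkfile is just an executable that gets a phase ("deps" or "exec") and a target; here's a sketch of that logic as a shell function, with hypothetical targets and the build commands stubbed out:

```shell
# Hypothetical Walkfile logic as a shell function. walk(1) invokes the
# real Walkfile as `Walkfile <phase> <target>`, where phase is "deps"
# (print the target's dependencies) or "exec" (build the target).
walkfile() {
  phase="$1" target="$2"
  case "$phase/$target" in
    deps/prog)   echo main.o ;;                    # prog links main.o
    deps/main.o) echo main.c ;;                    # main.o compiles main.c
    exec/main.o) echo "stub: cc -c main.c" ;;      # real build command here
    exec/prog)   echo "stub: cc main.o -o prog" ;;
  esac
}
walkfile deps prog   # prints "main.o"
```

walk(1) queries "deps" for every target, assembles the graph, then runs "exec" in dependency order.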


Also, I'm always surprised how little mention there is of graphs when we start talking about build systems. If you're making a build system (for literally anything), the DAG is your best friend. This is how all major build system tools work (make, terraform, systemd (yes, it's a build system when you think about it)), and it's how we're able to parallelize execution so easily. It's important to be conscious of the fact that this is what you're doing when you're making a build system: connecting a graph.

Highly recommend reading https://ocw.mit.edu/courses/electrical-engineering-and-compu... for some theory on parallel execution over graphs, if you're interested in things like this.
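As a toy illustration of the graph view, the venerable `tsort(1)` does exactly the ordering step: feed it dependency pairs and it prints a valid build order (nodes with no path between them are the ones a tool like `make -j` can run in parallel):

```shell
# Each input line is "prerequisite target"; tsort prints the nodes in
# a dependency-respecting (topological) order.
printf '%s\n' \
  'parse.c parse.o' \
  'lex.c   lex.o'   \
  'parse.o prog'    \
  'lex.o   prog'    | tsort
# prog is guaranteed to come last; parse.o and lex.o are independent
# of each other, so a parallel builder could compile them concurrently.
```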


> As a shameless plug, I built a tool similar to Make and redo

Also be sure to have a look at tup, which operates vastly more efficiently by simply walking the DAG in the opposite direction:

http://gittup.org/tup/

That is, instead of looking at what you want to build and checking all the timestamps of dependency files, it can use e.g. inotify to know exactly which files changed, and rebuild everything that depends on those files.

Moreover, it performs the modification check only once, at the beginning. During the build, it doesn't need to re-check everything, because it already knows which files were recreated.


Good idea, but it misses some clever tricks. What happens when a dependency does change, but not in a way that actually matters? Buck has a nice feature where Java libraries are only fully recompiled when the API of their dependencies change, since everything is dynamically linked. https://www.youtube.com/watch?v=uvNI_E0ZgZU


tup can be made to ignore unimportant changes. I had to look up the syntax, hopefully I got it correct.

I have foo.c, bar.c and a rule like:

  : foreach foo.c bar.c |> ^o^ gcc -c %f -o %o |> %B.o {objs}
  : {objs} |> gcc %f -o %o |> baz

The ^o^ part tells it not to trigger the next rules if there's been no change to the output. So if you just change your source files to use an updated license, reformat them for clarity, etc., then nothing else will happen.

I used this with some of my literate code projects where I had tup running org-tangle for me (via an emacs script). If I'd only updated the documentation, and the code hadn't changed, nothing else would build. If I'd only changed unimportant parts of the code that generated the same object files, no new binary or library would be built.


> What happens when a dependency does change, but not in a way that actually matters

I believe that tup does have some features in that direction, but I may be mistaken.


This looks awesome! Definitely very similar to walk(1). Thanks for sharing!


I can't recall what precisely, but I tried tup with a tiny project, and I found it couldn't handle the dependency structure. And apparently (as of a year ago) it wasn't supposed to be handled.


That sounds strange, especially for a tiny project. What kind of dependency structure did you have?


I really can't recall, but it was a very stiff roadblock. You just couldn't depend on some file or variable...

Maybe someone who used tup more recently can pitch in


> Using make in a CI system doesn't really work, because of the way it handles conditional building based on mtime.

Any CI I've seen starts from a fresh repo checkout and rebuilds everything every time, so it's not an issue with practice.

OTOH, probably all my projects were small enough for it to not hurt when the CI builds everything from scratch every time. I might look at this from another angle if the things I worked on were mega-LOC C++ programs, and not kilo-LOC Go programs.


That works until your build system is sufficiently large or time consuming, or not the C/C++ shape make was originally built for. For example, at my company we have a build system for building all of our Amazon Machine Images (AMIs). It doesn't make sense to re-build images unless their dependencies have changed, but mtime just doesn't make any sense for a build system like this. Trying to coerce make into doing what we wanted was like pulling teeth.


Why didn't you just take the make sources and replace the mtime() check with <whatever>, making it a command line option or environment variable? Or simply a .MTIME target, which would return an int-like string of rank, defaulting to mtime() if not declared.


I guess it's because of his point #3.

Honestly, if I were to modify make for some custom behavior, I would look for another tool¹ or create yet another builder tool. Because although I already know most of the syntax, point #2 is an incredibly big problem, the GP missed point #4 in that make does not manage external dependencies², and the syntax is not bad only for learning, but for using too.

1 - Almost everybody seems to agree with me on this. The only problem is that we have all migrated to language-constrained tools. This is a problem worth fixing.

2 - Whether it is installing them or simply testing that they are there, make can't do either. Hello autotools, with its syntax set to extreme cumbersomeness.


I don't know if you really understand Make. Even the original article misses the point about what Make is really about in recommending `.PHONY` targets.

Make is about "making files." Or to be a little more semantically specific, Make is about "processes that transform files to make new files based upon their dependencies". It really doesn't matter that your file is a C source file, or some unpreprocessed CSS, or a template, or an AMI. To your AMI example, if you specify a dependency (or dependency chain, DAG) as a time-stamped file (or set of files), you can get make to rebuild the AMI for you along with any other supporting or intermediate files.

IMO, the suckless guys are masters at writing deceptively complex but highly readable & concise Makefiles. Here's a plan9 workalike utility library [0], a util-linux/busybox-style package of binaries [1], a window manager [2], a terminal emulator [3], and a webkit2-based web browser [4]. I highly recommend you study them if you're looking to up your Make game.

[0] http://git.suckless.org/9base/tree/Makefile

[1] http://git.suckless.org/sbase/tree/Makefile

[2] http://git.suckless.org/dwm/tree/Makefile

[3] http://git.suckless.org/st/tree/Makefile

[4] http://git.suckless.org/surf/tree/Makefile


The first example uses recursive make, breaking the graph :(.

I'm not saying you can't use make (we did use make before we switched to walk), it's just more painful for non-C/C++ build systems. All we really want from a build system is a generic tool for describing, and walking, a graph of dependencies.


Recursion does not imply a circular dependency which is most people's biggest concern with Make recursion. A graph with loops is still a graph. A broken graph is actually 2 graphs.

If you're careful (and even if you're not), loops in your dependency graph are usually a non-issue. And if you use a single `Makefile` it'll detect the circular dependency and try to ignore it.

Let's take an example processing Sass CSS files. You just need 2 folders in your project, `css/` and `scss/`, and a `Makefile` to process all your Sass files into CSS.

   # Locate our source assets, save a list
   SRC = $(shell ls scss/*.scss 2>/dev/null)
   # Specify a filename transformation from scss to css and convert the SRC list
   OUT = $(SRC:.scss=.css)
   # Define rule of how a scss to css transformation is supposed to happen
   css/%.css : scss/%.scss
        sass $<:$@
   # Make the `all` target depend on the OUT list
   all : $(OUT)

It takes just 5 lines to teach `make` how to process Sass files. This would process any `.scss` file you dropped in `scss/` and save it in `css/`.


Sorry but that makefile is repeating the absolutely most common mistake in make.

If you use @import in your scss file the css will not be rebuilt if only the imported file is modified.

So no. It does not take 5 lines to teach make how to build Sass. To solve this you either need to teach make about sass @import, or teach sass about make and let it generate makefiles (like gcc -MMD). Or simply just use sass --watch for incremental builds and take the recompile hit if you need to restart it for whatever reason.

(While I'm at it...Another thing missing from that makefile is source maps (.css.map) generated by the sass compiler. It's not only one css file being generated from each sass file. That will complicate the rules even further)


You don't need to (specifically) teach make about `@import`. It's another scss file and it (hopefully) should be part of your SRC list. Yes, having something like `gcc -MMD` would be better, but that's sass for you.

To fix the `.css.map` issue is simple. You can't use the pattern you've seen for yacc/bison, though. The `$@` var would have both files in it.

   %.tab.c %.tab.h: %.y
           bison -d $<

Instead, simply adjust the pattern rule to be the functional equivalent:

   css/%.css : scss/%.scss
        sass $<:$@
   css/%.css.map : scss/%.scss
        @touch $@


Even if the imported file is part of SRC list, any files importing it will not be rebuilt if only that file is modified.


Ahh. Got you. An easy fix again. Add `$(SRC)` to the pattern rule.

   css/%.css : $(SRC) scss/%.scss
        sass $<:$@
   css/%.css.map : $(SRC) scss/%.scss
        @touch $@


Now you will get correct results. But now if you change just one file, you will always rebuild everything, even files that don't import the changed file. Maybe not a big deal for Sass, since you usually don't have mega-LOCs of Sass and compiling is fairly quick. I just want to highlight that make isn't always as simple as it looks at first glance.


Yet another error is that if you remove one scss file, the old generated css file will not be removed after a rebuild. If the next build step does something like wildcard-include all *.css-files you will get problems.


Blind includes are a dumb thing to do (it's one easy way to exploit a source codebase). The better way is to generate your include statement from the actual contents of your codebase. Use `$(OUT)` to build your include statement. That var says, "load exactly these files of mine", not the lazy/dangerous form of, "load any files that might happen to be there".

And yes, by that logic, `SRC = $(shell ls scss/*.scss)` is dumb. That is not lost upon me.


> To solve this you either need to....

Or teach a third-party tool to output a list of @imported files for a given Sass file. Then this list can be used as that Sass file's dependencies.


Most languages don't build as quickly as go. Having to wait tens of minutes for a clean build is not unusual.


That brings back not-so-fond memories of trying to compile Servo on my notebook. :)

By the way, I just noticed that compilation times are another argument for microservices.


CircleCI (at least) caches build dependencies for speed, but it does it based on specific directories, not timestamps. As a result, it's not as fast as it could be because unless you munge your build to fit, it doesn't cache intermediate build products of your own code.


> so it's not an issue with practice.

I meant "in practice", of course...


> 2. Make is _really_ hard to use to try to compose a large build system ... recursive make is harmful

Note that "recursive make is harmful" does not argue against multiple Makefiles!

There's nothing wrong with using multiple Makefiles per se, as long as they include rather than call each other. In other words, the article just says that Makefiles should use sub-Makefiles via "include" rather than executing them through a separate (recursive) call to Make.

However, I agree that composition of Makefiles is still a pain, given that the included Makefile must be aware that it is executed from another (parent/grandparent) directory.
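The non-recursive scheme looks roughly like this (module names hypothetical); each fragment appends to shared variables using paths relative to the root, so everything lands in a single graph:

```make
# Top-level Makefile: include per-module fragments instead of
# recursing into them with $(MAKE) -C.
MODULES := lib app
include $(patsubst %,%/module.mk,$(MODULES))

# Each module.mk contributes root-relative paths, e.g.
#   SRC += lib/foo.c
OBJ := $(SRC:.c=.o)

prog: $(OBJ)
	$(CC) $^ -o $@
```

The awareness problem mentioned above is exactly the `lib/foo.c` prefix: each fragment has to spell its paths from the root, not from its own directory.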


> 1. Using make in a CI system doesn't really work, because of the way it handles conditional building based on mtime

This is my #1 gripe with Make, and many other build systems as well. There are so many flaws in the timestamp approach. Most are easily fixed with cryptographic hashes.

I like the OCaml package manager OPAM in that respect; internally it just stores checksums. I believe it also uses timestamps to speed things up, but only for equality comparison (not for older/newer comparisons, which can easily lead to wrong results).


Any reason the hashes need to be cryptographic? Is there a security consideration I'm missing?


Depending on how deep down the rabbit hole you want to go, you could argue that e.g. using md5 could allow "an attacker" to submit e.g. an innocuous image that collides with another source file, causing it to be excluded from the build and thereby opening a security hole.

But that's kinda silly.

I might argue in favor of (fast) cryptographic hash algorithms in general since they're fairly well understood / implemented / hardware accelerated / tend to have extremely "balanced" output, and are thus less likely to collide accidentally... but that's about all I can think of.


Makefile syntax is pretty simple, once you're used to it.

Multiple Makefiles work fairly simply with the include directive.

The thing I've found painful before, with C projects, is getting Make to recognise that it needs to rebuild when a header has changed. There are various ways around this (makedepend etc) but they've all been quite painful to set up and not quite perfect.

That said, with modern machines that have NVMe storage and massively fast processors, a complete rebuild is seldom a big time cost


Or you can let the compiler generate the header-dependency Makefiles for you:

http://make.mad-scientist.net/papers/advanced-auto-dependenc...

You already tried that? I do not find it painful to set up.
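For anyone who hasn't seen it, the usual pattern is only a few lines; a sketch (the mad-scientist page above covers the corner cases):

```make
SRC := $(wildcard *.c)
OBJ := $(SRC:.c=.o)

prog: $(OBJ)
	$(CC) $^ -o $@

# -MMD writes a .d fragment per object listing the headers it includes;
# -MP adds phony targets so a deleted header doesn't break the build.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Pull in the generated fragments; missing ones are silently ignored.
-include $(OBJ:.o=.d)
```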


Maybe not using exactly that method, but I am pretty sure I have tried using gcc for it. Will have a proper read of that later.

The syntax is a tad hairy though, and I'd want to adapt it - I tend to avoid compiling individual C files to objects these days, due to WHOPR optimisation.


If you don't care about incremental builds and want just a full rebuild, I would write a shell script instead of a Makefile.

That said, I think incremental builds are important for most use cases.


Oops, let me amend that. The partial order of make is still useful for parallel builds from scratch, even if you don't care about incremental builds. Like almost all other languages, shell forces a total order on execution.

On the other hand, computers are fast enough that serial full builds indeed work in some cases.


I'd argue that the make syntax and built-in features are a huge boon over starting from plain-old-shell regardless.


What are some examples of that? Shell can do basically everything make can. The syntax of both is ugly but I'll grant that make is more uniform.

And btw there is no way to use make without shell, but you can use shell without make.


I guess it probably comes down to preference, but I can take a list, write a one-line .c->.o transform, a one-line link target, add in a clean target etc etc faster with make.

Sure, I can write these as functions in bash, call once for each source file, check return codes etc etc, but I find expressing dependencies faster than writing anything like robust code in shell, and make deals with failed steps by stopping.


And as you rightly pointed out above - once you get the dependencies nailed down, you can parallel build trivially.


> 2. Make is _really_ hard to use to try to compose a large build system

Are Android and LEDE/OpenWrt big enough?

> 3. Let's be honest, nobody really wants to learn Makefile syntax.

That's probably very subjective, I find JavaScript, C++, PHP or Perl much worse :-)

Anyway, for the past few years I've been cheating a lot, using CMake to get complex Makefiles almost for free.


> That's probably very subjective, I find JavaScript, C++, PHP or Perl much worse :-)

Oh definitely. I actually like Makefiles, but in my experience, more teams have a deep familiarity with some programming language than with Makefile syntax. I haven't met very many people who have read the GNU make manual and know all the idiosyncrasies of Makefile syntax.


+1 for cmake. It works so well that in my latest C++ project the cmake file is more or less just a listing of source files, and it does its job on Windows, Mac and a few different Linux distros without any platform-specific stuff. So well that I replaced the huge makefiles of the libraries I use with really small cmake files that just work and don't require modifications every 3 months because this specific distro causes problems or whatever.

It feels much more like in those IDEs where you just drag all the source files in and you're done.

I even prefer editing Visual Studio project files by hand to many makefiles out there...


This lovely cough line brought to you by a lede/openwrt Makefile:

  target/linux/ar71xx/image/Makefile: CMDLINE = $$(if $$(BOARDNAME),board=$$(BOARDNAME)) $$(if $$(MTDPARTS),mtdparts=$$(MTDPARTS)) $$(if $$(CONSOLE),console=$$(CONSOLE))


That's just a bit of cmd line building. Three clauses to build a string -

  If BOARDNAME is defined, add board=$(BOARDNAME) to the string
  If MTDPARTS is defined, add mtdparts=$(MTDPARTS) to the string
  If CONSOLE is defined, add console=$(CONSOLE) to the string

Pretty simple, if a little like the ternary operator in C.

You must have seen more complex clauses than that in shell scripts and all sorts of places.
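The pattern in isolation, if you want to play with it: `$(if cond,then)` expands to `then` only when `cond` expands to something non-empty:

```make
BOARDNAME := generic
# CONSOLE deliberately left undefined.
CMDLINE = $(if $(BOARDNAME),board=$(BOARDNAME)) $(if $(CONSOLE),console=$(CONSOLE))

all:
	@echo '$(CMDLINE)'   # prints "board=generic" (plus a trailing space)
```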


> Are Android and LEDE/OpenWrt big enough?

Doesn't mean that it's not hard. :) And IIRC Google is trying to move the Android build to Blueprint because make is too complicated.


> Let's be honest, nobody really wants to learn Makefile syntax.

I haven't found that to be the case. Once I show people how simple it is they realise make is somewhat approachable, similar to OP's article.


Consider yourself lucky! If I had a penny for every time I had to explain automatic variables...


You can explain how to use a man page (man make RET /\$+ RET).


The number of times I've forgotten that unzip restores timestamps and so make has decided to rebuild everything...

`unzip -D` is your friend.


ccache is your friend ;-)


Honestly, I've never actually written a makefile to compile a C or C++ program. In fact, I haven't written any C of my own since university.

My makefiles are usually just a way to record a data pipeline. Get these files, shove them through these scripts here and those programs there. Launch a web server to show the output.


I honestly have no idea why there's so much fandom towards Make in this thread, but for me there are a few absolutely devastating problems with Makefiles:

a) Mtimes-as-change-detection is fundamentally broken given the reality of networked file systems and... physics. (Minor problem, but extremely annoying to work around in practice.)

b) Nobody can actually really understand all the interdependencies between all the code in their system(s!), and yet Makefiles expect you to specify all of that explicitly?!? Yes, you could technically specify that -- and you'll want to -- but you won't, because you don't know and don't have the time.

c) Make support for builds that change the structure of the build during the build is abysmal. E.g. after "processing foo.xml we now have more files than we had before! What are you going to do?". Well, in Make it's some sort of custom solution with ".d" files and "gcc -M" or whatever. This is utterly broken in that it pushes all the complexity onto the user. So, yes, an elegant model, but it doesn't actually solve the problem. If you want to see a better solution see the "Shake" paper.


> As a shameless plug, I built a tool similar to Make and redo […] https://github.com/ejholmes/walk

Your README’s example shows `parse.h` as output from `Walkfile deps parse.o`; I think that is a mistake.

As for your build system (and comment), I have some questions:

1. How do you achieve using a deterministic hash as condition (and aren’t all hash functions deterministic)?

2. Why would you not be able to use mtime as a dependency? The only case I have run into is when the build depends on data from remote machines, but in that case, I think the proper solution is to have an initial “configure” step where you retrieve the data your build depends on, and/or a build rule to update this data.

3. Does your build system execute the Walkfile for every node in the graph on each walk? Because that sounds like a quite noticeable overhead for larger projects.

4. Am I right in thinking that the primary advantage with your system, over make, is that a shell command is executed to obtain a target’s dependencies?


1. Yeah, hash functions are deterministic, but your input needs to be deterministic across machines too. For example, on a unix system, you may want to conditionally build if any files have changed. To do that, you could generate a deterministic hash of the dependencies with something like `find . | sort | xargs sha1sum | sha1sum | cut -d ' ' -f 1`. Including mtime in that would break across machines.

2. Mainly because it doesn't translate across machines; it only applies to your machine. If someone checks out the repo on their machine, mtime is different. As soon as you move a build system to CI, you need something better than mtime, like content-addressable hashes, if you intend to cache targets.

3. It executes the Walkfile twice for each target in the graph; once to determine the target's deps, and once to execute the target. This definitely hasn't been a bottleneck for anything I've used walk(1) with so far.

4. Correct! But even more so, replace "shell script" with "executable". The Walkfile can be written in any language you want, as long as it has the executable bit set.


> your input needs to be determinsitic across machines too

This is one of those things that "distcc" and "ccache" have dealt with effectively - anyone trying to build big C++ projects would be well served by looking at those tools.


Thanks for the clarifications.

As for #1, what I do not understand is how a `Walkfile` allows me to use a content hash change to trigger a rebuild.

Your documentation says that a file list should be returned for `deps`, so how does the `Walkfile` communicate that e.g. `main.o` should be updated if `sha1sum main.c` is different than on last invocation?


Good question. It's not a concern of walk(1) itself, since it's impossible to generalize for every use case. walk(1) _always_ executes the target, but the target itself can decide whether it actually needs to do work. For a C project, you would just do something like this: https://github.com/ejholmes/walk/blob/master/test/111-compil....

That's a trivial example using mtime. You could replace it with a hash function if you wanted.
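A content-hash variant of that check might look like this (stand-ins: `demo.txt` as the source file, `touch demo.out` as the expensive build step):

```shell
# Rebuild only when the source's content hash differs from the hash
# recorded on the previous run.
printf 'int main(void){}\n' > demo.txt      # stand-in source file
stamp=.demo.txt.sha1
new=$(sha1sum demo.txt | cut -d' ' -f1)
if [ "$new" != "$(cat "$stamp" 2>/dev/null)" ]; then
    touch demo.out                          # the real compiler call goes here
    echo "$new" > "$stamp"
fi
```

On the first run the stamp file is missing, so the build always fires; subsequent runs are no-ops until the content actually changes.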


Very much agreed :| It's great when it's kept fairly simple (for both automating random stuff and building things! and it's installed everywhere! and has some cross-platform tools!), but it quickly turns into a nest of vicious footguns unless you're extraordinarily careful.


> If you try to break it up into multiple Makefiles, you lose all of the benefits of a single connected graph

Only if each Makefile is treated as a separate rule set processed with a separate make invocation.

> Read the article about why recursive make is harmful

That (now) venerable, old paper in fact shows how to break up into multiple makefiles (called "module.mk" in its examples) which are included into one graph.

(It's possible to actually have this file be called Makefile. Not only that, but it's possible to have it so you can actually type "make" in any directory of the project where there is a Makefile, and it will correctly load and execute the rules relative to the root of the tree.)


> Let's be honest, nobody really wants to learn Makefile syntax.

Make's syntax is quite simple. It's a bunch of variables one has to memorise, and the man page is a command away. And any decent editor knows to insert literal tabs.

> Make is _really_ hard to use to try to compose a large build system from small re-usable steps.

I have no experience myself, but Linux's build system is a bunch of homebrew makefiles, and all the BSDs and their ports trees build with bmake. These are enough positive examples for me to think that Make is good for big systems.


> Make's syntax is quite simple.

I'll just leave this here:

https://github.com/sapcc/limes/blob/62e07b430e2019a6c1891443... (then used in line 42)


That's a variable assignment. What's complex with that?


The fact that I even need to use variables because the language does not have proper string literals.
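E.g. the classic dance just to get a comma or a literal space into a function call:

```make
comma := ,
empty :=
space := $(empty) $(empty)

WORDS := a b c
CSV   := $(subst $(space),$(comma),$(WORDS))

all:
	@echo $(CSV)   # prints "a,b,c"
```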



Main downside, as with many tools much better than make:

>walk is currently not considered stable, and there may be breaking changes before the 1.0 release.

:)

It also needs Go, which is a pretty non-lightweight dependency on Windows, I guess. (And personally, I don't find these dir-hierarchy conventions and bash's "case $target in a) ... b) ..." any friendlier at all, though make has some quirks with variable interpolation.)

I'm not arguing to use make for very complex situations, but the usual src -> intermediate -> executable, and generate-this-from-that of any size and count, is a perfect task for make. Makefiles are unsuitable not for big projects, but for complicated build systems, where it's not enough to just do the steps in the correct order. If your build system is complicated, it should at least be worth it. Otherwise use make.



walk is built with Go, which also means it's a single statically linked binary with no dependencies if you download the pre-compiled binary.

If you don't like bash syntax, you can use Python, or Ruby, or Perl, etc. Any executable can serve as a Walkfile.

It's a completely fair point that make is installed on pretty much every machine by default, which is why we won't see it going away anytime soon (nor should it, it's still good).


I think the primary thing that makes people fear Makefiles is that they try learning it by inspecting the output of automake/autoconf, cmake, or other such systems. These machine-generated Makefiles are almost always awful to look at, primarily because they have several dozen workarounds and least-common-denominators for make implementations dating back to the 1980s.

A properly hand-tailored Makefile is a thing of beauty, and it is not difficult.


Most of the things in an autogenerated Makefile are there for a reason: dependency tracking, stuff like "make help", proper cross compile support (trickier than you think!), test running, dist, distclean, configure flags... You could reimplement them yourself, but you're going to end up doing a whole lot of work for little gain.

I'll write Makefiles by hand for small C/C++ projects, but for anything serious I'll use cmake/etc.

Source: We made a "properly hand-tailored Makefile" for Rust. It started out short and elegant, but it quickly grew into a nightmare lasting 4 or 5 long years. Only about one or two people (Alex Crichton and Brian Anderson IIRC) had any clue how it operated. Proper cross-compiling support involved multiple levels of nested variable expansion all over the place to find the right build/host/target compilers so that Canadian cross builds worked. Alex ended up doing some heroic effort to throw away all the Makefiles and going to (effectively) a custom build system using Cargo, which was a massive simplification.


CMake is easy enough that I'm having trouble picturing a scenario unserious enough that I'd consider not using it.

Then again, as a ROS user, I'm also a de facto CMake expert, so perhaps I underestimate how difficult CMake is.


I've been told that, before KDE switched to CMake (becoming the first big open-source project to do so), there were only two or three people (out of hundreds of developers) who dared to touch the autotools stuff, except for peephole edits like adding a new source file to a list. For everything else, one of the experts needed to be brought in.

I only came into KDE when the switch to CMake had already happened, and remember it as reasonably approachable (even if quirky). Most developers were familiar with it and actively authoring the CMakeLists.txt files for their own projects. (That doesn't mean that there weren't some experts again, but they focused on implementing reusable modules that the others could easily integrate into their own build system.)


I'm already struggling to compare variables to strings. Sometimes the string will be interpreted as a variable instead of a string, and I don't know how to avoid that.


Most of the time in CMake, you want expansion, so:

    my_function("${MY_VAR}")

Avoid the quotes if your variable contains spaces or other separators, and it's your intention for the separate pieces to go into the function or macro independently.

The other case is builtin macros that expect to be passed the name of a variable that they themselves are manipulating. For example the list and string operations. See: https://cmake.org/cmake/help/v3.0/command/list.html


Well, not difficult... I'm not sure about that.

Most hand-tailored Makefiles I've seen, aside from well-known OSS projects (Linux kernel, suckless stuff), generally had major issues.

Here is a sample of what I've seen:

* not supporting common variables, especially DESTDIR in the install target, but also PREFIX, BINDIR, etc., or CC and CFLAGS for compilation (https://www.gnu.org/prep/standards/html_node/Directory-Varia...)

* bad ordering, at least preventing parallel build (-j N)

* bad error handling which results in silent failures; I've seen it in Makefiles using for/while loops, if/then/else, or successions of commands (cmd1;cmd2) inside targets, for example.

Makefiles are also a little too crude for somewhat complex projects. Doing things like searching header paths, detecting which OS you're on or the CPU endianness, finding the required dependencies and recovering information about them (such as their versions) would be extremely painful to do with only a plain Makefile.

CMake contains a lot of ready to use modules to tackle the most common issues, and you can easily write your own modules if you need to.

Aside from very simple projects, I would not recommend using plain Makefiles; they are just too easy to get wrong.


Agreed, it can get very painful very quickly.


I'm really impressed with the makefiles provided by devKitPro, which is a toolchain for making homebrew on a handful of video game platforms:

https://github.com/devkitPro/nds-examples

They're very clearly hand made, do just enough to be immensely useful, and are well commented enough to be easy to modify for a Make beginner, just like I was back in the day. I'm sure there are other great examples floating around the 'net, but I cut my teeth on GBA and NDS development, and taught myself make by following in their footsteps.


I keep trying to get into GBA development, but this is actually what keeps blocking me. Everything they've written is set up just for their environment, and how it works is not really explained. Sure, they explain what to change for your game, but my ARM compiler is over here, and could you just tell me where this gba.spec file comes from or what it does?! (etc. etc.)


I imagine anything short and simple enough can be finessed to be beautiful and elegant.

...but scaling from the small to the large elegantly is what make doesn't do well; and there's an astonishing number of tools designed to try to work around the problem.

A lot of people hate CMake for its strange language and (arguably) questionable design choices; but it scales to large projects without significant problems; you only have to look as far as the Android native client makefiles to see how truly heroic efforts to use make have resulted in makefiles that are only... moderately terrible, instead of absolutely awful.

I feel like there's this nostalgic desire from many programmers to 'embrace simplicity' and avoid the complexity and annoyance of learning and using complex 3rd party tools when 'simple' solutions are good enough.

...but often those simple solutions are only ever any good at a trivial scale.


Makefiles tend to get really messy when you have to build libraries and components in separate directories. It's definitely tuned more towards building single target projects.


Just have each component have its own Makefile and use a top-level Makefile to lead the orchestra.


Which is fragile. Extremely fragile in fact.

I've seen failures of such schemes to rebuild stuff when command line parameters (defines, environment) changed.

It also brings all sorts of trouble in parallel builds, as Make is weak at handling dependencies that are not generated in the exact Makefile you run. (Thus non-recursive Makefiles, which still fail and have other warts.)

Even cmake- and autoconf-generated Makefiles have trouble with complex projects. (Part of the reason why cmake can now generate Ninja files instead.)


That’s not really a good idea:

Recursive Make Considered Harmful

http://aegis.sourceforge.net/auug97.pdf


Eh, but by using includes you can still modularise.

I have a recursive-make project at the moment that suffers from none of the problems described in the paper. We will likely move to an include-based scheme before long, which will take some minor tweaking of targets in the leaf Makefiles.

Also - Considered harmful essays considered harmful - http://meyerweb.com/eric/comment/chech.html


This one is generated by humans; I think they are following an algorithm generated by machines: https://git.lede-project.org/?p=source.git;a=blob;f=package/...


And yet we keep producing such tools over and over.

Meson is the latest fashion in Linux circles, it seems...


All build systems suck.

You are trying to solve a Turing-complete problem in the simplest, lowest-impact way possible.

There is no good solution. Each one falls short in one way or another. Hence a new build system. The wheel never stops.


What do you mean by Turing-complete problems? Turing completeness is related to programming languages, not problems.

Most build tools do have some kind of Turing-complete tooling built in with their DSL. But I don't think that's an absolute necessity.


You are moving symbols based on the action of other symbols.

These symbols are artifacts/files. Elementary recursive Build-Scripts can be Turing complete even if the overall language isn't.


Any build system is only as good as the local project's configuration for it. Any solution is good so long as, when I press build, everything that I wanted to build gets built.


If meson is actually python with special functions, why didn't they make it like "from meson import * " and then @target("all") def all: ... ?


Meson is written in Python not "actually Python". The build language is a custom non-Turing-complete DSL.


Make is awesome. I have always loved make, and got really good with some of its magic. After switching to Java years ago, we collectively decided, "platform independent tools are better", and then we used ant. Man was ant bad, but hey! It was platform independent.

Then we started using maven, and man, maven is ridiculously complex, especially adding custom tasks, but at least it was declarative. After getting into Rust, I have to say, Cargo got the declarative build just right.

But then, for some basic scripts I decided to pick Make back up. And I wondered, why did we move away from this? It's so simple and straightforward. My suggestion, like others are saying, is keep it simple. Try and make declarative files, without needing to customize to projects.

I do wish Make had a platform independent strict mode, because this is still an issue if you want to support different Unixes and Windows.

p.s. I just thought of an interesting project. Something like oh-my-zsh for common configs.


How can Ant be bad? It's just Make in XML ... ;-)

But because it is Make written in Java with an XML syntax, it inherently has the same problems.

Fun fact: For years they had a makefile rant on their home page which boiled down to "Ant was invented because its developer couldn't make Makefiles work as his editor didn't properly show tabs vs spaces": https://web.archive.org/web/20100203102803/http://ant.apache...


Ant is actually a different beast; it is more declarative than the procedural Makefile language.

Still ugly and terrible and limiting.


How is it more declarative than the Makefile language?

You declare targets and dependencies, and the steps within a target are executed procedurally in sequence.

This sentence is 100% true for both Ant and Make.

Ugly, yes. It was developed during the times when everything had to be XML. But Makefiles wouldn't win a beauty contest either.

Terrible. Maybe, but not worse than trying to write a portable Makefile for something non-trivial (e.g. libraries).

Limiting? In what way? You are encouraged to use the supplied tasks, which cover a lot. Creating new tasks would be done in the language you are programming in. You can still execute any arbitrary command if you really want to.


In Ant, declaring a target for anything that isn't a direct code generator is a royal pain. Even worse than in Make.

Ant is declarative in that you do not directly refer to files except in tasks that directly handle files. You cannot say that phony task x depends on file y.


I always suspected that ant was written by people who didn't first take a look at what already existed. Make's use of tabs was a mistake, but not a fatal one.


I actually like tabs, but if they bother you, you can use semicolons instead.


I do like the straightforwardness of Make. But it doesn't do quite enough for it to be simple to use for C or C++ programs. The simplest generic Makefile I can come up with that handles header file dependencies correctly is 10-20 lines of relatively complex interaction with the compiler.

But then again maybe that's just an argument against C and C++'s compilation model. :)


To be fair, dependency tracking is properly a function of the compiler, you can't do it in make alone without the ability to parse the language. And it's not 10-20 lines. GCC and clang both support the -M family of arguments that do this for you. It's a matter of running it to generate some sort of dependency file and including it in the makefile.

Yes, that's annoying and definitely a problem with the inclusion-based dependency model of C. But really it's not such a big deal.
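With GCC or Clang, the whole thing fits in a handful of Makefile lines; a common sketch (recipe line needs a tab; flags per the GCC manual):

```make
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

# -MMD writes a .d file of header dependencies as a side effect of compiling;
# -MP adds phony targets so a deleted header doesn't break the build.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

# Pull in whatever dependency files exist from previous builds
-include $(OBJS:.o=.d)
```

On the first build there are no .d files to include, which is fine: every object gets built anyway because it doesn't exist yet.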


Those arguments are not enough. For dependencies to be fully reliable, the compiler needs to emit information about where headers were not found, in addition to where they were found. Otherwise headers introduced earlier into search paths after a build do not properly trigger a rebuild.

* http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...

* http://cr.yp.to/redo/honest-nonfile.html


Honestly, that's just intractable. That can't be solved for the same reason that changes to the makefile itself can't be tracked. Changes to the structure of the build system aren't "dependencies" as commonly understood.

Almost all other systems have this flaw in some way as well. You can fool everything, because a system capable of not being fooled would literally have to reparse and rebuild everything from scratch.


I would like to boast that I have clearly solved an intractable problem, since (as I said) I have tools that happily track this very thing. But the truth is that the idea that it is intractable is rubbish, rather than that I can achieve impossible things before breakfast. (-:

You've switched tack between "not such a big deal" and "can't be solved" in the space of two messages. Both of your extremes are wrong. The truth is that this is simply difficult and needs either improvements to compilers or, as in my case, add-on tools that replicate the compiler's pre-processing and emit both the redo-ifchange and the redo-ifcreate information.


Almost all other systems have this flaw because they require dependencies before a build. If you record dependencies after the build instead, the entire problem becomes very simple. See here:

http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...


The example above was of a change to the build structure (include path, specifically) which changes the dependency tree in a way that doesn't involve changes to the files themselves and will be invisible to make or other tools that use file timestamps. I pointed out that this was just one of a whole class of intractable problems with dependency tracking that we all choose to ignore. It can't be fixed, even in principle. The only truly sure way to know you're building correctly is to build from scratch. Everything else is just a heuristic.


Why do you think the problem can not be fixed?

Also, have you looked at the redo tool redo-ifcreate? http://news.dieweltistgarnichtso.net/bin/redo-sh.html


The following sentence about my redo implementation is wrong:

> In 2014, Nils Dagsson Moskopp re-implemented Pennarun redo, retargetting it at the Bourne Again shell and BusyBox.

I targeted the Bourne Shell (sh), not the Bourne Again Shell (bash). Also, my redo implementation contains redo-dot, which paints a dependency tree – I have not seen this elsewhere.


You only put /bin/sh as the interpreter. You still used quite a number of Bourne Again shell constructs in the scripts themselves. Some are not in POSIX sh and utilities, some are not in the Bourne shell. (The Bourne shell is not conformant with POSIX sh, so targetting POSIX sh is not the same as targetting the Bourne shell.)

These include amongst others local, $(), >&-, the -a and -o operators to test, and command. (You also failed to guard against variables expanding to operators in the test command, but that is a common general error rather than a Bashism.)

Beware of testing against /bin/sh, even the BusyBox one, and assuming that that means POSIX compliance, let alone Bourne shell compatibility. Even the OpenBSD Korn shell running as /bin/sh or the Debian Almquist shell running as /bin/sh will silently admit some non-POSIX constructs.
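For instance, the -a and -o operators to test(1) are marked obsolescent in POSIX and become ambiguous with certain operands; chaining separate test invocations with the shell's && and || is the portable form:

```shell
a=x
b=y

# Ambiguity-prone:  [ "$a" = x -a "$b" = y ]
# Portable:
if [ "$a" = x ] && [ "$b" = y ]; then
  echo "both matched"
fi
```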


While you are right about POSIX problems (like using “local”) I actually targeted Dash and older versions of BusyBox – not Bash.

I plan to work on POSIX compatibility for my redo implementation.


> It's a matter of running it to generate some sort of dependency file and including it in the makefile.

This is not as simple as it sounds. Make has a fundamental limitation that dependencies cannot be generated on the fly, and current workarounds for dependency-generating dependencies are very messy [1].

[1] http://make.mad-scientist.net/papers/advanced-auto-dependenc...


>Although it’s simple, there are serious problems with this method. First and foremost is that dependencies are only rebuilt when the user explicitly requests it; if the user doesn’t run make depend regularly they could become badly out-of-date and make will not properly rebuild targets. Thus, we cannot say this is seamless and accurate.

Not so fundamental, if you quickly run make depend after changing includes. Why is that messy? I never got that "buy magic automation, sell simplicity" trade. Deps changed, deps have to be updated. Once updated, they just work.


Right, I was referring to adding the right -M arguments, including the generated .d files, and handling the edge cases with adding/removing files to the build. Overall the minimal Makefile I copy-paste into every new C project is about that size, so I suppose 10-20 lines includes a few other things as well.


Make should have included something like tcl, or shipped its own tcl and its own os/posix libs. It is very easy for makefiles to become tied to a particular OS/Distribution/dev environment.


GNU Make 4.0 actually has Guile Scheme. This is pretty recent and I haven't seen it used yet. If anyone has seen it used, I'd be interested.

https://www.gnu.org/software/make/manual/html_node/Guile-Int...

https://jaxenter.com/gnu-make-4-0-adds-guile-output-sync-bre...

FWIW I wrote about how Make, shell, and awk heavily overlap in functionality here:

http://www.oilshell.org/blog/2016/11/14.html

This probably won't happen for quite a long time, but I'd like to expand my Oil shell to have the functionality of Make.

On the one hand, it's kind of ironic that you're asking for Tcl, when shell is Turing complete and already such an integral part of Make (every line of the Makefile spawns /bin/sh).

On the other hand, I understand that shell is a language with many sharp edges and most people dislike it. The point of Oil is to get rid of the sharp edges, and then maybe shell can take the place of Tcl/Guile.

It seems a little ridiculous to write build scripts in make, shell, and guile, all of which are Turing-complete (not to mention Awk or Perl, which often show up). And I don't actually think Lisp/Scheme are very good languages for Unix-like text processing and syscalls/OS integration.


The guile integration in gmake is a bit shallow, but still quite useful (here's the whole implementation [1][2]). It could also use more real-life usage examples - there's really nothing out there besides a few short snippets.

As a learning experience, I wanted to implement parts of the GMSL[3] on top of guile. It looked like it could be useful to others, so I made it into its own project: the GNU Make Toolkit[4] (it's still very much a work in progress - I've put more effort into the test suite and the documentation generator than into actual functionality).

[1]: http://git.savannah.gnu.org/cgit/make.git/tree/guile.c

[2]: http://git.savannah.gnu.org/cgit/make.git/tree/gmk-default.s...

[3]: http://gmsl.sourceforge.net/

[4]: https://github.com/gvalkov/gnu-make-toolkit


What happens when /bin/sh is dash or bash? I despise /bin/sh; it is basically PHP. It wouldn't take a terrible amount of basic functionality to enable makefiles to be cross-platform (even cross-distribution would be nice): interacting with files, file metadata, processes, exit codes, etc.

Absolutely love your oilshell posts. Keep it up. I think a concatenative language would be great as the embedded logic for Make, but a Lisp would also work.


> What happens when /bin/sh is dash or bash?

Dash is specifically designed to be a POSIX-compatible shell for use as /bin/sh, i.e. in shell scripts (or called by makefiles).

When Bash is invoked as "sh", it runs in POSIX compatibility mode.

There is a reason some of us keep harping on about "don't write bash-scripts, write portable shell scripts".
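A few habits cover most of it; this snippet sticks to POSIX sh constructs (printf instead of echo's flag-dependent behavior, case instead of bash's [[ ... == pattern ]]) and runs identically under dash, bash, and ksh:

```shell
name="make"

# case does glob matching portably, where bash users often reach for [[ ... == m* ]]
case "$name" in
  m*) printf '%s starts with m\n' "$name" ;;
  *)  printf 'no match\n' ;;
esac
```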

> I despise /bin/sh, it is basically PHP.

Similarly, I despise chickens. They're basically the sub-prime mortgage crisis.


Yes that is one of the problems with Make -- /bin/sh is different on different machines, and that information appears nowhere in the Makefile. There's no way to test that the commands you're running will actually work on someone else's machine, other than by "memorizing the manual".

Combining make and shell would mitigate this problem. Then you would need to keep one binary in sync instead of two. Well, I suppose you also need "busybox", e.g. cp, mv, mkdir -p, etc. But in practice I think that is less of an issue than shell and make colliding.

Thanks for the encouragement! (If you didn't catch it I linked these build system observations in a sibling comment: http://www.oilshell.org/blog/2017/05/31.html)

I've played with concatenative languages, and read a lot about Forth. I'm not sure I see them as great for "logic". They are very elegant for certain problems, but fall down for others (IIRC the quadratic formula was a popular example.)

The main kind of logic you need in Makefiles is expressing build variants -- e.g. for debug/release, internationalized builds, coverage builds, profile-directed feedback, running parameterized test suites, etc. I think that can be done fairly well with a Python-like imperative language with dicts and lists. (Some people have mentioned Bazel, and it is derived from Python and works fairly well for that.)


> /bin/sh is different on different machines

Write portable shell scripts (or lines, in the case of a Makefile).

There's some tips here: http://people.fas.harvard.edu/~lib113/reference/unix/portabl... and of course you can refer to http://pubs.opengroup.org/onlinepubs/009695399/utilities/con...


That's not useful advice. That's what I mean by "memorizing the manual" (or POSIX spec).

There's no way to test that the commands you're running will actually work on someone else's machine, other than by "memorizing the manual".

I have an entire book on this:

https://www.amazon.com/Beginning-Portable-Shell-Scripting-Pr...

It goes through all the commands and common versions of Unix and which flags are likely to work, etc.

Even that book admits there's a lot of folklore, because nobody has actually gone and tested things recently. Something as simple as safely writing with "echo" is a problem. You can argue that any script that uses echo $foo is incorrect (because $foo might be a flag). Conversely 'echo --' is supposed to print -- by POSIX.

The only people who are likely to even attempt this are people whose full-time job it is to write shell scripts, or the authors of tools like autoconf, which must generate portable shell. autoconf shell is a good example of the anachronisms and hoops you need to jump through to support this style.

Nobody else has time for that, because they have to spend their mental energy writing C and C++, not shell and make. So that's why we have pretty low standards for the quality of build systems. The tools aren't there to support writing a robust and easy-to-use build system.


Meta, I really wish I could reply to both chubot and stephenr ...

There is way too much incidental complexity in /bin/sh and almost all build systems. Unix is awesome, but I also hate Unix for being "good enough". Unix Hater's Handbook and all. I hate it from above, not below.

We should always be seeking to reduce the cognitive burden in the tasks we do. One failing of /bin/sh is its semantic density and lack of discoverability. It violates the principle of least surprise like nothing else I have used. Look at variable assignment!

    export FOO2= "true"
    export FOO3="true"
These are semantically different. One evaluates the string, the other does not. And by being so obtuse, the majority of folks randomly mutate their sh scripts until they appear to work. No knowledge gained and nothing that would qualify as engineering.

The biggest problems with sh and Make are the lack of debugging and introspection. Any follow-on tool that would displace them should make developer ergonomics the highest priority.


does the faded color mean this post is downvoted? In that case, why is it so?


It does, and I don't know why. I mostly ignore votes, and usually only vote when I think a viewpoint needs to get more exposure, not because I think it is "right".


SHELL=/bin/bash


This is where cmake missed the boat, imo. They were even using Tcl/Tk for a medical imaging project[0] when they decided they needed their own build system. Then when they needed an expressive embedded language, wrote an ad hoc one from scratch.

[0] https://www.kitware.com/platforms/#itk


Schadenfreude, lol! I don't like the term "too smart for one's own good" but sometimes it applies.

The CMake scripting language [0].

[0] https://cmake.org/Wiki/CMake/Language_Syntax


POSIX even has a number of implicit rules that make should recognize ( http://pubs.opengroup.org/onlinepubs/009695399/utilities/mak... ).

The real problem I have is that make generally relies on shell commands like cp, so there's no truly cross-platform (e.g., Windows and POSIX) way to copy files around in a makefile. I guess you could build your own utility, use it, and delete it when you're finished building. I've actually resorted to using ExtUtils::Command ( http://perldoc.perl.org/ExtUtils/Command.html ).


make does have a platform independent strict mode. Unfortunately the ".POSIX:" mode is sufficiently limited that you won't want to use it for anything you're writing by hand.


Wow. I didn't realize this. Does it support Windows, without Linux etc installed?


I'd guess that nmake is POSIX-compliant (or close to it), simply because the POSIX requirements are so minimal that it would take quite a bit of effort to avoid being POSIX compliant.


Don't know if there's a Win32 build of GNU make, but Microsoft has nmake. I haven't written a Makefile for it in years, but I have a hard time imagining there being much portability between the two. Not in the Makefile structure, but rather in the commands it invokes to actually build targets.


MSYS2 / Cygwin, etc. have GNU make for Windows.


I sometimes think we developers collectively gravitate towards complexity just to feel smart.


Did you enjoy build.rs?


By using pseudo targets only in the example and not real files, the article misses the main point of targets and dependencies: target rules will only be executed if the dependencies changed. make will compare the time of last modification (mtime) on the filesystem to avoid unnecessary compilation. To me, this is the most important advantage of a proper Makefile over a simple shell script always executing lots of commands.
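A two-line rule with real files (names hypothetical; the recipe line needs a tab) shows the effect: the target is rebuilt only when its dependency has a newer mtime:

```make
hello: hello.c
	cc -Wall -o hello hello.c
```

Run make twice without touching hello.c: the first run compiles; the second does nothing and reports that 'hello' is up to date.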


Don't you have to list every file though? Many of my projects have hundreds of source files.


See my example (below in another comment); there's a solution using $(wildcard).


No, you can use `wildcard`


Sneaky pro-tip - use Makefiles to parallelize jobs that have nothing to do with building software. Then throw a -j16 or something at it and watch the magic happen.

I was stuck on an old DoD redhat box and it didn't have gnu parallel or other such things and co-worker suggested make. It was available and it did the job nicely.


Seconded. Really good for preprocessing of data files, or repetitive data pipelines. Just set up a naming convention for the intermediate files and define transformation rules. Then you can drop one new file in somewhere, type "make", and the minimal set of processing will just run.

If you need more power, use the wildcard expansion and "patsubst" type rules to define rules at runtime.
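A minimal sketch of that pipeline idea (the csv2json converter and directory layout are hypothetical): every .csv dropped into data/ gets a .json built from it, and `make -j8` runs the conversions in parallel:

```make
SRCS := $(wildcard data/*.csv)
OUTS := $(patsubst data/%.csv,out/%.json,$(SRCS))

all: $(OUTS)

# One transformation rule covers every file matching the naming convention
out/%.json: data/%.csv
	./csv2json $< > $@

.PHONY: all
```

Drop a new data/foo.csv in, type make, and only out/foo.json is produced; everything already converted is skipped.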


GNU xargs since circa 2009 has had a '-P' option that will let you execute simple tasks in parallel.


I forgot why I couldn't use xargs -P there. If I had to guess: an ancient DoD version of RHEL, like, say, RHEL 5. Note it was released in 2007, and xargs, as you mentioned, has had that since 2009.

Sometimes Red Hat backports things, but I just remember not finding anything there and then being very happy to have discovered make. I think even with xargs -P I would have gone with make anyway, as it involved a few dependencies and checking of file times.


For more, see https://news.ycombinator.com/item?id=14836340 (Ask HN: In what creative ways are you using Makefiles?).

I use a build system, redo rather than make, to rebuild my GOPHER site and Debian/FreeBSD package repositories. (https://news.ycombinator.com/item?id=14837740 https://news.ycombinator.com/item?id=14928216)


Fun fact: GNU Parallel used to be a wrapper for making Makefiles: https://www.gnu.org/software/parallel/history.html

Also see this if you have a system without GNU Parallel: http://oletange.blogspot.dk/2013/04/why-not-install-gnu-para...


Today's simple makefiles are the end result of lessons hard learned. You'd be horrified to see what the output of imake looked like.

From memory here's a Makefile that serves most of my needs (use tabs):

  SOURCE=$(wildcard *.c)
  OBJS=$(patsubst %.c,%.o, $(SOURCE))
  CFLAGS=-Wall
  # define CFLAGS and LDFLAGS as necessary

  all: name_of_bin

  name_of_bin: $(OBJS)
      $(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)

  %.o: %.c
      $(CC) $(CFLAGS) -o $@ $^

  clean:
      rm -f *.o name_of_bin

  .PHONY: clean all


better still, IMO -- capitalize on those sweet, sweet implicit rules.

  CFLAGS=-Wall
  # define CFLAGS, CXXFLAGS, LDLIBS and LDFLAGS as necessary

  all: name_of_bin

  name_of_bin: a.o b.o c.o

  clean:
      $(RM) -f *.o name_of_bin

  .PHONY: clean all


I don't really like those, because then it makes it harder to separate out the source and the build directory. VPATH is a partial solution, but I find it much easier to have a single explicit pattern rule, rather than using both implicit pattern rules and modification of those implicit pattern rules.


That's fair. In actual projects of more than a few source files I do copy/paste some boilerplate to give me a build dir, dependency files, and so on. I just thought this (actual real-world example that I use for smallish projects) would help someone.


Thanks... I suspected I was just redefining a built-in pattern rule, but I put it there anyway just so I can remember whether it's LDFLAGS (linker) vs LFLAGS (lexer).

Plus I was thinking maybe less magic for someone trying to make sense of it.

Is there no command needed for name_of_bin to be linked? Does it automatically figure it out based on *.o dependencies?


Yes it does. In fact, with GNU make, if you have a single file, hello.c, you can type "make hello" without a makefile and it will just work.


$(RM) is defined as "rm -f", so the extra -f is not needed.

Personally, I would just write "rm -f" explicitly. I don't see any advantages of using $(RM).


I just noticed I'm missing a "-c" flag in the .c -> .o compiler command. Sorry, just banged it out from memory on my phone, I didn't actually test it!


Broken. Update header file => no rebuild....


This is totally incomprehensible to me.


You might not be familiar with what C compiler and linker commands look like. Here's a listing of what "make" or "make all" actually executes in the example makefile (just take as given that these are normal and understandable commands to compile and link a binary):

  cc -Wall -o file1.o -c file1.c
  cc -Wall -o file2.o -c file2.c
  cc -Wall -o file3.o -c file3.c
  cc -Wall -o name_of_bin file1.o file2.o file3.o
and "make clean" executes:

  rm -f *.o name_of_bin
The big win is that make looks at modification times of all targets and dependencies. If you type 'make' without changing anything, it completes within microseconds. If you edit only file1.c, then it only recompiles file1.o and re-links name_of_bin. You really miss this kind of speed when moving back to any other kind of dev environment!


I don't really know what to tell you, there. Are you generally familiar with the C build process, and bash?

The only "unusual" things I'm seeing there are the $@ (target) and $^ (dependencies) automatic variables, the wildcard and patsubst functions, which have the description right in the title, and the general ``target: dependencies \n\t command-list`` format of Makefile rules.


> The only "unusual" things I'm seeing

And then proceeds to list almost the entirety of the Makefile


Sounds like a learning opportunity to me. It's worthwhile in the long term, as make will outlive most of us.


There are next to no good learning resources on Makefiles. Most are extremely tersely written, have next to no description of how the various parts tie together, and assume everyone builds C/C++.


That is only true if one does not believe in reading, or makes no effort at all to look for such resources. There are entire books on this subject.

* Clovis L. Tondo, Andrew Nathanson, and Eden Yount (1994). Mastering MAKE: a guide to building programs on DOS, OS/2, and UNIX systems. Prentice Hall.

* Robert Mecklenburg (2004). Managing Projects with GNU Make. Nutshell Handbooks. O'Reilly Media. ISBN 9780596552541.

* John Graham-Cumming (2015). The GNU Make Book. No Starch Press. ISBN 9781593276492.


The GNU make manual is a good place to start. It's not a tutorial, but it is fairly short. If you're on a different platform it's not exact, but most of the important material applies to POSIX make implementations.

Substitute your own toolchain for the CC and related commands... really, anything that takes multiple files as input and emits one file as output should fit the paradigm.


> If you're on a different platform it's not exact, but most of the important material applies to POSIX make implementations.

Which parts are "most of the material" though? ;)

> Replace your own toolchain for the CC and related commands... really anything that takes multiple files as input and emits one file as output should fit the paradigm.

I tried to. It's messy and undebuggable if you run into problems. Especially with tools that already look at the whole project (such as Typescript).

We ended up using Makefiles as just "command launchers" with targets that basically look like

   target:
     invoke-tool


Have a look at the lesson on Make from Software Carpentry! http://swcarpentry.github.io/make-novice/


Check out https://learnxinyminutes.com/docs/make

Disclosure: I wrote it so I apologise for any mistakes


Thanks to their versatility, Makefiles can be creatively used beyond building software projects. Case in point: I used a very simple hand-crafted Makefile [1] to drive massive Ansible deployment jobs (thousands of independently deployed hosts) and to work around several Ansible design deficiencies (inability to run whole playbooks in parallel - not just individual tasks - hangs when deploying to hosts over unstable connections, etc.)

The principle was to create a make target and rule for every host. The rule runs ansible-playbook for this single host only. Running the playbook for e.g. 4 hosts in parallel was as simple as running 'make -j4'. At the end of the make rule, an empty file with the name of the host was created in the current directory - this file was the target of the rule - it prevented running Ansible for the same host again - kind of like Ansible retry file, only better.
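The scheme described might look roughly like this (host names and playbook file are hypothetical; the real Makefile is in the linked gist; recipe lines need tabs):

```make
HOSTS := host1 host2 host3

all: $(HOSTS)

# Each host is a file target: the stamp file created on success
# means re-running make skips hosts that already deployed cleanly.
$(HOSTS):
	ansible-playbook -l $@ site.yml
	touch $@

.PHONY: all
```

`make -j4` then deploys four hosts at a time, and a failed host's recipe aborts before touch, so only the failures are retried on the next run.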

I realize that Ansible probably is not the best tool for this kind of job, but this Makefile approach worked very well and was hacked together very quickly.

[1] https://gist.github.com/martinky/819ca4a9678dad554807b68705b...


"Build systems are the bastard stepchild of every software project" -- me, years ago

I've worked in embedded software for over a decade, and all projects have used Make.

I have a love-hate relationship with Make. It's powerful and effective at what it does, but its syntax is bad, and it lacks good data structures and some basic functions that become useful when your project reaches several hundred files and multiple outputs. In other words, it does not scale well.

Worth noting that JGC's Gnu Make Standard Library (GMSL) [1] appears to be a solution for some of that, though I haven't applied it to our current project yet.

Everyone ends up adding their own half-broken hacks to work around some of Make's limitations. Most commonly, extracting header file dependency from C files and integrating that into Make's dependency tree.

I've looked at alternative build systems. For blank-slate candidates, tup [2] seemed like the most interesting for doing native dependency extraction and leveraging Lua for its datastructures and functions (though I initially rejected it due to the silliness of its front page). djb's redo [3] (implemented by apenwarr [4]) looked like another interesting concept, until you realize that punting on Make's macro syntax to the shell means the tool is only doing half the job: having a good language to specify your targets and dependencies is actually most of the problem.

Oh, and while I'm around I'll reiterate my biggest gripe with Make: it has two mechanisms to keep "intermediate" files, .INTERMEDIATE and .PRECIOUS. The first does not take wildcard arguments, the second does but it also keeps any half-generated broken artifact if the build is interrupted, which is a great way to break your build. Please can someone better than me add wildcard support to .INTERMEDIATE.

[1] http://gmsl.sourceforge.net

[2] http://gittup.org/tup/ Also its creator, Mike Shal, now works at Mozilla on their build system

[3] http://cr.yp.to/redo.html

[4] https://github.com/apenwarr/redo


Tup is really great. I wish it were more widely used. Tup can effortlessly handle a lot of situations that Make chokes on or requires deep magic to get right. A good example is the clean handling of automatically generated header files. This is all it takes to integrate protobufs into a Tupfile:

  : foreach *.proto |> !protoc |> %g.pb.cc | %g.pb.h
  : foreach *.pb.cc | *.pb.h |> !cpp |> %g.pb.o
The "|>" is the pipe operator and I've elided the definitions of the "!protoc" and "!cpp" macros (but they're about what you'd expect). Tup detects whenever a .proto file is changed and does the right thing. Getting this to work with Make requires advanced tricks like .PHONY and .PRECIOUS.


I'm going to disagree slightly on redo; yes it punts, but the fact that it reuses sh means you don't have to learn a new, slightly different language that is barely (if at all) better suited to the task. Also my shell linting tools work with it for free.

From my point of view it does more than Make with much less. If all I want is a better make, redo is what I use.

Tup is not a make replacement, because it can do strictly less than what make does (e.g. implementing "make install" is impossible because tup only builds outputs that are local to the project). However, generating correct build files is so much easier because of the guarantees it enforces. That's despite the fact that, after using it, my feelings about its syntax have only gone from "hate with fury" to "still don't like it, but with the docs open I can figure out the right syntax for what I want".



Makefiles are easy for small to medium sized projects with few configurations. After that it seems like people throw up their hands and use autotools to deal with all the recursive make file business.

Most attempts to improve build tools completely replace make rather than adding features. I like the basic simplicity and the syntax, (the tab thing is a bit annoying but easy enough to adapt to).

It'd be interesting to hear everyone's go to build tools.


For my current project in C++ I just use cmake. It works fine on Linux, Windows, FreeBSD & OpenBSD, spits out stuff that integrates with the native/most commonly used build environment on the respective platform, provides facilities for config, build & installing, and if you avoid all legacy cruft it's even somewhat decent.


Currently, I use scons for C/C++ projects, though with a decent amount of stuff built on top of it. When I realized that I had a rudimentary package manager in it, I started thinking that maybe I should go searching for something that already does what I need.

I've used make quite a bit, and it is doable for individual projects. Where I start having trouble is managing libraries that may be used across multiple projects. When a library needs to supply its own configuration, and be subject to the configuration specified by the project using it, things get rather complicated, which is why I turned to scons.


> It'd be interesting to hear everyone's go to build tools.

A script written in whatever the primary language of the project is (usually with some library support), ideally, to reduce the minimum knowledge required to be a first-time contributor to the project. Some kind of js build tool for js projects, fake for f#, and so on. I don't want people to need to learn "the one build tool DSL to rule them all" (be that makefile syntax, bazel rules, or whatever) to contribute to a project's infrastructure, on top of the project's primary language.


So for C? Fortran? You'd have users of those languages use those languages to write programs to do the build? But then, how would you keep them from generalizing their good ideas about builds into some kind of... system?


That is a good way to have your build system expand until it can read email.


> You've learned 90% of what you need to know about make.

That's probably in the ballpark, anyways.

The good (and horrible) stuff:

- implicit rules

- target specific variables

- functions

- includes

I find that with implicit rules and includes I can make really sane, 20-25 line makefiles that are not a nightmare to comprehend.
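For C, such a short Makefile might look like this (a sketch; names invented, and the -MMD/-MP dependency flags assume GCC or Clang):

```make
CC     := cc
CFLAGS := -Wall -O2 -MMD -MP   # -MMD/-MP emit .d files with header deps
OBJS   := main.o foo.o bar.o

myprog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS)

# the built-in %.o: %.c implicit rule compiles the objects;
# this include picks up the generated header dependencies
-include $(OBJS:.o=.d)

.PHONY: clean
clean:
	rm -f myprog $(OBJS) $(OBJS:.o=.d)
```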

For a serious project of any scope, it's rare to use bare makefiles, though. recursive make, autotools/m4, cmake, etc all rear their beautiful/ugly heads soon enough.

But make is my go-to for a simple example/reproducible/portable test case.


> For a serious project of any scope, it's rare to use bare makefiles, though. recursive make, autotools/m4, cmake, etc all rear their beautiful/ugly heads soon enough.

If I didn't have to provide the option to build under Xcode or VS, I wouldn't have to live in the hell that is Cmake.

I'd just use make.


cmake has improved so much since 3.0 that it's almost unrecognizable. Though much of the cruft remains, thanks to backwards compatibility, and anything involving list or string operations is still absolute hell, I dare say it's actually pleasant to use, at least compared to 2.8.whatever-shipped-with-ubuntu-precise.

Personally, I'm glad we use cmake at work simply because we have a diversity of preferred build tools: some people like to work in XCode or Eclipse, others like to use a text editor and make, others like to use a text editor and ninja. While we all suffer a bit with cmake, we all benefit from its position as a "metabuild" tool.


I hate implicit anything, trying to reason about anything complex with implicits just annoys the reader.


I feel like any discussion of make is incomplete without a link to Recursive Make Considered Harmful[0]. Whether you agree with the premise or not, it does a nice job of introducing some advanced constructs that make supports and provides a non-contrived context in which you might use them.

[0] http://aegis.sourceforge.net/auug97.pdf


Non-Recursive Make is also Considered Harmful: https://news.ycombinator.com/item?id=11441719

In seriousness, the linked paper describes Shake, a Make replacement with two party tricks: one, it is implemented as a DSL embedded in Haskell, thus giving a nice way of programmatic rule generation; and two, it supports monadic dependencies, unlike Make's applicative-only ones.


I love Make for my small projects. It still could be better. Here is my list:

* Colorize errors

* Hide output unless the command fails

* Automatic help command which shows (non-file) targets

* Automatic clean command which deletes all intermediate files

* Hash-based update detection instead of mtime

* Changes in "Makefile" trigger rebuilds

* Parallel builds by default

* Handling multi-file outputs

* Continuous mode which watches the file system for changes and rebuilds automatically

I know of no build system which provides these features and is still simple and generic. Tup is close, but it fails with LaTeX, because of the circular dependencies (generates and reads aux file).


https://github.com/ejholmes/walk can do most of this FWIW.


Walk leaves most of this to the implementor of the Walkfile.


The trouble with "make" is that it's supposed to be driven by dependencies, but in practice it's used as a scripting language. If the dependency stuff worked, you would never need

   make clean; make
or

   touch


Yeah this is my pet peeve about how people use Makefiles. Make is supposed to be a dependency-driven "dataflow" language, but over time it's evolved into something like shell. And people tend to use it like shell.

The most glaring example is .PHONY targets, which are a hack and should just be shell functions. In 'make <foo>', <foo> should be data, not code. 'make test' doesn't make sense, but 'make bin/myprog' does.

I posted this link in another comment showing how Make, Shell, and Awk overlap:

http://www.oilshell.org/blog/2016/11/14.html

Here are some more comments on Make's confused execution model. It's sort of functional/dataflow and sort of not. In practice you end up "stepping through" an imperative program rather than reasoning about inputs and outputs like in a functional program.

https://news.ycombinator.com/item?id=14840696

And at the end of this post, I talk a bit more about the functional model:

http://www.oilshell.org/blog/2017/05/31.html


What simple tool would you suggest to address those .PHONY-targets instead of a Makefile?


I recommend using plain shell scripts. Each .PHONY target is a simple shell function, and then the last line of the script is "$@", which means run function $1 with parameters $2 to $n.

So instead of 'make test', I just use './test.sh all', or './test.sh foo'. The test script can invoke make.

The idea is that 'dataflow' parts are done in Make, and the imperative parts are done in shell. This works out fairly well if you're disciplined about it. The only point of Makefiles is to support quick incremental builds. If there's no incrementality, then you might as well use shell. (Note: whenever you use Make, you're using shell by definition. There's no possibility of only using Make. So I take Make out of the picture where it's not necessary.)

For example, all the instructions here are of the form 'foo.sh ACTION':

https://github.com/oilshell/oil/wiki/Contributing

Pretty much every shell script in the repo uses "the argv dispatch pattern"... I've been meaning to write a blog post about that pattern, i.e. using functions with the last line as "$@".

https://github.com/oilshell/oil
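A minimal sketch of that dispatch pattern (task names invented):

```shell
#!/bin/sh
# Each "phony target" is just a plain shell function.
set -e

build() {
  echo "building ${1:-all}"
}

test_unit() {
  echo "running unit tests"
}

all() {
  build
  test_unit
}

# dispatch: run function $1 with the remaining arguments
"$@"
```

Then `./run.sh build foo` calls `build foo`, and `./run.sh all` runs everything.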


Wow, I never knew about "$@" that's a super handy tip. Now I just need an excuse to write a shell script :)


In my real world at work, the "dependency stuff" works just fine. I never clean for my iterative development, and `-B` is what you want for a pristine/paranoia build.


Almost every build system (where I think it isn't controversial to say make is most often used) looks nice and simple with short, single-output examples to demonstrate the basis of a system.

It's when you start having hundreds of sources, targets, external dependencies, flags and special cases that it becomes hard to write sane, understandable Makefiles, which it presumably why people tend to use other systems to generate makefiles.

So sure, understanding what make is, and how it works is probably important, since it'll be around forever. But there are usually N better ways of expressing a build system, nowadays.


So I saw this and thought why not give it a try. How hard could it be, right? My goal? Take my bash file that does just this (I started go just yesterday so I might be doing cross compiling wrong :D):

  export GOPATH=$(pwd)
  export PATH=$PATH:$GOPATH/bin
  go install target/to/build
  export GOOS=darwin
  export GOARCH=amd64
  go install target/to/build

Which should be simple, right? Set environment variables, run a command. Set another environment variable, run a command.

45 minutes in and I haven't quite been able to figure it out yet. When I started out, I definitely figured out how to write my build.sh files in less than 15 minutes.
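For what it's worth, GNU Make's target-specific variables are the piece that makes this kind of thing work; a sketch (install path kept from the snippet above, target names invented):

```make
export GOPATH := $(CURDIR)
export PATH   := $(PATH):$(GOPATH)/bin

.PHONY: build build-darwin
build:
	go install target/to/build

# these exports apply to this target's recipe only
build-darwin: export GOOS   := darwin
build-darwin: export GOARCH := amd64
build-darwin:
	go install target/to/build
```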


Here's my current pattern for Makefiles: https://github.com/sapcc/swift-drive-autopilot/blob/3eceb4e0...

It uses a private GOPATH inside the repo, created by:

  $ cd /path/to/repo
  $ mkdir -p .gopath/src/github.com/username
  $ ln -s ../../../.. .gopath/src/github.com/username/projectname
With this you can run `go build` etc. with `GOPATH=/path/to/repo/.gopath` and it will work regardless of where the repo is checked out, and you can also `go get` the repo into the normal `GOPATH`.

It's the best of both worlds, in a way. A Go developer can use `go get` etc. and everything works as he expects. A distro packager can grab a release tarball, unpack it in some random place and do `make && make check && make install DESTDIR=/tmp/build` as she expects.


Aside: for Go, just set your GOPATH and such once. Like ~/go. And never touch it again. You now can 'go get' any packages you need. When you set your package up for source control, its root will be (say, for a github package) $GOPATH/src/github.com/your-account/your-package. When you want to work on that package, you cd to that directory, and manage it there.

This is why many who use Go don't get all the fuss with makefiles that some folks insist on. You just go to your project directory and 'go test' or 'go build' or whatnot. Simple. No need to 'make test' or 'make build' unless you have some strange, complicated setup.



Each logical line of a makefile recipe is executed in its own shell. This means that variable assignments do not persist across lines, and neither do directory changes.


Just use the export command to set variables across lines.


One important tip is that the commands under a target each run sequentially, but in separate shells. So if you want to set env vars, cd, activate a Python virtualenv, etc. to affect the next command, you need to make them a single command, like:

  target:
      cd ./dir; ./script.sh


    cd dir && \
        ./script.sh && \
        ./otherstuff.sh
handles errors, in case `dir` doesn't exist for example.


There's .ONESHELL for that (as long as we are talking about GNU Make)
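For reference, a sketch of what .ONESHELL changes (GNU Make 3.82+; without -e in .SHELLFLAGS a failing middle line would no longer abort the recipe):

```make
.ONESHELL:
.SHELLFLAGS := -ec   # keep "stop on first error" behaviour

target:
	cd ./dir
	./script.sh      # runs in the same shell, so the cd above sticks
```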


Those who don't understand Make are condemned to reimplement it, poorly.


The closest thing I've seen that seems like a real replacement is Bazel, but I haven't tried using it for anything but a couple of toy projects yet. The concepts seem solid though, as expected from Google.

My main complaint about make is that relying on timestamps, while fast, is error-prone and doesn't work with idempotent commands that might refresh the timestamp without altering content.
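A content-hash staleness check is easy to sketch in shell (stamp-file naming invented); a wrapper like this rebuilds only when the input's content actually changes, not when its timestamp does:

```shell
set -e

# stale FILE: succeeds (exit 0) if FILE's content differs from its stamp
stale() {
  new=$(sha256sum "$1" | cut -d' ' -f1)
  old=$(cat ".$1.sha" 2>/dev/null || true)
  [ "$new" != "$old" ]
}

# mark_fresh FILE: record FILE's current hash in its stamp
mark_fresh() {
  sha256sum "$1" | cut -d' ' -f1 > ".$1.sha"
}

# usage: if stale main.c; then cc -c main.c && mark_fresh main.c; fi
```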


I remember trying to wrap my head around the monstrosity that is Webpack. Gave up and used make, never looked back since


Webpack is a horrible monster that makes machine-generated Makefiles look pretty.


Personal blog spam, I learned make recently too and discovered it was good for high level languages as well, here is an example of building a c# project: http://flukus.github.io/rediscovering-make.html .

Now the blog itself is built with make: http://flukus.github.io/building-a-blog-engine.html


Love the simplicity of the design.

Is there any reason why you didn't include the publication date on the pages? Or is it just me who looks for dates on all the blogs I read?


I wouldn't call it "simplicity", but rather an "absence" of design. See the arguments laid out in http://bettermotherfuckingwebsite.com/


I look for the date too, especially on technical blogs, it's meant to be there but there's a bug somewhere in the pipeline.


Love the simple blog design. That effectively nullifies this as being blogspam.


Thanks. I took the "making this pretty is the web client's job" approach; it looks nice in firefox reader mode. The two biggest issues are:

1. Syntax higlighting doesn't exist in reader.

2. The index page (http://flukus.github.io/) can't render in reader mode, it didn't hit the "must have a paragraph with at least 68 characters" or whatever the arbitrary limitation is.

For the latter I'm hoping they add some sort of meta tag that allows me to enable it in the future.


If you want all the greatness of Makefiles without the painful syntax I can highly recommend Snakemake: https://snakemake.readthedocs.io/en/stable/

It has completely replaced Makefiles for me. It can be used to run shell commands just like make, but the fact that it is written in Python allows you to also run arbitrary Python code straight from the Makefile (Snakefile). So now instead of writing a command-line interface for each of my Python scripts, I can simply import the script in the Snakefile and call a function directly.

Eg.

  rule make_plot:
    input: data = "{name}.txt"
    output: plot = "{name}.png"
    run:
      import my_package
      my_package.plot(input['data'], output['plot'], name = wildcards['name'])
Another great feature is its integration with cluster engines like SGE/LSF, which means it can automatically submit jobs to the cluster instead of running them locally.


Wow. This project has probably the worst example of the telescoping constructor antipattern I've ever seen:

https://snakemake.readthedocs.io/en/stable/api_reference/sna...


It's Python, not Java, so it can't have the telescoping constructor anti-pattern.

Admittedly, there are a huge number of arguments to the snakefile constructor, but they are optional named arguments making the constructor safer and easier to use than Java's telescoping constructor and it alternatives (Java Bean pattern or builder patterns). This application apparently has many options or settings.


Looks like much more syntax to me, hence more painful.

Make, bsdmake and gmake syntax is minimal and precise enough.


These days, most of my projects have a Makefile with four or five simple commands that _just work_ regardless of the language, runtime or operating system in use:

- make deps to setup/update dependencies

- make serve to start a local server

- make test to run automated tests

- make deploy to package/push to production

- make clean to remove previously built containers/binaries/whatever

There are usually a bunch of other more specific commands or targets (like dynamically defined targets to, say, scale-frontends-5 and other trickery), but this way I can switch to any project and get it running without bothering to look up the npm/lein/Python incantation du jour.

Having sane, overridable (?=) defaults for environment variables is also great, and makes it very easy to do stuff like FOOBAR=/opt/scratch make serve for one-offs.

Dependency management is a much deeper and broader topic, but the usefulness of Makefiles to act as a living document of how to actually run your stuff (including documenting environment settings and build steps) shouldn't be ignored.

(Edit: mention defaults)
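A sketch of the ?= trick (the serve target and FOOBAR come from the comment above; the paths and flags are invented):

```make
FOOBAR ?= /srv/data       # overridable: FOOBAR=/opt/scratch make serve
PORT   ?= 8000

.PHONY: serve
serve:
	./server --root $(FOOBAR) --port $(PORT)
```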


There are other projects than just web apps you know.


I use the above for database imports, ETL and Azure template deployments too. And who says "serve" launches an HTTP server? :)


For people who are more comfortable in Python, I highly recommend Snakemake[1]. I use it for both big stuff like automating data analysis workflows and small stuff like building my Resume PDF from LyX source.

[1]: https://snakemake.readthedocs.io/en/stable/


Make is fine for simple cases, but I'm working on a project that is based on buildroot right now, and it is kind of a nightmare: at this scale make just does not provide any good way to keep track of what's going on and inspect / understand what goes wrong. Especially in the context of a highly parallel build where some dependencies turn out to be missing.

In general, all the implicitness it has makes it hard to predict what can happen, especially when you scale to a project that is 1) large and 2) doesn't have a regular structure.

On another, smaller scale: doing an incremental build of LLVM is a lot faster with Ninja compared to Make (cmake-generated).

Make is great: just don't use it where it is not the best fit.


Here are some tips I like to follow whenever writing Makefiles (I find them joyful to write): http://clarkgrubb.com/makefile-style-guide


Wow, I've been feeling like not knowing make is a major weakness of mine; this article has finally tied all my learning together. I feel totally capable of using make now. Thank you.


Make is great! Just remember to use tabs for indentation. Make is very picky about that.


One very important thing missing from this primer is that Make targets are not 'mini-scripts', even though they look like it. Every line is 'its own script' in its own subshell - state is not passed between lines.

Make is scary because it's arcane and contains a lot of gotcha rules. I avoided learning Make for a long time. I'm glad I did learn it in the end, though I wouldn't call myself properly fluent in it yet. But there are a ton of gotchas and historical artifacts in Make.


> Every line is 'its own script' in its own subshell - state is not passed between lines.

Unless you use the .ONESHELL option.


Has anybody successfully used make to build java code? I realize there are any number of other options (ant, maven, and gradle arguably being the most popular).

In fact, I realize that the whole idea of using make is probably outright foolish owing to the intertwined nature of the classpath (which expresses runtime dependencies) and compile-time dependencies (which may not be available in compiled form on the classpath) in Java. I'm merely curious if it can be done.


The problem with using make here is transitive dependency resolution. Many of the newer build systems have it built in nowadays. Maybe make could wrap something that does that? Either way, I wouldn't recommend it unless it wrapped something that understood Maven Central.


Sure it can be done. Long, long ago, before Ant existed, I worked on a big Java project built entirely with javac and GNU make. In fact javac's built-in dependency tracking makes this significantly easier than with C.


A long time ago I had a build system for C++ and Java. It worked fine. The issue is generally that you want it to work on Windows and Unix, and then life is hell...


This is great, and needs saying.

Recently I wrote a similar blog about an alternative app pattern that uses makefiles:

https://zwischenzugs.wordpress.com/2017/08/07/a-non-cloud-se...


I guess that you're the owner of that blog, so I want to file a bug report: There are ligatures in your code snippets, e.g. in "git diff", the "ff" is a single character that's as wide as the "i" before it, which is really weird, and also not what the terminal would do. So you might want to add this to your CSS:

  pre { font-variant-ligatures: none }


Makefiles are simple, but 99% of the existing Makefiles are computer-generated incomprehensible blobs. I don't want that.


I did not know people are afraid of Makefiles. Maybe a naïve question, but what is so scary about make?


I think chungy is right: most people get their first experience with Makefiles by trying to debug some automake monstrosity. When they write their own, they assume they have to be just as complicated (or even worse, they'll copy/paste from an autogenerated Makefile or use it as a base).

If I thought every Makefile had to be like that I'd write ./build.sh too.


I wouldn't say I'm afraid of one, but I can't read one and tell you what it does, or write one that does anything useful. Unless I'm way off the mark, it's just a programming language, one that I will eventually learn when I need it badly enough. I even snuck a manual off the freebie table at work.


It's all fun and games while your commands basically look like

    target:
        invoke-some-tool
And then, in a bigger project, you start dealing with dependencies. Exclusions. Code generation. Config-dependent builds. Dev vs stage vs prod. Sub-projects. Invocation of tools depending on other tools. Pipelines. Toolchains.

Then Makefiles quickly devolve into an incomprehensible, undebuggable nightmare with arcane syntax rules and behaviours. It doesn't help that there are next to no good resources on Makefiles.


>>> Congratulations! You've learned 90% of what you need to know about make.

The next 90% will be learning that Make breaks when tabs and spaces are mixed in the same file, and your developers all use slightly different editors that will mix them up all the time.


Most editors that I've used switch to tabs mode when opening Makefile* files. But yes, it's a weird restriction. I suppose it's a result of being one of the first languages with syntactical indentation.


Instead of a Makefile I can recommend Taskfile: https://hackernoon.com/introducing-the-taskfile-5ddfe7ed83bd

Simple to use without any magic.


Please, don't ship your own Makefiles. Yes, autotools sucks - but there is one thing that sucks more: no "make uninstall" target.

Good people do not ship software without a way to get rid of it, if needed.


I wrote a Non-Recursive Makefile Boilerplate (nrmb) for C, which should work in large projects with a recursive directory structure. There is no need to manually add source file names to the makefile; it does this automatically. One makefile compiles it all. Of course, it isn't perfect, but it does the job and you can modify it for your project. Here is the link:

https://github.com/quantos-unios/nrmb

Have a look :)


Make is fine, but I think we have better tools nowadays to do the same things.

Even though it may not have been originally intended as such, I've found Fabric http://docs.fabfile.org/en/1.13/tutorial.html to be far far more powerful and intuitive as a means of creating CLI's (that you can easily parametrize and test) around common tasks such as building software.


After using the various javascript build processes, I went back to good old makefiles and the result is way simpler. I have a target to build the final project with optimizations and a target to build a live-reload version of the project, that watches for changes on disk and rebuilds the parts as needed (thanks to watchify).

This works in my cases because I have browserify doing all the heavy lifting with respect to dependency management.


Opinion poll. I'm writing a little automation language in YAML and I was wondering if people prefer a dependency-graph concept where tasks run in parallel by default, unless stated as a dependency, or a sequential set of instructions where tasks only run in parallel if explicitly "forked".

I'd say people would lean towards the former, but time and real world experience has shown that sequential dominates everything else.


Or just use cmake and save yourself time, effort, and pain.


Yes. Among other things, a modern build system really needs a "help" command.


I almost always roll a basic Makefile for even simple web projects. PHONY commands like "make run" and "make test" in every project make context switching a bit easier.

While things like "npm start" are nice, not all projects are Node.js. In my current startup we're gonna have standardised Makefiles in each project so it's easy to build, test, run, and install any microservice locally :)


Using Cmake is so much nicer than make, and it's deeply cross-platform. Cmake makes cross-compiling really easy, while with make you have to be careful and preserve flags correctly. Much nicer to just include a cmake module that sets up everything for you. Plus it can generate xcode and visual studio configs for you. Doing make by hand just seems unnecessary.


I'm not sure why people compare cmake and make - cmake is essentially a makefile generator to handle automatic dependency tracking. You can even generate other types of build files (ninja, visual studio, etc). Not to single out your comment, because I see this all the time, but I don't understand where this confusion comes from.


Well, if anything, I think it demonstrates that Makefile is something that should be compiled, not written by hand

edit: and for context, this is usually what happens anyway, since autoconf does it too. it's hard to think of a major project with a manually edited Makefile


> I'm not sure why people compare cmake and make

You can declare your project structure, metadata, and other build/install/test logic in either. They're thinking about their use cases, not the language and workflow used to achieve them.


I always felt that for small setups, makefiles are much more concise and easy to understand. And more generic. CMakeLists.txt files always seem really messy and introduce concepts such as projects and exes and dlls. But of course CMake is super-portable, I know.


Is

    project(my_software CXX)

    add_library(my_lib foo.cpp bar.cpp)
    target_include_directories(my_lib PUBLIC include/lib)

    add_executable(my_app bluh.cpp)
    target_link_libraries(my_app my_lib)
really more complicated than the equivalent makefile? Especially given that:

* this can generate ninja files instead of make

* it adds the common targets such as make help, make clean, make install

* dependency graphs can be generated with cmake --graphviz

* solutions for various IDEs

* etc...


Every time I think this, I end up going back later and wishing I had just done Cmake. If you just want to compile all the .c files it's easy enough to GLOB and make a couple targets, then come back and make it more sophisticated later


Except with make, most small setups are at least 20-40 lines of nontrivial code because you have to redo everything from scratch.


  include std.mk
  
  TARGET=foo
  HFILES=bar.h
  OFILES=main.o foo.o baz.o
  
  include cmd.mk
This is what Makefiles tend to look like in some places.


May I kindly plug my upcoming book [1] for writing CMake in an effective and straightforward manner :-) I was just porting a large Makefile-based build system over to CMake and had the 'pleasure' of finding out why recursive Make really is the root of all evil and just doesn't scale well enough for large software systems. At least to my mind.

[1] https://leanpub.com/effective-cmake
