An opinionated approach to GNU Make (davis-hansson.com)
414 points by DarkCrusader2 37 days ago | 194 comments



For one-line recipes you can use a semicolon:

    foo.o: foo.c ; $(CC) -c $<
If you find $(@D) hard to remember, use $(dir $@) instead.
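
A tiny illustrative sketch (paths hypothetical); the only difference is that $(dir $@) keeps the trailing slash while $(@D) drops it:

    build/obj/foo.o: foo.c
        @mkdir -p $(@D)    # $(@D) = build/obj, $(dir $@) = build/obj/
        $(CC) -c -o $@ $<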

I wrote a book about GNU Make stuff (https://nostarch.com/gnumake). If (and only if) you've read the GNU Make Manual from the FSF my book may help you. It's not for newbies. Don't want the book? All the articles are here: https://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-art...


I can attest that it's a great book. It took my Make savvy from "pretty average" to "can build Makefiles that conform pretty decently to DRY". Not all of the book is useful to me, but it's a pleasant read and also a good well-indexed reference manual and I consult it at least monthly for fun.


With backslashes before newlines you can do “multiline oneliners” too.
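
A minimal sketch; inside a recipe the backslash-newline pair is passed through to the shell, which joins it back into one command:

    foo.o: foo.c
        $(CC) -Wall -Wextra \
            -c -o $@ $<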


Yep. One of the reasons I recommend the FSF manual is that it's a great introduction: https://www.gnu.org/software/make/manual/make.html#Splitting...


You can also do "multiline multiliners" by adding the special target .ONESHELL. Combined with overriding the SHELL variable, this lets you put arbitrary multi-line scripts in your preferred programming language into makefile recipes.

https://www.gnu.org/software/make/manual/html_node/One-Shell...
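
A minimal sketch of the combination (assuming GNU Make 3.82 or newer and a python3 on PATH; the target name is hypothetical). Because python3 is not a POSIX-style shell name, make strips the leading tab from every recipe line:

    .ONESHELL:
    SHELL := python3
    .SHELLFLAGS := -c

    report:
        import sys
        print("the whole recipe runs in one python3 process")
        print(sys.version)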


Learned some good tricks here. There's one that I use but did not find in the article. This one:

    # Borrowed from https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
    .PHONY: help
    help: ## Display this help section
        @awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {printf "\033[36m%-38s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
    .DEFAULT_GOAL := help

    .PHONY: load-terraform-output
    load-terraform-output: ## Request the output JSON from Terraform 
        some commands
        some more
It makes Makefiles a little more self-documenting.

Now I issue a `make` (or `make help`) to get a listing of the documented tasks. Very helpful.


On the subject of good tricks, I like to put this debugging snippet in all my Makefiles. It supports typing `make print-VAR1 print-VAR2` and it'll dump the values of VAR1 and VAR2 (this could be extended to also show `$(flavor $*)` and `$(origin $*)` and other little details, but usually I just want the final value).

    define newline # a literal \n


    endef
    # Makefile debugging trick:
    # call print-VARIABLE to see the runtime value of any variable
    # (hardened a bit against some special characters appearing in the output)
    print-%:
            @echo '$*=$(subst ','\'',$(subst $(newline),\n,$($*)))'
    .PHONY: print-*
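
For example, assuming a Makefile that sets `CFLAGS = -O2 -Wall`, a hypothetical invocation looks like:

    $ make print-CFLAGS
    CFLAGS=-O2 -Wall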


I like to use a `showconfig` rule so that `make showconfig` will display all public vars and give their default values. It ends up being a little like `./configure --help`. But I see now that I could tighten that up a bit with the above rule and something like `showconfig: print-VAR1 print-VARX`. Thanks!


That sounds pretty cool!

I eventually did something similar and built, by hand, something akin to `make -p` that was also a bit like `./configure` or CMakeCache.txt. It could then be read back into `make` to bypass a fair bit of configuration-testing work (which is much faster on Cygwin, anyway). I got a bit too nervous at the automagical complexity though, and never made a pull request (plus I only just fixed it to work on the old version of make that ships with macOS). But you might find it fascinating for the levels of make hackery it tries to untangle: https://github.com/JuliaLang/julia/compare/master...vtjnash:...


Make is useful, but I really wish the implementation had evolved over the years, making stuff like this natural.

As an example, Python has docstrings accessible via __doc__:

  $ python
  >>> class foo:
  ...   '''description of foo'''
  ... 
  >>> foo.__doc__
  'description of foo'
  >>>


That's awesome! I've always been annoyed that there isn't a built-in `ls` like feature, but this is a really clean solution.


    make -p
The main problem is that make itself doesn't support a concept of "hidden files". Another problem is that GNU make comes with lots of implicit rules, so you really need something like `make -R -r -p` for this to remotely work.


> make -R -r -p

I tried that on a small Makefile with 4 simple rules, and it spat out 2436 lines of diagnostic information. Which, to be fair, is less than `make -p`'s 8687 lines, but it's still completely useless to me.


I learned about this trick from the same blog post and tweaked it a little to suit my needs. But wow, it is such a game changer. This means all you need to write in auxiliary docs is 'make help'. That's it.


This is a great trick, thanks! Much appreciated.


This is great, thanks!


If you're using GNU-specific Make features, please, please consider naming the file GNUmakefile instead of Makefile, and use $(MAKE) for recursion. GNU make will happily pick up GNUmakefile before Makefile, and using $(MAKE) will remove a lot of headaches when people try to build your project outside of the author's $PREFERRED_PLATFORM.
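
A minimal sketch of both suggestions together (directory names hypothetical); $(MAKE) expands to whatever binary was used for the top-level invocation, so sub-makes run the same implementation even when it is installed as gmake:

    # GNUmakefile -- GNU make prefers this name over Makefile
    all:
        $(MAKE) -C src
        $(MAKE) -C tests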


Do you have a good guide for writing portable makefiles? I’m always using GNU make so I don’t notice it with mine but I always feel a little bad.


The best way is reading the spec: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/m...

But to get you started Chris Wellons does have a pretty nice blog post about writing POSIX Makefiles: https://nullprogram.com/blog/2017/08/20/


I don't think there's a guide; the only resource I'm aware of is The Open Group Base Specifications[1], which can be referenced when in doubt. Most of the time I just install bmake alongside gmake and try to get things working with both (even that isn't perfectly POSIX-conformant, since GNU Make supports some of BSD make's features, and vice versa).

[1]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/m...


I'm a big fan of "be strictly conformant" (e.g. use no C++ extensions) for maximum portability but I always install Bash and GNU Make. The extra power is worth it.


I use the *BSDs a lot and it's pretty common to install gmake from ports. I wouldn't feel bad.


The POSIX standard?

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/m...

If you don't use anything that is not there, or in a manner not covered by a requirement, you will likely have a highly portable result.


There is a very good tutorial by Chris Wellons

https://nullprogram.com/blog/2017/08/20/


Does anybody even use other make implementations anymore? I think such suggestions aren't relevant nowadays. Same goes for bash. POSIX compatibility is overrated.


It's still 100% relevant. I can only speak for FreeBSD, but this is a real pain when building from source (outside of FreeBSD Ports).

https://www.freebsd.org/cgi/man.cgi?make(1)


Adding to this: in cases where I needed non-portable Make features for projects built on both Linux and FreeBSD, I always name the GNU Make makefile GNUmakefile, and if I create a separate makefile for FreeBSD's make I name it BSDmakefile, which FreeBSD's make picks up by default when you run `make`, just as GNU Make does with GNUmakefile.

Unfortunately, I think there are further portability issues, namely that when I create my BSDmakefile I am not sure all the features I use would work with the versions of Make that come with OpenBSD, NetBSD, etc.

Largely I don't do anything too crazy or fancy, but the worry is always there, and I don't really have the time or inclination to test every change on OpenBSD, NetBSD, etc., since FreeBSD is the only BSD-family operating system I normally use.

For the last few years I've been using C less, though, as I prefer Rust anyway. And for the times when I do end up creating a makefile, I now usually create only a GNUmakefile. Supporting only GNU Make feels a little bad, but I think in most cases people will have it installed even on FreeBSD. And at least naming the file GNUmakefile, as suggested here, does not waste people's time trying to run other implementations of Make against those projects, even if it might annoy them to have to install GNU Make.


I very quickly adopted a habit of typing gmake for downloaded sources. I can't see any pain at all, or why this is relevant. make(1) is basically GNU make today, and when it's not, you know its alias.

The suggestion to name makefiles GNUmakefile is a nice try, but it comes about two decades too late to have any effect beyond boring people. In technical terms, it is easier to rename BSDmakefiles than millions of projects out there.


> make(1) is basically GNU make today

It is not on a number of platforms that enjoy popular usage.

> In technical terms, it is easier to rename BSDmakefiles than millions of projects out there.

GNU is not the standard. Generally it's a superset of it; if so, then it should have the onus of the nonstandard name.


We have three choices with different success chances:

- Rename non-gnu makefiles of a specific platform (tedious but possible),

- Rename gnu makefiles in the entire internets (0% chance),

- Continue insisting to no avail (requires no effort).

>if so, then it should

A wonderful world where such implications always work, but not where we live.


Installing GNU make on FreeBSD is not hard. Other Unixes such as AIX and Solaris are super obscure now, and GNU make works there too.


Installing GNU Make is not the problem. It's knowing which Make to invoke. If you use GNUmakefile, it's unambiguous.


I find that on BSD, when I type make and it spews out many screenfuls of syntax errors before performing any work, I type "gmake" as the next step. Not challenging.


There are some GNU Makefiles that invoke `make` manually instead of `$(MAKE)`. On FreeBSD, where `make` is bmake, this is a pain in the ass.


I agree that would be bad, but I haven't personally seen a lot of that. Increasing awareness of the $(MAKE) variable I guess would be the solution.

Of course the "recursive make considered harmful" paper also articulates pretty well why such a usage may have its own problems. http://aegis.sourceforge.net/auug97.pdf


Too bad the standard is not a #!/usr/bin/gmake first line or something similar.


It's simple: you make GNU make the default one and the problem is solved.


You definitely do not want to make GNU Make the default on a system that ships with another implementation of Make, because then you break other builds.

The solution, like the other people were saying, is to name the makefile as GNUmakefile if it is using GNU Make specific features.


>because then you break other builds.

[citation needed] GNU make is a superset of the POSIX make standard (with very few exceptions). It shouldn't break. If it breaks, I think it's a bug, and the makefile should be altered to be POSIX-compliant, which keeps it compatible with GNU make.


The suggestion by the parent of my comment was to replace the system Make with GNU Make because others are using GNU Make specific features.

On top of that, you want to rewrite all of the makefiles written for other implementations of Make, on systems that don't ship GNU Make, so that the system still works after you've swapped out its bundled Make for GNU Make.

...

Why?

Why should everyone else do a bunch of extra work just so that projects using GNU Make specific features can keep using them, when all those projects had to do was rename their Makefile to GNUmakefile? It would keep working when they invoke GNU Make exactly as they did before, and it would not cause confusion for others.


If you are talking about the BSD userland (probably the only place that uses BSD extensions and other GNU-incompatible, non-POSIX stuff), I think it's pretty easy to hardwire it to build only with BSD make variants. Who even builds it manually, and how often? The OP's comment was about software in general, and for software in general GNU make is the better default in 2019, unless you have to be standards-compliant.


>>because then you break other builds.

>[citation needed]

From OpenBSD make(1) manpage: "The handling of ‘.depend’ is a BSD extension."


Well, using "BSD extension" in 2019 is a luxury.


> Well, using "BSD extension" in 2019 is a luxury.

Considering it's the default on the OS I'm using to post this (OpenBSD) I'd regard it as fairly standard.


BSD make (pmake) is still very common, as it's the default make in almost all BSDs. Also Windows has nmake which is kinda-but-not-quite POSIX-compatible.


Considering that installing build tools and makefile generators such as cmake, meson, jam, ninja, scons, etc. is common developer practice, I see no problem with GNU make being a requirement, provided it's possible to build it on the target system.


I personally quit using CMake years ago; great promise but underdelivered in my opinion. I’ve moved back to hand-rolled (BSD) make files for my own projects.

To your point though - I don’t think the fracturing of (often poorly developed) build systems supports the thesis “GNU Make everywhere”.


On most operating systems other than Linux, gmake != make: Darwin, the BSDs, Windows, Solaris, ...


Darwin's make is GNU make.

  $ uname -rs
  Darwin 18.7.0

  $ which make
  /usr/bin/make

  $ make --version
  GNU Make 3.81
  Copyright (C) 2006  Free Software Foundation, Inc.
  This is free software; see the source for copying conditions.
  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
  PARTICULAR PURPOSE.
   
  This program built for i386-apple-darwin11.3.0


I know that. Using Visual Studio's nmake will require a separate makefile anyway for any non-trivial build. Installing GNU make on Darwin is not an issue, and the other Unixes are on life support.


FreeBSD made a new release a month ago:

https://www.freebsd.org/releases/12.1R/relnotes.html

OpenBSD released OpenBSD 6.6 in October of this year:

https://www.openbsd.org/

Neither of these uses GNU Make as the system make, and both are living, vibrant *NIX distributions.


Terrible clickbaity title that he walks back throughout the entire post - "this is not dogma". There are a couple of valid remedial notes in here (like using files as targets) but the part about changing the default shell and inserting hacks that allow you to use spaces will just baffle anyone else and probably make your Makefiles non-portable (eg macs don't have a recent bash).


The "replace the tab with magic character" thing particularly baffling. I agree that it was probably a dumb idea in the beginning for make to force you to use a literal tab character, but using these kinds of hacks is a terrible idea. It will just make it harder to read makefiles for normal people and it will make them less portable. Every sensible programming editor knows that make is particular about spaces and tabs and will make sure that it uses tabs to indent the recipes, so it's not an issue any more. If anything, using some other character will make it more confusing, because I'm guessing most editors will indent/syntax highlight it incorrectly.

It's not dissimilar from early C programmers who wanted C to be more like Pascal and put things like this in headers:

    #define BEGIN {
    #define END   }
and then never used curly braces. NO! DON'T DO IT!


I really don't understand his justification for such a significant modification:

> Make leans heavily on the shell, and in the shell spaces matter. Hence, the default behavior in Make of using tabs forces you to mix tabs and spaces, and that leads to readability issues.

I agree with you in that replacing TAB with > makes things much less readable.


Yes, his justification is nonsense.

1) The original creator of Make plainly said that his original choice of tabs over spaces was accidental, and honestly a mistake. He had just learned about the (at the time) new tools lex and yacc for writing parsers and chose TAB as a line prefix for shell commands. Not much thought was given, really.

2) A tab at the very beginning of the line marks the rest of the line as a command. However, the rest of that line goes into the shell pretty much verbatim (modulo a few substitutions). Because of that, you may have tabs in the middle of that line, no problem, and they will be passed to the shell to execute!

3) To my knowledge, shells treat '\t' and ' ' the same way as whitespace. If anyone knows of differences, please let me know, though, and I will correct myself. On the other hand, ">" is an important shell operator.


The shell performs field splitting according to the characters in the IFS environment variable; if unset it splits on <space>, <tab>, and <newline>.

There are many other places in Unix where tabs are special. The "<<-" heredoc operator will strip leading <tab> characters, so if you want to nest embedded content neatly in your script you have to use tabs. Similarly, programs like cut(1) default to using <tab> as the delimiter.
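
A small sh sketch of the <<- behavior; only leading tabs are stripped from the heredoc body and delimiter, never spaces:

    #!/bin/sh
    if true; then
        cat <<-EOF
        this body and the closing EOF are tab-indented in the source,
        but print flush against the left margin
        EOF
    fi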

The fact of the matter is that until very recently the vast majority of programs and programmers on Unix used tab indentation, and to a lesser extent tabs for intra-line alignment. So while the requirement of leading tabs in Make may have been a mistake, it was a benign mistake that would have gone mostly unnoticed. And the alternative surely would have been a more permissive syntax allowing both <tab> and <space>.


Well now I have to try it


> macs don't have a recent bash

Macs don't have GNU make 4, only 3.81, so it's outright not portable with that > thingie.


Use Homebrew [1] to install the most current versions of bash and make.

By default, it’s installed as gmake to not conflict with the existing make 3.81.

[1]: https://brew.sh/


Sometimes you're not in a position to know, suggest, or decide if people are using such a package manager (be it Homebrew, MacPorts, ArchMac...). They may type make out of habit. This introduces a significant dependency, which makes people less inclined to use your stuff.

It all depends on the use case, but in some situations a cop out such as "just use Homebrew" is actively user hostile due to the amount of human friction it generates.

As it turns out, I've written bucketloads of portable (GNU and non-GNU) makefiles and never needed a post v3 feature.


> Macs don't have GNU make 4, only 3.81

Released April 1st, 2004.


Without looking it up: make probably had a license change (to GPLv3?) between those versions.

That's usually the reason why Macs have outdated tools from Linux land.


Yes, 3.81 is GPLv2 or later, 3.82 is GPLv3.


But why would GPLv3 be a problem for Apple in this case?

Is this a more political move?


GPLv3 requires open hardware in the sense that you can replace the operating system with anything you want. That is probably not a position that Apple wants to formally accept.

Our company has a similar problem in that there are safety implications if we were to allow the user to replace our system with their own; therefore GPLv3 is off limits to us.


I don't think your interpretation of GPLv3 is correct.

This is a standalone utility, is has nothing to do with the OS.


It actually is correct, I think. If you're selling hardware that ships with any GPLv3 software of any kind, the license requires that you don't restrict users from installing their own OS if they want. This was intended as a workaround for so-called "tivoization", where TiVo boxes were open source but would only boot signed firmware (and the keys were confidential).

Apple didn't want this, so they blacklisted all GPLv3 software. So did most companies that ship embedded Linux products like smart TVs.


No, I don't think completely different software "infect" each other this way.

Consider this: Canonical ships Ubuntu, which is based on the GPLv3 licensed Linux kernel. Yet their distribution contains all kinds of software, including GPLv2, Apache, BSD and even proprietary. Some are from Canonical themselves, and they are in no way "infected" by the GPLv3 license.

Also, tivoization was about TiVo publishing Linux code that could not be used by anyone, since the hardware would reject it (due to missing signing keys). These days these issues are solved by having keys and such in the bootloader, which is BSD-licensed (see for example Android).


The Tivoization clause applies to hardware vendors, with hardware covered by GPLv3's definition of a "User Product".

Ubuntu itself isn't a valid comparison, because it's not a physical product. And computers that ship with Ubuntu don't violate GPLv3, because you can install anything you want on them without restriction.

The license incompatibility isn't intrinsic to MacOS, it's intrinsic to Apple's computers. Which ship with MacOS.

If MacOS included GPLv3 software, Apple wouldn't be able to pre-install it unless they provided all users with signing keys to install their own modified versions of the GPLv3 software on-target. Check out the GPLv3's "User Product" and "Installation Information" sections for more details - they're written in plain English, and are pretty clear.

Basically - if you want to sell hardware that comes with any GPLv3 software installed, then you need to empower your customers to replace or modify that software on their devices. Not the entire OS, mind. Just the pieces that are GPLv3. But that requirement prevents a hardware vendor from doing something like saying "I'll only boot a signed/verified root filesystem" or "The rootfs (or even just /usr) is read-only to everything except for signed OS updates".

So it's not that shipping GPLv3 Make/Coreutils would infect MacOS and require Apple to release the MacOS source. Instead, it's that shipping GPLv3 Make/Coreutils would prevent Apple from locking down their computers and code-signing MacOS (unless they provided the OS-level signing keys on request).


> But that requirement prevents a hardware vendor from doing something like saying "I'll only boot a signed/verified root filesystem" or "The rootfs is read-only to everything except signed OS updates".

Only if you don't allow the user to decide which signing keys to trust. So GPLv3 doesn't prevent you from improving security, only from bossing around your users.


Good point!


> unless they provided the OS-level signing keys on request

All they need is to provide a bypass which permits executing a modified binary, which they already do; otherwise projects like Homebrew and MacPorts wouldn't be possible, not to mention that development on macOS would become a nightmare.

From a technical and legal perspective there's no reason to avoid the GPLv3 for a program like GNU Make. Apple's lawyers are just lazy and instituted a blanket, company-wide prohibition against GPLv3. A GPLv3'd Bash might plausibly create some headaches given the system integration, but they could have always used ksh like every other BSD, or zsh as they're now doing, as the default system shell and shipped a modern, updated Bash as a convenience for user scripts. iOS also poses additional headaches, but that's a separate issue--Apple already deprecated system(3) and in general you can't (and don't) rely on shell programs in iOS.

TL;DR: Apple's lawyers are lazy and needlessly hostile, even given an expansive, unrealistic interpretation of the GPLv3 anti-circumvention clause.


Agreed with this. There's one extra dimension though - the toolchain.

The catch with MacPorts and Homebrew is that both of them use the XCode toolchain for compilation and linking. And Apple encumbers the heck out of XCode with EULAs, App-store accounts, etc. My interpretation of GPLv3's "Installation Information" section is that Apple wouldn't be able to force you to do any of that junk if they were shipping GPLv3 components.

I hate that Apple does all of that bullshit. But somebody at Apple seems to think there's value in all that EULA protection and control, which is probably why GPLv3 software got the boot.


There are other companies selling Linux systems + hardware: Google, System76, Asus, Dell, HP, Samsung, ...

Note also that you can't install custom software on a Samsung smartwatch, but you can install Windows on a Mac.


The companies selling Linux systems including GPLv3 code plus hardware are happy to include Installation Information.

Installing Windows on a Mac is not relevant here, because Installation Information is "information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source." Windows is not a modified version of a covered work, and so the ability to install Windows doesn't imply the ability to install a modified version of a covered work.

I would be surprised if a Samsung smartwatch had any GPLv3 code. Android rewrote a bunch of userspace specifically to avoid GPL (any version).


System76, Asus, etc. don't run afoul of GPLv3 because they don't restrict what you can install.

Most Android devices also don't violate any GPLv3 licenses, because they don't include any GPLv3 software. GPLv2 software like the kernel doesn't have this same restriction.

Apple decided to take the approach of "don't include GPLv3 software" so that they wouldn't have to worry about it once they fully lock down their computers. If Apple included a modern Bash with MacOS, they would need to give all owners access to any compilers or signing keys necessary to modify Bash and replace the binary that Apple shipped. Today, it's possible to do that. Tomorrow, it might not be.


But the kernel itself is GPLv3.

Never mind, let's agree to disagree.


The kernel itself is specifically under GPLv2 only. That's not a point you can disagree on.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


This isn't about software infecting each other, this is about a hardware vendor distributing software with the hardware. Here's the relevant text of the clause:

> A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. ... If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information.

Ubuntu is neither tangible personal property normally used for personal/family/household purposes, nor something designed or sold for incorporation into a dwelling.

A Macbook is tangible physical property normally used for personal/family/household purposes.


They appear to have a general rule of never using GPLv3, even if it would be fine for some tools. (It'd be problematic for things that could also be a part of iOS, or components they'd lock down with unavoidable SIP-like mechanisms, an option they likely want to keep open for parts of the system.)


This is a horrible decision and is bad for developers everywhere.


Apple is actively ripping out software licensed under earlier versions of GPL as well, as they're now horribly out of date.


It's a terrible decision for developers everywhere but what about Apple's profits?


When people write software, they don't need to target any and all platforms that can possibly operate on this software as input. If you want to only target GNU, that is perfectly fine. Just name your file `GNUmakefile` or put a comment at the header or in `README` to indicate you only support GNU.


>macs don't have a recent bash

macOS Catalina now uses zsh.


The default for new accounts in Catalina is zsh but if you upgrade, you'll still have bash: https://support.apple.com/en-us/HT208050


> probably make your Makefiles non-portable (eg macs don't have a recent bash).

Is there any point in supporting a platform which hasn't updated its major version of Make in over 15 years?


Picking the platforms you support by which version of make they ship is a rather odd way of doing business.


> And you will never again pull your hair out because some editor swapped a tab for four spaces and made Make do insane things.

steps up to you and whispers in your ear

Or you could, you know... use a real text editor.


I am really curious about how this would happen.

I have been working with Makefiles, with a team, for close to ten years and have never encountered anyone accidentally using spaces. I guess someone editing it via GitHub's web UI or something.


Happened to me just the other day.

I added a makefile to a project that didn't yet have one, and I didn't have makefile support added in my IDE.

Got some weird, unhelpful errors when trying to run make, and eventually realised spaces were being used instead of tabs.

The solution was to configure my .editorconfig file to use tabs for makefiles, and as a nice-to-have, to also install a makefile plugin in my IDE.
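
For reference, the relevant .editorconfig stanza looks something like this (the globs are illustrative):

    [{Makefile,GNUmakefile,*.mk}]
    indent_style = tab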


> The solution was to configure my .editorconfig file to use tabs for makefiles

The real solution is to not have your editor silently convert tabs to spaces, ever.


The real solution for dealing with spaces vs tabs is that there's no real solution that will work all the time. Or maybe we could go back in time and murder both hitler and the person who decided that tabs should be a separate character (which I wouldn't be surprised if they were the same person).

If you don't autoconvert you risk ending up with a file mixing tabs and spaces which can range from a mild annoyance to generating very perplexing errors that are a pain to debug if the file's format is sensitive to indentation (such as python scripts or... Makefiles). I suppose you could argue that the conversion should require user interaction but frankly that sounds like a hassle since 99% of the time autoconverting is absolutely fine.

Furthermore it makes total sense to use an editor config to force a certain standard and avoid problems in the future, especially if you work with many third party libraries with different coding styles. At work I deal with python files indented with 4 spaces, the Linux kernel indented with 8-space tabs, and a bunch of other projects and languages that may or may not be using other variants. You'll have to pry my editorconfigs from my cold, dead hands.


> Or maybe we could go back in time and murder both hitler and the person who decided that tabs should be a separate character (which I wouldn't be surprised if they were the same person).

You got the timeline almost right. A tab and a space are not only different characters, they are not even in the same category of characters.

A space is a whitespace character whereas a tab is a control character that moves the cursor to a specific position. Its name is an indicator of what it was originally used for: arranging data in tabular form.

The real problem is that many editors don't show tabs differently from spaces. In vim you can do a lot with the option "listchars".

My settings there are: set list listchars=tab:>-,eol:$,nbsp:~,trail:X


> Or maybe we could go back in time and murder both hitler and the person who decided that tabs should be a separate character (which I wouldn't be surprised if they were the same person).

Tab released the spring holding the carriage in place and let it slide backwards until a tab on the base hit a tab on the carriage, preventing it from moving further on its own. High-tech typewriters let you adjust the position of these tab stops (or "stop tabs", if you prefer). This is back from the days when many typewriters omitted "1" and "0" -- we just typed l and O.

Tab on the oldest printing terminals simply had this behavior, as they were essentially complicated typewriters anyway ("TTY" is "TeleTYpewriter").

BTW your post does not invoke Godwin's law.


I have to say, I long favoured tabs myself - but I eventually gave up and decided not to die on that hill.


Any editor from the last twenty years should know to indent a Makefile with tabs.


Well, we could pretend that's the case, or we could accept the reality - many don't, at least not out of the box.


Given that plenty are FLOSS there’s no reason to pretend that it’s difficult to upgrade.


I've copied lines from a shell script into a makefile and it copied the leading spaces as well. I didn't really think about it and when I ran make I got a bunch of weird errors. I figured out what the problem was pretty much immediately. Overall this was just a minor annoyance for a minute or two, but it does happen. I'm not sure why people pretend that it doesn't.

I don't work with make much anymore (or Python or Haskell), but I've come to see the use of invisible characters to express structure as a mistake. It can make code real neat and tidy looking, but I'm not sure the trade off is worth it. In the past I've resorted to actually having my editor show whitespace symbols to make sure code that looks aligned actually is aligned.


There was nothing prohibiting the original author of Make from allowing any whitespace as a command prefix. The fact that it does not do so is a historical accident, and a mistake admitted by its creator. Such a change would simply make everyone's life easier.

IMO, since Makefiles have only one level of indentation for commands, this use of significant whitespace is rather benign compared to Python.


I ":set expandtab" in vim by default, I don't like mixing tabs and spaces. People have different opinions on this, I get it. So when I pop on over to the Makefile buffer, I have to remember to either ":set noexpandtab" or to copy a previous target down and edit it directly. Now I'll just swap out ">" instead of using tabs, which will make things simpler.


I have the same setting, but vim automatically sets `noexpandtab` when editing a file named Makefile. I don't have any explicit setting to do that, nor any Makefile-related plugin.


>I have the same setting, but vim automatically sets `noexpandtab` when editing a file named Makefile.

Yes, you can see this in the make filetype plugin in Vim’s $VIMRUNTIME directory, which is usually

    /usr/share/vim/vim81
which contains all of the plugins, syntax definition files, etc. that come with Vim. The file for make is located at

    $VIMRUNTIME/ftplugin/make.vim.
It’s pretty straightforward to override any default behavior you’d like to change; this is a great article on how all of this works [1].

[1]: https://vimways.org/2018/from-vimrc-to-vim/


That's interesting, I didn't know that. I think I might be shooting myself in the foot by saving a session and doing "vim -S" to load it again with expandtab set or something. I load multiple buffers associated with a project and use :b to move from file to file. I know it expands the tab when I move to the Makefile buffer, because I have to kick myself and remember to set noexpandtab when it inevitably puts in spaces. Or maybe putting expandtab in my .vimrc overrides the normal behavior.


I think you need to `filetype indent on` and/or `filetype plugin on` in your vimrc.


I can give an example.

Recently I was working on a project with multiple Makefiles, i.e. Makefile.{win,nix}. My Emacs config did not load makefile-mode due to the suffixes. As spaces are my default, I got spaces on TAB, IIRC even after turning on makefile-mode.


Shouldn't you configure emacs to auto-detect tabs/spaces on the file you've opened?


Whoever is responsible should rename the files so their names end with ".mk" or ".make", the standard extensions that Emacs and other editors recognize.

Auto-detecting tabs/spaces sounds like magic, which not everyone likes.


Of course emacs did not "detect" your non-standard makefile.

It is your responsibility to let editors know that, by adding that magical first line to your makefile.


Not sure who you're responding to. I don't blame anything here; I'm just illustrating what might cause confusion of tabs and spaces in Makefiles, namely unexpected filename suffixes. This holds true for other editors besides Emacs, obviously.


I've done it, thanks to using terminal copy/paste (either my actual clipboard, or GNU screen's copy buffer) instead of my editor's built-in copy/paste. Sometimes it's because I'm viewing a patch in a web UI or `git show` and pasting a few lines from it into my text editor.

But you notice your mistake very quickly and I've never sent code to code review with spaces instead of tabs.


Ok, what's really insane is that this tool that does insane things if you accidentally use spaces instead of tabs is still the standard tool used to build software all over the world...


Indeed, folks make similar comments about Python too. Such a mix-up happened to me one afternoon in 2002 or so when copy-pasting code.

So, I configured my syntax highlighting to very subtly show whitespace characters. The editor keeps them consistent in every other case. Have never had the problem again since.


Ok, since we're sharing controversial opinions...

If you're going to rely on a fancy shell anyway, why not just throw make out of the loop altogether? That is, unless you're working on a big project where incremental builds really make a difference. (But IME, with a few exceptions, these are the projects that usually outgrow and abandon make anyway)

You can run cc via shell via make, or you can just run cc via shell. In the latter case, there's one less program (with quirks) to fight with, and more flexibility to do stuff that you can't easily bend make to do.


Because the core value proposition of make is the dependency graph and incremental rebuild, which then trivially allows for parallelisation with -j<n>, and doing that with shell scripts is basically reimplementing that part of make, only badly.

Instead of pushing for makefiles to become shell scripts for the part you mention, I usually implement such things as support scripts (in shell, python, whatever), which often come in useful on their own, and call those from make targets. IOW, using the best tool for each job, with make as the orchestrator.
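
A minimal sketch of what that graph buys you (file names hypothetical): with `make -j3` the three objects compile in parallel, and touching a single .c file rebuilds only its object plus the link step:

    objs := a.o b.o c.o

    prog: $(objs)
        $(CC) -o $@ $(objs)

    %.o: %.c
        $(CC) -c -o $@ $<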


> Because the core value proposition of make is the dependency graph, which then trivially allows for parallelisation with -j<n>

My core argument is that this rarely matters, and in the cases where it does, the project's needs often surpass what make provides anyway. Or they're using a very special version of make with very special supporting infrastructure, like bsd.mk.

I'm compiling some 100k lines of legacy C (for a custom operating system) plus compat shims that make the thing run on Linux and my very very trivial build script (plain posix compatible shell) takes less than 5 seconds to run in debug mode. Optimized release builds are a bit slower, but this hardly matters in day to day development. It would be trivial to add some parallelism with xargs.

Fwiw I can compile the same thing with CMake, and it's faster only sometimes. It often happens that a header changes and everything needs to be built anyway. And it often happens that the parallelism bites me back when one file gives an error which scrolls out of sight as a dozen other files are still being compiled in parallel.

I've wasted far more time fighting subtly broken makefiles (that don't pick up some dependencies and thus fail to recompile the things that need to be recompiled), and build systems with fancy declarative languages that don't present an obvious way to do what you do in one line of shell when you don't need to worry about a dependency graph.


Personally I see CMake as the best way to work with portable C/C++ these days, as it supports Linux/Mac/Windows, including library-detection and everything else a traditional autotools "./configure" script does.

That said, I'm not a fan of the CMake scripting language, which is quirky and error-prone. It should be easy to write a new 'Find' module, but it's not. There should be officially documented design patterns to make the task trivial. There still aren't.

CMake gets the job done, but it's surprisingly difficult to work with.

</rant>


Not everything else. One particular thing cmake does not do well, and that bugs me quite a lot, is supporting source code generators, especially in cross compilation, where those source code generators are themselves written in C or some other language requiring compilation. With Makefiles, whether autotools-generated or manually written, this is easy to set up with a few custom rules. With cmake, those generators need to be turned into standalone projects.


I really want to like CMake, but having spent the past 5 years using qmake daily, CMake seems very voodoo-ish and archaic.


Too bad; the Qt project itself is moving to CMake as its main build tool: https://bugreports.qt.io/browse/QTBUG-73351


I'm aware.


> I'm compiling some 100k lines of legacy C (for a custom operating system) plus compat shims that make the thing run on Linux and my very very trivial build script (plain posix compatible shell) takes less than 5 seconds to run in debug mode.

That's why /you/ don't need make. But there are plenty of 100k-line projects that would take much longer, like nearly anything written in C++, for example.


I knew someone was going to say C++.

Thing is, I hardly ever see C++ projects today use plain make. If make is a part of the build procedure, its files are generated by another set of tooling.

Again: I hardly ever bump into big projects that use plain make. All the projects I see use something more complicated, or are small enough that they'd be fine with a plain shell script.

Urgently needing the dependency graph that make provides, but little else, seems like a very niche set of requirements.


I like using make in my own small (<1k LoC) personal projects.

Usually, I have exclusively a few .PHONY targets that run a few lines of shell to init/clean/build.

I get all the same advantages as shell scripts, except I have less clutter in my repo and get tab autocompletion for free.

Make is a great tool for my use case. It's possible to waste tons of time fiddling with it. I've done it. It's not a necessary consequence of using Make, though; in my case, it was a consequence of using features of Make that I didn't need.


> Make is a great tool for my use case.

I'm sure it is, but I would suggest also looking into possible alternatives like "redo" and "tup". (If nothing else then for your own edumafacation.)

Anyway... this does not change the fact that make is fundamentally not suited to handle the non-trivial build problems in the modern age. It just does not work because it's built on the idea of files when describing dependencies. Almost no modern programming language works exclusively on a file level -- C and C++ were the major holdouts here, but C++ is moving towards a module-based compilation model... for lots of good reasons.

(Of course, make can theoretically be made to work for any arbitrary scenario, but ITT we've already seen quite a few incredibly awful/ugly hacks been posted to solve problems... that shouldn't have been problems in the first place.)


Other replies have mentioned C++. Personally, I think 5 seconds is a lot! I can live with rebuilds taking that long, but if I can make them take 1 second or half a second instead, I’ll be much happier.


> Because the core value proposition of make is the dependency graph and incremental rebuild, which then trivially allows for parallelisation with -j<n>,

In my personal experience 90% of Makefiles I come across not only don't take advantage of these, but actually have to actively employ workarounds to avoid these things. If you're taking advantage of the things make has to offer then by all means use make, but if all of your targets are .PHONY and you're having to turn off make features just so you can put all your bash scripts in one file then maybe just put your bash scripts in separate files.


Author here: Exactly this. We keep a `bin/` in the root dir with scripts/programs for more involved steps. Make gives us the dependency graph, incremental rebuilds and parallelization.


> Because the core value proposition of make is the dependency graph and incremental rebuild

So you are in agreement with your parent? He's suggesting make for these use cases.

The issue is that a lot of times people use Makefiles for putting convenient commands that have little to do with dependencies/incremental builds.


Besides performance, I do enjoy the feedback of incremental builds. It's not a huge deal, but it's not uncommon that I spot a problem just because the build doesn't behave in the way I expected. Off the top of my head:

- I forget to add a dependency to the Makefile, I modify said file and notice that `make` doesn't do anything, hinting that something is wrong. If the build system rebuilds everything every time I probably won't notice the problem immediately.

- I edit code in a project but run make in the wrong location (because I'm working on two projects at once, for instance). Again, make will tell me that there's nothing to do when I make changes.

- I run the latest build and it behaves in an unexpected way, and I wonder whether maybe I haven't actually built the latest version of the source, so the code I'm looking at isn't what's being executed. If I run "make" and I see that it starts rebuilding some artifacts, I know that I'm out of date and need to try again. If nothing happens, then I need to stop looking for excuses and actually start debugging.

- I pull code written by other people (or apply a patch, or unstash old changes). Running make will give me some insight on what parts of the codebase have been impacted. If I see that it's rebuilding something I didn't expect to change it'll probably catch my eye and get me to double check what happened.

So I definitely wouldn't trade a Makefile for a shell script, especially since, if your build system can easily be replaced by a simple script, you'll probably be able to do just fine with a very basic Makefile.


This came up just a few days ago[0], and someone suggested you could use either of these:

    #!/bin/sh
    set -e

    case "$1" in
        build)
            ;;
        run)
            ;;
        clean)
            ;;
        *)
            echo "unknown: $1"; exit 2
            ;;
    esac
OR

    #!/bin/sh
    set -e

    test $# -gt 0

    build() {
        :
    }

    run() {
        :
    }

    clean() {
        :
    }

    "$@"
Personally, I find makefiles to be ubiquitous and well-understood, and if I want to do anything complex, I can always put that complexity in a shell script that's called by a make target.

[0] https://news.ycombinator.com/item?id=21735176#21754331


The second example is vulnerable to command injection.

./test.sh ls -la


"vulnerable"

It's a shell script as part of your build system. Surely you're not passing untrusted input to your command line are you?


Someone might. It's always better not to introduce such code.


Unless I'm missing something, this seems a bit silly.

Why would I be any more likely to type `test.sh rm -rf /` than `rm -rf /`?


I think you're both right that the risk is low, assuming it's a build script that is to be run locally by a developer.

Code tends to get copied and pasted, and can easily sneak into other programs. Programs are integrated in ways which weren't originally intended. It's not a secure coding pattern, and that's why I mentioned it.

During security reviews, I would be focusing on more risky vulnerabilities, but I still review and flag findings in build scripts. I'm more concerned with build scripts downloading content over HTTP, or missing security compiler flags, but I digress.


> If you're going to rely on a fancy shell anyway, why not just throw make out of the loop altogether?

I think make can still add value in that scenario as the standard project entrypoint. Someone completely new to the project should be able to `make help`, `make install`, etc. Even if the make targets are simple wrappers for project-specific tooling (npm, pip, sbt, etc). If the Makefile is kept simple, then users can treat it as a form of README and should be able to cherry-pick from it as they see fit.

But yeah, I share your sentiment about quirky, non-portable Makefiles potentially being anti-value.


I certainly like it when I encounter a new project, type make, and it just works.

This is rare these days... honestly, at this point there's so much tooling around that one can't really take that for granted. Almost always, I need to look for some readme or look around and figure out what build system is in use (sometimes it's a common system plus custom stuff, so just seeing a name you recognize isn't automatically going to mean the standard invocation will work).

In this case, it hardly matters whether I'm going to run make or ./build or make help or ./build help. A simple script, which I'm advocating for simple projects, can double as a form of README just as a Makefile can.


It’s nice to use make to provide a uniform interface for diverse projects, including random personal projects. I.e. whatever project you’re in, in whatever language etc, you know what the semantics of `make build` and `make test` are going to be. Even if these recipes are trivial the uniform interface is a win.


This is a good list, but I have to disagree on the tab thing.

> And you will never again pull your hair out because some editor swapped a tab for four spaces

How many editors out there in 2019 will automatically replace tabs with spaces by default? Unless you're editing makefiles in Microsoft Word or something, I don't see why a working code editor would do something you didn't tell it to do.

Vim, Emacs, Nano, Sublime Text, Kate/Kdevelop, Visual Studio/Code, Atom, Brackets, Text Mate, Scintilla/SciTE/Geany/etc, Programmers Notepad, CodeBlocks, Eclipse, and JetBrains all know not to mess with your tabs.

Rather than switch away from tabs so you can keep using a broken editor, the correct solution is to switch an editor that works. And if you configured your editor to replace tabs with spaces, and it doesn't give you an option to handle Makefiles differently, then that's a broken editor.


I'd add: knows not to mess with your tabs in makefiles.

I generally use expandtab in vim - but I still don't break makefiles. I suspect it's quite possible to force vim to expand tabs in makefiles... But why would I want to?


> MAKEFLAGS += --no-builtin-rules

> This disables the bewildering array of built in rules ...

Oh, wow, I totally disagree. It's good to be opinionated -- that is, unless you're wrong. ;)

In all seriousness I find it terribly useful to quickly create simple Makefiles and follow idioms like CFLAGS/CXXFLAGS/LDFLAGS/LDLIBS/etc.
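
For instance, leaning on those built-in rules and variable idioms, a sketch this short builds a C program (file names hypothetical):

    CFLAGS = -O2 -Wall
    LDLIBS = -lm

    # The built-in rules compile prog.o from prog.c and link it into prog.
    prog: prog.o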


I've recently been lightly toasted by GNU Make's implicit "old-fashioned" suffix rules, where it was looking for YACC files as prerequisites for my own generated C code. The manual suggests a workaround that clears the implicit suffixes and also allows you to add back the ones you want. For me, clearing all of them with an empty ".SUFFIXES :" rule was exactly what I needed.

https://www.gnu.org/software/make/manual/html_node/Suffix-Ru...
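
Roughly, the workaround looks like this, re-adding only the suffixes the project actually uses:

    # Clear every built-in suffix rule...
    .SUFFIXES:
    # ...then re-enable only the ones we need.
    .SUFFIXES: .c .o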


If you think implicit yacc rules are "old-fashioned", let me introduce you to GNU Make's… implicit SCCS rules!


I think putting those rules in a standard lib you could include would be awesome. I would then be able to open up the file and check whether it was CPPFLAGS or CXXFLAGS, or whatever, and see how it's written. It'd be way more discoverable, AND it would make make much nicer to teach.

I have tried to teach make, but these implicit rules firing when I'm just trying to do a simple example can mask errors and create a lot of confusion.

Also, not looking at all the VCS versions of the file makes make run much faster, so being modular in what you imported would be a win for bigger systems too.


CPPFLAGS are meant to be passed to the C preprocessor, while CXXFLAGS are passed to the C++ compiler.


I wonder if anyone has ever generalized "make" so that it can operate on general dependencies and tactics rather than always on files? The sort of thing I mean is that you'd be able to write:

    url_exists(http://example.com/file): file
        rsync file example.com:/html
I've had a few attempts at this (most recently in 2013: https://people.redhat.com/~rjones/goaljobs/) but never really arrived at anything satisfactory.


Well, kind of.

I'm using make to build a content graph, part of that is http:// dependencies. The underlying concept is that you can define protocols (https would be one) and then teach my make pre-parser about building them. An example would be:

    myfile.js: https://code.jquery.com/jquery-3.4.1.js local.js
        uglifyjs $^ -o $@
The 'https://' is rewritten to map to a local cache directory (so it's little more than a glorified variable, but still more intuitive since protocols are so familiar from our use of the web), so the above code is actually something like

    myfile.js: cache/https/code.jquery.com/jquery-3.4.1.js local.js
        uglifyjs $^ -o $@
and that in turn invokes a rule that makes a HEAD request to the URL - if that is outdated against the local copy, that gets renewed. So, in regular make code, something like this:

    cache/https/%: cache/https/%.head
        curl -Ls https://$* > $@

    cache/https/%.head: cache/https/%.head.force
        rsync --checksum $< $@

    cache/https/%.head.force: FORCE
        curl -Ls -I https://$* > $@
I could see a way to turn this around, though. The concept of "protocols" is currently only GET in my setup, but I do plan on supporting the idea of POST, eventually. So what you're asking for is that, I guess.

I must say, though, that in my now three years of using (or abusing) make, "just make it another file" is surprisingly often the right next step. In the POST situation, it would simply be a local file with direct effects or side effects attached that do what you're expecting it to do.


> "just make it another file" is surprisingly often the right next step

This way it also often turns out that the "fetch"/"cache" part gets naturally separated from the rest, and that's super useful to track down any permanent or transitory issue.


Yes! It's basically a built-in stack trace! Permanently written straight to your disk!

It does get unwieldy after a certain size, but that's what I'm building tooling for.


Perhaps the part I'm missing is how you are rewriting the https:// URLs to the paths. Do you preprocess the Makefile?


Yeah, I'm using a preprocessor to generate the makefiles from what is basically blueprints.


We did this in build2[1]. Specifically, we have a notion of target type which may not be path- or mtime-based. For example, one such target type is alias (something that is achieved in make with PHONY targets). So you could define, say, a url target type. Then you can implement a rule that knows how to perform a certain operation on certain target types (we've also generalized the notion of operation, so in build2 test, install, and dist are operations, not targets).

[1] https://build2.org


This has also been a holy grail for me. The closest I've come to this are custom starlark rules in Bazel.


This. Bazel is as close as it gets.


There are literally hundreds of build tools and workflow engines that replace or extend some or all of Make. Many of them are domain-specific (especially to bioinformatics or machine learning), many of them are not.


Any links to any of these tools?


    $(CXX) -MM $(CPPFLAGS) $< | sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' > $@

My makefiles are beautiful... To me

And they're sure as sh!t not wrong, or they wouldn't work! :)


I ragequit as soon as he said not to use tabs. The article is invalid.


He is completely right about that, sadly.


Ah, yes, dial-up modems on a noisy line. I remember the 1990s like they were just yesterday.


Looks like your cat pressed a few keys when it jumped up onto your desk.


Another trick, create a `help` target:

    help:                      ## Show this help
        @fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##//'
    
    foo:         ## foo help
        # make help will print the names of the targets with 
        # the help message following ##
I don't remember the source; I think I saw it in a Stack Overflow question.


Experimental, but this exists: https://github.com/mrtazz/checkmake


My 2c on GNU Make is that you should not use it for new projects. There are better build systems, like Bazel, that are faster and produce consistent, reproducible results.

Make's lack of hermeticity means that it's easy to accidentally craft a Makefile where there are dependencies between targets that are not explicitly listed in the Makefile. When this happens, you can't be sure that when you change an input file and run `make`, all the dependent targets will be rebuilt. This means you often have to `make clean; make` to make sure that things get correctly rebuilt.
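As a minimal sketch of the failure mode (file and header names assumed): main.c includes a header, but the rule doesn't list it, so editing the header never triggers a rebuild:

    # main.c does `#include "config.h"`, but the rule only lists main.c,
    # so make considers main.o up to date even after config.h changes.
    main.o: main.c
        $(CC) -c main.c -o main.o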

Bazel also supports things like caching of artifacts so that when you build, switch branches, build, switch back, and build, it can re-use built artifacts from build #1 so that build #3 becomes a no-op. With remote caching, this can happen across computers and users; when user 1 builds at SHA 123, and then user 2 later checks out SHA 123 and does a build, user 2's build will simply copy the cached artifacts over the network and do no local work. (There is some operational overhead to maintaining this shared cache however).

That said, guides like this are really helpful for maintaining and extending existing make-based build systems!


Agreed that a pure Make approach is less than ideal for new software projects. It's great as an orchestration tool though. I use Make for all kinds of wrappers, because it gives built-in dependency ordering and tab-completion. Make is also a good backend for systems like CMake or Autotools.

I manage a couple of Yocto Linux-based firmware builds for my employer. Each project has a top-level Makefile that wraps the underlying build tools, so the build process is as easy as cloning the repo and running "make".

Make is also great for all of the post-build orchestration steps like "grab these files, rename this one, generate a little report, and make a tarball of the result". Steps that are each a little snippet of shell-script, and have some ordering dependency.
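A hedged sketch of what that can look like (every file name here is hypothetical) - each step is a file target, so Make works out the ordering and skips anything already up to date:

    release.tar.gz: out/firmware.bin out/report.txt
        tar czf $@ $^

    out/report.txt: out/firmware.bin
        sha256sum $< > $@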

A 'build.sh' type shell-script would work too, but wouldn't include all of Make's built-in features and dependency management. And you'd need to write your own argument handling if you wanted it to run specific requested targets. At which point - why not just use Make?

So I guess I really like Make for the role of "be the top-level glue around other build systems". It includes dependency-management and file-generation rules, works out parallel ordering where possible, and comes with tab-completion on every desktop Linux.

Plus every developer who uses the command-line understands that a Makefile means "stuff can be built from inside this folder". They also probably know that "make" or "make all" is likely to run the build, "make install" will probably install it, and "make clean" will probably clean the folder. These conventions have deep staying power. And builds that follow them can be used by a lot of engineers with no further training or documentation required.
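A tiny skeleton honoring those conventions might look like this (the program name is made up):

    PREFIX ?= /usr/local

    .PHONY: all install clean
    all: myprog

    myprog: main.o
        $(CC) -o $@ $^

    install: all
        install -m 755 myprog $(PREFIX)/bin/

    clean:
        rm -f myprog *.o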


I’ve never used Make in my life until a few weeks ago.

It’s surprisingly good for managing the build process for a web project that various build tools like Gulp, Grunt, etc. have been created to handle.

I finally ended up using npm scripts and it was easy to convert them into a Makefile.


Don’t agree at all.

Make is the standard and available everywhere; Bazel is not. Everyone can/should be able to debug Make. Bazel is yet another ad hoc solution in a problem space full of ad hoc solutions. Which is great if one needs the features that Bazel offers, but I dare say that the vast majority of projects out there do not and are better served by Make.


"Opinionated" sure is right.

I love make, but a lot of this advice is targeted at being able to write "better" shell scripts in make. I don't recommend it. If you find yourself writing a shell for loop, you probably want instead to build a list of targets. If you find yourself wanting complex shell variable preparation, you probably instead want target-specific variables. ONESHELL is a good way to accidentally build some invisible dependencies into a recipe, and make it difficult to use custom functions or canned recipes.
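For instance (a sketch with hypothetical file names, using ImageMagick's convert), a shell loop in a recipe can usually become a pattern rule over a computed target list:

    # Loop version: one opaque recipe, no per-file incrementality or -j
    thumbs-loop:
        for f in img/*.png; do convert "$$f" -resize 128x "thumbs/$${f#img/}"; done

    # Target-list version: make tracks, parallelizes, and resumes per file
    # (assumes the thumbs/ directory exists)
    THUMBS := $(patsubst img/%.png,thumbs/%.png,$(wildcard img/*.png))
    .PHONY: thumbs
    thumbs: $(THUMBS)
    thumbs/%.png: img/%.png
        convert $< -resize 128x $@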

If you do find yourself really wanting a shell, you'll probably also want "advanced" features like error traps, and you'll probably want to work with tools like shellcheck (imo critical for any shell script longer than one pipeline). Both are thwarted by baking the invocations into your make recipe. And the recipe still looks great - probably better! - if you extract any logic into a separate script (which then also opens up further possibilities, like including that tool itself in a shell pipeline in a make recipe).


I liked Make due to its perceived simplicity. Thanks to this article, I think I'll just use Rake next time.


For real simplicity I like ninjabuild.

I use it in cases where Makefiles would be a real mess to maintain, and in general where I want to parallelize a set of file-transforming jobs and have a nice progress indicator and the ability to cancel and continue.

I just have a nice PHP generator class for ninja files, and I build ninja files programmatically in a real programming language, based on whatever I like.

With this approach it's possible to fully parallelize builds across however many different toolchains and SDKs with relative ease, and without needing to deal with the quirks or limitations of the languages of various build systems or build tools that try to be semi-universal (like GNU make) but are not, or that have a lot of non-obvious magic integrated.

And I also love how ninja tracks changes in its own build rules and automatically rebuilds targets that would be affected, which is what I always disliked about makefiles. If you change a makefile rule, you basically have to do `make clean; make`.
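(In plain Make, the usual blunt workaround is to list the Makefile itself as a prerequisite - a sketch:)

    # Any edit to the Makefile rebuilds everything that lists it. Coarse,
    # but stale rules can't silently linger.
    out/foo.o: src/foo.c Makefile
        $(CC) -c $< -o $@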


I'll second you on ninjabuild.

I just wish https://github.com/ninja-build/ninja/pull/1578 would get merged, so I can compose ninjabuild repos seamlessly.


I like make because it can be as simple as you want it, yet powerful if you need it to be.


I hope I'm not carried away by the example, but I would definitely not build Docker images using make. A multi-stage Dockerfile can take care of building, testing, packaging, etc. a lot more predictably than any amount of make, by using known images of the tools involved (e.g., node, webpack, etc.).

Moreover, Docker's caching implicitly lets you declare dependencies with a lot fewer headaches than make. On the downside, it is all too easy to write something that busts Docker's cache early in the build process, rendering `docker build` super-expensive.

So my opinionated tooling: docker, git (to extract project versions and inject them into the image), bash (to glue everything together). Everything else belongs inside the Docker build.


I like to use a simple makefile that runs `docker build` and `docker push` commands, automatically tagging images based on the current solution version and git head. That means I can simply do `make build|rebuild|release`, and get images all properly tagged.
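A minimal sketch of that pattern (registry and image names hypothetical):

    IMAGE   := registry.example.com/myapp
    VERSION := $(shell git describe --tags --always)

    .PHONY: build release
    build:
        docker build -t $(IMAGE):$(VERSION) .

    release: build
        docker push $(IMAGE):$(VERSION)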

But yeah, all the actual work is in the Dockerfile.


I do just the same, but replacing docker with docker-compose. I like its YAML files for managing settings around Docker and plugging multiple containers together; you'd reinvent a lot of that by omitting it.


The problem with this:

    out/image-id: $(shell find src -type f)
is that if you delete a file from src (without also modifying or adding a file), make won't consider image-id to need rebuilding.
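One hedged workaround (a sketch; the manifest name is made up) is to also depend on a manifest of the file list that is rewritten only when the list changes, so deletions move a real prerequisite's mtime:

    # --iidfile writes the built image's ID to $@
    out/image-id: src-manifest.txt $(shell find src -type f)
        docker build --iidfile $@ src

    # Regenerate the candidate list on every run, but only touch the
    # manifest when the set of files (including deletions) changed.
    .PHONY: force
    force:
    src-manifest.txt: force
        find src -type f | sort > $@.tmp
        cmp -s $@.tmp $@ && rm $@.tmp || mv $@.tmp $@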


Another problem is that if one of the input files itself is generated, it might not get rebuilt after a "make clean", because the ultimate target no longer knows it's dependent on it.


Concerning creating directories:

  out/foo:
  > mkdir -p $(@D)
  > …
I prefer to use this pattern, which lets you have many fewer `mkdir -p …` lines in the make output when you use it a bunch of times:

  out/foo: | out
  > …

  out:
  > mkdir "$@"
(Here using the > recipe prefix from the article, though you won’t find me doing that in real life—I like my tabs.)



That looks to be a good article, explaining the reasoning behind the different alternatives. What I wrote matches your Solution 4—that pipe is indeed a subtle beast! I’ve definitely combined this with fancier patterns and second expansion as well.
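For the curious, combining that order-only directory prerequisite with pattern rules and second expansion might look like this sketch (paths hypothetical):

    .SECONDEXPANSION:

    # At second expansion, $$(@D) becomes the target's own directory,
    # giving each object an order-only prerequisite on its output dir.
    out/%.o: src/%.c | $$(@D)
        $(CC) -c $< -o $@

    out out/sub:
        mkdir -p "$@"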


So force GNU make and bash as shell?

No thanks.


This is so chock-full of bad advice like "use GNU make and bash" that it's obvious to me it was written by someone who only knows GNU/Linux and doesn't know UNIX (or much else). Blind leading the blind again. Don't write GNU-specific constructs unless you have no other choice; use ksh instead of bash; it's portable and a standard, and was so long before the GNU/Linux abomination. What nonsense I get to read...


If somebody is targeting inter-Unix builds, Autotools is probably a better pick than hand-rolling a Makefile.

But realistically, Linux has captured 75% or more of the entire *nix market. A lot of developers spend their entire careers in Linux these days, and that trend isn't likely to swing back towards other Unix flavors anytime soon.

As long as a developer understands what users will be using their Makefile, it shouldn't matter if they target platform-specific features. The BSD ports tree uses a bunch of BSD Make features that aren't in GNU Make. And that's totally fine, because they know their target users.

Like let's say I'm writing a Makefile that wraps tools which don't run on anything other than Linux. Why would I care about portability across Make implementations?

If a dev doesn't know their users' environments (other than being POSIXy), probably they shouldn't be writing raw Make of any flavor. And if they do know their users, then it isn't a big deal to rely on some platform-specific features.


I 120% recommend MAKEFLAGS += --no-builtin-rules

If I could rewrite history, I would make those builtin rules nicely and explicitly importable. No magic when I'm trying to teach make, please.
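For reference, the line (plus the companion flag that also drops the built-in variables) is just:

    # Opt out of make's built-in suffix rules and variables; every rule
    # in the Makefile must now be spelled out explicitly.
    MAKEFLAGS += --no-builtin-rules --no-builtin-variables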


I'm glad the author specified that this is mainly for GNU make, because I suspect that if I tried most of these using OpenBSD make I would have a bad time.


To be honest - I don't believe I've had to write a single Makefile since I started using CMake.

At most I've had to tinker with existing ones - in which case I wasn't motivated to make them more elegant so much as to replace them with CMake generation.


FYI, there's a patched Make providing (among other things) an interactive debugger:

https://github.com/rocky/remake


Remake is wonderful, but note that many bad makefiles invoke 'make' directly instead of using '$(MAKE)', so remake won't work properly until those makefiles are all corrected.


.ONESHELL is a huge help, and replacing the recipe tab is very interesting. Great post!


More derangements to undo when you want it to build on a different system.


I think I have decided that I'm done with Makefiles. They are very tempting, because they follow naturally from interactive exploration. You see yourself writing a command a lot, and think "I'll just paste that into a Makefile". Now you don't have to remember the command anymore.

But the problem is that building software is a lot more than just running a bunch of commands. The commands represent solutions to problems, but if the solutions aren't good enough, you just make more problems for yourself.

The biggest problems I've had with Makefile-based builds are incrementality and getting the right versions of dependencies to everyone using the repository. A project I did at work involved protos, and it was great when I was the only person working on it. I had a Makefile that generated them for Go and TypeScript (gRPC-Web), and usually an incremental edit to a proto file and a re-run of the Makefile resulted in an incremental update to the generated protos. Perfect.

Then other people started working on the project, and sometimes a simple proto change would regenerate all of the generated code. Sometimes the protos would compile, but not actually work. The problem was that there is a hidden dependency on the proto compiler, protoc-gen-(go|ts), and the language-specific proto API version that controls the output of the proto compilation process. Make has no real way to say "when I say protoc, I mean protoc from this .tar.gz file with SHA256:abc123def456..." You just kind of yolo it.

Yolo-ing it works fine for one person; even if your dev machine gets destroyed, you'll probably get it working again in a day or two. As soon as you have four people working on it, every hidden dependency destroys a day of productivity. I just don't think it's a good idea.

Meanwhile, you can see how well automated dependency management systems work. Things like npm and go modules pretty much always deliver the right version of dependencies to you. With go, the compiler even updates the project definition for you, so you don't even have to manage files. It just works. This is what we should be aiming for for everything.

I have also not had much luck with incremental builds in make. Some projects have a really good set of Makefiles, where an edit usually results in a correctly updated binary. Some don't! You make a change, try out your binary, and see that the build cached something that isn't cacheable. How do you debug it? Blow away the cache and wait 20 minutes for a full build. Correctness or speed: choose any one.

I had this problem all the time when I worked on a buildroot project, probably because I never understood what the build system was doing. "Oh yeah, just clean out those dep files." What even are the dep files? I never understood how to make it work for me, even after asking questions and getting little pieces of wisdom that seemed a lot like cargo-culting or religion. Nobody could ever point to "here's the function that computes the dependency graph" and "here's the function that schedules commands to use all your CPUs". The reason is... because it lives in many different modules that don't know about each other. (Some in make itself, some in makefiles, some in the jobs make runs... it's a mess.)

Meanwhile, I've also worked on projects that use a full build system that tracks every dependency required to build every input. You start it up, and it uses 300M of RAM to build a full graph. When it's done it maxes out all your CPUs until you have a binary. You change one file, and 100% of the time, it just builds what depended on that file. You run it in your CI environment and it builds and the tests pass, the first time.

I am really tired of not having that. I started using Bazel for all my personal projects that involve protocol buffers or have files in more than one language. The setup is intense - watching your CPU stress the neighborhood power grid as it builds the proto compiler from scratch is surprising - but once it starts working, it keeps working. There are no magic incantations. The SHA256 of everything you depend on is versioned in the repository. It works with traditional go tools like goimports and gopls. Someone can join your project and contribute code by installing only one piece of software and cloning your repository. It's the way of the future. Makefiles got us far, but I'm done. I am tired of debugging builds. I am tired of helping people install software. "bazel build ..." and get your work done.


If you're doing anything other than a unity build (single compilation unit) you're doing it wrong.



