Self-Documented Makefile (marmelab.com)
249 points by fzaninotto on Feb 29, 2016 | hide | past | favorite | 61 comments

As I used `make` more and more for our node projects, I missed the clean outputs `grunt` or `gulp` provide.

To fix that, I created `make2tap`: https://www.npmjs.com/package/make2tap

This small utility takes `make` output and generates `TAP` output that you can pipe to any `TAP` reporter.

Our current `make build | make2tap | tap-format-spec` looks like: http://i.imgur.com/chs0Jf3.png

Beautiful, just beautiful.

They should really be using bash for this, not make. There is nothing wrong with bash scripts calling Make -- for building with DEPENDENCIES. But when you aren't doing that, just use bash (because Make is actually Make + bash to begin with).

This is dumb:

    restart-frontend: ## Restart the frontend and admin apps in dev
        @make stop-frontend-dev && make run-frontend-dev
        @echo "Frontend app restarted"
Write it with a shell script like this:

    stop-frontend-dev() {
      :  # stop the dev server here
    }

    run-frontend-dev() {
      :  # start the dev server here
    }

    restart-frontend() {
      stop-frontend-dev && run-frontend-dev
      echo "Frontend app restarted"
    }

    build() {
      make  # this actually does stuff you can't do in bash.
    }

    "$@"  # call function $1 with args $2...  Can also print help here.
This is a lot cleaner. The PID stuff can be done with bash too.

This is only cleaner in your simple case. Make's power lies in dependency tracking and its declarative approach to defining those dependencies.

When you use make anyway, adding those targets there is the logical thing to do. One interface instead of two. It's even fewer lines of code than your proposed solution (which btw does not fail hard like make does, so a recipe for disaster).

Finally: Pretty please, sh, not bash. Almost none of the bash-scripts out there use actual bash-features and those that do can usually easily be rewritten to just rely on a plain posix shell.

My point is precisely that the script is not using "Make's power in dependency tracking". It's just running commands. If you need Make, use Make; otherwise use shell. It's far from unusual to have a Makefile and some shell scripts at the top level of a project.

Make's semantics are to invoke the shell at EVERY LINE. This is slow and makes quoting a nightmare (try quoting find or sed in bash in Make correctly; you cannot directly invoke them from Make). Conversely, bash's semantics do not involve make whatsoever :)
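The per-line shell behavior is easy to see for yourself. A minimal sketch (assuming GNU make is installed; the makefile is generated with printf so the recipe lines get real tabs, and /tmp/lines.mk is just an illustrative path):

```shell
# Each recipe line runs in its own shell, so a variable set on one
# line has vanished by the next.
printf 'demo:\n\t@X=hello\n\t@echo "X is [$$X]"\n' > /tmp/lines.mk
make -f /tmp/lines.mk demo   # prints: X is []
```

(GNU make's .ONESHELL feature changes this, but by default every line is a separate shell invocation, and `$` must be doubled to reach the shell at all.)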

Shell also has local variables for modularity, and better constructs for conditionals and iterations. And functions. Make has a poor man's implementation of Turing-complete constructs compared to shell (which is a little ironic since shell is already sort of a poor man's procedural language).
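For illustration, a minimal sketch of those locals and functions (hypothetical names; `local` is supported by bash and most /bin/sh implementations, though it isn't strictly POSIX):

```shell
greet() {
  local name="$1"   # scoped to this function only
  echo "hello, $name"
}

name=outer
greet world          # prints: hello, world
echo "$name"         # still prints: outer
```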

'set -o errexit' makes bash fail on errors. Also, as mentioned, the Makefile isn't using phony targets. Neither shell nor Make is perfect, but shell is the more appropriate tool when you're not making use of Make's incremental build features.

Another pattern I find useful is to have the actions take more arguments, like:

    ./run.sh dev-frontend --flag-for-testing
which is implemented just like this:

    dev-frontend() {
      ./frontend.py --port 8080 --other-default-flag "$@"
    }
The arguments to Make are actually TARGETS/files and not functions, so I don't think you can do this.

> Make's semantics are to invoke the shell at EVERY LINE. This is slow [...]

I assume you mean commands here? Actually, how slow is it? Ever measured it? I am sure me writing this line took more time than they could save by a rewrite over the course of a few years of using the performance optimized version.

> Shell also has local variables for modularity, and better constructs for conditionals and iterations

And variables in make that you set for a command are only visible to it. Besides - wasn't your point that the OP should not use make because s/he does not use its features?

> 'set -o errexit' makes bash fail on errors

Indeed it does, it was just missing from your better version.

> Another pattern I find useful is to have the actions take more arguments [...] The arguments to Make are actually TARGETS/files and not functions, so I don't think you can do this.

Yes, one communicates with make through variables, not through arguments.
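For example (a sketch assuming GNU make; the variable name PORT and the path are made up):

```shell
# Parameters go to make as VAR=value assignments, not positional args.
printf 'run:\n\t@echo "starting on port $(PORT)"\n' > /tmp/var.mk
make -f /tmp/var.mk run PORT=8080   # prints: starting on port 8080
```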

Over the last 20 or so years I've done my fair share of shell scripting and of writing makefiles. Maybe I am just too old and boring to get excited over this, but let me recap: The OP happens to be using make, s/he used an elegant hack to have the file document itself and its usage. S/He wrote about the latter. No, I am not impressed by the post - I used a similar technique in 2003 or 2004 to bootstrap the documentation of a very complex build system - and I'm sure others have before and after me. But I like the concise article.

Debating whether make is "better" than "sh" is completely off-topic here - if there was a universal truth we'd not have seen tools like psh, scsh and the rest of language-specific shell substitutes nor would we have the myriad of language-specific build tools. Back then, when I was young and naive, I've implemented a few of those myself. But now I am not young anymore. And now I naively believe that people should use what they are most productive with. Or everybody come to their senses and just use lisp.

For makefiles, even GNU make executes commands using /bin/sh (unless you set $(SHELL) explicitly); trouble is, many Linux systems use bash for /bin/sh, and its sh emulation is lacking.

Before fig, Docker repositories often had awkward arguments to build or run a container, and that's where I first saw people using a Makefile as basically a bash script file. I copied it and have used it a hundred times since; it feels a lot easier than the above bash script approach, though for reasons I can't quite put into words right now.

It also turns out that dependencies creep up pretty often, so shortly after we started abusing the Makefile, we also started using it "right" too.

Examples:

* https://github.com/dergachev/screengif/blob/master/Makefile
* https://github.com/dergachev/drupal-docker-marriage/blob/mas...

Nope, to me it isn't cleaner. Also, make has "make -n" which is absolutely golden.

It's not about which script file is considered to be "cleaner", aesthetic preferences don't matter here.

The Makefile here almost certainly has some unexpected side effects when used in a way the original author didn't intend. E.g. accidentally removing your PID files (e.g. with git clean), and then you have more than one dev server running.

Make isn't intended for starting/stopping services or doing non-build tasks. It really sucks for those kinds of tasks. It might work OK for the handful of workflows the author intended it for, but then spectacularly break down when someone else tries to do something outside of that or when new features get added.

Sometimes I see people use Make for situations where some build platform (e.g. Windows or an embedded toolchain) doesn't have a proper shell at hand. But it's still not a great idea to use Makefiles as a replacement for shell scripts.

There are much better alternatives for doing non-build tasks.

What problems would there be doing things from GNU make, that there wouldn't be when doing the same thing from a shell script?

My experience has been that you can totally use GNU make as a replacement for shell scripts, and it works fine. Why wouldn't it? It's just sequences of shell commands, where each sequence has a name.

And in fact for this sort of situation, where you've got one of a pre-canned set of sequences to run, it works very well - because GNU make will handle the command line for you, and most shells will give you some kind of basic (but usable) tab completion for the sequences available, without any effort on your part.

You do need to use phony targets, though. (https://www.gnu.org/software/make/manual/html_node/Phony-Tar...) This might seem like a ridiculous requirement, but I've seen plenty of bash scripts that just kept on going when a shell command failed; it's not like GNU make is unique in requiring a small amount of boilerplate to make things work nicely.
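The failure mode that .PHONY prevents is easy to reproduce (a sketch, GNU make assumed; file names are illustrative):

```shell
# A stray file named like a target silently shadows it.
cd "$(mktemp -d)"
printf 'help:\n\t@echo usage\n' > shadow.mk
touch help                  # a file named 'help' now exists
make -f shadow.mk help      # "make: 'help' is up to date." -- recipe skipped

printf '.PHONY: help\nhelp:\n\t@echo usage\n' > shadow.mk
make -f shadow.mk help      # prints: usage
```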

I was replying to this line

> This is a lot cleaner. The PID stuff can be done with bash too.

In any case, until there is a tool which is as ubiquitous and as declarative as make, you're going to have a hard time convincing those of us who use it this way to stop.

It is... It's always better to write code that shows the intent. When you add comments you force other devs to maintain them, and if they don't, you will end up with obsolete or incorrect comments.

Plus, you end up doing stupid stuff like:

DoThat(); // It does that

Bit of a tangent, but in a fit of snark a few years ago I wrote a bash "task runner" in the vein of Grunt and Make:


Not very useful, but it kept me amused for a weekend at least.

But why does it matter? With your solution they'll have two "task files" to manage instead of one.

Because make is for managing dependencies, which it tracks by files. As soon as you need to do something that doesn't leave files around, such as starting a new service, you need to plug that hole by using "touch" and leaving marker files around. This example doesn't do that, so it may leave the system inconsistent and make the makefile do things it shouldn't, causing errors that are difficult to diagnose. Beyond that, you run into the problem of making make serve as a service manager, which it isn't, and you will most likely fail at that.

Use Make when you need to track dependencies based on files, such as for build systems whose output is files, and bash/other script for managing a hosts services which are/may be required for the build system.

I like the ideas here, but for long-running processes like file watching, dev servers, hot reloading, etc. a better format is Procfile (https://devcenter.heroku.com/articles/procfile). The ideas from this article could be nicely applied to it.

Procfile is a format that declares a named list of processes to be run that can be controlled by tools like Foreman (http://ddollar.github.io/foreman/) and Honcho (https://pypi.python.org/pypi/honcho). The advantage is being able to start and stop them concurrently as a group, useful for things that otherwise take a tmux session or multiple windows/tabs, like dev server + file watching + live reload: they become a simple `foreman start`. Processes can also be started individually. Procfiles can also be exported to other formats, like systemd, upstart, inittab, etc.

Here's an example Procfile from a web project I've been working on. Since it uses node I went with node tools like http-server and watch, but it could just as easily use any other web server or file watcher. The way it works is it starts a web server serving public/; starts a live reload server for public/; and watches the src/ directory for changes and re-runs make. The makefile has a few rules for compiling JS and CSS from src/ to public/.

    web: ./node_modules/.bin/http-server
    livereload: ./node_modules/.bin/livereload public
    watch: ./node_modules/.bin/watch make src

An important point of the article was to use standard tools installed everywhere and not some obscure niche tools. Please note that you yourself felt the need to explain what “Procfile” is in the first place.

I wouldn't consider the industry backing of Heroku and a collective 4,500+ GitHub stars between the tools particularly niche or obscure. Crank it up to ~20k stars if you want to count deploy tools that can use them, like Dokku or Flynn. Anyway, the other great thing about Procfile is that it's just a declarative format. Lots of tools can leverage them, and they are an important part of my and many others' workflows.

I also don't think the article is too keen on "standards", judging by it referring to make as a "task launcher" and the suggested usage completely diverging from the expected behavior of the program.

Is it even packaged for Debian/Ubuntu yet? Make has been there for decades and is available on practically every developer machine out there. Compared to Make, nearly every tool is niche and obscure.

> but for long-running processes like file watching, dev servers, hot reloading, etc.

I don't think anybody should use make to do that in the first place. That's not what make was built for. Likewise, Foreman should not be used as a build tool, because it is not one.


Now that I've seen the makefile in the example, I understand your comment. This is absolutely not where one wants to use make; that's just ridiculous.

Wouldn't "it's not appropriate for the task" be a better reason not to use something than "it's not made for the task"? Don't you have any better reasons at all? Does make bring out people's conservative side or something?

Let me ask you this. Would you sit on a tree stump? How about kill a fly with a newspaper? Sometimes things are great for purposes for which they weren't originally intended.

> Does make bring out people's conservative side or something?

Misuse of tools in software development is why we end up with broken software, useless solutions that solve stupid problems because the problem wasn't well understood in the first place, and first and foremost unnecessary dependencies. That's why we end up with this makefile "hack".

Now explain what that's got to do with "conservatism". Bad practices != innovation.

Do you really believe any use of a tool in a way that wasn't intended is a "bad practice"? Is there no more subtlety or thought to it than that? This adherence to an ultra-simplistic black-and-white rule is absolutely a form of conservatism.

If you think this particular use of make is a "bad practice", then argue why it is! If there's no better reason than "This use isn't as intended!" then your opinion won't have much weight with people.

The example isn't a makefile, it's a procfile.

I'm talking about the link, not the op.

>Wouldn’t it be better if we could just type make, and get a list of available commands, together with their description?

No, Jesus Christ, please don't. Preserve default action as being to build.

Good user interface and good user experience relies on meeting expectations. This behavior breaks those expectations. What if your expectations are different, you ask? In environments where there is an established tradition I think it is rude to break with the norm unless there is a compelling reason to do so. The commandline is popular among developers, other IT professionals and power users because of how efficient it is. It is efficient because there is not all the noise, handholding and other bullsh*t. Please let us keep it that way.

Use a specific target to list other targets. I've seen some people use "make targets".

Bah. What's the harm here? It's not like you'll type make and then not be able to figure out what to do next. The problem you're trying to avoid when preserving default behavior is confusing the user, and that's not applicable here.

Instead of blindly following laws like "Always meet user expectations", we should think about things on a case-by-case basis. Your law is at best a rule of thumb.

There are few things more infuriating than a build tool that prompts for user input when it doesn't need to. Most people like to automate these things, and having a prompt ask for input after typing `make` is completely unexpected and annoying.

Bower is a particularly bad offender here. By default it will periodically ask you if it can send anonymous usage statistics. If you've never experienced it before, you have to angrily google the problem once piping `yes` to it fails and find a github issue where the solution is `bower install --config.interactive=false`

Always assume your build tool is running in an automated environment without user input. If the user wants to do something out of the ordinary, provide command line options and try to follow the conventions that already exist.

I agree 100% that a good build tool makes it easy to build in an automated context. But that's not relevant here, is it? make build is not less automatable than make. Sure, someone has to manually run make once to figure out what the actual build command is. But no person in his right mind would automate a build without running it manually once. I say this as someone whose job it used to be to automate builds.

Yeah you're right, you'd run it at least once locally and discover what the right command is. I guess I'm just letting my feelings for Bower permeate to other build tools, because sometimes it'll run without user input, and other times it will prompt for it.

It's not that hard to use the tty command to see if it's an interactive shell and set the config appropriately. Bower should do that automatically.
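The check itself is one line. A sketch of what such a default could look like, using the POSIX `[ -t 0 ]` test (which is effectively what `tty -s` gives you):

```shell
# Only prompt when stdin is actually a terminal.
if [ -t 0 ]; then
  echo "interactive: OK to prompt the user"
else
  echo "non-interactive: use defaults, never prompt"
fi
```

Piping anything into the script, or running it from CI, takes the second branch automatically.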

But this is yet another anti-pattern: Having different behaviour depending on whether it's an interactive shell or not.

A big offender here is curl, where this affects the verbosity during download. This is especially frustrating with regard to automation.

1. You type by hand, everything works.

2. You copy&paste into the command line, and everything works.

3. You run it from a script, and it behaves differently.

One reason to leave it: It saves some writing / debugging for people doing system packaging. In debian the package build file (debian/rules) is reduced to two lines these days - as long as you don't mess with the standards. "configure && make && make install" is standard - if you go outside of that, someone will have to do extra work.

One thing I don't quite know yet: how does 'make install' interact with package managers? Does it bypass them?

Depends on what you mean by "interact". When you run it yourself, package managers don't matter. You're not interacting with them in any way (assuming it's a standard autotools/cmake/qmake/...-generated Makefile).

If you mean during packaging, then usually the code is configured with the actual destination path (configure --prefix=/usr), but then installed into a directory which will be packaged by overriding destination (make install DESTDIR=/tmp/some/path)

If you're talking about your local machine, it almost always does (and they rarely include `make uninstall` which is really annoying). As for creating packages, the way OBS (and I assume other packaging systems) work is that you `make install` into a temporary directory then figure out where things were installed and package those files.

"make help", as the Linux kernel does.

I agree, don't break the normal expectations, but they could've done much, much worse.

This is basically how I've been doing for quite some time now. `make ?` Check it out here: https://github.com/gtramontina/sourcing/blob/master/Makefile...

If your users set failglob in their shells, they will have to type the less convenient "make '?'" or "make \?".

And regardless of whether they do or don't, if they are in a directory with a single character file name, they will have to escape it too...

I usually escape regardless... But you made a good point about the single-character file name. The target name can be anything that makes sense, anyway... `make help`, as suggested previously.

Then don't set the default target and keep using it as `make help`.

Or better, change the completion to show the description there when typing "make <tab>"

You could always alias make to the default you like. But if the newbie can't "make" it work, he/she gets frustrated.

I really wish dd had this type of caution. It would save me a lot of misery, preventing the loss of numerous drives.

Not sure it's a good habit for newbies to run arbitrary commands to see what they do. Asking for help somewhere (e.g. a `--help` flag) looks like a much better practice.

"You could always alias make to the default you like"

That breaks down when you work with multiple projects.

For example, try writing a script that downloads and builds a couple of projects. You would have to parse each makefile and hope you can discover the right target to build.

"all" is the default target to build in GNU autotools projects.

    make all && sudo make install
"all" is also recommended default target in GNU make documentation:

Phony targets can have prerequisites. When one directory contains multiple programs, it is most convenient to describe all of the programs in one makefile ./Makefile. Since the target remade by default will be the first one in the makefile, it is common to make this a phony target named ‘all’ and give it, as prerequisites, all the individual programs.
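A minimal sketch of that convention (GNU make assumed; the target names are made up, and the makefile is written via printf to get real tabs):

```shell
# 'all' is the first target, so plain `make` builds every program.
printf '.PHONY: all hello goodbye\nall: hello goodbye\nhello:\n\t@echo hello\ngoodbye:\n\t@echo goodbye\n' > /tmp/all.mk
make -f /tmp/all.mk          # prints: hello, then goodbye
```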

An alternative way that just uses GNU make functions: http://www.cmcrossroads.com/article/self-documenting-makefil...

Thanks for sharing! This is a great way to document, and to see documentation for, the main targets in a Makefile.

We use a Makefile the same way, to execute project-related tasks like deployments and running development environments. This will help even more to show the main targets from a Makefile in an easy and pretty standard way. We'll be taking it into use.

You can achieve something similar by writing bash scripts, but they will be mostly custom, and others will need to learn how to use and extend them. A Makefile gives you a standard way of writing small utilities related to your project, and almost everybody knows how a Makefile works; those who don't can learn from existing documentation.
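A stripped-down sketch of the article's pattern, to make the mechanism concrete (GNU make and GNU grep assumed; paths and target names are illustrative). Make treats everything after `#` on a target line as a comment, while grep reads the raw file, so the `## ` descriptions are invisible to make yet harvestable for help output:

```shell
printf 'help:\n\t@grep -E "^[a-zA-Z0-9_-]+:.*## " $(MAKEFILE_LIST) | sort\nbuild: ## Build the app\n\t@echo building\ntest: ## Run the tests\n\t@echo testing\n' > /tmp/doc.mk
make -f /tmp/doc.mk help
# prints:
#   build: ## Build the app
#   test: ## Run the tests
```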

Good idea, but target names might contain numbers as well, so you should adjust the regular expression used:

    @grep -P '^[a-zA-Z0-9_-]+:.*?## .*$$' $(MAKEFILE_LIST) | ...

Shorter, more fail-proof help target:

    	@awk -F ':|##' \
            '/^[^\t].+?:.*?##/ {\
                printf "\033[36m%-30s\033[0m %s\n", $$1, $$NF \
             }' $(MAKEFILE_LIST)
... but I agree, breaking expectations is somewhat bad. Also, many shells have completion for Makefiles nowadays, though that won't get you the additional help text.

You probably want to be using phony targets if your Makefile consists of stuff like this. See: https://www.gnu.org/software/make/manual/html_node/Phony-Tar...

I've been using a similar self-documenting technique myself for a while now, too. Although, my version preserved the traditional part where just calling `make` starts building the program and also supported short one line descriptions and longer ones.

Slightly OT: I like how Rake handles this, which is what gave me the idea in the first place

Instead of using `awk` to break lines you could just use `fmt`, which is part of the GNU coreutils.

Change `grep -P` to `grep -E` (or simply `egrep`) and it also works on OS X.

Yep, that fixes the OSX problem. Post updated with this version.

Here is a version with no dependencies on grep or awk, just sed. I haven't tested it on OS X yet.

    	@eval $$(sed -r -n 's/^([a-zA-Z0-9_-]+):.*?## (.*)$$/printf "\\033[36m%-30s\\033[0m %s\\n" "\1" "\2" ;/; ta; b; :a p' $(MAKEFILE_LIST) | sort)

I don't have an OS X box, but I do know that you'll need to change the `-r` to `-E` (GNU sed vs BSD sed). Recent versions of GNU sed (4.2+, I think) also accept `-E` for compatibility with BSD sed (though this is undocumented).

I do this in my bash scripts, if anyone wants to see how.


But then you have lost the default target of make and instead of make && make install you end up with make build && make install. That's going to break a brain or two when people try to figure out why their default MO doesn't work.
