The Unreasonable Effectiveness of Makefiles (matt-rickard.com)
256 points by srijan4 on Aug 12, 2022 | 210 comments



My most horrible abuse of `make` was to write a batch job runner.

Most of the targets in the Makefile had a command to kick off the job and wait for it to finish (this was accomplished with a Python script since kicking off a job involved telling another application to run the job) followed by a `touch $@` so that make would know which jobs it had successfully run. If a process had dependencies these were declared as you'd expect.

The other targets in the Makefile lashed those together into groups of processes, all the way up to individual days and times. So "monday-9pm" might run "daily-batch", "daily-batch" would have "daily-batch-part-1" (etc), and each "daily-batch-part-..." would list individual jobs.

It was awful. It still is awful because it works so well that there's been no need to replace it. I keep having dreams of replacing it, but like they say there's nothing more permanent than a temporary solution.

All of this was inspired by someone who replaced the rc scripts in their init system with a Makefile in order to allow processes to start in parallel while keeping the dependencies in the right order.


Congrats! You invented Apache Airflow: https://airflow.apache.org


With the added bonus of not having to learn or maintain Apache Airflow!


Interesting:

> Airflow was started in October 2014 by Maxime Beauchemin at Airbnb. It was open source from the very first commit and officially brought under the Airbnb GitHub and announced in June 2015.

I believe I started building my tool somewhere around 2010, possibly 2011. The core mechanism has been completely unchanged in that time. If Airflow had been a thing at the time, I'd hopefully have looked into it. I looked at a handful of similar products and didn't find anything that was a good fit.

Based on a really quick skim of the Airflow docs it seems like it checks all of the boxes. Off the top of my head:

* LocalExecutor (with some degree of parallelism, assuming the dependencies are all declared properly) seems to do exactly what I want.

* I could write an Operator to handle the interaction with the system where the processes actually run. The existing Python script that does this interaction can probably get me 90% of the way there. Due to the nature of what I'm running, any job scheduler will have to tell the target system to do a thing then poll it to wait for the thing to be done. To do this without any custom code, I could just use BashOperator to call my existing script.

* It's written in Python, so the barrier to entry (for me) is fairly low.

* Converting the existing Makefile to an Airflow DAG is likely something that can be done automatically. We deliberately keep the Makefile very consistent, so a conversion program can take advantage of that.

I think my dream of replacing this might have new life!


But... why would you want to spend energy replacing something that has been running stable for over a decade?


There are a number of deficiencies with the current system that aren't showstoppers, but are pain points nonetheless. Off the top of my head:

* There's no reasonable way to do cross-batch dependencies (e.g., if process X in batch A fails, don't run process Y in batch B). I've got a few ideas on how I could add this in, but nothing has been implemented yet.

* There's no easy way to visualize what's going on. Airflow has a Gantt view that looks very useful for this purpose, our business users would absolutely LOVE the task duration graph, and the visualization of the DAG looks really helpful too.

* Continuing a failed batch is a very manual process.

None of these are showstoppers because, as you said, this has been running fine for over a decade. These are mostly quality-of-life improvements.


Ah, I understand. That makes sense. If you have business users, then it makes sense to go with something like Airflow because they do make it easier for less technical users to inspect jobs, kick them off, visualize them, etc. The UI makes all the difference for those use cases.


But airflow is an abomination... that I am forced to use at my current job.


It says "Apache" right there on the tin.


What don't you like about it?


> My most horrible abuse of `make` was to (...)

Heh. The text that follows this sentence is likely the most beautiful and elegant use of a Makefile ever.

I love the humble bragging of this site.


> All of this was inspired by someone who replaced the rc scripts in their init system with a Makefile in order to allow processes to start in parallel while keeping the dependencies in the right order.

Sometimes the most interesting thing is not the story itself, but the story behind the story.

This has my interest peaked. Is there anywhere else I can read about this?


This is just the “job server” which you can read about in the Gnu make documentation, which is excellent.

https://www.gnu.org/software/make/manual/html_node/Job-Slots...

Basically you can specify a “-j 24” (e.g.) option to make, and it will run as many as 24 build steps in parallel. If your Makefile is correct, that’s all you need.

Because make knows the dependency graph, it can correctly handle cases where some build steps have to be done serially, while others can be fully parallelized. E.g.,

  a: b;
  b: c;

In which builds of b and c are serial, versus

  x: y;
  x: z;

for which builds of y and z are parallel.

It’s quite a powerful capability and it feels great to see 24 build-steps start in parallel from one simple command.


This was another thing that attracted me to `make` for this task. I figured that, as long as the dependencies were all declared properly, I should be able to run multiple jobs in parallel.

I didn't pursue this very far as there were other problems with doing this, but I'd like to pursue it again. The problems are all with the target system; `make` handles the parallel execution perfectly.


Just a friendly tip that it’s _piqued_ :)


You never know, maybe it is the peak of their interest and it's all downhill from here


Which, by the way, means "sharpened". Probably "peak" has the same root.


pique comes from (Vulgar) Latin piccare, which means "prick with a sword". The route to English is via French.

peak comes from Old English pīc, meaning just "peak" (e.g. of a mountain).

The two are completely different words that just sound similar.

pique may possibly go back to Proto-Germanic, and peak does, but the two go back to two separate words (*pīkaz, *pikkāre) though both are then related to sharp things and possibly onomatopoeic.


It took a while, but I finally found it!

https://web.archive.org/web/20110606144530/http://www.ibm.co...

There’s a reference to some sample Makefiles to start and stop some Linux services in parallel. It’s obviously not complete, but this (or something similar) was what inspired my system.


> All of this was inspired by someone who replaced the rc scripts in their init system with a Makefile in order to allow processes to start in parallel while keeping the dependencies in the right order.

Any sufficiently complicated init system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of systemd.


including, most notably, systemd.


That's beautiful, not awful


I did the same. Parallelization thanks to GNU make's `-j`, recoverability (only rerun the failed steps, not from scratch). If you use the remake fork of GNU make, you also get debugging and profiling for free.


My most horrible abuse of make was a distributed CI where I put a wrapper in the MAKE env var so that recursive make executions would invoke my wrapper which would enqueue jobs for remote workers to pick up


Slightly tangential but I've worked for several companies now that use `make` as a simple command runner, and I have to say it's been a boon.

Being able to drop into any repo at work and expect that `make init`, `make test` and `make start` will by convention always work no matter what the underlying language or technology is, has saved me a lot of time.
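
A minimal sketch of that convention (the commands behind each target are placeholders; every repo wires them up to its own tooling, and in the real file the recipe lines are tab-indented):

    .PHONY: init test start

    init:
        ./scripts/bootstrap.sh

    test:
        ./scripts/run-tests.sh

    start:
        docker compose up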


I've worked on a few projects that apply this pattern of using a Makefile to define and run imperative commands. A few people develop the pattern independently, then it gets proliferated through the company as part of the boilerplate into new repositories. It's not a terrible pattern, it's just a bit strange.

For many junior colleagues, this pattern is the first time they've ever encountered make -- hijacked as some kind of imperative command runner.

It's quite rare to run into someone who is aware that make can be used to define rules for producing files from other files.

I find it all a bit odd. Of course, no-one is born knowing about middle-aged build tools.


> It's quite rare to run into someone who is aware that make can be used to define rules for producing files from other files.

Is it, though?

That's literally what Make does as part of its happy path.

GNU Make even added support for pattern rules, as this use case is so pervasive.

What do you think people think make is about?


oh i agree, that's why i find the situation odd!

i'm talking working on projects with people whose first encounter with make is in a project where someone else has defined a Makefile to wrap imperative actions, e.g. `make run-unit-tests`, `make deploy`. If they think about make at all, there's a good chance they think make is for performing imperative actions, and has nothing specifically to do with producing files from other files using rules and a dependency graph, or the idea of a target being a file, or a target being out of date.


This is what I do for all non-rust projects. I knew what it was supposed to do, but wow if it took me forever to figure out how to do it (the connection between rule name and file name is really poorly documented in tutorials, probably should've just read the man page)


Yeah, I did this too. It's not that surprising considering that typically end-users only interact with phony targets (all, clean, install, etc).


This was the nicest thing about blaze at google. I'm a big believer that having a single standard tool for things is a huge value add, regardless of what the tool is. I didn't really like blaze particularly, and I don't really like make particularly, but it's amazing to just have a single standard that everybody uses, no matter what it is.


Rust's Cargo has the same appeal. There are 90,000 libraries that support cargo build/doc/run/test without fuss.


No, I believe GP is advocating against language-specific tools being the standard. In the ideal world you would have a Makefile that calls cargo so "make" still works like it does identically in the js or python or golang repos.


yup, that's the appeal - in a multi-language environment you get a common call, not having to futz with cargo run, npm run start, python3 -m something, etc


Conventions are great, but that doesn't look like anything specific to make, a shell wrapper could do that:

    #!/bin/sh
    
    case "$1" in
        init)
            # ... do whatever for each project init
        ;;
        start)
            # ... do whatever for each project start
        ;;
        test)
            # ... do whatever for each project tests
        ;;
        *)
            echo "Usage: $0 init|start|test" >&2
            exit 1
        ;;
    esac

In my home/personal projects I use a similar convention (clean, deploy, update, start, stop, test...); I call those little sh scripts in the root of the repo "runme".

The advantage could be, maybe, no need to install make if not present, and no need to learn make stuff if you don't know it.

Sometimes they don't match the usual words (deploy, start, stop, etc) but then I know that if I don't remember them, I just type ./runme and get the help.

For my scenario, it's perfect because of its simplicity.


A shell wrapper could do that, but Makefiles are a DSL to do exactly that with less boilerplate.


And have a nice inbuilt graph runner if you decide one task depends on another...


Complete with automatic parallelization if you ask for it! And automatic KEY=VALUE command line parsing, default echoing of commands (easily silenced), default barf on subprocess failure (easily bypassed). The variable system also interacts reasonably sensibly with the environment.
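
A tiny illustration of those defaults (target and variable names made up): `make serve PORT=9090` overrides the variable from the command line, a leading `@` silences the echoing of that line, and a leading `-` carries on past a failing command.

    # overridable from the command line: make serve PORT=9090
    PORT := 8080

    serve:
        @echo "serving on port $(PORT)"
        -rm stale.lock
        python3 -m http.server $(PORT)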

I've never rated Make for building C programs, but it's pretty good as a convenient cross-platform shell-agnostic task runner. There are also several minimal-dependency builds for Windows, that mean you can just add the exe to your repo and forget about it.


To tell the truth, make sucks incredibly for building modern C programs. There are just too many targets. It's why all of them generate their makefile with some abomination.

But it is still a great task runner.


Tbf, that particularity is easily achieved in shell scripts too:

    task1() {
        echo hello
    }

    task2() {
        task1
        echo world
    }

    "$@"


But now update it to not re-run tasks unnecessarily - and it's already wordier than the Makefile version as it is.

Meanwhile, in Make that's

    task1:
        echo hello

    task2: task1
        echo world


True, that's where Make shines. Though given the popularity of so many Make alternatives (the strictly-command-runner variety, like just[1]) which keep its syntax but not this mechanism, I wonder if unnecessarily re-running dependencies is really a big deal for a command runner. Quite often the tasks are simple and idempotent anyway, and then it's a bit of a hassle to artificially back the target with a dummy file in Make (which your example doesn't do here, for example).

[1] https://github.com/casey/just


> I wonder if for command runner unnecessarily re-running dependencies is really a big deal.

I've used it in the past with python/django roughly like so (untested from memory, there may be a "last modified" bug in here that still makes something run unnecessarily):

  .PHONY: runserver

  environ:
    python -m venv $@

  environ/lib: requirements | environ
    . environ/bin/activate && pip install -r $<
    touch $@

  runserver: | environ/lib
    . environ/bin/activate && python manage.py runserver [foo]

Setting up these prerequisites takes a while and doing it every time you start the dev server would be a pain, but not doing it and forgetting when requirements were updated is also a pain. This would handle both.


What is the | doing in the dependencies?


Specifies order without a hard more-recent-than dependency on everything after the |. So if the timestamp on "environ" is updated, that won't cause the "environ/lib" recipe to run, but if they both have to run then it ensures "environ" happens before "environ/lib".

It might not be necessary for this example, but I've found being more liberal with this feature and manually using "touch" has been more reliable in stopping unnecessary re-runs when the recipe target and dependency are directories instead of just files.


But now put .PHONY everywhere.

I don't think Makefiles are a bad way to go but a bash script is likely more accessible and easily reasoned about in most places.


make gives you autocomplete more easily for free. One reason I use it always.


+1 for this, free autocomplete is the reason that I love using Make as the top-level tool, even if the actual heavy lifting is done by CMake or something.


You can make "make init" work on Windows and Unix if you work at it, out of the same Makefile.

The above won't.


This is standard in the Node.js ecosystem and I love it. Each package has scripts in package.json that you can run with npm run [name], and some of these, like start, test or build (and more), are standardized. It's a really great DX.


But it's npm, so when you switch to a java project, for example, you have different commands.


Quite a few companies I've contracted with have lifted the pattern up into Bazel or Gnu Make - for node projects `make lint` can be a pass through.

In the project repo, either work.


I did this for a while but make isn't well suited for this use case. What I ended up doing is have a shell script with a bunch of functions in it. Functions can automatically become a callable command (with a way to make private functions if you want) with pretty much no boilerplate code or arg parsing. You can even auto-generate a help menu using compgen.

The benefit of this is it's just shell scripting so you can use shell features like $@ to pass args to another command and everything else the shell has to offer.

I've written about this process at https://nickjanetakis.com/blog/replacing-make-with-a-shell-s... and an example file is here https://github.com/nickjj/docker-flask-example/blob/main/run.
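
For anyone who hasn't clicked through, the skeleton of the pattern is roughly this (the task names and docker commands are just examples; the linked `run` file is the real thing):

    #!/usr/bin/env bash
    set -o errexit -o pipefail

    lint() {
        # anything after the task name gets forwarded, e.g. ./run lint --fix
        docker compose exec web flake8 "$@"
    }

    test() {
        docker compose exec web pytest "$@"
    }

    _private_helper() {
        # leading underscore: hidden from the help listing below
        :
    }

    help() {
        # compgen lists every function defined in this file
        compgen -A function | grep -v '^_'
    }

    # dispatch: "./run test -x" calls the test function with "-x"
    "${@:-help}"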


Nice shell script. It’s rare to see one written so well. I’ll add you to my list of people I can still count on one hand that properly quote variables.

If I had to pick one nit, and it’s a stylistic choice, you use braces around variable names where they aren’t strictly needed.

I also like to add “set -u”.


I agree whole-heartedly with OP's use of "braces around variable names where they aren’t strictly needed". I have two reasons. First, consistency is nice. Second, they aren't needed now, but invariably you will end up coming back and adding to the script, and will end up needing them.

OK, maybe I have 3 [I mean 4] reasons. 3) if you always put the braces in, it won't break your script when they aren't required. However, if you don't put the braces in when they are required, it will break your script. 4) often putting the braces in when they are not required makes the script easier for me to read. I often use spacing that is not required for the same reason.

I'm not saying I never break my own "rules" (they are really more guidelines than rules). You will find variable names used in my shell scripts that have no surrounding braces, but I probably use more of the "unnecessary" ones than a lot of people do. And yes, I'm aware that sometimes not having them makes me less consistent. Everything has a balance, people just differ on what style provides the balance they prefer.


Thanks.

My thought process around using braces when they're not needed is mainly around consistency. If you pick and choose when to add them then you need to make a decision every time you add a variable. I've written about that here: https://nickjanetakis.com/blog/why-you-should-put-braces-aro...

That is a good call about `set -u`; it's something I've been using more recently but haven't added to that script yet. Thanks for the reminder, I will soon. I ended up making a post about that here: https://nickjanetakis.com/blog/prevent-unset-variables-in-yo...

Another small thing I've been doing recently is defining options like this:

    set -o errexit
    set -o pipefail
    set -o nounset

It's a little more explicit about what each option does. It might be just enough context to avoid having to look up what something does. Philosophy-wise, that's also something I've been doing semi-recently: using long-form flags over short flags in scripts https://nickjanetakis.com/blog/when-to-use-long-word-or-shor....


It adds too much visual noise for me, especially since you already need to double-quote the variables to protect against whitespace expansion. The rules around when braces are needed are simple so I leave them off when they aren't necessary. The rules around when double-quotes are needed are much more subtle, so I almost always use double-quotes, even when they aren't needed. e.g.

   foo=$bar  # quoting not needed but I'd still do this:
   foo="$bar"
A bug-bear of mine is unquoted variables especially with braces, even when using them for optional args:

   ${TTY}
Using your original script as an example, I'd prefer this:

    dc_exec=(docker-compose "${DC:-exec}")
    dc_run=(docker-compose run)
    if [[ ! -t 1 ]]; then
      # we have no TTY so we're probably running under CI
      # which means we need to disable TTY docker allocation
      dc_exec=("${dc_exec[@]}" --no-TTY)
      dc_run=("${dc_run[@]}" --no-TTY)
    fi

    _dc() {
      "${dc_exec[@]}" "$@"
    }

    _build_run_down() {
      docker-compose build
      "${dc_run[@]}" "$@"
      docker-compose down
    }
Of course, this uses bash arrays and isn't POSIX. But the ergonomics of using an array for construction a long command are so much nicer than backslashes or long lines that I use them all the time now.

   cmd=(
      /usr/bin/some-command
      --option1="foo"
      --option2="bar"
      --option3="baz"
   )
   "${cmd[@]}"

I also prefer using self-documenting long-options like above when writing shell scripts.

Another thing that goes along with set -u is to fail early. So for example, your script seems to require global POSTGRES_USER, etc, so why not:

   set -o nounset
   # fail early on required variables
   : "${POSTGRES_USER}"
   : "${POSTGRES_PASSWORD}"
Happy to code-golf shell scripts. There's usually nothing but hostility toward them around here. :-)


Thanks.

The postgres variables are sourced on the line above where I use them, so they will always be set. It's a hard requirement of the Docker postgres image. I did end up pushing up the nounset change and it didn't complain since they're set from being sourced.


Doh, I missed that.

Aside, but I gotta say, lots of good stuff here:

https://nickjanetakis.com/blog/

I'm not personally a fan of videos, but I have plenty of colleagues who are, and I'm going to happily start pointing them at your videos. Some of these will be very handy references I can add to code reviews.


Thanks, I really appreciate it. Word of mouth helps a lot.


$ make run-dev

That command (to run an Angular/nodejs dev instance) has staved off carpal-tunnel syndrome for me for maybe another 5 years.


For a task runner I really like just and its Justfile format: https://github.com/casey/just It is heavily inspired by make but doesn't focus on the DAG stuff (but does support tasks and dependencies). Crucially it has a much better user experience for listing and documenting tasks--just comment your tasks and it will build a nice list of them in the CLI. It also supports passing CLI parameters to task invocations so you can build simple CLI tools with it too (no need to clutter your repo with little one-off CLI tools written in a myriad of different languages).

If most of your make usage is a bunch of .PHONY nonsense and tricks to make it so developers can run a simple command to get going, check out just. You will find it's not difficult to immediately switch over to its task format.


I don't understand the use case of `just`. It drops every useful feature from `make`. It doesn't look like it has parallelism or the ability to not needlessly re-run tasks.

Even if `just` was installed on a standard Linux box, I don't see the benefit of it over a bash script.


I don't think just is trying to be a build system. Its major focus is as a task runner and in that space it does its job well IMO.


As a task runner, why is it better than a bash script? Being able to run tasks in parallel is like the most fundamental feature I would expect from a task runner. Being able to skip over tasks that don't need to be done is a close second.


Because I don't want to have to read and figure out each person's bash idiosyncrasies, bugs, etc. in a pile of undocumented scripts to add a new task that runs a new test case. Just gives you a framework to put all your tasks in one file, document them, and find and run them easily.

If bash works for you, stick with it. It does not work with large teams and people who aren't deeply experienced in all the footguns of bash in my experience.


Just looks soooo promising! I don't think I can use it until conventional file target and dependencies are supported though. Right now everything's tasks (phonies) so conventional makefile rules like the following are impractical:

  tic-tac-toe: tic.o tac.o toe.o
    cc -o '$@' $^
  
  %.o: %.c; cc -c $^


You might find checkexec useful to pair with just, it is basically a tool that only does the file-based dependency part of make: https://github.com/kurtbuilds/checkexec


For those looking for a powerful task runner that feels like a makefile, please take a look at Run:

https://github.com/TekWizely/run

It's better at managing and invoking tasks and generates help text from comments.


Yeah, that does look fairly promising... turns simple descriptions into usable flags / help comments / etc, which is quite difficult with make.

I'll have to poke around this a bit more later, thanks for the link!


Seconded - I love just & Justfile. Such an upgrade after trying to force things into package.json scripts. Chaining commands, optional CLI arguments, comments, simple variables, etc. Very simple and a breath of fresh air.


Just seems neat, but except for the dependencies it's just a way to package multiple shell scripts into one file, no?

I've thought about trying it out a few times but can never see its value over scripts in ./bin.


Scripts in bin have no documentation, no easy way to enumerate them, etc. There is definitely a time and a place for bin scripts, especially as things grow in complexity. However the beauty of just is that there's one file (the justfile) that defines all of your project's actions. You don't have to go spelunking into bin to figure out how to tweak a compiler flag, etc. And since just will run anything there's no reason why your complex bin scripts can't just be called from a simple one liner task in a justfile.

Could you write a bash script that does stuff like enumerate all the bin scripts, pull out documentation comments, etc.? Absolutely, and people have followed that pattern for a while (see https://github.com/qrush/sub) but it's a bunch of boilerplate to copy between projects. Just pulls out that logic into a simpler config file.


I've found these guidelines for Makefiles make for a pretty good experience using make: https://tech.davis-hansson.com/p/make/

The advice on output sentinel files for rules creating multiple files helps keep rebuilding dependencies reliable. Avoiding most of the cryptic make variables also helps Makefiles remain easily understandable when you're not frequently working on them. And using .ONESHELL to allow multi-line statements (e.g. loops, conditionals, etc.) is great. No need to contort things into one line or escape line breaks.
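
The sentinel idea looks roughly like this (the codegen tool and file names are invented): a recipe that emits many files touches a single marker, and everything downstream depends on the marker rather than on the individual outputs.

    # codegen writes many files under gen/; the sentinel records when it last ran
    gen/.stamp: schema.json
        ./generate-code schema.json --out gen/
        touch $@

    server: main.c gen/.stamp
        cc -o $@ main.c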

Seems like you could even use a more serious programming language instead of sh/bash by setting SHELL to Python or similar. That may be a road to madness though...


> Seems like you could even use a more serious programming language instead of sh/bash by setting SHELL to Python or similar. That may be a road to madness though...

TIL.

    SHELL=/usr/bin/python
    .ONESHELL:

    all:
      @from plumbum.cmd import ls
      print(ls["-a"]())

It totally works... Mwoooo ha ha ha haaaa!


The problem with .ONESHELL is that it applies to the whole file. I so wish it were per target. That would be really useful. But for the whole file? Maybe I need each line to be a separate shell somewhere in the file, and that makes it impossible to use .ONESHELL for the entire file.


It is per target, that's how it works when I've used it. For example:

  # Makefile
  SHELL := bash
  .ONESHELL:
  .RECIPEPREFIX = >
  
  thing1:
  > FOO=bar
  > echo "$${FOO:?}"
  .PHONY: thing1
  
  thing2:
  > echo "$${FOO:?}"
  .PHONY: thing2
Results in:

  $ make thing1 thing2
  FOO=bar
  echo "${FOO:?}"
  bar
  echo "${FOO:?}"
  bash: line 1: FOO: parameter null or not set
Note how the bash error for unset FOO is line 1 for the second target.

Edit: Maybe I misinterpreted, do you mean you'd want to choose whether a given target is ONESHELL or not?


> [...] do you mean you'd want to choose whether a given target is ONESHELL or not?

Yep, exactly! I would like to have some targets use ONESHELL and others in the same Makefile not use ONESHELL. So that I can choose the most appropriate for each target.

So far I have managed by avoiding ONESHELL and doing the typical "backslash, next line continues" thingy. But it puts some people off.


Never loved make. First used it in the early nineties and found the syntax obscure and error messages cryptic.

My response to this article would be, if make is so great why did they have to invent 'configure' and 'xmkmf'? And why do people continue to create new build tools every couple of years?

Yeah, I mean I guess it worked, but unreasonably effective? Hardly.


> why did they have to invent 'configure'

Cross-architecture and linux distro compatibility, mostly.


Err, pedantically, configure was not for cross Linux distro compatibility, but for cross unix compatibility. It existed long before Linux was a sparkle in Linus’s eye.

And even then, it handled even some non unix environments as well.


GNU autoconf was first released a few months before Linux.

Is there some older implementation/standard/practice of ./configure before autoconf?


Autoconf makes it convenient to write configure scripts. Configure scripts existed long before autoconf, but were written piecemeal.


I was waiting for someone to say this. I can't stand make.


> … why do people continue to create new build tools every couple of years?

Seems like a rite[0] of passage to some degree. Perhaps similar to people taking a stab at The Next Actually Correct CMS, The Next Object System That Doesn't Suck, or The Next Linux Distro For Smart People.

[0] edit: corrected “right V. rite” per https://news.ycombinator.com/item?id=32442473



Eh, they solve different problems. Make is too simple to customize your build to deal with system differences- it just builds your code.


i've turned to cmake to do some really weird dependency management for various script calling. It's much more scriptable/friendly than make in its modern form but obviously no python :)


I once used GNU make to manage a large data pipeline on an 18-person project and it worked well.

We developed a lot of Python scripts. To manage them I created some helper tools to integrate them via make. People are welcome to reuse it; I got it released as open source software. I named it make-booster: https://github.com/david-a-wheeler/make-booster

From its readme:

"This project (contained in this directory and below) provides utility routines intended to greatly simplify data processing (particularly a data pipeline) using GNU make. It includes some mechanisms specifically to help Python, as well as general-purpose mechanisms that can be useful in any system. In particular, it helps reliably reproduce results, and it automatically determines what needs to run and runs only that (producing a significant speedup in most cases)."

"For example, imagine that Python file BBB.py says include CC, and file CC.py reads from file F.txt (and CC.py declares its INPUTS= as described below). Now if you modify file F.txt or CC.py, any rule that runs BBB.py will automatically be re-run in the correct order when you use make, even if you didn't directly edit BBB.py."

This is NOT functionality directly provided by Python, and the overhead with >1000 files was 0.07 seconds, which we could live with :-).

Make provides a way to handle dependencies as DAGs. Using it at scale requires that you call or write mechanisms to provide that DAG dependency info, but any such tool needs that info. Some compilers already come with generators for make, and in those cases it's especially convenient to use make.


I really like using make for data pipelines as you suggest, and thanks for pointing out your package.

In this pipeline use case, you have base data, and a series of transformations that massage it into usable results. You are always revising the pipeline, usually at the output end (but not always) so you want to skip as many preprocessing steps as possible. Make automates all that.

This works great for image processing pipelines, science data pipelines, and physical simulators for a few examples.

There have been a few blog posts and ensuing HN discussions about this use pattern for make. The discussion generally gets mixed up between make’s use as a build system for code, alas.


Back in my early days of data eng for ML pipelines, I stumbled onto Drake- and it opened my eyes to managing pipelines as DAGs. This pattern is supremely effective, and I try to teach anyone who might benefit.

https://www.factual.com/blog/introducing-drake-a-kind-of-mak...


Drake looks interesting, the visualization looks fun.

I don't see any evidence that it handles re-running of steps when transitively-depended-on code is changed, though. E.g., if script BBB.py includes CC, and CC is changed, then all steps transitively depending on BBB should be rerun. My make-booster specifically deals with that case.

I also expect drake to have a slow start, which slows development.


Thanks, and I agree, make can work well for data pipelines.

When you're integrating many different data sources with a complicated set of scripts, it's important to automate what you can. The easy but impractical thing to do is rerun everything. Make, properly used, will rerun everything that needs to be run in the correct order... and nothing else. GNU make is also awesome for running things in parallel.


I'm trying to understand why so many people seem to hate make.

I hate build systems that don't use a Makefile, or that use one but don't respect the variable conventions. It makes it really quite annoying to do things like changing the allocation library, adding compiler flags, etc.


Yeah. As far as I can tell, most people complaining about Make, and building its replacements haven't figured out how to use Make, or bothered to read the manual. It's really not that complicated...


100% agreed. Half these comments make no sense and demonstrate a real ignorance of make. Please everyone read the manual and judge make for make, not autotools...


Yeah, I am currently learning make, and I feel kind of confused by a lot of comments: Either I really don't understand make, yet, or people really didn't understand that make is for building files from other files and handling the dependencies.


The problem is that as soon as you support multiple platforms, the exact same action of "building files from other files" can look fairly different. If you use make you want your readme to be "cd folder; make -j", and since you don't want to write multiple entirely distinct sets of makefiles for different platforms, this turns the makefiles into a hellish mess of special characters.

And all that just for executing a set of tasks in a DAG much slower than ninja


> It's really not that complicated...

…until it is.

I usually reach for make because it's simple to get a simple project built, but once I start throwing stuff like source code generators at it, I have to spend a bunch of time getting the dependency tree right so I'm not chasing bugs I've already fixed but that didn't get recompiled. Or recompiling the whole thing when I change some trivial thing.

Still, for the stuff I do, it gets the job done without too much trouble.


while make is simple, autotools is really complicated. However it's also very powerful and I've been in situations where cmake couldn't help me but autotools had solved the problem already.


As a build system make wasn't really designed to handle stuff like partial rebuilds, caching, or distributed building. Modern build systems like bazel are just orders and orders of magnitude faster and better for complex projects.


The whole point of make is to handle partial rebuilds. It is why it was invented in the first place.

"Make originated with a visit from Steve Johnson (author of yacc, etc.), storming into my office, cursing the Fates that had caused him to waste a morning debugging a correct program (bug had been fixed, file hadn't been compiled, cc *.o was therefore unaffected). As I had spent a part of the previous evening coping with the same disaster on a project I was working on, the idea of a tool to solve it came up. It began with an elaborate idea of a dependency analyzer, boiled down to something much simpler, and turned into Make that weekend."

If they had rebuilt the whole project, make would not have been needed, However because they wanted to do partial builds and the manual process had downsides make was invented.


I don't know blaze and I am just learning make, but make seems amazing for partial builds.


I love make. What I hate are the default implicit rules. Compiling C from an empty makefile is cool and all, but I resent having implicit build rules for rcs, modula2, texinfo, lex for ratfor (a "preprocessor for fortran 66" tf?)... talk about surprise rules when one of your files is a .p, .r or .f file.


Agreed 100%. It's possible to disable the builtins (with GNU Make at least), but I wish that they were off by default.
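
For reference, turning them off looks something like this in GNU Make:

    # disable the built-in implicit rules and built-in variables
    MAKEFLAGS += --no-builtin-rules --no-builtin-variables
    # clear the suffix list so old-style suffix rules can't fire either
    .SUFFIXES: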


Good thing they stopped adding built-ins, I guess, 20 years ago.

In case someone wants to see all the defaults, run `make -p`.


Because gnu autotools blows goats. Oh, and the tab thing.


> Oh, and the tab thing.

You don't need to use tab.

    .RECIPEPREFIX=>
    foo:
    >echo bar


surely it blows gnus rather than goats?


make != autoconf


Yes, but 99% of the makefiles you encounter in the wild come from autotools.


This isn't at all true, in my experience. If it's true for you, please consider that your issues are about autotools and not Make, and direct your complaints in that direction.


I use Make for all kinds of general-purpose build automation tasks, and as kind of a top-level catch all. No Autotools to be found in them (except maybe if called from some recipe).


the vast majority of the ones I encounter are from cmake...


GNU Autotools and GNU Make are separate things, my dude.

Autotools uses Make as the backend, but so do a lot of other things too. You don't need Autotools to use Make.


I'd encourage anyone thinking of using make to look at alternatives. Make is great, but it quickly becomes a ball of duct tape. Make works very well when you spend the time to express your dependency tree, but realistically that never happens and people tend to add hacks upon hacks to their Makefiles. Not only that, but they don't scale well as your project adds more components, such as integration testing, documentation, etc.

I found Earthly[0] to be a great replacement. Everything runs in Docker. Your builds are reproducible, cache-able, and parallelizable by default. I've heard Dagger[1] is another good tool in the space.

[0]: https://earthly.dev/

[1]: https://dagger.io/


To go along with Make, use this technique to automatically generate and include dependency information:

https://scottmcpeak.com/autodepend/autodepend.html
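
With a reasonably recent GCC or Clang the shape of that technique is roughly this (file names illustrative; the article covers the older `-MM` variants and the corner cases):

    SRCS := main.c util.c
    OBJS := $(SRCS:.c=.o)

    app: $(OBJS)
        $(CC) -o $@ $(OBJS)

    # -MMD emits a .d fragment listing the headers each object actually used;
    # -MP adds phony targets so a deleted header doesn't break the build
    %.o: %.c
        $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

    # pull in the generated dependency files; '-' ignores ones that don't exist yet
    -include $(OBJS:.o=.d)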

And this, avoid recursive make:

https://accu.org/journals/overload/14/71/miller_2004/


It's unfortunate that other build systems haven't taken over. Make is terrible for incremental builds, and its reliance on binaries often means issues getting it to run and being very platform-dependent. It is better than using a bat or shell file for the same purpose, but it's a long way behind many of the other language-specific tools. I am surprised something better hasn't become popular; Make is the CVS of the build tools.


I find it works well for incremental builds, that is what it's for.


Make is fantastic at what it does well.

These days, I would absolutely not use Make to compile code written in C, except for the smallest personal projects. It is just too fussy to construct the Makefile correctly, in a way that you can get correct incremental builds. Nearly any other build system is better at building projects written in C, in the sense that it is easy & straightforward to get your build system to do correct incremental builds.


Advertisement time (not affiliated, just want to share the joy): I personally use Xmake and try to advertise it every time I get the chance: FOSS, no DSL (it's just Lua), dead simple yet featureful, and it is Ninja-fast, or at least claims to be; I never bothered to check, it's fast enough for me.

https://xmake.io/#/


Absolutely! Basically every company I've worked with over the last couple years as a contractor followed this methodology and it's grown to be my default runner as it's language agnostic.


> incremental builds

I've found that to not matter that much these days - unless your project is hundreds of thousands (or maybe even millions) of LOC, full builds are instant on modern machines.


That's definitely not true for:

- Most C++ or Rust projects

- Medium size or larger C projects

- Anything which is built with tools written in JavaScript (due to Node startup time overhead)

Stuff where a full build is close enough to instant:

- Most Java, C#, Go projects

- Small C projects

- Tiny/trivial C++ or Rust programs, or C++ programs written for embedded systems

This is just my experience. YMMV.


any sizeable rust or c++ project is gonna take a while to compile in my experience? I tend to break things off into libraries that are faster to compile. Some of my coworkers don't like it but they learn to deal (c++)


> I tend to break things off into libraries that are faster to compile.

Yes, exactly. Incremental compilation! This is what Make is not good at.


Unless your modern machine isn’t very powerful because you value food and rent more.


With python or bash. Certainly not with c++


Honestly, I only find Makefiles useful when I have a tiny C/C++ project and need stuff just to compile quickly and easily without the overhead of a real build system.

For literally everything else, I found myself using it more as a task runner - and Make doesn't do a great job at it. You end up mixing Bash and Make variables, string interpolation, and it becomes really messy, really fast. Not to mention the footguns associated with Make.

I found bake (https://github.com/hyperupcall/bake) to suit my needs (disclaimer: I wrote it). It's literally just a Bash script with all the boilerplate taken care of for you - what a task runner is meant to be imo


Does Bake support dependency expression / parallel execution? Those are killer features that keep me on Make for general-purpose automation.


Make is solving many complicated tasks, like keeping track of when to re-run some target (there is communication with the compilers, which provide make with .d files so that it knows which source files influence a binary), or running a jobserver to manage parallelism that also supports nested sub-makes. But it also has many ugly warts. It's hard to design something that solves both those tasks as well as make does and is as general as make is. Often they solve a subset, whatever currently itches the main developer. But something that is both as general and as comprehensive as make, those tools are rare. Ninja for example checks many of the boxes, but lacks make jobserver support.


Ninja looks clean because it is new. Give it some decades and it is likely to inherit a few of the warts make has.


It's not that new, and in practice there aren't very many ways for it to get warty because it's basically a dependency graph with "run this command line if this node is dirty" attached to the edges.

I like it very much.


Ninja is so cool… until you can't compile stuff because the author had a different ninja version than what's on your distribution…


> File-watcher or live-reloading. You can create a build/deploy loop fairly easily

When I worked with latex more, I kept a ~/.Makefile-latex and a shell function that would pretty much just do

  inotifywait -e modify -e move --recursive --monitor . | while read; do make --makefile ~/.Makefile-latex; done
and I kept emacs and xpdf in side-by-side windows. Whenever I'd save a file in emacs (or xfig or whatever), a second later xpdf would refresh the pdf (and stay on the same page). It took away some of the pain of working with latex.

edit: I used this complicated setup instead of LyX or whatever other "(la)tex IDE" because I had ancillary files like xfig diagrams that would get compiled to .eps files and gnuplot scripts that would render data into graphs, and the makefile knew how to generate everything.


Very nice! It turns out that latexmk has this functionality:

    latexmk -pvc -pdf foo.tex
(It can be configured to HUP your pdf reader if needed, too.)

I usually add something like this command as the `auto` target in my latex Makefiles, which works pretty nicely.


TIL about latexmk (which I gather is a perl script that handles a lot of the latex build deps, like running bibtex at the right time(s), for you automatically), thanks!

I never really understood most of the TeX world. I cargo-culted the bits I needed to make:

* articles for journal submissions (easy, they all provided packages or templates),

* my thesis (somebody had written a latex class that took care of all the school's formatting requirements a ~decade prior), and

* a resume,

and never dug into the differences between tex, latex, miktex, texlive, auctex, lyx, pdftex, xetex, luatex etc etc etc etc etc.

I was just about to ask you if you knew of a 40,000-foot overview of these kinds of things, but stumbled across this article over at Overleaf: https://www.overleaf.com/learn/latex/Articles/What%27s_in_a_... , it looks pretty apropos.


This is funny, because I just wrote a (fish) shell script that does this as well because all of the tex IDEs are so painful. Mostly because the entire efficiency of latex is that you're editing text and can do it in a text editor like emacs and move things around very quickly. I don't want a new interface!

But. I'm kind of proud. My shell script monitors the tex files for character changes and then once a (configurable) threshold of changes is met, it kicks off the compilation process. But the real game changer is that every time it compiles, if the compile is successful it commits the recent edits to a git branch. Then if I want, I can go through the git branch and see the entire history of edits for only versions of the document that compiled. It's a game changer in a big way. When I finish a section, I squash the minor edits into one commit and give it a good message and then commit the full thing to the main branch. Then there is where I can make sure my manuscripts look the way they should and do major revisions or collaborative edits.

The icing on the cake is that the fish script monitors for a special keypress and will take actions. So I hit "c" to compile the tex, "b" to rerun the massive bibliography compilations, "s" to squash the commits into the main branch, and "l" to display log errors. It's a dream! Now I don't think about compilation at all, or when I should commit something minor to git (and fiddle with the commands). I just type away and watch/request the pdf refresh when I need it... and _actually_ get work done. My god. So happy.

I just finished this today.


If you're using emacs why not a save hook?


IIRC entirely because I didn't use emacs to edit the figures, I was using xfig and later some other figure drawing program I can't remember the name of. If I were just using a save hook (or flymake-mode, which I later used for similar things), when I edited a figure emacs wouldn't know about it so I'd have to either manually run the build or go to emacs and force a file write.


I like make. But these days to me the best part about it is that it’s a common entry point. Most popular languages come with their own make-esque tools that provide the same experience to developers and systems.

Tying together multiple projects, source from different locations, etc Id probably use make or a script.


The short article conflates popularity with quality. Windows 3.11 became the most sold os in history despite being utter trash. Make is popular because it was the first build system, not because it is not utter trash.


Just got out of a python training session with one of my students running W11. Can confirm. So many problems.

Seems like the pattern of every other version of Windows being trouble still stands.


What's a good replacement for make that has feature parity?


I like WAF. Scons is good too. Then there is Ninja and the whole ecosystem around that. Search the HN archives for many more suggestions.


Well, many of make's features are what makes it trash.


So your solution is to burn the computer and yell at clouds?


I love Make. It's a terrible tool for quite a few things, but it's awesome at the thing I use it most for - abstracting away complex series of shell commands behind one or two words. It's like shell aliases that can follow a repo anywhere.

    make test

    make format

    make clean

    make docker-stack
Fantastically useful stuff, even if all it's doing is calling language specific build systems in the background.


I wanted to learn makefiles to generate a static website.

Quickly ran into some massive limitations - one of which is that it completely broke apart when filenames had spaces in them.

"But why would you do that you're doing it wrong" - don't care wanted spaces in filenames.

Ended up switching Rake (a make like DSL written in ruby) and never looked back. Not only can you do all the make declarative stuff, but you get the full power of ruby to mess around with strings.


Next thing you'll be asking for is leading tabs in filenames ...

And the feature bloat slippery slope is sliding towards Bazel!


Anyone who finds make unreasonably effective must be working with GNU Make.

If I had to use some barely POSIX-conforming thing from BSD or wherever, I'd instead write a top-to-bottom linear shell script full of conditionals.


In that case, a good habit is to name those GNUMakefile. Works the same, but announces GNU make (which is the one worth it).


GNU Make is so prevalent that nobody seems to bother with this. I can't recall the last time I've seen a GNUmakefile. (Note the lower case 'm', according to the manual.)

It makes sense if you ship both a GNUmakefile and a Makefile.


I agree. I try to stick to just POSIX make when that's easy to do, but POSIX make is so impoverished that it usually isn't worth trying to stick to just POSIX make.

In general, for most people, make is GNU make. And GNU make is actually pretty decent at a lot of tasks.

Many ridiculous makefile problems I see stem from not using it well. I suggest:

* Use compiler dependency generators (generate .d files and "include" them via make). This eliminates many lines and errors.

* Don't use recursive makefiles.

* Set macros with := or ::=

* Use substitutions so definitions can be changed in just one place and the rest automatically works.
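
To illustrate the last two points (variable names made up):

    # recursively expanded: the shell command re-runs every time $(SLOW_VERSION) is used
    SLOW_VERSION  = $(shell git describe --tags)
    # simply expanded: the shell runs exactly once, when the makefile is read
    VERSION      := $(shell git describe --tags)

    # substitution reference: change the source list in one place, the objects follow
    SRCS := src/parse.c src/eval.c
    OBJS := $(SRCS:.c=.o)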

There is no perfect tool for all circumstances. But GNU make is still useful for many circumstances.


Here is an odd thing: GNU Make looks for the file in this order (according to the manual): GNUmakefile, makefile, Makefile.

The last time I saw a lower case 'm' makefile was eons ago.

The manual itself recommends Makefile rather than makefile, right in the same paragraphs where it tells you makefile is checked first.

Why look for makefile before Makefile, a file next to nobody has?

Maybe lower-case makefile was a predecessor to GNUmakefile; a way of having a GNU-specific makefile. (If so, that's a noteworthy thing the manual ought to mention.)


As these types of post often come around to make's sub-optimal use as general runner, I'd like to point out my project, Run:

https://github.com/TekWizely/run

It feels like a makefile but is optimized for managing and invoking small tasks and wrappers, and it auto-generates help text from comments.


Recently I've been much more pleased by the "Unreasonable Effectiveness of Ninja Plus A Few Trivial Scripts".

A raw Ninja file is verbose but easy to read and understand, and there is basically no magic happening behind the scenes (well, except for "deps = gcc", but that's minor). Any action that's too complicated to go directly in the Ninja file goes to a "rule command \ command = ${command}" that just runs a python or shell script.

The only thing I'd like to add to my setup would be "Ninja with glob support", but again that can be handled in a few lines of Python that spit out another chunk of .ninja rules so it's not really a blocker.


My most elaborate use of make was ten years ago to build a bootable, reproducible embedded system image including the OS, application, media files and cryptographic signatures. It's still in active use and no one is complaining.


Pretty much all the discussion is about make as a task runner, and maybe the following is just a very specific use case: make test

I combined learning make with learning TDD, or rather developing some individual style of TDD. I am astonished at how effective make is in a rapidly evolving code base with not insignificant amounts of unit, component and integration tests. That said, I try hard to keep the makefiles both heavily recursive and highly structured.

Of course, I am learning, so my setup is far from optimal. But wow... Makefiles are so powerful for testing. It's amazing. On the other hand, this is the first time I am properly using a build system.


The 2 biggest things I find missing in makefiles are the ability to have the "modified date" of a target be something other than the mtime of the target file, and the ability for the dependencies of a target to be generated rather than statically specified. The latter there are workarounds for, but not the former. The author's suggestion of "better support for make docker" is completely the wrong thing to do - you don't need to support docker, you need to support mtimes and dependencies in a way that docker could use.


I find a bunch of the decisions in Make to not be great - but it's all just syntax stuff.

At its core all a Makefile is is a dependency graph, which is necessary in all the more advanced config managers, only those others are much more heavyweight than the vast majority of projects ever need. Most code doesn't need to vary arguments or parameters based on the target environment, and so that complexity is unneeded, in which case a Makefile can do it all.


Aww, no one mentioned that `debian/rules` files are almost always makefiles? (To the point of starting with `#!/usr/bin/make -f` ...)


Make's DSL is terrible; it's the YAML of build systems.

If you are a python dev, give doit a try.


Interesting that everyone who recommends an alternative to Make picks... something different. I doubt there will be two people recommending the same thing in the whole discussion.


Indeed, which supports the idea that make is so popular because it was first, and is still installed by default on most unixes.

Like JS is so popular because it's the main language on the web clients, and by default on most of them.

Also, because doit assumes you have a python project, and make is language agnostic, the latter will have a broader potential user base.

However, doit is not that far from make in the sense it doesn't assume each language workflow. It just gives you a declarative syntax to setup tasks and optionally your dag of dependencies and targets: you then do whatever you want with it. This gives it simplicity and flexibility and power, which are similar qualities that I think make nails.

I do think it would gain from being a standalone executable, so that you wouldn't be expected to have python installed on your machine to run it.


What is so "terrible" about it? I think the rule syntax is as simple as it gets.


Required tabs, lines are each their own "script" instead of blocks (which would allow variables to carry across lines), not allowing other executors (e.g. python, TCL, etc. would be better than sh).


Actually you can tell GNU Make not to create a new shell for every command using .ONESHELL[0] and you can also select a different shell executable to run said commands[1]

[0]: https://www.gnu.org/software/make/manual/html_node/One-Shell...

[1]: https://www.gnu.org/software/make/manual/html_node/Choosing-...


Tclmake, a partial Make clone in pure Tcl:

https://github.com/TclLab/tclmake


Check (GNU Make) .RECIPEPREFIX, (GNU Make) .ONESHELL, (POSIX Make) SHELL=


The fact that it's likely to be pre-installed on any given linux machine is a huge plus. I use make for providing a uniform interface for my team's data science project workflow, and while it has a lot of downsides, it also has upsides. It's probably the choice I've struggled the most to justify, but I have a harder time justifying the alternatives.


I skimmed the comments quite quickly so I may have missed it, but it seems that nobody has mentioned what I think is one of the biggest issues with make (as actually used, as opposed to some ideal): recursive runs. I have seen several packages (and I used to work with one rather large code base) which were subdivided into multiple directories, sometimes in several levels, each with its own Makefile and some rule that did

  for d in $(SUBDIRS); do (cd $$d && $(MAKE)); done
A top-level make run would take ages just to recurse through everything to decide that nothing needed to be done, and dependency tracking did not work across modules, so the only way of making sure you got everything rebuilt was to ‘touch’ everything. (The advantage of the setup was that you could check out just part of the tree and build it separately.)

It would be really nice if make could read all subdirectory Makefiles, build one global dependency graph and then have at it.
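The closest existing answer is the "non-recursive make" pattern: each subdirectory contributes a makefile fragment that the top level includes, so there is one global graph. A rough sketch, with made-up names:

  # top-level Makefile
  SUBDIRS := lib app
  include $(addsuffix /module.mk,$(SUBDIRS))
  # lib/module.mk - rules use paths relative to the top level
  lib/libfoo.a: lib/foo.o lib/bar.o
  	$(AR) rcs $@ $^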


A company I worked for used make to execute docker, which felt somewhat odd.

Correct me if I'm wrong, but I always assumed `make` to be one of those build tools that could incrementally build targets based on dependencies.

The arcane and esoteric language that constitutes `make` is almost universally avoided; surely, if people just need a simple task runner, there are better options?


Better? Absolutely.

More ubiquitous? Not really.

I use it for executing docker as well, because the docker command is actually about 5 commands with long lists of parameters. But it's just `make docker-stack` for everyone now.
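Something in this spirit (the target, image, and flag choices here are invented for illustration):

  .PHONY: docker-stack
  docker-stack:
  	docker network create app-net || true
  	docker build -t app-web:latest .
  	docker run -d --name app-db --network app-net postgres:15
  	docker run -d --name app-web --network app-net -p 8080:8080 app-web:latest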


But then the docker command is starting docker and running the same makefile again inside the container, which becomes hellish to reason about. It's probably better to just make a separate script for that instead.


That is odd. Within the docker container, I agree, you probably shouldn't do that - especially since Dockerfiles themselves allow you to run a list of shell commands as your entrypoint.


There was a post on HN (I believe) a couple of months ago concerning a research paper that explores "what defines a makefile", concluding that by their definition Excel can be used as one. I've looked around the internet and can't find it, so if anyone else remembers it and has it upvoted or otherwise has the link, posting it would add to this conversation. It had an interesting discussion of graphs and requirements and was overall a worthwhile read.


I'm pretty sure I found that seminal paper, "Build Systems a la Carte", via HN, but indeed can't find a highly-commented story on the topic. Perhaps the most upvoted story was this one about a blog post from one of the authors (itself very interesting): https://news.ycombinator.com/item?id=17494016

It is a very important paper, I think, mapping out the landscape of build systems designs. But I'm still waiting for something novel to emerge from it. I know the authors planned on integrating the improvements they found into (Cloud) Shake, but I don't know if that's production-ready.

Perhaps Bazel is the best we can hope for at this time.


Was it the “Build Systems a la Carte” paper? I could not find mentions of it on HN after 2020, though.


Yes, thank you!

Here's the PDF in case anyone else is interested: https://www.microsoft.com/en-us/research/uploads/prod/2018/0...

I can't find any actual conversation about it on HN so I won't post the HN link


Years and years ago (before the init wars) I made an init system that was just a makefile run by a statically compiled make binary. While this was fun, it highlighted what I dislike about declarative run control systems (you don't see what's actually going on without making it pointlessly verbose). But as far as that went, it ran my silly micro distro pretty well.
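Something in this spirit, with stamp files per service so `make -j` can start independent services in parallel (service names and commands here are invented for illustration):

  # rc.mk - run as: make -f rc.mk -j boot
  boot: /run/up/sshd /run/up/syslogd
  /run/up/network:
  	ifup eth0 && touch $@
  /run/up/syslogd:
  	syslogd && touch $@
  /run/up/sshd: /run/up/network
  	sshd && touch $@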


The autogen/autotools mess, however... not so pleasant when it breaks.

Curious, for those who have published source with autotools etc.: do you really sit down and figure all that out? To me it's just a mystery that either works or doesn't. I never have gone (and never would go) beyond a simple makefile. It really seems tedious on top of the actual code you write. A bit impressive, tbh.


Yes, I used autotools[0]. It's definitely hairier than plain make, but you get a lot of useful features on top of it. There are thousands of examples all over the internet, so it's easy to reference them.

I like the elegance of pure make, and do use it when appropriate, but I wouldn't really want to reimplement the things autotools does myself in it.

[0]: https://github.com/jbboehr/handlebars.c/blob/master/configur...


What's the alternative? I don't know in advance if the system's seek() supports 64 bits directly or just 32. Or if the user wants a more minimal binary without a certain feature.


CMake?

Once I figured out how to get it to build Python extension modules, my life became a lot simpler. The Python way (distutils?) is easy enough until it isn't; then I reach for CMake.


cmake is not as powerful as autotools.


It's 500 Internal Server Error for me.

Archived copy:

https://web.archive.org/web/20220812135641/https://matt-rick...


Like democracy, make is the worst build system, with the exception of all the others. Take Maven: what demented mind thought XML was a good idea compared to the simple and clean DSL of Make (or Ninja)?


Now we need to banish CMake


Yeah, there is something deeply wrong about the whole thing... But what do we replace it with? (Out of desperation, perhaps, I even dreamt of a build system based on a C library, with the usage being, say,

  tcc -run mybuild.c
:)


People keep saying that, but of the systems I've encountered, the ones using CMake have worked the most predictably, as opposed to autotools, meson, scons, bazel, or, heaven forbid, ./build.sh

*WRITING* CMake is the 11th circle of hell, but CLion is gradually getting more support for it because they use(d?) it as the first-class project definition when it launched

I would pay good money for CMake 4.x to switch to skylark/starlark


I was wondering: if you don't build a single timestamped artefact (i.e. everything is phony), is there any reason to use a makefile over a set of bash scripts?


Is there a Python library that can check a timestamp and update some files according to a lambda? I don't want to learn Make syntax again.


  if os.stat(object).st_mtime < os.stat(dependency).st_mtime:
    ...
Use a dict to contain each rule:

  rules["a.c"] = (["b.c", "c.c", "b.h", "c.h"], action)
  ...
That can be simplified as well: run a simple parser that takes a more compact representation and turns it into that dict. Then:

  import os
  def make(rules, rule):
    (deps, action) = rules[rule]
    run_rule = len(deps) == 0  # if there are no dependencies, the rule always runs
    for dep in deps:
      if dep in rules:  # only recurse into dependencies that are themselves rules
        make(rules, dep)
      # rebuild if the target is missing or older than any dependency
      if not os.path.exists(rule) or os.stat(rule).st_mtime < os.stat(dep).st_mtime:
        run_rule = True
    if run_rule: action()
Of course, this doesn't actually validate the DAG itself.

----------

An amendment: You'll probably want to pass both the rule name and the dependencies into the action function/lambda/object so that you can parameterize it and maybe reuse the action (like a common compiler command):

  if run_rule: action(rule, deps)


Topological sorting is in the stdlib: https://docs.python.org/3/library/graphlib.html


SCons can be used as a simple Make-style tool, in addition to the more complex stuff it supports.


my website was a gnu makefile for a minute : https://gitlab.com/acdw/acdw.net


Even more powerful: rake


Please, no more “unreasonable effectiveness” headlines.


This is a remarkably stupid comment, but not everything is "unreasonably effective". Mathematics was noted as unreasonably effective for modeling the universe because most areas of mathematics were not invented for the applications they meet... like discovering that your coffee maker doubles as an exceptionally good waffle maker.

Makefiles on the other hand, are not unreasonably effective in this sense. Makefiles are, in fact, _reasonably_ effective... like a coffee maker that brews coffee well.


Does make still basically fail to handle filenames with spaces in them?

That was the deal-breaker for me last I checked.


Declarative isn't important. I wish people would stop talking about it. It's not even useful to think about most of the time because of how many ways it can be interpreted. It's a theoretical categorization, not a functional design principle.

Just make the program do useful things for the user. Make the computer work for the human rather than the other way around.


This is almost too devoid of content (opinion isn't content) to usefully reply to, and I suspect you're gesturing at some more specific point I might agree with, or at least understand.

Declaration is absolutely a coherent and functional design principle. A declarative system is one in which the outcome is specified and the process is not.

This has big payoffs in domains where it's natural. A good example being grammars. It also has hazards, a good example being the performance of algorithms to parse grammars.

We can see where a declarative build system might be a mixed bag, because the process itself is imperative: make is an early attempt to reconcile imperative build processes with a declaration of what circumstances require their triggering. Basically every build system since make has improved on make, but they all do more-or-less what make does.

The design, in short, is proven, as well as declarative but not purely so.

And the ability to reason about the degree to which make is declarative shows that 'declarative' is in fact a coherent idea. But you can't declare software into existence, you must compile it.


I dunno, the "declarative-ness" of `make` is a pretty important component of its usefulness. In particular, the property of `make` wherein the structure and details of the computation are implicit in the provided configuration and the invoked command, rather than being explicitly written down somewhere, is central to its utility; the alternative of just writing a shell script is, anecdotally, a much less popular option. If you want to propose a more appropriate, less buzzwordy word for that property, feel free, but in the context of "why does `make` have such enduring popularity", I think the article's author is being quite reasonable in bringing it up.


You could replace the word "declarative-ness" with "automation" and it'd mean the same thing. And literally all configuration of functionality implies the structure and details of computation - that's the point of configuration, to tell an already assembled program that already has structure and computational details what to do with it. There's no overt distinction between "declarative programming" and "configure a function with value X".

Makefiles are simply configuration files that use whitespace and a couple characters to create the configuration, and what Make's inbuilt functions do with that determine the extent to which the result becomes "more intelligent". Yes they are used to build a graph and execute it, but so is Dotfile notation, and software package configuration files. But we don't call those declarative programming. Many of those configuration files create multiple levels of instructions and require several passes to execute properly. But we just call them "config files" because we don't feel they are intellectually superior enough to be called a form of programming. And on the other hand, we don't call declarative programming "configuration", but they're often the same thing.

Nobody says they "imperatively configure" some software, but they do consider themselves "declaratively configuring" it. Because they've overloaded the word "declare" as if it means something other than "write down a thing I want a computer to eventually do with some automation". People bring up the declarative thing because they want to imagine there's some intellectual value to considering it, but there isn't. You're basically saying "I want to configure a program rather than write one". Which is fine. But just say that and stop pretending that's going to immediately lead to a better result.


Configuration doesn't have to be declarative; for example, see https://lukeplant.me.uk/blog/posts/less-powerful-languages/, in particular the section about Python configuration, where an imperative configuration language is discussed. How imperative? Very. It's a line-oriented language, where the program reads a line, changes something in an internal data structure accordingly, then continues reading the file. This is imperative: the person writing the configuration file has to think about state and time while writing it, not just abstract goal states and facts.

Declarative vs. imperative is a spectrum. For instance, there is a declarative language hiding inside most imperative languages: infix math. 1+2*3/71**7 is declarative because it under-specifies the order of operations; only the data-flow dependencies implied by operator precedence need to be respected. In the precedence hierarchy I had in mind when I wrote it, you can do 2*3 first or 71**7 first; it's unspecified and irrelevant. I only ask that you do both before you perform the division of their results, and that the addition is the last operation. Meanwhile, in Forth, math is imperative: you have to unroll the expression tree into an exact sequence.

Declarative is any language that under-specifies the task being described. Therefore, every language worth using is declarative to some degree or another. After all, that is the very purpose of a high-level language: to under-specify a task by describing only the most essential details, with all the abstracted details taken care of either by inference (the compiler figures it out, possibly according to rules you need to be aware of) or by exhaustive checking (the compiler generates all possible cases and code to select among them at runtime, or very generic code that can handle all cases uniformly). If, as Alan Perlis says, "A low level language is that which requires attention to the irrelevant", then every good language is already declarative in some sense: you omit things and they get taken care of automatically. That's what declarative means.

You can say you hate buzzwords; I empathize. You can just say that make is bad software (trivially true, or we wouldn't have needed software to generate makefiles, effectively making them a machine code that isn't meant to be written by humans) and that being declarative doesn't make it any less bad. Declarative and imperative are just names for design decisions; they guide a language designer but don't have the power to make a language good single-handedly.


The builder pattern is imperative configuration.


> Make the computer work for the human rather than the other way around.

Ironically, that is a very concise definition of `declarative`.


Which is doubly ironic as tools and languages described as "declarative" often require the human to jump through many kinds of hoops just to finally make what they've written do what they actually wanted.


You're completely right. SQL query hacking can be ridiculous for example. Still, it's a lot better than the alternative.



