Ask HN: In what creative ways are you using Makefiles? - kamranahmed_se
======
new299
My favorite use was during my PhD. My thesis could be regenerated from the
source data, through to creating plots with gnuplot/GRI, and finally assembled
from the LaTeX and EPS files into the final PDF.

It was quite simple really, but really powerful to be able to tweak or replace
a dataset, hit make, and have a fully updated version of my thesis ready to go.

~~~
tajd
Could you share a snippet, just to get an idea of how you do this? Cheers!

~~~
graphene
I do something similar, here's my Makefile -- I have scripts that build
figures in a separate directory, /figures. I'm sure it could be terser, but it
does the job for me.

    
    
        texfiles = acronyms.tex analytical_mecs_procedure.tex analytical_mecs.tex \
                   anderson_old.tex background.tex chaincap.tex \
                   conclusions.tex cvici.tex gold_chain_test.tex introduction.tex \
                   main.tex mcci_manual.tex methods.tex moljunc.tex \
                   tb_sum_test.tex times_procedure.tex tm_mcci_workflow.tex tmo.tex \
                   vici_intro.tex
     
        # dynamically generated figures
     
     
        all: main.pdf
     
        main.pdf: $(texfiles) figures/junction_occupations.pdf figures/overlaps_barplot.pdf \
                              figures/transmission_comparison.pdf \
                              figures/wigner_distributions.pdf
            pdflatex main.tex && bibtex main && pdflatex main.tex && pdflatex main.tex
     
        figures/junction_occupations.pdf: figures/junction_occupations.hs
            ghc --make figures/junction_occupations.hs
            figures/junction_occupations -w 800 -h 400 -o figures/junction_occupations.svg
            inkscape -D -A figures/junction_occupations.pdf figures/junction_occupations.svg 
     
        figures/overlaps_barplot.pdf: figures/overlaps_barplot.py
            python figures/overlaps_barplot.py
     
        figures/transmission_comparison.pdf: figures/transmission_comparison.py
            python figures/transmission_comparison.py
     
        figures/wigner_distributions.pdf: figures/wigner_distributions.py
            python figures/wigner_distributions.py
     
        clean:
            rm *.log *.aux *.blg *.bbl *.dvi main.pdf

~~~
xelxebar
I noticed that you have some file dependencies not encoded in the targets.
Also, you might like reading up on Automatic Variables ($@, $^, $<, etc).
Anyway, just for fun I tried rewriting your script in a way that should Just
Work a little better.

    
    
        texfiles    = $(wildcard *.tex)
    
        figures_ink = figures/junction_occupations.pdf
        figures_py  = figures/overlaps_barplot.pdf        \
                      figures/transmission_comparison.pdf \
                      figures/wigner_distributions.pdf
        figures     = $(figures_ink) $(figures_py)
    
    
        main.pdf: main.tex $(texfiles) $(figures)
            pdflatex $<
            bibtex $(<:.tex=)
            pdflatex $<
            pdflatex $<
    
        figures/junction_occupations: %: %.hs
            ghc --make $^
    
        figures/junction_occupations.svg: figures/junction_occupations
            $< -w 800 -h 400 -o $@
    
        $(figures_ink): %.pdf: %.svg
            inkscape -D -A $@ $^
    
        $(figures_py): %.pdf: %.py
            python $^
    
        clean:
            rm *.log *.aux *.blg *.bbl *.dvi main.pdf

------
richardknop
I use a Makefile as a wrapper for build / test bash commands. For example I
often define these targets:

\- make test : run the entire test suite on the local environment

\- make ci : run the whole test suite (using docker-compose so this can easily
be executed by any CI server without having to install anything other than
docker and docker-compose), generate a code coverage report, and use linter
tools to check code standards

\- make install-deps : installs dependencies for the current project

\- make update-deps : checks if there is a newer version of the dependencies
available and installs it

\- make fmt : formats the code (replace spaces with tabs or vice versa, remove
extra whitespace from the beginning/end of files, etc.)

\- make build : compiles and builds a binary for the current platform; I also
define platform-specific subcommands like make build-linux or make
build-windows
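
A minimal sketch of such a wrapper Makefile. This assumes a Go project (which
the build-linux/build-windows targets suggest); every command, file name and
compose file here is a placeholder, not the poster's actual setup. Recipe
lines start with a tab.

```make
# Wrapper targets as described above; all names are illustrative.
.PHONY: test ci install-deps fmt build build-linux build-windows

test:
	go test ./...

ci:
	docker-compose -f docker-compose.ci.yml run --rm tests

install-deps:
	go mod download

fmt:
	gofmt -w .

build:
	go build -o bin/app .

build-linux:
	GOOS=linux GOARCH=amd64 go build -o bin/app-linux .

build-windows:
	GOOS=windows GOARCH=amd64 go build -o bin/app.exe .
```

The .PHONY declaration matters: without it, a stray file named "test" or
"build" in the repo would make those targets silently do nothing.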

~~~
codebeaker
are you me?

------
mauvehaus
Teradata contributes to Presto, the Facebook open-source project. We use
Docker to run tests against Presto. Since the tests require Hadoop to do much
of anything useful, we install Hadoop in Docker containers.

And we run tests on 3 flavors of Hadoop (HDP, CDH, and IOP), each of which is
broken down into a flavor-base image with most of the packages installed, and
various other images derived from that, which means we have a dependency chain
that looks like:

base-image -> base-image-with-java -> flavor-base => several other images.

Enter make, to make sure that all of these get rebuilt in the correct order
and that at the end, you have a consistent set of images.

[https://github.com/Teradata/docker-images](https://github.com/Teradata/docker-images)

But wait, there's more. Docker LABEL information is contained in a layer. Our
LABEL data currently includes the git hash of the repo. Which means any time
you commit, the LABEL data on base-with-java changes, and invalidates
everything downstream. This is terrible, because downloading the hadoop
packages can take a while. So I have a WIP branch that builds the images from
an unlabelled layer.

[https://github.com/ebd2/docker-images/tree/from-unlabelled](https://github.com/ebd2/docker-images/tree/from-unlabelled)

As an added bonus, there's a graph target that automatically creates an image
of the dependency graph of the images using graphviz.

Arguably, all of the above is a pretty serious misuse of both docker and make
:-)

I can answer complaints about the sins I've committed with make, but the sins
we've committed with Docker are (mostly) not my doing.

------
thealistra
I wanted to download a few hundred files, but the server allowed only 4
simultaneous connections.

I did a makefile like

    
    
        file1: 
            wget http://example.com/file1
    
        file2: 
            wget http://example.com/file2
    
        file3: 
            wget http://example.com/file3
    
    

And used make -j4 to download all of them, with only 4 parallel tasks at once:
make starts another download as soon as one finishes.
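
The hand-written rules above can also be collapsed into one pattern rule (the
URL and file names are illustrative; the recipe line starts with a tab):

```make
# One rule covers file1, file2, file3, ...; make -j4 still caps
# the number of simultaneous downloads at 4.
FILES := file1 file2 file3

all: $(FILES)

file%:
	wget -O $@ http://example.com/$@
```

With -O the output name is pinned to the target, so a partially written file
from an interrupted run is overwritten on retry rather than trusted.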

~~~
toomuchtodo
Check out aria2 next time.

[https://aria2.github.io/](https://aria2.github.io/)

------
Figs
I once implemented FizzBuzz in Make:
[https://www.reddit.com/r/programming/comments/412kqz/a_criti...](https://www.reddit.com/r/programming/comments/412kqz/a_critique_of_how_to_c_in_2016/cyzxqlx/?context=2)

Even though Make does not have built-in support for arithmetic (as far as I
know), it's possible to implement it by way of string manipulation.

I don't recommend ever doing this in production code, but it was a fun
challenge!
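
The string-manipulation trick is roughly this (GNU make): represent a number n
as a list of n "x" words, so concatenation is addition and $(words ...) turns
the list back into a decimal numeral.

```make
# Unary arithmetic in make; no built-in math needed.
num3 := x x x
num4 := x x x x

# addition by concatenation
sum := $(num3) $(num4)

all:
	@echo 3 + 4 = $(words $(sum))
```

Running make prints "3 + 4 = 7". Comparisons fall out of the same idea, e.g.
with $(filter ...) or $(word n,...) on the unary lists.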

~~~
nicky0
Did you get the job?

~~~
Figs
No, I just made it for fun. Someone else in the thread said they'd be
impressed by a proper implementation in Make, so I took a stab at it.

I do wonder what the response would be like if I actually wrote something like
this in a real job interview though...

Would the interviewer like my comment style? Would they be impressed that I
have the technical skills needed to actually pull it off? Or, would they be so
horrified by it that they'd refuse to ever let me touch any of their code? :)

------
chubot
Not particularly creative, but I'm using it to generate this blog:

[http://www.oilshell.org/blog/](http://www.oilshell.org/blog/) (Makefile not
available)

and build a Python program into a single file (stripped-down Python
interpreter + embedded bytecode):

[https://github.com/oilshell/oil/blob/master/Makefile](https://github.com/oilshell/oil/blob/master/Makefile)

Although generally I prefer shell to Make. I just use Make for the graph,
while shell has most of the logic. Although honestly Make is pretty poor at
specifying a build graph.

~~~
yumaikas
What are make's weaknesses in specifying build graphs? (Asking as someone who
hasn't used a lot of make, but might be soon.)

~~~
mnw21cam
1\. You can only specify that a recipe creates multiple output files (for
instance, an output file and a separate index file) if it has wildcards.

2\. Temporary file handling is completely broken. You can declare a file to be
temporary, so that make deletes it after all the jobs that use it have
finished. However, make randomly deletes the files at other times (for
instance if a command fails), and fails to delete the files at other times.

3\. There is a complete inability to specify resource handling - for instance,
I want to mark that this recipe is single-threaded, but that one uses all
available CPU cores, and have make schedule an appropriate number of jobs.

4\. If you want to have crash-recovery, then you need to make your recipes
generate the output files under a different name and then do an atomic move-
into-place afterwards. Manually. On every single recipe.
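
The manual move-into-place from point 4 looks like this for a single recipe
("slow_process" is a placeholder command; the recipe lines start with a tab):

```make
# Write to a temporary name, rename only on success: an interrupted
# or failed run never leaves a plausible-looking half-written
# result.dat behind for the next make invocation to trust.
result.dat: input.dat
	slow_process input.dat > $@.tmp
	mv $@.tmp $@
```

The mv is atomic on the same filesystem, which is what makes the scheme safe
against power loss mid-write.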

These reasons (and others) are why I gave up on make for bioinformatics
processing and wrote a replacement. I'll release and publish it at some point.

~~~
xelxebar
I know make can seem a little baroque, but this is just wrong.

1\. Multiple targets for a single recipe:

    
    
        file.a file.b file.c: dep.foo dep.bar
            ...
    

This says that the recipe makes all of file.a, file.b and file.c in one go.

2\. Make definitely doesn't _randomly_ delete files. It deletes implicit
targets.

Make by default knows how to build a lot of standard things like object files
for c programs, yacc and bison stuff, etc. These are called implicit targets.
These are considered intermediate files to be deleted. You can override the
defaults or add your own implicit targets by using pattern matching like this:

    
    
        %.foo: %.bar
            ...
    

If you want to use pattern matching for _non-implicit_ targets so they don't
get deleted, you can do that too:

    
    
        a.foo b.foo c.foo: %.foo: %.bar
            ...
    

The list before the first colon says which targets the pattern-matched rule
applies to and shouldn't contain wildcards. These targets won't get deleted.

3\. This seems like a misunderstanding of make's basic role. Make just spawns
shells when running a recipe; like bash, it shouldn't need to know how many
threads you're using to run an arbitrary command. If you want make to build
targets in parallel whenever possible, look at the `-j` option. If you want a
certain build recipe to run multi-threaded, use the proper tool for the
recipe.

4\. Not sure what you mean by crash recovery, but considering the above, I'm
pretty sure you might just be fighting make unnecessarily.

Honestly, try reading the info manual. It's kind of massive and daunting, but
the intro material is really accessible, and taken in pieces, you can easily
learn to become friends with this venerable tool.

~~~
mnw21cam
1\. That doesn't do what you think it does. From the manual: "A rule with
multiple targets is equivalent to writing many rules, each with one target,
and all identical aside from that." It does not mean one rule that creates
multiple targets. To achieve that, you need to use wildcards. For some reason,
when using wildcards, the syntax is interpreted differently.

2\. If I create a rule to create an intermediate file "b" from original file
"a", then another rule to create file "c" that I want to keep from "b", but
there is an error running the command that creates "c", then make will happily
delete the intermediate "b" (which in my case took 27 hours to create)
although it knows the final "c" wasn't created properly. This means that when
I rerun make (having fixed the problem), that 27 hour process needs to be run
again, which is a waste of my time.

3\. I want to say "make -j 64" on my 64-thread server, and not have 64
64-thread processes start. But I also _do_ want 64 single-threaded processes
to run when possible.

4\. By crash recovery, I mean that by default a process will start creating
the target file. If someone yanks the power, that target file will be present,
with a recent modified time, but incomplete. Make will assume the file was
created fine, so when I rerun make it will try to perform the next step, which
may take 10 hours to fail. I want make to notice that the command did not
successfully complete, and restart it from scratch.

~~~
fusiongyro
For #2, you can mark intermediates worth keeping with .PRECIOUS.

[https://www.gnu.org/software/make/manual/html_node/Special-T...](https://www.gnu.org/software/make/manual/html_node/Special-Targets.html)

For #3, I think you may be misreading ps; on Linux, ps will show you threads
as if they are processes when they are not.

~~~
mnw21cam
For #2, .PRECIOUS doesn't help me. From the make manual: "Also, if the target
is an intermediate file, it will not be deleted after it is no longer needed,
as is normally done." This means that my intermediate files will _never_ be
deleted by make, even when everything that is built from them has been
completed.

For #3, no I think I know how to read ps. I don't want 64 64-thread processes
running on my 64-thread server, because that is hell for an OS scheduler, and
makes things run slower, not faster.

~~~
fusiongyro
For #2, you could always make a dependency that removes your intermediates for
you after your final use. You can't be mad at make because it deletes
intermediates and because it doesn't delete intermediates. Make isn't psychic.

For #3, I didn't mean to come across as pedantic. I haven't encountered what
you're describing, but I have personally been surprised by how Linux does
process accounting, so I apologize; I just figured you were being bitten by
the same thing.

I like make a lot, but I don't use it for everything, because sometimes there
simply are better tools for the task, and I hope you were able to figure out a
solution.

------
grymoire1
I've used it when I was doing a pentest - searching a network for leaks of
information. I wrote dozens of shell scripts that scanned the network for
_.html files, then extracted URL 's from them, downloaded all of the files
referenced in them, and searched those files (_.doc, *.pdf, etc.) for metadata
that contained sensitive information. This involved eliminating redundant
URL's and files, using scripts to extract information which was piped into
other scripts, and a dozen different ways of extracting metadata from from
various file types. I wrote a lot of scripts that where long, single-use and
complicated, and I used a Makefile to document and save these so I could re-do
them if there was an update, or make variations of them if I had a new ideas.

------
privong
I use Makefiles for two components of my research:

\- Compilation of papers I am writing (in LaTeX). The Makefile processes the
.tex and .bib files, and produces a final PDF. Fairly simple makefile.

\- Creation of initial conditions for galaxy merger simulations. This I
obtained from a collaborator. We do idealized galaxy merger simulations and my
collaborator has developed a scheme to create galaxies with multiple dynamical
components (dark matter halos, stellar disks, stellar spheroids, etc.) very
near equilibrium. We have makefiles that generate galaxy models, place those
galaxies on initial orbits, and then numerically evolve the system.

------
voltagex_
To set up my dotfiles, although I'm not in enough of a routine for it to be
truly useful.

    
    
        tmux:
        	ln -s $(CURDIR)/.tmux.conf $(HOME)/.tmux.conf
        	tmux source-file ~/.tmux.conf
        
        reload-tmux:
        	tmux source-file ~/.tmux.conf
        
        gitconfig:
        	ln -s $(CURDIR)/.gitconfig $(HOME)/.gitconfig
    

cd ~/configs then make whatever. ~/configs itself is a git repository.

~~~
SpaceNugget
You should check out Stow, it's an already-established project that does what
you need.

~~~
voltagex_
What advantage does Stow give me over my current solution? I've always got git
and make installed (but I'm not averse to adding something else).

~~~
db48x
One advantage would be not having to write "ln -s ..." for every file you want
to link. Stow handles file trees as well.

------
INTPenis
Not exactly creative but KISS. I use only a Makefile for a C project that
compiles on Linux, BSD and macOS.

Point being that autoconf is often overkill for smaller C projects.

~~~
vortico
Have you made a cross-build system that compiles to all three on a single
machine, or that works when run on each of the three? I've been working on a
system based on gcc, MinGW64, and osxcross, but if that fails I'll use docker
with crossbuild.

~~~
xoroshiro
The Stockfish Makefile is probably the best example of this I've seen. It also
does some advanced stuff I'm not sure I understand yet. Profiling, setting
flags for CPU instructions and making profile builds, all with support for
different C++ compilers. Pretty neat stuff from one of the best chess engines.

[https://github.com/official-stockfish/Stockfish/blob/master/...](https://github.com/official-stockfish/Stockfish/blob/master/src/Makefile)

------
git-pull
This is a bit late, but in the book _The Tao of tmux_, I delve into how I use
Makefiles to create cross-platform file watchers that can trigger unit tests.
[https://leanpub.com/the-tao-of-tmux/read#file-watching](https://leanpub.com/the-tao-of-tmux/read#file-watching)

I use Makefiles regularly in open source and personal projects (e.g.
[https://github.com/tony/tmuxp/blob/master/Makefile](https://github.com/tony/tmuxp/blob/master/Makefile)).
Feel free to take and use that code; it's available under the BSD license.

The creativity comes in when dealing with cross-platform compatibility: Not
all file listing commands are implemented the same. ls(1) doesn't work the
same across all shell systems, and find on BSD accepts different arguments
than GNU's find. So to collect a list of files to watch, we use POSIX find and
store it in a Make variable.

Then, there's a need for a cross-platform file watcher. This is tricky
since file events work differently across operating systems. So we bring in
entr(1) ([http://entrproject.org/](http://entrproject.org/)). This works
across Linux, BSDs and macOS, and is packaged across Linux distros, ports, and
Homebrew.
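
A sketch of such a watch target ("make test" and the *.py glob are
placeholders for whatever the project actually watches; the recipe line
starts with a tab):

```make
# POSIX find collects the file list; entr(1) reads the names from
# stdin and re-runs the test suite whenever one of them changes.
.PHONY: watch
watch:
	find . -name '*.py' | entr -c $(MAKE) test
```

entr's -c clears the screen before each run, which makes repeated test output
easy to read.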

Another random tip: for recursive Make calls, use $(MAKE). This ensures
that non-GNU Make systems can work with your scripts. See here:
[https://github.com/liuxinyu95/AlgoXY/pull/16](https://github.com/liuxinyu95/AlgoXY/pull/16)
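
In minimal form (directory names are illustrative):

```make
# Sub-makes invoked via $(MAKE) inherit the same make binary that is
# already running (e.g. gmake on BSD) and cooperate with the -j
# jobserver, which a hard-coded "make" would not.
subdirs:
	$(MAKE) -C src
	$(MAKE) -C docs
```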

------
sannee
Not something I have personal experience with, but I have heard a story about
a Makefile-operated tokamak at the local university. Apparently, the operator
would do something like "make shot PARA=X PARB=Y ..." and it would control the
tokamak and produce the output data using a bunch of shell scripts.

------
cperciva
I have "make Makefiles", which uses BSD make logic to create portable POSIX-
compliant Makefiles.

------
lantastic
I once used make to jury-rig a fairly complex set of backup jobs for a
customer on very short notice. Jobs were grouped, each group was allowed
to run a certain number of jobs in parallel, and some jobs had a non-overlap
constraint. The problem was well beyond regular time-based scheduling, so I
made a script to generate recursive makefiles for each group that started
backups via a command-line utility, and a master makefile to invoke them with
group-specific parallelism via -j.

File outputs were progress logs of the backups that got renamed after the
backup, so if any jobs failed in the backup window, you could easily inspect
them and rerun the failed jobs just by rerunning the make command.

Fun times. Handling filenames with spaces was an absolute pain, though.
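
A sketch of that log-renaming scheme (the backup command and job names are
invented; recipe lines start with a tab):

```make
# Each job writes a progress log and renames it to a ".done" marker
# only on success, so rerunning make retries exactly the jobs that
# failed, leaving their .log files behind for inspection.
JOBS := db1 db2 fileshare

all: $(JOBS:%=%.done)

%.done:
	backup-client run $* > $*.log 2>&1
	mv $*.log $@
```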

------
a3n
Miki: Makefile Wiki [https://github.com/a3n/miki](https://github.com/a3n/miki)

A personal wiki and resource catalog. The only thing delivered is the
makefile, which uses existing tools, and a small convenience script to run it.

------
alexatkeplar
Until recently we used them at Snowplow for orchestrating data processing
pipelines, per this blog post:

[https://snowplowanalytics.com/blog/2015/10/13/orchestrating-...](https://snowplowanalytics.com/blog/2015/10/13/orchestrating-batch-processing-pipelines-with-cron-and-make/)

We gradually swapped them out in favour of our own DAG-runner written in Rust,
called Factotum:

[https://github.com/snowplow/factotum](https://github.com/snowplow/factotum)

------
rrobukef
I use it to setup my programming environment. One Makefile per project, semi-
transferable to other pcs. It contains

    
    
        * a source code download, 
        * copying IDE project files not included in the source, 
        * creating build folders for multiple builds (debug/release/coverage/benchmark, clang & gcc), 
        * building and installing a specific branch, 
        * copying to a remote server for benchmark tests.

------
shakna
Lisp in make [0] is probably the most creative project I've seen. For myself,
in some tightly controlled environments I've resorted to it to create a
template language, as something like pandoc was forbidden. It was awful, but
worked.

[0]
[https://github.com/kanaka/mal/tree/master/make](https://github.com/kanaka/mal/tree/master/make)

------
rv77ax
I use a makefile as the library package dependency [1], maybe like what
package.json is in the Node environment.

The idea is that if you want to use the library, you just include the makefile
inside your project makefile, define a TARGET value, and you will
automatically have tasks for build, debug, etc.

The key is a hack on the .SECONDEXPANSION pragma of GNU make, which means it
only works in a GNU/Linux environment.

[1] [https://GitHub.com/shuLhan/libvos](https://GitHub.com/shuLhan/libvos)

Edit: ah, turns out I wrote some documentation about it here:
[http://kilabit.info/projects/libvos/doc/index.html](http://kilabit.info/projects/libvos/doc/index.html)

------
Someone
I don't use it, but your question made me think of one: I would like to see it
(mis)used as a way to bring up an operating system.

It would probably require quite a few changes, but if the _/proc_ file system
exposed running processes by name, and contained a file for each port that
something listened to, one _could_ run make on that 'directory' with a
makefile that describes the dependencies between components of the system.

Useful? Unlikely, as the makefile would have to describe all hardware and
their dependencies, and it is quite unlikely nowadays that that is even
possible (although, come to think of it, a true hacker with too much time on
their hands and a bit of a masochistic tendency could probably put autotools
to creative use)

~~~
dragonne
You may appreciate
[https://github.com/Andy753421/mkinit](https://github.com/Andy753421/mkinit)
though it is written in mk (from Plan 9) rather than Make.

~~~
Someone
Yes, I do. Thanks.

------
regnar86
I'm developing flight software at work on various Linux PCs that have support
drivers installed for some PCIe cards. If I want to code on these PCs it's
either sit inside a freezing clean room or "ssh -X" into a PC to bring up an
editor. This sucks, so I have a makefile to take in certain specifics of my
flight software build, with additional compile-time switches for flexibility
to build natively on my own computer. This allows me to essentially ignore
installed drivers/libs and work comfortably in my own environment until I
require the actual PC in the cleanroom to run my build.

------
matt4077
I'm using ruby's rake in almost every project, even when it's not ruby
otherwise.

It has much of the same functionality, but I already know (and love) ruby,
whereas make comes with its own syntax that isn't useful anywhere else.

You can easily create workflows, and get parallelism and caching of
intermediate results for free. Even if you're not using ruby and/or rails,
it's almost no work to still throw together the data model and use it for data
administration as well (although the file-based semantics unfortunately do not
extend to the database, something I've been meaning to try to implement).

Lately, I've been using it for machine learning data pipelines: spidering,
image resizing, backups, data cleanup etc.

~~~
bjpbakker
> It has much of the same functionality

You could use both to accomplish the same thing, sure. But their concepts are
quite different.

Rake works on tasks, which you define (or import from some gem).

Make works with file targets more than tasks. You define how it can make a
certain (type of) file and it does the job.

Personally I mostly use Make if I want to generate files from something else.
Otherwise I find small scripts easier than Rake or equivalent.

~~~
steinuil
Actually, Rake has both. You can define file targets using "file", but I found
that for smaller projects it just becomes a more verbose make.

~~~
bjpbakker
True, Rake also has file tasks. There still are tasks and not much like file
targets in Make.

~~~
shoo
i think Rake is one of the various newer re-implementations of make that more
or less miss the point of what is good about make.

make is pretty neat if you think about it as a framework to help you
compute/derive values from other values. each value happens to be stored in
the filesystem as a file.

in contrast, many of the newer build-tool make replacements seem to miss the
whole value of values and either push or force you in a direction of doing
actions with side effects.

------
unmole
Not mine but here's a Lisp interpreter written in Make:
[https://github.com/kanaka/mal/tree/master/make](https://github.com/kanaka/mal/tree/master/make)

------
BenjiWiebe
I have a makefile I use for all of my AVR projects. It has targets to build,
program, erase, and bring up a screen on ttyS0 and maybe more. I add targets
whenever I realize I'm doing anything repetitive with the development
workflow.

------
xemoka
I haven't, but one of the cool uses I've seen lately is how OpenResty's
folks use it for their own website: they convert markdown to HTML, then
combine it with metadata into TSV, and finally load that into a Postgres DB.
They then use OpenResty to interface with the DB etc. But all the
documentation is originally authored in markdown files.

Makefile:
[https://github.com/openresty/openresty.org/blob/master/v2/Ma...](https://github.com/openresty/openresty.org/blob/master/v2/Makefile)

------
DanHulton
I use Ansible for deployment and Ansible Vault for storing encrypted config
files in the repo. Of course, it's always a bit of a nightmare scenario that
you accidentally commit unencrypted files, right?

Well, I have "make encrypt" and "make decrypt" commands that iterate over
the files listed in an ".encrypted-files" file. Decrypt will also add a
pre-commit hook that rejects any commit of the unencrypted files with a
warning.

This is tons easier than trying to remember the ansible-vault commands, and I
never have to worry about trying to remember how to permanently delete a
commit from GitHub.
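
A sketch of those two targets. The ".encrypted-files" list is the poster's
convention; the shell loop is an assumption about how it's consumed, and
recipe lines start with a tab:

```make
# One file path per line in .encrypted-files; ansible-vault
# prompts for (or reads) the vault password itself.
FILES := $(shell cat .encrypted-files)

.PHONY: encrypt decrypt
encrypt:
	for f in $(FILES); do ansible-vault encrypt $$f; done

decrypt:
	for f in $(FILES); do ansible-vault decrypt $$f; done
```

Note the doubled $$f: a single $ would be expanded by make before the shell
ever sees the loop variable.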

------
gopalv
To generate 100 terabytes of data in parallel ... on Hadoop

[https://github.com/hortonworks/hive-testbench/blob/hive14/tp...](https://github.com/hortonworks/hive-testbench/blob/hive14/tpcds-setup.sh#L116)

The shell script generates a Makefile and the Makefile runs the hadoop
commands, so that the parallel dependency handling is entirely handed off to
Make.

This makes it super easy to run 2 parallel workloads at all times - unlike
xargs -P 2, it is much more friendly towards complex before/after deps and
failure handling.

------
cmcginty
I used a Makefile for managing a large number of SSL certificates, private
keys and trust stores. This was for an app that needed certs for IIS, Java
and Apache, which all expect certificates to be presented in different
formats.

Using a Makefile allowed someone to quickly drop in new keys/certs and have
all of the output formats built in a single command. Converting and packaging
a single certificate requires one or more intermediate commands, and a
Makefile is set up to directly handle this type of workflow.
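
A sketch of that fan-out with pattern rules. File names and the "changeit"
password are placeholders, and recipe lines start with a tab:

```make
# Drop in foo.pem + foo.key, then ask for foo.p12 (IIS) or foo.jks
# (Java trust store); the intermediate conversions chain automatically
# through make's pattern rules.
%.p12: %.pem %.key
	openssl pkcs12 -export -in $*.pem -inkey $*.key \
		-out $@ -passout pass:changeit

%.jks: %.p12
	keytool -importkeystore -srckeystore $< -srcstoretype PKCS12 \
		-srcstorepass changeit -destkeystore $@ -deststorepass changeit
```

Asking for foo.jks when only foo.pem and foo.key exist builds foo.p12 first,
exactly the intermediate-step chaining described above.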

------
stephenr
I guess it depends what you consider creative?

I use one to build my company's Debian Vagrant boxes:
[https://app.vagrantup.com/koalephant](https://app.vagrantup.com/koalephant)

I use one to build a PHP library into a .phar archive and upload it to
BitBucket

My static-ish site generator can create a self-updating Makefile:
[https://news.ycombinator.com/item?id=14836706](https://news.ycombinator.com/item?id=14836706)

I use them as a standard part of most project setup

------
rurban
I'm creating a config.inc makefile during make to store config settings,
analogous to config.h:
[https://github.com/perl11/potion/blob/master/config.mak#L275](https://github.com/perl11/potion/blob/master/config.mak#L275)

Instead of bloated autotools I also call a config.sh from make to fill in some
config.inc or config.h values, which even works fine for cross-compiling.

------
natebrennand
We use Makefile "libraries" to reduce the amount of boilerplate each of our
microservices has to contain. This then allows us to change our testing
practices in bulk throughout all our repos.

[https://github.com/Clever/dev-handbook/tree/master/make](https://github.com/Clever/dev-handbook/tree/master/make)

------
dvfjsdhgfv
The main question to ask is whether you really need to use make. If you do,
there is practically no limit to what you can do with it these days, including
deployment to different servers, starting containers/dedicated instances, etc.
But unless you are already using make or are forced to, it's better to check
out one of the newer build systems. I personally like CMake (it actually
generates Makefiles).

------
peterbraden
I have a makefile that sets up a brand new computer with the software I need.
It means I can be up and running on a new machine in a few minutes.

------
haspok
[https://erlang.mk/](https://erlang.mk/) \- need I say more? :)

------
accatyyc
One "creative" use is project setup. Sometimes, less technical colleagues need
to run our application, and explaining git and recursive submodules takes a
lot of time, so I usually create a Makefile with a "setup" target that checks
out submodules and generates some required files to run the project.
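
A sketch of such a setup target (the generate step is a placeholder for
whatever files the project needs; the recipe lines start with a tab):

```make
# One command for less technical colleagues: fetch all nested
# submodules, then generate the required local files.
.PHONY: setup
setup:
	git submodule update --init --recursive
	./scripts/generate-config.sh
```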

------
gkelly
I use Makefiles that run "git push $branch" and then call a Jenkins API to
start a build/deploy of that $branch. This way I never have to leave vim; I
use the fugitive plugin for vim to "git add" and "git commit", then run
":make".

------
Da_Blitz
i use it to solve dependency graphs for me in my programming language of
choice; at the moment this involves setting up containers and container
networking, but i throw it at anything graph based

make seems to be easier to install/get running than the myriad of
non-packaged, github-only projects i have found.

~~~
Da_Blitz
Have also seen a full Certificate Authority implemented using a makefile;
it was one of the easiest i have ever used

i am also currently using it with rsync to implement a poor man's dropbox on a
vps host, with a systemd timer unit to clean files after 30 days, for sharing
files with customers. a simple wrapper script dumps a file in the right
folder, invokes make and causes rsync to run. the makefile also handles setup
of the account, like ssh-add (with restricted commands), key generation and
config options (via include files)

------
rahi374
I use it to generate my LaTeX CV. In my case I have multiple target countries,
so I have pseudo-i18n with pseudo-l10n and different values like page size,
addresses and phone numbers, and then I just make for the target country,
like make us or make ja.

------
fusiongyro
I used it to make a blog once.

[http://old.storytotell.org/blog/2009/07/13/how-to-manage-a-w...](http://old.storytotell.org/blog/2009/07/13/how-to-manage-a-website-destructively.html)

------
johnny_1010
I use a makefile to generate my static website. Also my CV; LaTeX and make
work well together.

------
Mister_Snuggles
I've used Makefiles to determine what order to run batch jobs in so that
dependencies can be met. Instead of describing what order to run things in,
you describe what depends on what.

It's pretty cool, but not ideal.

------
leksak
Nowadays I mostly use Tup. If I use make it is usually for when I'm working
with other people on LaTeX documents, and often times it's enough to just call
rubber from make x)

------
dakerfp
I use it to run Verilog testbenches and start a RISC-V simulator.

------
tripa
I use make as a poor man's substitute for rsync (well, local rsync. Like cp
-r), when I need to add some filtering in between.

------
tmaly
I use it to build all my Go microservices, run the test suite, compile Sass,
minify CSS, and minify JS.

------
yabadabadoo
I use make to pre-compile markdown into HTML for a static website.

~~~
JdeBP
I use redo to build the menus for a GOPHER site. It's the same principle. Make
changes; run redo; menus get updated automatically. I also use it to rebuild
the indexes for package repositories after making changes.

See gopher://jdebp.info/1/Repository/freebsd/ under "how this site is built"
and gopher://jdebp.info/1/Repository/debian/dists/ .

See also gopher://jdebp.info/h/Softwares/djbwares/guide/gopherd.html .

------
jmurphyau
I use make to make things

