I used make for years before I understood pattern rules, even though they're pretty simple. I kept trying to use foreach loops all over the place. If you're using foreach loops, check whether a pattern rule can do the job.
> The popular clamour is that the autovars are too short & confusing.
I finally realized, after years of using make and then learning pattern rules, that the three autovars listed here are visual mnemonics. $@ looks like a target, a ring with a bullseye. $< is pointing left, so it's the first prereq. $^ is like a horizontal bracket that groups all prereqs.
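To make that concrete, here is a minimal sketch of the kind of pattern rule that usually replaces a foreach loop, with the three mnemonics annotated (file names are made up; recipe lines must start with a real tab, shown here as extra indentation):

    # one pattern rule covers every .o in the project
    %.o: %.c common.h
        $(CC) $(CFLAGS) -c $< -o $@   # $< = first prereq (the .c), $@ = the target
        @echo "rebuilt $@ from: $^"   # $^ = all prereqs (the .c plus common.h)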
> I finally realized, after years of using make and then learning pattern rules, that the three autovars listed here are visual mnemonics. $@ looks like a target, a ring with a bullseye. $< is pointing left, so it's the first prereq. $^ is like a horizontal bracket that groups all prereqs.
I've been illuminated, sincerely! Meanwhile, bmake has some more memorable synonyms for them like ".TARGET" and ".ALLSRC".
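For reference, a tiny sketch of the same kind of rule in bmake's spelling (these long names are bmake-only; gmake doesn't understand them):

    prog: main.o util.o
        ${CC} -o ${.TARGET} ${.ALLSRC}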
It's a shame that there is no name for .ALLSRC shared between bmake and gmake. An alternative is to use a variable, but then you lose VPATH compatibility:
    foo: foo.o bar.o
        $(LD) foo.o bar.o ...
My other gripe with bmake is that this rule is needed as there is no implicit rule for linking multiple object files, or so it seems.
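The variable alternative mentioned above would look roughly like this; it runs under both makes, but because the object paths are spelled out by hand, a $^ or ${.ALLSRC} that would have picked up the VPATH-resolved paths is no longer in play:

    OBJS = foo.o bar.o

    foo: $(OBJS)
        $(LD) $(OBJS) ...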
What I've always loved about Make and related tools is how directly they expose the fundamental concept behind building artefacts: the DAG. I don't do that much JS, but I've not found this kind of clarity in any of the JS frontend build tools I've used.
Honest question for the webpack power-users here.
What are some important webpack features that you lose by using a Makefile like this? Because it seems like pretty much every part of a regular webpack buildchain has a command-line equivalent that you could potentially use in a Makefile.
You wouldn't want to use it as a replacement for webpack; you'd use it alongside webpack. Webpack does a ton of stuff for JavaScript compilation - heavily cached and optimized recompilation, module rolling, hot reloading, etc.
Make is for performing any variety of commands, most of which don't fit into webpack - deploy scripts, git scripts, whatever. It's a unified entrypoint for scripts project-wide. It's not a great resolver for JS dependencies, but it's a great resolver for "I need freshly-compiled JS baked into a docker container and uploaded to my docker repo".
Having something like Make is especially important in large teams in my experience. Under-the-hood tooling can often change (NPM -> Yarn, needing different test suite arguments, new docker commands, whatever) and having a single expected API for developers across your teams is particularly useful when you're the person working on dev tooling. Knowing `make test` should prepare and run a test suite or `make run` should run an app no matter which repo you're working on gives devs a consistent development workflow.
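A sketch of that "single expected API" idea; the underlying commands (yarn, docker, the registry URL) are placeholders, not anything from this thread:

    .PHONY: test run build deploy

    test:
        yarn install --frozen-lockfile
        yarn test

    run:
        yarn start

    build:
        docker build -t registry.example.com/myapp:latest .

    deploy: build
        docker push registry.example.com/myapp:latest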
I don't know enough about make to know if this is correct, but the post last week on using make for JS started off by saying "If you are writing a frontend app and you need code bundling you should absolutely use Webpack" - but aren't the majority of people using webpack writing frontend apps that need bundling?
The other issue is that essentially no one uses make in frontend development. So you go and convert to make from something that was working fine and is the standard in that part of the industry, and what's the payoff? Now no one on your team, or whoever you hire, knows how to change the build chain? Great.
> aren't the majority of people using webpack writing frontend apps and need bundling?
Webpack is pretty new. Lots and lots of websites were developed before webpack existed, and some of them still use make for builds. The majority of new web projects probably use webpack now.
> The other issue is that essentially no one uses make in frontend development.
I don't know how many people do or don't use make, but I've had two web jobs that did. One was my own company and I wrote the makefiles, so that's not exactly fair. :P The other was an established web app. In both cases, the biggest reason for the makefile was to integrate the Google Closure Compiler into the build. Last time I checked, Webpack didn't have support for the Closure compiler. I am aware that the Closure compiler isn't exactly the most popular js minification tool today, and isn't generally necessary with webpack.
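Roughly what such an integration looks like as a make rule; the jar path and flags here are illustrative, not taken from either job:

    JS_SRC = $(wildcard src/*.js)

    app.min.js: $(JS_SRC)
        java -jar closure-compiler.jar \
            --compilation_level SIMPLE_OPTIMIZATIONS \
            $(foreach f,$^,--js $(f)) \
            --js_output_file $@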
> So you go and convert to make from something that was working fine and is the standard in that part of the industry and what's the pay off?
I agree with you generally, but I'm old enough that this also makes me chuckle a little. make was working fine and was the industry standard for decades before webpack or npm.
Just before Webpack there was Gulp, and Gulp is essentially a re-invention of unix pipes and lots of the unix tools. Build systems made with gulpfiles have to bend over backwards to get make's basic concepts right, like dependency evaluation, building a target only once, and skipping already-built targets.
While not nearly as popular as Webpack, Broccoli.js (used by Ember, among others) exposes the DAG concept. Webpack is fine if your task roughly matches the purpose Webpack was built for, but when using it I miss the composability of Broccoli. There's a good architecture overview here:
> What are some important webpack features that you lose by using a Makefile like this?
Transparent cross-platform usage, specifically on Windows. Yes, you can set up Make on Windows, but it's not really the norm, especially for web developers.
What does building a JS app take? Linting maybe, then concatenating files, minifying them. Then there should be some maintenance tasks, like updating dependencies, testing, deploying. Am I missing anything (serious question, I'm rather unfamiliar)?
They can also replace function and variable names with shorter ones, and compile code to be shorter (marginal gains, since it's all going to be served compressed).
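A rough make-flavoured sketch of the lint, concatenate, minify part described above (eslint and uglifyjs are assumptions, just to show the shape):

    JS_FILES = src/app.js src/util.js

    dist/bundle.min.js: dist/bundle.js
        npx uglifyjs $< -c -m -o $@    # compress and mangle (the renaming mentioned above)

    dist/bundle.js: $(JS_FILES)
        npx eslint $^                  # lint failures stop the build
        mkdir -p dist
        cat $^ > $@                    # concatenate in listed order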
This looks very similar to the Makefile I'm using, except that besides -MMD I also pass the -MP flag. I created this Makefile a couple of years back and I'm reusing it for every new project, but I cannot really recall why I needed to add -MP.
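For the record: -MMD makes the compiler emit a .d dependency fragment per object, and -MP adds a phony target for each header, so make doesn't stop with "No rule to make target foo.h" after a header is deleted or renamed. A minimal sketch of the usual setup (file names are made up):

    SRCS := main.c util.c
    OBJS := $(SRCS:.c=.o)
    CFLAGS += -MMD -MP

    prog: $(OBJS)
        $(CC) -o $@ $^

    -include $(OBJS:.o=.d)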
There's no problem with replacing, other than losing the ability to glance at an object and determine which language it came from, and name conflicts with `util.c` and `util.cpp` in the same directory (believe me, it happens).
The thing people often miss about makefiles, apart from what has already been mentioned, is the fact that you can have "local" versions of the "global" variables, per rule.
It's very, very powerful, clear to read, and it simplifies all those if/else/endif blocks no end...
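For anyone who hasn't seen it, this is GNU make's target-specific variables; a small sketch (names are made up):

    # the "global" value
    CFLAGS := -O2

    # "local" value: applies to the debug target and everything rebuilt on its behalf
    debug: CFLAGS := -g -O0
    debug: app

    app: main.o util.o
        $(CC) $(CFLAGS) -o $@ $^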
I've thought about adding a couple of sentences about vpath (in a cautious style akin to "think twice before using it" [0]), but ultimately decided that the topic is too obscure; besides I practically never use vpath myself.
I use snakemake[0] as a replacement for make. The debugging tools in snakemake are really good - it can output the DAG of your workflow as a Graphviz dot file, and you can add breakpoints and use pdb to debug your workflow. It also integrates really nicely with conda, so you can have different environments for different rules.
The killer feature of snakemake for me is that it can run a workflow on a single core or as many as you give it, or in the case of a cluster it can actually submit jobs to a queue and wait for them to finish before submitting more.
The tradeoff is that you need to install it (conda install -c bioconda snakemake), and that it can be very slow if you have thousands of target files. But for most workflows it's the right tool for me.
> The killer feature of snakemake for me is that it can run a workflow on a single core or as many as you give it, or in the case of a cluster it can actually submit jobs to a queue and wait for them to finish before submitting more.
Other than your cluster example isn't this the same as passing -j to make to run jobs in parallel?
Yeah, -j just about does it, though with Snakemake you can also specify other resources (memory, I/O, etc.) for each task, and it will keep all of them under quota. So if you tell Snakemake that task A takes six cores, but B-K take only one each, it will solve the knapsack problem to come up with some kind of optimal order for them, and on your 8-core machine you could have A, B, and C running, or B-K.
For a while, I muddled through with Make and a custom cluster job-submitter script, but the real Snakemake killer feature for me was multiple, independent wildcards. So I can, in one rule, have dir/{databasename}/{query}.tsv and rapidly hit multiple databases at once. If you are very clever (and rule one of Make is: don't be clever), you can have a GNU Make wildcard parsed into multiple wildcards with the wimpy pattern-substitution machinery, but this is not a fun way to relax.
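For the curious, the GNU make contortion being alluded to looks something like this (run-query is a made-up command): the single % stem matches "databasename/query", and the recipe splits it back apart by hand.

    # not recommended: one stem carrying two logical wildcards
    dir/%.tsv:
        db=$(firstword $(subst /, ,$*)); \
        query=$(lastword $(subst /, ,$*)); \
        run-query --db $$db --query $$query > $@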
POSIX make has the virtue of simplicity. GNU make : POSIX make :: C++ : C. It's even worse than that, because the way GNU make extends the syntax is completely at odds with core make syntax. It's why writing a looping construct in GNU make requires the contortion of using eval, whereas a loop in the BSD family of makes is much simpler and more concise.
I'm not advocating using a BSD make, nor saying not to use GNU make. But appreciating GNU make's points of departure from POSIX make helps keep your makefiles simple and to the point. Which is important, because another virtue of make generally (GNU make included) is that it'll likely still be around long after all these alternatives have been forgotten. But time is never kind to complexity.
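To illustrate the looping point, a sketch with made-up file names (the two halves are for different makes, not one file):

    # GNU make: generating a rule per item takes define + call + eval + foreach
    define COPY_template
    $(1).out: $(1).in
        cp $$< $$@
    endef
    $(foreach f,foo bar,$(eval $(call COPY_template,$(f))))

    # BSD make: the same loop, written directly
    .for f in foo bar
    ${f}.out: ${f}.in
        cp ${.ALLSRC} ${.TARGET}
    .endfor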
That's fair.
I didn't realise it wasn't available everywhere; it's one of those things I keep assuming is in coreutils until it turns out not to be there on some OS (funnily enough, I keep thinking half the commands I use are like that ... until something breaks).
I'm an `entr` fan, personally. Super easy to compose with other tools, and it steps around common minor mistakes (backs off when there are rapid changes, can abort running processes when changes occur or ignore them, etc.).
In all these discussions about make, everyone argues about using it for everything. Personally, I don't know another tool that makes it so straightforward to glue together all kinds of different build tools (for example webpack for the frontend and maven/sbt for the backend).
Even if you don't directly use make, it makes for great, verifiable documentation.
Yes! Thank you! This needs to be banged into people's heads when talking about build systems. Every time you add an edge, you're automating an otherwise manual task. Very important realization.
I've been working on a small custom build system (of the "re-editable" kind) for my own software project, and have been working on a little writeup. It's not finished, and I think there was something in the implementation that I was still unhappy about. But I'll take this opportunity to jump on the bandwagon. I think for larger projects it is worth it to just make your own system, so here are my notes: http://jstimpfle.de/blah/buildsystem.html
If you’re feeling mischievous, Make has divers ‘secret’ vars & targets for you. For instance, you don’t have to ‘struggle’ w/ the ancient bash, for it’s possible to program recipes in any language that supports evaluations straight from the command line.
In addition, you can have a different shell for each rule with rule-specific macros, if you so desire:
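Something along these lines (a sketch using GNU make's target-specific SHELL and .SHELLFLAGS):

    # this one recipe is executed by python3 -c instead of /bin/sh -c
    .PHONY: hello
    hello: SHELL := python3
    hello: .SHELLFLAGS := -c
    hello:
        print("hello from a python recipe")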
The author has put an offensive banner (https://imgur.com/a/84cuN) in front of his blog page for Russian IP addresses. I just wonder why there is no such thing in the article above
Wait. Did I read it wrong, or didn't an article a few days back say that we can use make with file names containing spaces, it's just that we shouldn't? Although that article talked about using a non-breaking space as a hack.
Make doesn't handle filenames with spaces (ASCII 32) at all! I wrote the article (http://boston.conman.org/2018/02/28.2) about using the non-breaking space (NBSP, Unicode 160) in filenames, and make (as well as nearly everything else I tried) accepted it just fine.
But popular consensus (from here and Lobsters) seemed to be "avoid any type of space at all cost and do not speak of it again!". Pity.
I think the main point of handling file names with character code 0x20 in them is for situations where you don't get to pick the file names.
I have a project in which I'm juggling a large number of files imported from external sources, which need to be variably processed and used as-is, and in either case must have their names preserved. I wanted to use Make to handle the dependency chaining, but munging file names when pulling them in means that I have to unmunge them on every output. Suddenly you can't just zip or grep -H a selection of some of the files anymore; you have to do a whole dance of copy-and-unmunge-name, clean-up-the-temporary-directory, and so on and so on.
If I could just assign my own names, I could just as well avoid spaces, if it were purely an aesthetic issue. But it really isn't. You don't often need “something that displays as a space but you still get to decide its representation”, you need “passing through exactly the ASCII space without having to worry about it”, or the interoperability concerns still explode in your face one way or another (if they exist in the first place).
If I were to write a makefile for something where I can't restrict the filenames beforehand, I'd probably just have a target for running `detox` and then, at the very end, possibly another target that changes underscores to spaces and creates some dummy file as the real target.
But how does this address anything like the zip or grep cases I mentioned? File names go everywhere. You will have spurious nonbreaking spaces (or underscores, or whatever else) in outputs from here to /dev/null and back. Yes, it is theoretically possible to work around it, but it's hardly ergonomic for either the writer or the user of the Makefile.
Oh yes. I got so pumped up by your article (and partially by Lobsters' comments about never using it again) that I went and replicated the scenario on my own. Only then was I able to understand what you meant by:
> It also requires a filesystem that can support Unicode (or in my case, UTF-8) and a command line that also supports Unicode (or again in my case, UTF-8).
> And it was also not that easy to create the filename and Makefile with the non-breaking space..
Looks like setting up the compose key on linux did the trick.
I posted my own recipe for naming files using non-breaking spaces[0] after someone asked for it. A beginner here, so the steps may look a little too primitive, but it gets the work done. Thanks for introducing me to a hack. Now I can have a folder with two files whose names look exactly the same (one with ASCII 32, one with NBSP).
It's not generally possible to do with hand-coded makefiles, because you want to use variables. If you are generating your makefiles, like with cmake, then you can deal with spaces (at least, you can build in a path with spaces in it, and deal with input filenames which contain spaces) because all the paths wind up hardcoded in the makefile (this is one contributing reason to why you cannot move a cmake build directory).
One thing I sometimes wish were possible in make is using the automatic variables in the dependency part of the rules. The one I've most often wanted is $(@D), so I could depend on a file with a specific name in the same directory when the directory name isn't known beforehand. I got around this with some touch trickery, but it would just be easier to read if those automatic variables were available at that point.
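For what it's worth, GNU make's .SECONDEXPANSION gets most of the way there: with it enabled, escaped automatic variables such as $$(@D) are allowed in the prerequisite list (process and the paths below are made up):

    .SECONDEXPANSION:

    # every out/<name>/result.txt depends on the config.yaml sitting next to it
    out/%/result.txt: $$(@D)/config.yaml
        process $< > $@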
There is DJB's redo[1], which has some implementations. Also, one could probably modify make to use hashes. But personally I think that mtime is a nice compromise, as it's often a correct heuristic, and hashing files is not cheap; for big projects it'd be a real performance penalty.
> While you can theoretically use it on windows, nobody does.
What do you use instead?
I think a lot of people use make on Windows. I've had several jobs with a Windows dev environment, and they've all used make. My current dev env is cmake on Windows, so I don't touch the makefiles, but it's still using make on Windows.
There aren't that many options for cross-platform projects. A lot of people use cygwin or the Windows Subsystem for Linux with make, and/or native cmake.
I also go directly to make on either cygwin or unix for one-off builds all the time, in non-C/C++ projects. make is super handy for resizing images or running a batch of tests on a program generating graphs. Anything where you need to generate files that only update when another file is touched is a good job for make.
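The image-resizing case, for instance, fits in a couple of lines (ImageMagick's convert and the directory layout are assumptions):

    thumbs/%.jpg: originals/%.jpg
        mkdir -p thumbs
        convert $< -resize 200x200 $@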
make -j is extra handy on windows for some situations since it's hard to get gnu parallel to run in cygwin and the parallel perf wasn't very good last time I checked.
I do. I live in MSYS2 on Windows. Can't stand VS or powershell, and I've invested so heavily in the GNU toolchain over the years that having the option on Windows is a no-brainer for me.
I use Make for everything from frontend builds (don't like grunt/gulp) to rust apps. The better I get at Make, the more I love it.
No it doesn’t. Makefiles define inputs, outputs and the commands that transform input to output. Make then compares the last modified timestamp of inputs with outputs on each run and reruns the commands for the targets that are out of date.
Make is in no way limited to C or C++ projects and there is absolutely nothing wrong in using Make for a JavaScript project.
The only thing that sucks a bit is maintaining the list of inputs and outputs if you have a lot of files.
Well, that and the fact that Makefiles are not generally portable between GNU and BSD Make, so if you use both Linux and a BSD you need to install either gmake on BSD or bmake on Linux, and then remember to invoke that rather than plain make for your project. Of course you could have a bash function that looks at the directory and picks GNU make if there is a GNUmakefile and BSD make if there is a BSDmakefile, but maintaining such things is a hassle as well.
It always comes down to picking the right tool and sometimes Make will be it and sometimes it won’t but whether the output is JavaScript or a PDF (e.g. from LaTeX documents) or JPEGs or whatever has very little to do with it.
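On the "maintaining the list of inputs and outputs" point above: GNU make's $(wildcard) can take over some of that chore, at the cost of also sweeping in any stray file that happens to match the pattern:

    SRC := $(wildcard src/*.js)

    dist/app.js: $(SRC)
        mkdir -p dist
        cat $(SRC) > $@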
Perhaps if it's a project that only you and maybe a few other people will work on. But I think there's value in using tooling that other contributors from that ecosystem will be comfortable using.
I say this as someone who doesn't really like the JS ecosystem and its churn. I suppose you could argue the JS community itself has valued novelty over using tooling others will be familiar with. But if you have two tools that do roughly the same thing, one known to a majority of contributors and one known to a minority, I would go with the more popular.
> Make is in no way limited to C or C++ projects and there is absolutely nothing wrong in using Make for a JavaScript project.
The fact that you have to acknowledge this proves how unorthodox it is, and ignores team concerns. So no, there is something wrong with using Make for a JavaScript project: people don't normally do it.
> It always comes down to picking the right tool
You're right, stick to the ones people actively use and don't pretend features exist in bubbles.
I can't believe you've been down-voted for these two posts. If you convert your js projects to make, you are going to be the only person on your team who ever touches them again, for no payoff.
Notably, though, bambataa hasn't been downvoted for making pretty much the same point, except conceding that there are limited situations where it might work instead of blanket-declaring it always bad.
I've generally found that criticism on Hacker News, even if it's a reasonable sentiment, trends downwards. It seems to be more favorable to support something or stay silent than to give critical commentary. It could also just be my tone.
I'm sure my comments would be more popular if I ignored the realities of software development and only praised the novelties of it.
“there is something wrong with using xxx for a yyy: people don't normally do it” seems like a logical fallacy even without considering the details, which even more people are not interested in.
How is that a logical fallacy at all? Do you know what a logical fallacy is? It's a measurable fact: nowhere does make show up at the top of the list of build tools for this ecosystem in the regularly published industry analytics.
I'm astounded so many idiots have such high karma counts on HN; what a wonderful place to have discourse.
If you're referring to my karma, then it is not high as far as I care, nor am I too smart, as you noticed. Most of the upvotes I got were for stupid rants and complaints about modern UI and JavaScript, with a few points that I personally find important. I think we all differ in intelligence, understanding, style, etc., so my thought on karma is... it doesn't matter; what matters is how one reacts to others' meaning, whether structured or emotional.
I don't see why your comments have to be grey at all, if that helps; I just pointed out where a few may have 'misheard your tune'. But if it is unthoughtful, then it's their problem, not yours. They'll use make and suffer; isn't that fair?
So are Gradle, Maven, and Xcode; it does not matter what the target or the source is (as language, file, etc.).
They all have tasks/recipes and dependencies/prerequisites.
IMHO the JS build-system stack is still not good at all, changing to a new system every 2-3 years. (Note that I am not a JS dev, just an outside point of view.)
Which gives it a not-stable-ecosystem feeling. I see small agencies spending days, maybe weeks, porting their 'themes' to new stacks (long ago jQuery, then Gulp, now webpack), whereas I haven't encountered a single "porting to a new makefile format" case, because of course make has a solid history/background behind it.
Yeah, what's an even better idea is not to introduce a new build system, but to introduce an old one primarily used by an entirely different language ecosystem.