The Makefile I use with JavaScript projects (olioapps.com)
544 points by theothershoe 56 days ago | 511 comments

I used make heavily in the '80s and '90s, but haven't used it much since then. Recently I started a project that has source files getting processed into PDF files, for use by humans. Since this is the 21st century, those files have spaces in their names. At a certain point, I realized that I should be managing this processing somehow, so I thought of using a simple Makefile. A little searching reveals that the consensus on using make with files that have spaces in their names is simply "don't even try." In the 21st century, this is not an acceptable answer.

Demanding the support of spaces in filenames significantly complicates code, as simple space delimitation no longer works and other delimitation schemes are much more error-prone -- forgetting balancing quotes, anyone? While you are allowing spaces, you are probably allowing all possible code points, or maybe even a null byte? Thinking about it gives me headaches.
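The breakage is easy to demonstrate in a shell. A minimal sketch (directory and file names invented for the example) of how unquoted word splitting sees one space-containing file as two words, while a quoted expansion keeps it whole:

```shell
#!/bin/sh
# Hypothetical illustration: why simple space delimitation stops working
# once filenames may contain spaces.
dir=$(mktemp -d)
touch "$dir/new 2.txt"

# Classic mistake: iterating over `ls` output word-splits on the space,
# so the single file is seen as two words ("new" and "2.txt").
unquoted_count=0
for f in $(ls "$dir"); do
  unquoted_count=$((unquoted_count + 1))
done

# Quoting the expansion (here, a quoted glob) keeps the name as one word.
quoted_count=0
for f in "$dir"/*; do
  quoted_count=$((quoted_count + 1))
done

echo "unquoted splitting sees $unquoted_count words; quoted sees $quoted_count file"
rm -rf "$dir"
```

The fix is mechanical (quote every expansion), but it has to be applied everywhere, which is exactly the complication being described.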

I hate hearing people use "21st century" or "modern" as reasons for inflating complexity. Without reining in complexity (whether it is hidden or not), the future is doomed, whatever you are building. While I am not saying we should avoid complexity at all costs, I am insisting that all complexity should be balanced against its merits.

The merit of filenames with spaces is that they read better in a GUI explorer. Whether that merit balances out all the complexity it brings depends on the individual. For me, that merit ranks very low, and I avoid spaces in my filenames at every opportunity. For some, they need those filenames to be readable. And there are solutions. One solution, from those who don't code, seems to be to demand that "developers" (paid or not) make every tool that deals with files handle this additional complexity, regardless of their context and what they think. Another solution would be to add a pre-step of copying/renaming/linking/aliasing. With the latter solution, the complexity is confined.

I guess for some, all that matters is whether "I do work" or "they do work", rather than the big picture. That is fine. However, given that you are working with Makefiles, you are a developer at some level, and you are supposed to do some work.

> And there are solutions. One solution, from those who don't code, seems to be to demand that "developers" (paid or not) make every tool that deals with files handle this additional complexity, regardless of their context and what they think.

Users expect computers to work in non-surprising ways.

It isn't natural to use dashes or underscores in file names. Training users to be afraid of spaces just teaches them one more way that computers are scary and unpredictable.

Meanwhile, over in Windows land, all tools have been expected to deal with spaces in filenames for what is approaching 20 years.

> Meanwhile, over in Windows land, all tools have been expected to deal with spaces in filenames for what is approaching 20 years.

That is an excellent example that deserves a second look from a different aspect.

... it has trained a crop of computer users who are afraid of command lines, with an attitude that anything beneath the GUI is owned by someone else and is someone else's problem. They are more scared of computers than ever. They are so scared of them that having the computer heavily disguised as an appliance is mandatory.

> it has trained a crop of computer users who are afraid of command lines, with an attitude that anything beneath the GUI is owned by someone else and is someone else's problem.

There is nothing beneath the Windows UI interface. The GUI is the primary subsystem of Windows, the command line is a separate subsystem. Windows has traditionally been built up around the Win32 API, which is GUI first.

It is of course possible to do things from the command line, and some advanced stuff may need the command line, but users should never need to drop out of the UI unless something has gone horribly wrong.

The Windows UI is incredibly powerful, with advanced diagnostic and logging capabilities, incredible performance monitoring tools, and system hooks everywhere that allow the entire UI to be modified and changed in almost any way imaginable.

The way Services work on Windows is not through some (hotly debated) config files; it is through a UI. Management of hardware takes place through a UI. Pretty much everything short of running ping has a UI around it.

I love the command line; if I am developing, it is how I primarily work. And while doing so, younger devs look at me like I am crazy, because they have customized the living daylights out of their UI tools (VS Code, Atom) to do amazing things that exceed what a command line can do -- and of course those editors have a command line built in for when that is the best paradigm!

> and having it heavily disguised as an appliance is mandatory.

Something working so well that it becomes an appliance isn't a bad thing. It means the engineers who made it put in so many failsafes, and made the code of high enough quality, that it doesn't fall apart all the time.

Heaven forbid I can upgrade my installed software and not have some random package break my entire system.

> There is nothing beneath the Windows UI interface.

To clarify, beneath the GUI interface is the actual code that implements that interface.

> Something working so well that it becomes an appliance isn't a bad thing.

Not at all. I don't call my phone, or think of it as, a computer. Windows users, on the other hand, still call their PCs computers. I guess that is OK if computers are appliances. It is just that there is still a big camp of people who use a computer as a computer. That causes some confusion in communication between the two.

> Windows users, on the other hand, still call their PCs computers. I guess that is OK if computers are appliances.

It seems that 90% of computer use has moved into the web browser. Heck, outside of writing code, almost everything I do is in a browser, and my code editor of choice happens to be built as a fancy-skinned web browser...

> To clarify, beneath the GUI interface is the actual code that implements that interface.

I'd say that everything moving onto the web has once again made the code underneath it all accessible to the end user, if they so choose.

(Ignoring server side)

> It seems that 90% of computer use has moved into the web browser.

This is an extremely (web) developer-centric viewpoint IMHO.

Try telling 3D modellers/sculptors, games programmers, audio engineers that 90% of their computer use has moved into a browser. They will look at you with a blank face since they all require traditional "fat" desktop apps to get their work done at professional level.

And those are just the examples I can find off the top of my head, I'm sure I could think of more.

Do all of those computer users constitute more than 10% of computer users? I don't think there's even 5% of computer users there.

Talking about 3D modellers, game programmers, and producers as if they're the majority is a technologist-centric view.

Most computer use is by people who think the web is the internet and use 'inbox' as a verb.

> Do all of those computer users constitute more than 10% of computer users? I don't think there's even 5% of computer users there.

So, as stated, that was the list off the top of my head. I just pulled from my personal list of hobbies -- things that I use a computer for other than programming or automation.

Within 50 feet of me at work there are a whole bunch of electrical engineers who spend 90% of their time in some fat app for designing, I dunno, telco electrical stuff.

In the fishbowl next to that, are 50 network operations staff who spend 90% of their day in "fat" network monitoring applications.

I'm just pointing out if you look far enough there are plenty of people using apps outside a web browser for their daily work and hobbies.

In my nearly 25-year career in IT, I have never heard people use 'inbox' as a verb (as in 'inbox me'). Sure, some people must say it sometimes, but I think this is overstated -- another example of programmer cliché or hyperbole.

> Most computer use is by people who think the web is the internet and use 'inbox' as a verb.

I think most of those people don't use general-purpose computers anymore.

> Try telling 3D modellers/sculptors, games programmers, audio engineers that 90% of their computer use has moved into a browser. They will look at you with a blank face since they all require traditional "fat" desktop apps to get their work done at professional level.

I'm talking about overall computer use. For certain professional fields, yes, apps still matter. But I'd guess that the majority of screen time with computers nowadays involves a web browser.

Heck as a programmer (not web), I am probably 50% in browser looking things up.

At most offices, computers are used for web browsing, and Microsoft Office.

> Heck as a programmer (not web), I am probably 50% in browser looking things up.

I should have put "web" in square brackets, e.g. [web], to indicate "optional". You seem to fall into the category I was describing.

Use an IDE (other than Atom), eg. Visual Studio, Eclipse, IntelliJ? All fat apps.

Use VMware workstation or Virtualbox? Fat apps.

> At most offices, computers are used for web browsing, and Microsoft Office.

So is that 90% web browsing and 10% MS Office? BTW, Office 365 is still a fat application (or a suite of them), last time I looked.

Those PC-as-appliance people are better served in the mobile space, and it would be best if they left us and our computers alone.

>There is nothing beneath the Windows UI interface.

Yes there is... Microsoft! For instance, if you have a dir named "user", it will show up in your GUI as "User". In fact, User, USER, and user are all the same to you, because MS transliterates on the fly. Did you know that you cannot access some directories because MS prevents it -- even with the 'dir' command from the DOS prompt? Only a non-MS 'ls' command can do that.

> There is nothing beneath the Windows UI interface.

> Windows has traditionally been built up around the Win32 API, which is GUI first.

What did I just read?

Which is not a problem. The command line was only one, historical UI. Not the be-all end-all of UIs, and there's no reason it should be of any real interest to modern desktop users (non devs).

And I cut my teeth as a developer on DOS, Sun OS (pre-Solaris), and HP-UX, and early Linux back in the day.

Almost every CLI program I've ever used in Windows has no problem with spaces in filenames, so I don't see exactly why he's fixated on the GUI... But I had forgotten: computers aren't useful as tools to accomplish work, but as mechanisms to assuage intellectual inferiority complexes. He should advocate for punch cards again, since that would certainly stop morons from using computers.

> Almost every CLI program I've ever used in Windows has no problem with spaces in filenames, so I don't see exactly ...

Just to clarify, what we think of as a problem could differ:

    C:\Users\hzhou>ls *.txt
    new  2.txt

    C:\Users\hzhou>ls new  2.txt
    ls: new: No such file or directory
    ls: 2.txt: No such file or directory

dir works fine for that.


I actually didn't know that dir supported multiple globs for filenames! I've never had a need for that.

Super cool.

Um... no, it doesn't. It takes each space-delimited name as a new name. You will need to quote the names -- but the Windows shell only has one level of quoting ("), which means you can't easily type the command you need. The Unix shell is a bit better. Unix only appears worse because people actually attempt scripting.

     Directory of C:\Users\fred

    12/14/2017  04:44 PM             1,556 new 2.txt
                   1 File(s)          1,556 bytes
                   0 Dir(s)  75,989,876,736 bytes free

    C:\Users\fred>dir new 2.txt
     Volume in drive C has no label.
     Volume Serial Number is BA05-C445

     Directory of C:\Users\fred

     Directory of C:\Users\fred

    File Not Found


> Um... no it doesn't. It takes each space delimited name as a new name. You will need to add "" and quote the names -- but Windows shell only has one level of quoting, (") which means you can't easily type the command you need. Unix shell is a bit better. Unix only appears worse because people do attempt scripting.

Ah I see, you have a file "new 2.txt", I was a bit confused.

Not sure what you mean by only 1 level of quoting being a problem, sorry.

> Ah I see, you have a file "new 2.txt", I was a bit confused.

This is highly ironic, given this thread.

Some people seem to advocate for programs to be better than humans at globbing filenames.

That's a great point. Computers sure have the potential to deal with spaces just fine. But if textual interaction is a requirement, we can have only one of: arbitrary filenames, or clutter-free syntax.

Windows and Linux shells have the same ideas about spaces in file names: if they appear, they need to be quoted or the space character needs to be escaped.

Outside of make, which has long and boring historical reasons for not supporting spaces well, just about every program is fine with spaces.

In the Unix shell, the IFS variable can be set to other characters, e.g. newline and tab, or to an empty string.
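A small sketch of that technique (file names invented): restricting IFS to a lone newline means word splitting no longer breaks names that contain spaces.

```shell
#!/bin/sh
# Sketch: set IFS to a single newline so that splitting the output of a
# command only happens at line boundaries, not at spaces.
dir=$(mktemp -d)
touch "$dir/a file.txt" "$dir/another file.txt"

old_ifs=$IFS
IFS='
'                               # IFS is now exactly one newline
count=0
for f in $(ls "$dir"); do       # splitting now happens only at newlines
  count=$((count + 1))
done
IFS=$old_ifs                    # always restore IFS afterwards

echo "$count files seen"
rm -rf "$dir"
```

Both space-containing names survive intact, so the loop sees two files rather than four words.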

To be fair to modern desktop users, a command line that doesn't redefine the word "command" to fit the constraints of the system would look something like this:

    $ take this list of transactions and for each group of duplicate names
      show me the name and total dollar amount spent, sorted from highest to lowest

    $ I think I have a SQL command for that. Are you feeling lucky? [y/n] y

    Foo  $25
    Bar  $20
    Blah $15

    $ Was this what you were looking for? [y/n]

> Which is not a problem. The command line was only one, historical UI. Not the be-all end-all of UIs,

A similar trend is happening in electronic communication: from letters to email, to text, to emojis. Text is just one historical way of communicating, and there's no reason it should be of any real interest to modern communicators as long as we have emojis... wait, we've been there before.

Except that text can express things that emojis cannot. The same is not true of the command line vs. the GUI.

The GUI, which can include arbitrary number of text boxes for command line style entry, is a superset of what the command line can do.

This kind of interface is horrific for the use-case:


There is such a thing as a cluttered graphical interface, and some complexity cannot just be abstracted away.

> The GUI, which can include arbitrary number of text boxes for command line style entry, is a superset of what the command line can do.

That would be like saying that because I am able to write "first open this app by clicking on the upper-left icon, then click File, New File", text is a superset of the GUI, because it can describe visual cues. One abstraction can always replace another; that does not mean it is a superset of it.

Take the source code for your GUI application: it is pure text. In the early days, there was shell code that would present a terminal GUI of sorts for configuring the Linux kernel. That shell code is still commands, read token by token, with matching behavior. That one abstraction can be traded for another (which could be argued to be a founding principle of computer science) does not mean that one is a superset of the other.

> Users expect computers to work in a non-surprising ways.

"Users" is a very broad category with a bunch of partitioning.

The particular subgroup in question are authors of build automation systems for medium to large scale software projects. They (we) have a rather different perspective on "surprising".

> authors of build automation systems for medium to large scale software projects

For desktop software, a build automation system needs to handle files that are included in the installer, installed on users' machines, and in some cases discoverable by end users of the software.

I would be very surprised if my build system wouldn’t support spaces in file names.

Windows land also had https://en.m.wikipedia.org/wiki/8.3_filename which was never confusing.

"users" of Makefiles are actually developers, not secretaries, pensioners or architects. They should at least understand tradeoffs and choose accordingly.

Developers are used to not using spaces: oneStupidClass or AnotherStillStupidClass or others_do_it_like_this or bIhatehungariannotation or daxpy .

As you can see it is not that bad and it has been a widely used practice for decades.

Since you are comparing specifically to Windows land: perhaps it is their focus on supporting spaces in file names that made them always worse than Unix-like systems in terms of stability, security, or performance.

Users aren't programmers and programmers write things no user should ever see. Only the machines do.

Human beings who name things are going to use spaces. That spaces were used as delimiters for computers is somewhere between unfortunate and a colossal mistake.

But to use that as evidence of why spaces should not be supported in filenames is putting the cart before the horse. The goal of software is not to perpetuate whatever mistakes have been made in the past. It's to solve problems for human beings.

And human beings have been using spaces to delimit words since long before computers existed.

On the command line, using spaces to delimit words is also quite natural; that's why they get used as a delimiter:

    mv old-file new-file
Spaces separate the verb, the direct object, and the indirect object. Using commas or colons instead would be painfully artificial.

So the question is whether you will favor naturalness on the command line or in the GUI. It's no surprise that a Unix build tool favors the command line.

True, but on the command line I don't have to worry about the spaces. I'm just going to tab-complete to simultaneously ensure I'm in the context I expect to be in, ensure I'm not making any typos, and save some time to boot.

What if your file system didn't distinguish between underscores and spaces, your GUI displayed them as spaces, and your command line displayed underscores?

Then this problem goes away entirely and you instead have the problem of not being allowed to have "this_file" and "this file" in the same directory. A problem which probably doesn't matter.
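A tiny sketch of the mapping described above (names invented): compare names after folding spaces to underscores, so "this file" and "this_file" become the same identifier.

```shell
#!/bin/sh
# Hypothetical canonicalization: treat underscore and space as the same
# character when comparing file names.
canon() { printf '%s' "$1" | tr ' ' '_'; }

a=$(canon "this file")
b=$(canon "this_file")
if [ "$a" = "$b" ]; then
  echo "collision: both map to $a"
fi
```

This is exactly the trade-off the comment names: the two spellings can no longer coexist in one directory, because they canonicalize to the same name.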

That hides the problem from the user, and will consequently lead to command line users typing spaces where they should be typing underscores.

Not all languages have (or had) spaces.

I apologize if I implied that they did, because that has nothing to do with my intended message.

To clarify our views:

The words and sentences are inside the file. The filenames are identifiers to the files. Identifiers' purpose is mainly to identify, rather than to communicate.

On the other hand, we could imagine a user interface in which users don't see (actual) filenames at all.

Identification is a form of communication.

The identifier assigned to me at birth contains two spaces. Most people use one of the shorthand forms, but still.

I'm intrigued. Your first name has two spaces in it?

That is, I think it is safe to say that we expect we can include spaces in the names of things. The movie is named "Star Wars", after all. That seems natural.

For people, though, we have grown used to someone having multiple parts of their name separated by a space. If given out of order, a comma.

Even in the "naming things" category. We are used to having to put quotes around names that have spaces. Or otherwise using a convention like Capitalized Identifiers to show something goes together. Probably isn't a clearcut rule.

> I'm intrigued. Your first name has two spaces in it?

No. But my name does.

> For people, though, we have grown used to someone having multiple parts of their name separated by a space. If given out of order, a comma.


Ok. Something about the way I read the first post, I thought you meant your given name. No reason it couldn't have spaces. My wife knew of someone named 1/2 -- who insisted it be a single character, too. It was amusing how hard that was for most payroll systems.

And I wasn't saying this isn't allowed. What we are used to is not the total of what is allowed. The fringes are full of mistakes, though. At some point, you make a statistical choice.

What does the merit of spaces in file names matter? You still will need to deal with files with spaces in the real world. If your tooling doesn't support it, it's a non-starter for many.

Do your tools support filenames with tabs? With any Unicode char?

A real world possibility.

Every project I develop is unit tested with strings containing invalid unicode text, containing null bytes, and other random stuff.

If a user can’t paste any byte sequence, and expect it to work, then the tool is broken. I handle these cases.

You can do it. The question is whether you should if you can avoid it.

When writing a shell script or makefile to deal with repetitive tasks, it is easier to assume some things.

I'd prefer it if the writer of the shell or make didn't make too many assumptions about my assumptions.

Even bash lets you escape unusual filenames for when you need them. Make will always explode your strings, and doesn't even attempt to let you escape them. I don't think it's unreasonable to expect a time-tested Unix tool that deals with files to actually handle all possible files.
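For a concrete example of bash's escaping (filename invented, assuming bash is available): printf %q renders any string in a form that can be pasted back into the shell, and eval-ing that form round-trips to the original name.

```shell
#!/bin/bash
# Sketch of bash's built-in escaping for awkward names.
name='file with spaces.txt'

# %q quotes/escapes the string so it can be reused as shell input.
escaped=$(printf '%q' "$name")
echo "$escaped"

# Round trip: re-parsing the escaped form yields the original name.
check=$(eval "printf '%s' $escaped")
```

Make offers nothing comparable: a space in a prerequisite list is always a word boundary.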

So in order to deal with the real world possibility of spaces in filenames you choose to ditch a real world, useful tool?

Perfectly reasonable, if spaces in filenames are a hard requirement.

In lots of scenarios they are not, and I prefer to assume that filenames have no spaces, and stick to make.

This works in bash:

    touch "with<Ctrl+v><Tab>tab"

Exactly as easy as without tabs. Now do that the whole day, sometimes with tabs, sometimes without.

What's real about the real world is that it is not context-free. There are real-world situations -- such as selling a word processor to the general public, including the clueless -- where you need to deal with files with spaces, period. But there are real-world situations -- such as Makefiles -- that require users to learn the tool, expect them to handle certain tricky situations themselves, and exclude the incompetent. And then there are real-world situations where the application only concerns one company, or one office, or oneself.

What's real about the real world is that it is complicated.

I'm bemused by this repeated insistence that supporting spaces in file names is somehow difficult.

The word was complicated, not difficult. It is not difficult to handle spaces, it is more complicated.

That's a worthwhile clarification - thank you. While that's certainly true in the strict sense, I remain bemused.

I'm bemused by this repeated insistence that supporting colon in file names is somehow difficult.

Who on earth insists that? Only Windows programmers, I expect! But it's easy: make sure they're expressible in your syntax, then pass them through as part of the file name. Just like spaces. If there's a problem, you'll get an error back from the OS.

Yes, of course you're right. My apologies, it was a stupid comparison.

Though I still feel it's a bit of "choose your poison". In any language, certain symbols will have special meaning. You pretty much have to pick which ones, and come up with rules about how to let their natural meaning bleed through. With unix shells, one such character is the space, and there's a number of rules about how to let their natural meaning bleed through - but they're all somewhat awkward, and they all need special care.

And nulls? And quotes? And dollars? And any unicode char?

I expect you mean NUL (ASCII 0) - but that's typically not a good choice, because POSIX decrees it invalid, along with '/'. But anything else is fair game, sure. Handling every Unicode char might be a pain, but that's the filing system's job, not yours!

Most programming languages manage to get this right. You have a quoting syntax with an escape character that lets you express the quotes, the escape character, and any arbitrary character (as a byte sequence, as a Unicode codepoint in some encoding, etc.) as well. Now you can do everything. Why not do this?

I'm not going to say this won't be a pain if you decide to write out every file name possible, because it will be, inevitably. But you can supply alternative syntaxes by way of more convenient (if limiting) shorthand - Python has its r"""...""" notation, for example, allowing you to express the majority of interesting strings without needing to escape anything.
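Even the shell itself has such a syntax, awkward as it is. A sketch (filename invented): single quotes suppress all special meaning, and a literal single quote is expressed by closing, escaping, and reopening the quotes.

```shell
#!/bin/sh
# Sketch: expressing an "awkward" name with ordinary shell quoting.
# 'it'\''s here.txt' = 'it' + an escaped quote + 's here.txt', read as one word.
name='it'\''s here.txt'
echo "$name"
```

So the machinery exists; the complaint in this thread is that make, unlike the shell and most languages, never adopted any of it.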

You might argue that I've just punted the problem on to the text editor and the filing system. You'd be quite right.

Depends very much on your programming environment: in Python I don't care, but when doing bash scripting or makefiles (very often) I very much care.

I have recently decided to stop using $ in passwords (defined by me) for that same reason: of course they are valid char, but they are such a big pain to support in usual contexts that it is simply not worth it.

And regarding slashes: are you bemused that Unix does not support them in filenames?

(Thanks for correcting the nul reference)

No, not really - I have no particular opinion about what POSIX chooses to support or not.

But I'm going to go back to my original point. What I do have an opinion about is how reasonable it is for tools not to support a character that is valid in file names, when that character is straightforward to support with everyday, well known syntax.

And when that character is ' ', the standard English word separator, as straightforwardly supported by approximately every kind of quoting or escaping syntax ever, my opinion about a lack of support is: it's crap.

Please write a makefile that uses an environment variable as a mysql password to connect to a mysql server and perform an arbitrary administrative task.

Now do the same, assuming that the password can have a dollar in it.

You can do it. It is not as easy, readable, or maintainable. Avoid it if you can.
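For reference, the usual workaround: in a makefile, a literal dollar must be doubled ("$$") so make passes a single "$" through to the shell. A minimal sketch, assuming GNU make and a POSIX shell are available (target and variable names invented):

```shell
#!/bin/sh
# Sketch: generate a one-rule makefile whose recipe echoes an environment
# variable. The recipe writes $${PASSWORD}: make collapses $$ to $, and
# the shell then expands ${PASSWORD}.
mkfile=$(mktemp)
printf 'show:\n\t@echo "got: $${PASSWORD}"\n' > "$mkfile"

out=$(PASSWORD='pa$word' make -s -f "$mkfile" show)
echo "$out"
rm -f "$mkfile"
```

The dollar inside the password itself is no problem for the shell; the pain is purely that every literal "$" in the makefile source needs this extra level of escaping.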

Why do it from a makefile instead of a script called by the makefile? This plays well to the strengths of both.

Just because of the dollar? Exactly my point.

No, not just because of the dollar. Because it's easier to test in isolation, and it allows you to use things like heredocs, which are useful for dealing with SQL.


Somehow we started with a single dollar in a string, and now I am dealing with SQL, heredocs, and TDD?

There is a place for everything in life: for small helper makefiles, which I write simply to support my workflow, and even to remember some interesting commands that I need to run for a certain project, I can assure you that making some simple assumptions helps me stay halfway sane.

This represents a problem with make, not any other part of the system.

Not a problem if you use a my.cnf file...

And I need that because of ... a dollar!

Extra complication to avoid if possible.

You’re trying to “sell” make. It is lacking a feature many of its competitors have. That’s a tough sell :)

Your solution is a "boil the oceans" one, typically proposed by engineers. You can re-program computers; you can't re-program millions of people. Every natural language uses spaces to separate things.

You can accept that or you can keep tilting at windmills.

In every branch of science, reality wins. If your model can't accommodate reality, it's either completely wrong or it needs adjustments, at least.

> Every natural language uses spaces to separate things.

Exactly. To separate things. Which, incidentally, happens to also be precisely what make, and the traditional UNIX shells do :-)

The problem isn't that the space is, in itself, a particularly difficult character. The problem is that its meaning is overloaded and ambiguous. No matter what you do, computers will have difficulty with ambiguity. You'll always have the problem that the separator is special, but hey, I'd be all for using 0x1C instead ;)

How exactly is this different to, say, maths, where there is an assumed precedence, and when you need to either make that clear or change the order, you use parentheses to encapsulate the inner calculation?

What it sounds like is that the Unix shell syntax was established how it was, everyone built on it with all of its syntactical conveniences, and suddenly there's 100% buy-in to the idea that a computer just can't handle a filename with spaces in a shell.

    ./cmd do-something-with --force file with spaces

    ./cmd do-something-with --force 'file with spaces'
That's one of the main problems solved. If you're expecting to run an executable with spaces in it, like this:

    ./do something with cmd --force 'file with spaces'
Then it's another problem but one that can be solved by convention. A GUI can happily execute `./do\ something\ else` but if you're in the shell you've got completions, aliases, functions, symbolic links...

And if that's not ideal, then `./'do something with' cmd …` should be good enough right?
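That convention is easy to check (script name invented): an executable whose path contains a space runs fine from the shell, as long as the path is quoted.

```shell
#!/bin/sh
# Sketch: create and run an executable with a space in its name.
dir=$(mktemp -d)
cat > "$dir/do something" <<'EOF'
#!/bin/sh
echo "ran with arg: $1"
EOF
chmod +x "$dir/do something"

# Quoting the path makes the space unambiguous to the shell.
out=$("$dir/do something" --force)
echo "$out"
rm -rf "$dir"
```

So the shell itself has no trouble here; the friction is all in the extra quoting the user has to remember.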

You can prevent your tools from having to deal with the fact, by preprocessing.

> Every natural language uses spaces to separate things.

Nope. You don't space out each word when speaking. If you were speaking about writing systems, not all writing systems use spaces as word delimiters. See: https://en.wikipedia.org/wiki/Space_(punctuation)

Not handling spaces is a symptom of a much deeper problem common to a lot of 'unix' utilities: not actually structuring data, apart from ad-hoc string encodings. The fact that data can be so easily conflated with program structure causes so many obscure bugs, and so much overhead in using the utilities that suffer from it, that it's a wonder anyone is defending this approach going forward.

>Demanding the support of spaces in filenames significantly complicates code as simple space delimitation no longer works


So human beings should stop using allowed file names because it's too hard for you?

The fact that Unix tools have trouble with spaces in filenames is absolutely a problem with Unix. If the Unix ecosystem had better support for this, then it wouldn't be a problem.

I don’t quite see how Unix tools have trouble with spaces in filenames. Could you detail some cases where the space handling is not due to the shell, as opposed to the program being invoked?

There's this program called make...

My question was aimed at the OP's generic statement. As for Make, it originally wasn't clear to me where the issue was supposed to lie. Lines in rule bodies are handed off to the shell, and that whitespace in rule dependencies needs escaping didn't seem surprising, since it's a list (though it's probably a bug that whitespace in target names must be escaped, since a target is just one token that ends in a colon). But I see now that the expansion of list-valued automatic variables is probably a real Make-endemic issue.

> though it’s probably a bug that whitespace in target names must be escaped, since it’s just one token that ends in a colon

It's perfectly fine for a rule to have multiple targets, e.g.

  output.txt error.txt: source
    build source > output.txt 2> error.txt

Interesting! So in that case one could defend the need for escaped whitespace. Still leaves automatic variables, I guess.

Well, it could be rewritten to use 0x1F (i.e. the unit separator) to separate items. I mean, it already has significant tabs. Though invariably people would be like, "How is it acceptable for make to not support filenames which contain unit separators? It's 2020, guys, get with the program!"

Unix tools have no trouble with spaces in file names. "-", "\0", "\n", and bad Unicode are troublesome. Lazy programmers can forget to put quotes around variables in shell scripts, but that's not a problem with the tool.

Additionally, not every shell expands variables the way Bourne-based shells do.

You're talking about implementation complexity. The only argument you can throw at the user is feature complexity. Handling spaces in filename isn't a complex feature at all, and users don't care that the simplistic implementation you're using makes it a problem.

I've been working with strings with spaces since before the 21st century. I've also worked with strings with special characters.

I've even worked with variables with spaces and special characters.

I don't see why filenames are so much more special, except that a lot of old tools never got updated to a world beyond ASCII.

I am old-school and hate spaces. They do occasionally show up on my computer. But never in anything I'd be touching with a Makefile.

> At a certain point, I realized that I should be managing this processing somehow, so I thought of using a simple Makefile.

I don't understand why one would think that make would be a good tool for this...

> Demanding the support of spaces in filenames significantly complicates code as simple space delimination no longer works and other delimination schemes are much more error prone -- forgetting balancing quotes, any one?

Yes, this is a limitation of make, but not of a million other tools out there.

Make is not a panacea; no tool is. Pressing a tool into a job it is unsuited for just because you understand it is "not good" (trademark and copyright pending).

This is the classic case of having a hammer and a screw.

> I don't understand why one would think that make would be a good tool for this...

Why not? To me, it sounds like a perfect use of Make. You have a number of files that should be processed somehow (presumably by invoking a certain tool for each and every file) and produce another set of files.

Whether it's C files to compiled binaries, or some source files to PDFs, make seems very well suited for the job. Except yeah, perhaps, spaces.

Yeah, make seems like a great first tool to grab in this case. But apparently some of the requirements of this current use case don't play well with make. Too bad, but it doesn't mean make is somehow a bad tool... it just doesn't fit all problems.

Given that make isn't working well in this case due to spaces, I would personally probably use something like find and sed in a bash script to just get all the files and convert them to pdf. (Obviously this won't work if your system doesn't have these tools though...)
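A sketch of that approach (with `cp` standing in for whatever actually produces the PDF, and `.src` as a made-up source extension): NUL-safe handling via `find -exec` keeps names with spaces intact, and a timestamp check approximates what make would have done:

```shell
cd "$(mktemp -d)"
mkdir src
touch 'src/my report.src'

# For each source file, rebuild the .pdf only if it is missing or
# older than the source -- roughly make's timestamp rule, but
# space-safe because each name travels as a single argument.
find src -name '*.src' -exec sh -c '
    for f do
        out="${f%.src}.pdf"
        if [ ! -e "$out" ] || [ "$f" -nt "$out" ]; then
            cp "$f" "$out"    # real script: run the converter here
        fi
    done
' sh {} +

ls src
```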

Nothing an intake script and a little gsub() can't handle

> The merits of filenames with spaces is they read better in a GUI explorer.

Even this merit is debatable. Is foo bar two things or one? I know foo-bar is one.

Yeah, and then is 田中 one thing or two? It all depends on the domain. To me 'é' and 'ê' mean different things, and both are outside the make domain. In French you also have a half-width space (which LaTeX can handle) before terminal '?' and '!'. Those are simply outside make's domain, so bothering about whether a space is encoded as ' ', '+', "%20" or '_' seems futile.

It all depends on domain convention. Make aligns with variable naming conventions, and that is all.

Get out of here, spaces.

I was able to do it. Here's the source code "hello world.c":

    #include <stdio.h>
    int main(void)
    {
      puts("Hello, world!");
      return 0;
    }
And here's the minimal Makefile to generate the output:

    hello world:

Of course, I did have to swap out the ASCII SP (character 32) for the Unicode non-blank space (code 160) to get this to work, but hey, spaces!

>I did have to swap out the ASCII SP (character 32) for the Unicode non-blank space (code 160)


EDIT: Ok, now I got it. Boy that was a wild ride.

I did not do the method below. Instead, I wrote a script to generate the filename with the non-breaking space where I needed it.

Edit: rewording and typos.

Any way I can see the script? Looks like I am still learning and ended up taking a longer route.

It's Lua 5.3:

    h = "hello" .. utf8.char(160) .. "world"
    f = io.open("Makefile","w")
    f:write(h .. ":\n")
    f:close()
    f = io.open(h .. ".c","w")
    f:write([[
    #include <stdio.h>
    int main(void)
    {
      puts("Hello, world!");
      return 0;
    }
    ]])
    f:close()

Pretty straightforward.

Could you explain how you did it?

Replacing breaking space with non-breaking space:

1. Set up your compose key on Linux (top right corner: Settings -> System Settings -> Keyboard -> Shortcuts -> Typing -> Compose key; choose an appropriate one).

2. Whenever you need to type a space, instead type (compose key + spacebar + spacebar). This puts a non-breaking space there instead. That's it.

3. Create file -> sudo nano hello(compose key + spacebar + spacebar)world.c

4. Paste the code to execute.

5. Makefile-> hello(compose key + space + space)world:

6. make

7. ./hello world (pressing tab will recognise the executable automatically).

NOTE: Don't do this ever in any production code, or just ever. This hack can take hours to resolve and a lot of frustration which could have better been spent on fixing something meaningful. This is a frowned-upon practice. The only cool part is that your directory can have two files whose names look exactly the same.

Solution: create the pdfs with spaces replaced by underscores. Then as the very last command in the relevant makefile section, insert a bash command to replace those underscores with spaces.

What if the filename is a mixture of underscores and spaces?

Escaping an escape character is not a new problem in programming. There are solutions :-)

"Premature generalization is the root of all evil."

Is there any clone of make that's aiming to address the issues with file name spaces?

Being incapable of handling spaces is a bug that's been marked Minor since 2002 - https://savannah.gnu.org/bugs/?712

(plus I find double quotation marks easier to read and write than escaping every space in a path)

Edit: https://stackoverflow.com/questions/66800/promising-alternat... (they aren't really clones tho)

I deal with this shape of problem quite a bit. After using scons and make in the past I recently tried using ninja, and it really works well.

Specifically, a python configure script using the ninja_syntax.py module. This seems like it's a bit more complicated, but has a lot of nice attributes.

File names with spaces should just work (unlike make). The amount of hidden complexity is very low (unlike make or SCons); all the complexity lives in your configure script. It's driven by a real, non-arcane language (unlike make). Targets are automatically rebuilt when their rules/dependencies change (unlike make).

It's more difficult to install than make, but only marginally.

Perhaps you just chose the wrong tool for the job. Just because you are able to do similar things with make, doesn't mean it is has to be suited to your chosen use case. It's a tool that was created with a specific purpose in mind, with specific constraints, and it works fine for thousands (I assume) of people every day. You can't blame it for not being a general-purpose programming language. Make isn't beyond building other tools that you can write yourself - and use in the very same makefiles - to assist in handling cases like this, however.

Spaces in filenames create troubles with almost every command line tool. cut(1), awk(1), find(1), xargs(1), what not. How do you quote them, do you need to use \0 as a separator instead, do other commands on the pipeline support \0 separators? What happens after a couple expansions, passing stuff from one script to another?
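For instance (a generic illustration of the `\0`-separator question): the classic `find | xargs` pipeline splits on whitespace, while the `-print0`/`-0` pair passes each name through whole:

```shell
cd "$(mktemp -d)"
touch 'a file.txt' plain.txt

# Naive: xargs splits "a file.txt" into "./a" and "file.txt".
find . -name '*.txt' | xargs ls 2>/dev/null || echo 'split names: ls failed'

# NUL-separated: every name survives, spaces and all.
find . -name '*.txt' -print0 | xargs -0 ls
```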

And what the heck happened on 31 dec 1999 that the world became a different place where suddenly people realised: there were these space things, quite useful they were, why don't we tuck them into every file name and URL and who knows what?

People have better things to do than dealing with these things.

> Spaces in filenames create troubles with almost every command line tool

Then that's a shortcoming which should be addressed with the tools, because humans everywhere use spaces in filenames.

For every command-line tool I make (Windows & Linux), I ensure it handles such trivial use-cases. I can't see why such a simple task is seemingly impossible to get done in GNU coreutils.

Very good of you indeed, but it's a hard task to retroactively change how a system and the greater community around it behaves and how standards like POSIX have defined field separation syntax for decades. I wouldn't mind if make supported spaces in filenames, but the thing is it's a bit late now and the problem is too unimportant to bother solving, frankly.

I think the reason make is both so controversial and also long-lived is that despite how everyone thinks of it, it isn't really a build tool. It actually doesn't know anything at all about how to build C, C++, or any other kind of code. (I know this is obvious to those of us that know make, but I often get the impression that a lot of people think of make as gradle or maven for C, which it really isn't.)

It's really a workflow automation tool, and the UX for that is actually pretty close to what you would want. You can pretty trivially just copy tiresome sequences of shell commands that you started out typing manually into a Makefile and automate your workflow really easily without thinking too much. Of course that's what shell scripts are for too, but make has an understanding of file based dependencies that lets you much more naturally express the automated steps in a way that's a lot more efficient to run.

A lot of more modern build tools mix up the workflow element with the build element (and in some cases with packaging and distribution as well), and so they are "better than make", but only for a specific language and a specific workflow.
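A toy example of that workflow pattern (file names and commands invented for illustration): two shell steps you'd otherwise retype become two rules, and make reruns only what's stale:

```shell
cd "$(mktemp -d)"
echo 'raw data' > input.txt

# clean.txt is derived from input.txt; report.txt from clean.txt.
printf 'report.txt: clean.txt\n\tsort clean.txt > report.txt\n\nclean.txt: input.txt\n\ttr a-z A-Z < input.txt > clean.txt\n' > Makefile

make report.txt     # builds both steps
make report.txt     # second run: reports the target is up to date
cat report.txt      # → RAW DATA
```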

> It's really a workflow automation tool,

That's true.

> and the UX for that is actually pretty close to what you would want.

That is so not true. Make has deeply woven into it the assumption that the product of workflows are files, and that the way you can tell the state of a file is by its last modification date. That's often true for builds (which is why make works reasonably well for builds), but often not true for other kinds of workflows.

But regardless of that, a tool that makes a semantic distinction between tabs and spaces is NEVER the UX you want unless you're a masochist.

> Make has deeply woven into it the assumption that the product of workflows are files, and that the way you can tell the state of a file is by its last modification date.

I've always wondered whether Make would be seen as less of a grudging necessity, and more of an elegant panacea, if operating systems had gone the route of Plan 9, where everything is—symbolically—a file, even if it's not a file in the sense of "a byte-stream persisted on disk."

Or, to put that another way: have you ever considered writing a FUSE filesystem to expose workflow inputs as readable files, and expect outputs as file creation/write calls—and then just throw Make at that?

> everything is—symbolically—a file

How are you going to make the result of a join in a relational database into a file, symbolically or otherwise?

On plan 9, you'd do something like:

     ctlfd = open("/mnt/sql/ctl", ORDWR);
     write(ctlfd, "your query", 10);
     read(ctlfd, resultpath, sizeof resultpath);

     resultfd = open(resultpath, OREAD);
     read(resultfd, result, sizeof result);
This is similar to the patterns used to open network connections or create new windows.

And how would you use that in a makefile?

Something like this would be ideal.

      echo "<sql query>" > /mnt/sql/myjoin
It's just representing writing and reading a database as file operations, they map pretty cleanly. Keep in mind that Plan 9 has per process views of the namespace so you don't have to worry about other processes messing up your /mnt/sql.

I think you're missing the point. I have a workflow where I have to perform some action if the result of a join on a DB meets some criterion. How does "make" help in that case?

In that case, it doesn't help much. For Plan 9 mk, there's a way to use a custom condition to decide if the action should be executed:

    rule:P check_query.rc: prereq rules
Where check_query may be a small shell script:


     # redirect stdin/stdout to /mnt/sql/ctl
     <> /mnt/sql/ctl {
            # send the query to the DB
            echo query
            # print the response, check it for
            # your condition.
            cat `{sed 1q} | awk '$2 != "condition"{exit(1)}'
     }
But I'm not familiar with an alternative using Make. You'd have to do something like:

    .PHONY: rule
    rule: prereq rules
        check_query.sh && doaction

> You'd have to do something like:

That's exactly right, but notice that you are not actually using make at all any more at this point, except as a vehicle to run

check_query.sh && doaction

which is doing all work.

It's being used to manage the ordering of that with other rules, and to run independent steps in parallel.

OK. So here's a scenario: I have a DB table that keeps track of email notifications sent out. There is a column for the address that the email was sent to, and another for the time at which the email was sent. A second table keeps track of replies (e.g. clicks on an embedded link in the email). Feel free to assume additional columns (e.g. unique ids) as needed. When some particular event occurs, I want the following to happen:

1. An email gets sent to a user

2. If there is no reply within a certain time frame, the email gets sent again

3. The above is repeated 3 times. If there is still no reply, a separate notification is sent to an admin account.

That is a common scenario, and trivial to implement as code. Show me how make would help here.

I wouldn't; I don't think that make is a great fit for that kind of long running job. It's a great tool for managing DAGs of dependent, non-interactive, idempotent actions.

You have no DAG, and no actions that can be considered fresh/stale, so there's nothing for make to help with. SQL doesn't have much to do with that.

> How are you going to make the result of a join in a relational database into a file, symbolically or otherwise?

A file that represents the temporary table that has been created. Naming it is harder, unless the SQL query writer was feeling nice and verbose.

You would probably need another query language, but that would come with time, after people had gotten used to the idea.

With that said, there are NoSQL databases these days whose query language is easily expressed as file paths. CouchDB, for example.

A very simple approach is to use empty marker files to make such changes visible in the filesystem. Say,

    dbjoin.done: database.db
        echo "your query" | sqlite3 $<
        touch $@

you could mount the database as a filesystem

With respect to Make, does the database (mounted as a filesystem) retain accurate information that Make needs to operate as designed (primarily the timestamps). To what level of granularity is this data present within the database, and what is the performance of the database accessed in this way? Will it tell you that the table was updated at 08:40:33.7777, or will it tell you only that the whole database was altered at a specific time?

You're talking about a theoretical implementation of a filesystem with a back-end in a relational database. The question is only whether the information is available.

Say directories map to databases and files map to tables and views. You can create new tables and views by either writing data or an appropriate query. Views and result files would be read-only while data files would be writable. Writing to a data file would be done with a query which modifies the table and the result could be retrieved by then reading the file -- the modification time would be the time of the last update which is known.

Views and queries could be cached results from the last time they were ran which could be updated/rerun by touching them or they could be dynamic and update whenever a table they reference is updated.

> but often not true for other kinds of workflows.

Examples? I mean, there are some broken tools (EDA toolchains are famous for this) that generate multiple files with a single program run, which make can handle only with subtlety and care.

But actual tasks that make manages are things that are "expensive" and require checkpointing of state in some sense (if the build was cheap, no one would bother with build tooling). And the filesystem, with its monotonic date stamping of modifications, is the way we checkpoint state in almost all cases.

That's an argument that only makes sense when you state it in the abstract as you did. When it comes down to naming a real world tool or problem that has requirements that can't be solved with files, it's a much harder sell (and one not treated by most "make replacements", FWIW).

> Examples?

Anything where the relevant state lives in a database, or is part of a config file, or is an event that doesn't leave a file behind (like sending a notification).

Like, for example?

To be serious, those are sort of contrived. "Sending a notification" isn't something you want to be managing as state at all. What you probably mean is that you want to send that notification once, on an "official" build. And that requires storing the fact that the notification was sent and a timestamp somewhere (like, heh, a file).

And as for building into a database... that just seems weird to me. I'd be very curious to hear about systems that have successfully done this. As just a general design point, storing clearly derived data (it's build output from "source" files!) in a database is generally considered bad form. It also introduces the idea of an outside dependency on a build, which is also bad form (the "source" code isn't enough anymore, you need a deployed system out there somewhere also).

I need to send an email every time a log file updates, just the tail, simple make file:

  send: foo.log
          tail foo.log | email

  watch make send
Crap, it keeps sending it. Ok, so you work out some scheme involving temporary files which act as guards against duplicate processing. Or you write a script which conditionally sends the email by storing the hash of the previous transmission and comparing it against the hash of the new one.

That last option actually makes sense and can work well and solves a lot of problems, but you've left Make's features to pull this off. For a full workflow system you'll end up needing something more than files and timestamps to control actions, though Make can work very well to prototype it or if you only care about those timestamps.


Another issue with Make is that it's not smart enough to know that intermediate files may change without those changes being important. Consider that I change the comments in foo.c or reformat for some reason. This generates a new foo.o because the foo.c timestamp is updated. Now it wants to rebuild everything that uses foo.o because foo.o is newer than those targets. Problem, foo.o didn't actually change and a check of its hash would reveal that. Make doesn't know about this. So you end up making a trivial change to a source file and could spend the afternoon rebuilding the whole system because your build system doesn't understand that nothing in the binaries are actually changing.
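The content-hash guard described above can be sketched in a few lines (file names invented); make itself won't do this, but a wrapper or recipe can:

```shell
cd "$(mktemp -d)"
echo 'object code v1' > foo.o

# Act only when foo.o's *content* changed since last time,
# regardless of its timestamp.
new=$(md5sum foo.o | cut -d' ' -f1)
old=$(cat foo.o.md5 2>/dev/null)

if [ "$new" != "$old" ]; then
    echo 'foo.o really changed; relink dependents'
    echo "$new" > foo.o.md5
else
    echo 'foo.o unchanged; skip the rebuild'
fi
```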

How would you fix that with your preferred make replacement? None of that has anything to do with make, you're trying to solve a stateful problem ("did I send this or not?") without using any state. That just doesn't work. It's not a make thing at all.

Lisper was replying to the OP who suggested using Make for general workflows. Make falls apart when your workflow doesn't naturally involve file modification tasks.

With regard to my last comment (the problem with small changes in a file resulting in full-system recompilation), see Tup. It maintains a database of what's happened. So when foo.c is altered it will regenerate foo.o. But if foo.o is not changed, you can set it up to not do anything else. The database is updated to reflect that the current foo.c maps to the current foo.o, and no tasks depending on foo.o will be executed. Tup also handles the case of multiple outputs from a task. There are probably others that do this, it's the one I found that worked well for my (filesystem-based) workflows.

With regard to general workflows (that involve non-filesystem activities), you have to have a workflow system that registers when events happened and other traits to determine whether or not to reexecute all or part of the workflow.

I mean you're just describing make but with hashes instead of file modification times. It's probably the most common criticism of make that its database is the filesystem. If file modification times aren't meaningful to your workflow then of course make won't meet your needs. But saying the solution is 'make with a different back-end' seems a little silly, not because it's not useful, but because they're not really that different.

GNU make handles multiple outputs alright, but I will admit that if you want something portable it's pretty hairy.

I love Tup, and have used it in production builds. It is the optimal solution for the problem that it solves, viz, a deterministic file-based build describable with static rules. To start using it, you probably have to "clean up" your existing build.

I don't use it anymore, for several reasons. One is that it would be too off-the-wall for my current work environment. The deeper reason is that it demands a very static view of the world. What I really want is not fast incremental builds, but a live programming environment. We're building custom tooling for that (using tsserver), and it's been very interesting. It's challenging, but one tradeoff is that you don't really care how long a build takes, incremental or otherwise.

    send: foo.log
          tail foo.log | email
          touch send

Correct, that works for this example. But if you have a lot of tasks that involve non-filesystem activities you'll end up littering your filesystem with these empty files for every one of them. This can lead to its own problems (fragility, you forgot that `task_x` doesn't generate a file, or it used to generate one but no longer does, etc.).

> you'll end up littering your filesystem with these empty files for every one of them

These files are information just like files that are not empty.

You're misusing make here. This should be a shell script or a program that uses inotify/kqueue, or a loop with sleeps and stat calls.

just make "send" not be a phony target.

How about "touch send"?

Now "touch -t" will allow you to control the timestamp.

md5sum and diff would be your friends.

Anyway, my C compiler doesn't provide that info.

What about, for example, a source file that needs to be downloaded and diffed from the web? What about when you need to pull stuff from a database? You can hack your way around but it's not the most fun.

WRT the web file, curl can be run so that it only downloads the file if the remote copy has been modified after the one on disk.

DBs are harder (yet possible) but not a common request that I’ve seen.

curl -z only works if the server has the proper headers set - good luck with that. The point is, it's great to be able to have custom conditions for determining "needs to be updated", for example.

You can always download to a temporary location and only copy to the destination file if there is a difference. You don't need direct support from curl or whatever other tool generates the data.
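That idiom is easy to sketch (with `echo` standing in for the actual download): the target's timestamp moves only when the content really changed, which keeps make's staleness check honest:

```shell
cd "$(mktemp -d)"
echo 'old contents' > data.txt

# Stand-in for: curl -o data.txt.tmp <url>
echo 'new contents' > data.txt.tmp

if cmp -s data.txt.tmp data.txt; then
    rm data.txt.tmp              # no change: leave the timestamp alone
else
    mv data.txt.tmp data.txt     # real change: timestamp advances
fi
```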

A language that uses lots of parens to delimit expressions is incredibly bad UX, especially when you try to balance a complex expression, but hopefully there are tools like Paredit to deal with that, so that I can write my Emacs Lisp with pleasure about every day. Similarly, any decent editor will help you out with using correct indentation with Makefiles.

Last modification date is not always a correct heuristic to use, but it's quite cheap compared to hashing things all the time.

Make is a tool for transforming files. I wonder how it's not quite natural and correct for it to assume it's working with files?

> Make has deeply woven into it the assumption that the product of workflows are files

You're referring to a standard Unix tool, an operating system where EVERYTHING is a file.

Sometimes things in workflows are sending/retrieving data over a network. It may be turning on a light. It could be changing a database. Make has no way of recognizing those events unless you've tied them to your file system. Do you really want an extra file for every entry or table in a database? It becomes fragile and error prone. A real workflow system should use a database, and not the filesystem-as-database.

> Sometimes things in workflows are sending/retrieving data over a network. It may be turning on a light. It could be changing a database. Make has no way of recognizing those events

Why should Make violate basic software design rules and fundamental Unix principles? Do you want your build system to tweak lights? Setup a file interface and add it to your makefile. Do you want your build system to receive data through a network? Well, just go get it. Hell, the whole point of REST is to access data as a glorified file.

The filesystem is, and has always been, a database.

But it's not true that everything is a file. A row in a relational database, for example, is not a file, even in unix.

> A row in a relational database, for example, is not a file, even in unix.

Says who? Nothing stops you from creating an interface that maps that row to a file.

That's the whole point of Unix.

Heck, look at the /proc filesystem tree. Even cpu sensor data is available as a file.

Ha, even eth0 is a file! You can open a network connection by opening this file! Erm... no, that doesn't work.

Then a process! You spawn a process by opening a file! Erm... again, no.

You want me to continue?

In 9front you can. You can even import /net from another machine. Bam, instant NAT.

> Nothing stops you from creating an interface that maps that row to a file.

That's true, nothing stops you, though it is worth noting that no one actually does this, and there's a reason for that. So suppose you did this; how are you going to use that in a makefile?


This seems downvoted, but I would second the opinion. If you're capable of representing a dependency graph, you should be able to handle the tabs. If `make` does your job and the only problem is the tabs, it's not masochism, just pragmatism.

HN has a pretty strong anti-make bias. People here would much rather use build tools that are restricted to specific languages or not available on most systems. Using some obscure hipster build tool means it's a dependency. Though these people who are used to using language-specific package manager seem to take adding dependencies extremely lightly.

> It actually doesn't know anything at all about how to build C, C++, or any other kind of code.

I guess it depends on how you define "know", but there are implicit rules.

    $ cat foo.c
    #include <stdio.h>
    int main() {
      return 0;
    }
    $ cat Makefile
    foo: foo.c
    $ make
    cc -O2 -pipe    foo.c  -o foo
    $ ./foo

Fun fact: your Makefile above is redundant. You can delete it entirely, and the implicit rules you're using here continue to work just fine.

Not quite: it does declare "foo" as the default target. Without the Makefile, it would be necessary to type `make foo` instead of just `make`.

The built-in rule to copy 'build.sh' to 'build' and make it executable is also interesting... it confused the hell out of me.

That doesn't work for me. Tried with empty Makefile, no Makefile, with make (PMake) and gmake (GNU Make).

Don't know why that would be. I'm using GNU Make 4.1, but this has worked for years and years as far as I knew. Not a particularly useful feature, so it doesn't really matter, but you messed up my fun fact.

  dima@fatty:/tmp$ mkdir dir

  dima@fatty:/tmp$ cd dir

  dima@fatty:/tmp/dir$ touch foo.c

  dima@fatty:/tmp/dir$ make -n foo
  cc     foo.c   -o foo

  dima@fatty:/tmp/dir$ make --version
  GNU Make 4.1
  Built for x86_64-pc-linux-gnu
  Copyright (C) 1988-2014 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software: you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.

I had fun learning about that.

Try "make foo" instead of "make"

Ah, that works!

Then you did something wrong — it definitely works with GNU make (can’t speak for PMake):



   make foo.c

Should just work. No makefile needed.

No, `make foo`. You need to state the target, not the input.

You're correct.

   make foo


   make foo.o

Yeah. There is a metric crap-ton of the design of Make that is solely for the purpose of compiling and linking and document processing. That's actually part of what makes it annoying to use it for projects other than C or C++, when you don't need to compile or transform or depend on different formats.

The core of make is really just a control flow model that understands file dependencies as a first class thing, and permits arbitrary user supplied actions to be specified to update those files. All those default rules around how to handle C files are really more like a standard library and can be easily overridden as desired.
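To make the "standard library" framing concrete (a toy pattern rule, names invented): a user-supplied `%.o` rule simply shadows the built-in C rule.

```shell
cd "$(mktemp -d)"
touch foo.c

# Our own %.o pattern rule takes precedence over make's built-in one.
printf '%%.o: %%.c\n\t@echo custom rule for $@\n' > Makefile
make foo.o     # → custom rule for foo.o
```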

IMHO what makes it annoying for projects other than C or C++ is that there isn't an equivalent portion of make's "standard library" that applies to e.g. java, but this is largely because java went down a different path to develop its build ecosystem.

In an alternate reality java tooling might have been designed to work well with make, and then make would have a substantial builtin knowledge base around how to work with java artifacts as well as having a really nice UX for automating custom workflows, but instead java went down the road of creating monolithic build tooling and for a long time java build tooling really sucked at being extensible for custom workflows.

The thing about Java is that it has its own dependency system embedded in the compiler. This design decision made it difficult to integrate with a tool like make.

I don't think having dependencies built into the language and/or compiler means it needs to be difficult to integrate with something like make. In fact gcc has dependency analysis built into it. It just knows how to output that information in a simple format that make can then consume.

I feel like this choice has more to do with early java culture and/or constraints as compared to unix/linux. With "the unix way" it is really common to solve a problem by writing two separate programs that are loosely coupled by a simple text based file format. When done well, this approach has a lot of the benefits of well done microservices-style applications built today. By contrast, (and probably for a variety of reasons) this approach was always very rare in early java days. It seemed for a while like the norm was to rewrite everything in java and run it all in one giant JVM in order to avoid JVM startup overhead. ;-) The upshot being you often ended up with a lot more monolithic/tightly coupled designs in Java. (I think this is less true about Java today.)

> There is a metric crap-ton of the design of Make that is solely for the purpose of compiling and linking and document processing.

Not really. The bit being pointed out here certainly isn't. It's not any special design going on, it's just a built-in library of rules and variables for C/C++/Pascal/Fortran/Modula-2/Assembler/TeX. These rules are no different than if you had typed them in to the Makefile yourself. And if you don't like them, you can say --no-builtin-rules --no-builtin-variables.

The only actual bit of C-specific design I can think of is .LIBPATTERNS library searching.

Thank you. It's nice to know that I'm not alone in this dark, dark world.

It's so depressing when people use arguments like "it's old", "it uses tabs", and "it's hard to learn". As described by one Peter Miller, "Make is an expert system". If a tool is the most powerful, standard, and expressive among its peers, its age or the fact that it uses tabs should be inconsequential.

If anything, the fact that it's decades old and used in every major software project is a testament to its effectiveness, not a drawback.

And if "learning Make" is a barrier, that to me is a sign that someone cares more about complaining than about their project. The same way people learn Git when it's clear that it's the best tool, people learn Make. It really isn't that hard. Even the basics are enough to reap huge benefits immediately.

> If anything, the fact that it's decades old and used in every major software project is a testament to its effectiveness, not a drawback.

Part of the reason is because people see the superficial issues (like the discussion regarding spaces) before they see the value of years (or decades) of work. It doesn't help that when folks bring this up many times you get "you're holding it wrong" type responses.

I sympathise with both sides of the argument. I don't know the solution but it's unfortunate seeing folks reinventing the wheel and struggling with problems solved in the past.

I definitely think Gulp and Webpack have a place. Where you have to write a Makefile fresh every time for every type of project, Gulp and Webpack come prepared with JS-specific stuff. That's perfect for a throwaway project or a small codebase where build performance and maintenance really don't matter.

My issue is that people who need serious solutions forgo Make because "tabs, man", or because Webpack has pretty console output.

The tools are not the same on every platform. That’s reason enough for me to not use it with my JavaScript projects.

The bigger reason though is that it’s not very idiomatic for JavaScript projects to use Make. It sounds like the only reason that some people go out of their way to use it is because they actually don’t want to learn something.

> "The tools are not the same on every platform."

Any common examples?

> "It’s not very idiomatic for JavaScript projects to use Make."

While I agree that popularity is a factor in picking a tool, it shouldn't be a deciding factor. Going by popularity is precisely how we end up with a new Build System of the Year(TM) every few years. The fact that we've gone through 4 fairly prominent tools (Gulp, Grunt, Broccoli, Webpack), each contending to "fix" the previous, and none of which has a proper DAG or incremental builds (which Make has had for decades), is damning evidence.

In other words, I think Make could be (and I wish it was) idiomatic for JS.

This comment points out the kind of problems that can occur just on Unix systems - https://news.ycombinator.com/item?id=16485637

And then there’s Windows...

Anyway, the fact that things change quickly in JS-land is more of a testament to how popular it is than anything else IMO. If C were used in the same environments as JS, I’m sure that you'd see just as much churn.

C is used in a lot of platforms.

JS runs only in the browser (thankfully mostly converged) and in Node.

JavaScript is in the server, browser, mobile apps, desktop apps and embedded devices.

Basically, it’s used everywhere that C is used and then some.

> "Basically, it’s used everywhere that C is used and then some."

It's the other way around. Every JavaScript program is run by a C/C++ program.

But if you consider browsers to be a "platform", which I think they are in fairness, then the GP comment could still apply. Whether the browser is written in C is pretty irrelevant to whether it can host it.

I conjecture that an upper-level "platform" that's on top of a lower-level "platform", for some definition of "platform", will be less varied and fragmented than the lower-level one.

What is the runtime? Always the same.

More than C? Ha! The platform where your js is running is probably written in C!

Pffffft. Nobody wants to touch C these days.

That's why many, many, many, many, many more people are writing server, browser, desktop and mobile apps with JS and not C.

There's way more to software engineering than GitHub. Example: every company that hosts their own code repositories.

The TIOBE index, while not entirely accurate either, does reflect usage by a much more broad set of engineers.

Here's another survey that shows what kinds of languages programmers actually want to use and JS tops the list while C comes in below even Bash/Shell programming and PHP.

Incorrect. TIOBE shows you what companies are making their employees do. Github numbers show you what programmers actually want to do.

However, looking at job ads these days I hardly see any for C relative to JavaScript.

My point was that TIOBE is simply more all-encompassing than GitHut. It gives you a clearer picture of the most searched programming languages, which includes both GitHub as well as non-GitHub stats.

Also, GitHut counts repositories, not lines of code. Not sure which of C or JS has more lines of code being written every year, but my guess is that it's C (Java might have more than both though).

Yes and my point is that GitHub numbers show you a much better picture of programming languages that programmers actually want to use.

Are we just going to go back and forth saying the same thing over and over though at this point?

Nah, then it's pretty simple. My rebuttal is that I think "programming languages that programmers actually want to use" (besides still being a debatable claim) is an insignificant metric. The industry doesn't pay for what you want to use.

When you are responding to a comment that says that programmers don’t want to use C...

K, thx bye now.

I think the comment was that JS is more popular than C. ;)

Yes, popular = the thing people want to use. And it was my comment and it’s quite obvious upon reading again, that my implication is very clear. That's why github numbers alone are better because it shows what programmers actually want to use.

It's the same thing I've been saying to you for our last 20 interactions here. But I'll keep going because I'm not going to let you get the last word.

SO let's keep going I guess.

The GitHut numbers show repository count. I'm sure there are all sorts of small JS repos with 50 lines of code, padding the project portfolios of people everywhere. Of course lines of code don't mean much either: some languages are a bit more verbose than others. The TIOBE index shows what programmers search for, which is likely unbiased by LoC or repository count. My goal really is to help show you that JS is popular, but not nearly as widely used (or desired) as people make it out to be. As uncool as Java and C are, they still dominate software development. If you're dead-set on your position, however, and don't wish to believe those data, then by all means let's disagree. :)

My position is that JavaScript is way more popular than C with programmers and that the popularity of JavaScript on GitHub proves my point.

Nothing that you have said has refuted that.

However, I do find your comment about "JS repos with 50 lines of code" to be particularly humorous since you can do more with 50 lines of JS than you could possibly dream of doing with 50 lines of C. LOL :) Thanks for a good laugh.

Care to try again?

You can use gnu make on platforms where the usual make is not gnu. For example I install it from ports on *bsd and have an msys gnu make on Windows.

What is the build tool used in the javascript world this week? Broccoli? Grunt? Gulp? Talp?

Just dealing with breaking changes in any one of those is a full time job!

Pro-tip: you don’t have to switch tools the minute that something new comes out.

You do, since different communities use different build tools. And they even switch tools when a new kid arrives on the block.

Bullshit. You're arguing as if every JS developer has to interact with every JS community.

Stop presenting these ridiculous false dichotomies.

You often do though. For example if you want to do open source development on JS projects, this means that you might have to learn many different build systems.

Yeah, where “often” is defined as “this one example that I just conjured up”.

Yep, you're right. It's anecdotal evidence, of which you have 2 data points on this thread. We're extrapolating here from what we've seen, for sure. If you have better data, please do share.

I know I'm right. Thanks.

Aw, I was hoping you'd actually follow up with some meaningful data. :(


Our anecdotal evidence trumps your no evidence. ;)

Haha yes you do, because your manager does, every time

Make's interface is horrible. Significant tabs. Syntax which relies on bizarre punctuation... If only whoever authored Make 40 years ago had had the design acumen of a Ken Thompson or a Dennis Ritchie!

We're stuck with Make because of network effects. I wish that it could just become "lost" forever and a different dependency-based-programming build tool could replace it... but that's just wishful thinking. The pace of our progress is doomed to be held back by the legacy of those poor design decisions for a long time to come.

Maybe I'm in the minority, but I've always found its syntax to be quite nice (though admittedly a departure from most modern languages). Then again, I find using JSON or not-quite-ruby to configure a build incredibly bizarre and confusing, so I guess I'm just set in my ways...

In all seriousness, what's wrong with it? Significant tabs aren't great, but I feel like that's a relatively minor wart. The simple things are _very_ simple and straightforward. The more complex things are more complex, but usually still manageable...

I've seen plenty of unmanageable Makefiles, but I haven't seen another system that would make them inherently cleaner. (I love CMake, but it's a beast, and even harder to debug than make. If it weren't for its nice cross-platform capabilities, I'm not sure it would see much use. It's also too specialized for a generic build tool. Then again, I definitely prefer it to raw Makefiles for a large C++ project.)

"In all seriousness, what's wrong with it?"

1. A rule that claims to make a target but then fails to make it ought to be a fatal runtime error. I can hardly even guess at how much time this one change alone would have saved people.

2. String concatenation as the fundamental composition method is a cute hack for the 1970s... no sarcasm, it really is... but there are better-known ways to make "templates" nowadays. It's hard to debug template-based code, yet it's hard to build a non-trivial system without templates.

3. Debugging makefiles is made much more difficult than necessary by make's default expansion of every target to about 30 different extensions for specific C-based tools (many of which nobody uses anymore), so make -d output is really hard to use. Technically once you learn to read the output it tends to have all the details you need to figure out what's going wrong, but it is simply buried in piles of files that have never and will never be found in my project.

4. The distinction between runtime variables and template-time variables is really difficult and annoying.

5. I have read the description of what INTERMEDIATE does at least a dozen times and I still don't really get it. I'm pretty sure it's basically a hack on the fact the underlying model isn't rich enough to do what people want.

6. Sort of related to 2, but the only datatype being strings makes a lot of things harder than it needs to be.

7. Make really needs a debugger so I can step through the build, see the final expansions of templates and commands, etc. It's a great example of a place where printf debugging can be very difficult to make work, but it's your only choice.
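To illustrate point 1: this rule claims to produce its target but writes somewhere else; make never flags the mismatch, so the recipe silently reruns on every build (the filenames and the md-to-pdf tool are hypothetical):

```make
# claims to build report.pdf but actually writes report-draft.pdf;
# make accepts this silently and reruns the recipe every time
report.pdf: report.md
	md-to-pdf report.md -o report-draft.pdf
```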

That said, I'd sort of like "a fixed-up make" myself, but there's an effect I wish I had a name for where new techs that are merely improvements on an old one almost never succeed, as if they are overshadowed by the original. Make++ is probably impossible to get anybody to buy in to, so if you don't want make you pretty much have to make something substantially different just to get people to look at you at all.

Also, obviously, many of the preceding comments still apply to a lot of other build tools, too.

(I'm making a separate comment from the INTERMEDIATE explanation)

I'm a big fan of Make, but appreciated your detailed criticism, and found myself nodding in agreement.

#3: I like to stick MAKEFLAGS += --no-builtin-rules in my Makefiles, for this reason. This, of course, has the downside that I can't take advantage of any of the builtin rules.

#7: There is a 3rd-party GNU Make debugger called Remake https://bashdb.sourceforge.net/remake/ https://sourceforge.net/projects/bashdb/files/remake/ It comes recommended by Paul Smith, the maintainer of GNU Make.

> I have read the description of what INTERMEDIATE does at least a dozen times and I still don't really get it.

It took many reads, but I think I get it.

As a toy example, consider the dependency chain:

   foo <- foo.o <- foo.c
             `- candidate to be considered "intermediate"
Depending on how we wrote the rules, foo.o may automatically be considered "intermediate". If we didn't write the rules in a way that foo.o is automatically considered intermediate, we may explicitly mark it as intermediate by writing:

    .INTERMEDIATE: foo.o
So, what does "foo.o" being "intermediate" mean? 2 things:

1. Make will automatically delete "foo.o" after "foo" has been built.

2. "foo.o" doesn't need to exist for "foo" to be considered up-to-date. If "foo" exists and is newer than "foo.c", and "foo.o" doesn't exist, "foo" is considered up-to-date (if "foo" was built by make, then when it was built, "foo.o" must have existed at the time, and must have been up-to-date with foo.c at the time). This is mostly a hack so that property #1 does not break incremental builds.

These seem like useful properties if disk space is at a premium, but they're something that I have never wanted Make to do. Rather than characterizing it as "a hack on the fact the underlying model isn't rich enough", I'd characterize it as "a space-saving hack from a time when drives were smaller".
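To make it concrete, a toy Makefile marking foo.o intermediate (names from the example above) could be:

```make
foo: foo.o
	$(CC) -o $@ foo.o

foo.o: foo.c
	$(CC) -c -o $@ foo.c

# foo.o is deleted once foo is built (property #1), and a missing
# foo.o does not make foo out of date (property #2)
.INTERMEDIATE: foo.o
```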


If, like me, you don't like this, and want to disable it even on files that Make automatically decides are intermediate, you can write:

    .SECONDARY:
Which tells it 2 things:

a. never apply property #1; never automatically delete intermediate files

b. always apply property #2; always let us hop over missing elements in the dependency tree

I write .SECONDARY: for #a, and don't so much care for #b. But, because #1 never triggers, #b/#2 shouldn't ever come up in practice.

Concerning expansion, I guess the article is not claiming that it is good; it doesn't even mention this "feature". I guess it's just saying: "make can do everything your gulp can, it's better, faster and more readable", and that's without using any variable expansion feature.

As soon as more JavaScript developers start writing very very simple Makefiles, tooling will improve and maybe someone will come up with a better make.

The other option is to let people keep using webpack and gulp until they come up with another JS-based build-system, webpack 5 or 6, and grulpt or whatever comes after grunt/gulp.

I'm happy to see this kind of detailed criticism. I would be happy to use a new tool if it is similarly general, and has a declarative style. Other commenters brought up Bazel, which I am looking forward to learning about.

With the debugging expansion thing you're mentioning, now I'm craving a make built on some minimalist functional programming language like Racket where "expand the call tree" is a basic operation.

I've been writing Makefiles regularly for maybe 15 years and I always end up on this page every time I need to write a new one: https://www.gnu.org/software/make/manual/html_node/Automatic...

$< $> $* $^ ... Not particularly explicit. You also have the very useful substitution rules, like $(SRC:.c=.o) which are probably more arcane than they ought to be. You can make similar complaints about POSIX shell syntax but at least the shell has the excuse of being used interactively so it makes sense to save on the typing I suppose.

That's my major qualm with it however, the rest of the syntax is mostly straightforward in my opinion, at least for basic Makefiles.

give pmake a shot sometime.. the syntax/semantics are much more 'shell-like' imho and some things are just much more possible.. (e.g. looping rather than recursive calls to function definitions)

manual: ('make' on a bsd is PMake)


but most linux flavors have a package somewhere..

> $(SRC:.c=.o)

I use

  $(patsubst %.c,%.o,$(SRC))
instead, which I find easier to remember.

Isn't that a GNU extension? Here's an other problem right there, figure out which dialect of Make you're using and their various quirks and extensions.

I think you can compile gmake for most platforms.

> I've seen plenty of unmanageable Makefiles, but I haven't seen another system that would make them inherently cleaner.

Not to push the particular product, but the approach:

FAKE (https://fake.build/), is an F# "Make" system that takes a fundamentally different tack to handling the complexity. Instead of having a new, restricted, purpose built language they've implemented a build DSL in F# scripts.

That yields build scripts that are strongly typed, just-in-time compiled, have full access to the .Net ecosystem and all your own code libraries, and are implemented in a first class functional language. That is to say: you can bring the full force of programming and abstraction to handle arbitrarily complex situations using the same competencies that one uses for coding.

As the build file grows in complexity and scope it can be refactored, and use functionality integrated into your infrastructure code, in the same way programs are refactored and improved to make them manageable. The result is something highly accessible, supportive, and aggressively positioned for modern build chains... If you can do it in .Net, you can do it in FAKE.

I don't like the syntax much, but I love the programming model. I think people who are used to imperative languages are put off by the declarative programming model of make.

> If only whoever authored Make 40 years ago

Make was created by Stuart Feldman at Bell Labs in 1976. The fact that it is still in use in any form and still being discussed here is a testament to what an amazingly good job he did at the time. Whether it is the right tool for any given modern use case is up to the people who decide to use it or pass it by. I still work with it almost daily, and it's in wide use by backend system engineers if my experience is any guide. Yes, it's quite clunky, but also quite powerful and reliable at the few things it does. It's also pretty much guaranteed to already be installed and working on every *nix system, and that's not nothing.

Look at the git makefile: https://github.com/git/git/blob/master/Makefile

You know what you have to do to build git? Type make. It's amazing.

If fixing things in this makefile is not your job, then it really is amazing.

Honestly it's like any other tool we use: once you know the rules governing its behavior it really isn't that hard to debug issues. Make is very consistent in most cases. There are plenty of traps, and they're made easier to fall into given the archaic syntax, but you don't typically have to fall into them over and over again :).

Yes but some tools have... shall we say, irregular rules. Tell me how CMake's argument quoting works for example.

Actually, you might want to glance at the first few hundred lines of platform specific stuff you are supposed to set manually before the main makefile begins. Literally meant to be set by hand, but the Git developers made educated guesses about the specifics of each platform.

> The fact that it is still in use in any form and still being discussed here is a testament to what an amazingly good job he did at the time.

Not necessarily.

> It’s also pretty much guaranteed to already be installed and working on every *nix system, and that's not nothing.

First mover advantage.

The fact that no modern language, basically nothing outside of C/C++ uses it, says a lot. And even those are moving away, see Cmake & co.

> basically nothing outside of C/C++ uses it

That's how it has always been, though. In the '90s you didn't need to run Perl or Tcl through it because you weren't compiling anything. The Venn diagram of "Platforms that have make" and "Popular compiled languages" comes up with only asm/C/C++.

Many languages want to do things their way, such as Erlang, Common Lisp, Java, etc. Ruby and Python are interpreted and also don't need a build process. JS, until recently, was interpreted. Lots of NIH going around.

> And even those are moving away, see Cmake & co.

Cmake is closer to autoconf/automake/libtool. If you have serious cross-platform needs, then Cmake is a fine tool. But it's hardly less archaic than make (and only slightly less so than autoconf) and I'm dubious that too many people are really moving away rather than just picking up the newer, shiny tool for newer projects.

If I were doing a small static website or something that required a build with standard *ix tools, vanilla make would be my tool of choice, hands down. Tools like autoconf and, as the author pointed out, webpack provide a more specialized need.

> That's how it has always been, though. In the '90s you didn't need to run Perl or Tcl through it because you weren't compiling anything. The Venn diagram of "Platforms that have make" and "Popular compiled languages" comes up with only asm/C/C++.

Pascal/Delphi were wildly popular in the late '80s and early '90s, though. I don't remember them being built with make.

> Make's interface is horrible. Significant tabs.

True, one should not edit makefiles with notepad. Proper editors have support for editing them, though.

> Syntax which relies on bizarre punctuation...

Well documented, though.[1]

> a different dependency-based-programming build tool could replace it... but that's just wishful thinking

Use prolog[2], you should be able to write this in about 42 lines. But you'll end up with the same complaints ("the syntax, the magic variables!") because, in my experience, those are just superficial: The real problem, imho, is that declarative and rule based programming are simply not part of the curriculum, especially not for auto-didactic web developers. OTOH, it only takes an hour or two to grok it, when somebody "who knows" is around and explains. It really is dead simple.

[1] http://pubs.opengroup.org/onlinepubs/009695399/utilities/mak...

[2] https://en.wikipedia.org/wiki/Prolog

FWIW $< is < from redirecting input... a mnemonic for inputs.

$@ is target, because @ looks sort of like a bullseye.

significant tabs are gross, yes, but this is a one-time edit to your vimrc, after which you can forget about it forever.

it’s also installed everywhere and has minimal dependencies. it could be a lot worse. (see also: m4, autoconf, sendmail.cf)

But how do you remember when to use $^ and $<
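A sketch that may help: $< is only the first prerequisite (think input redirection), while $^ is every prerequisite. Link rules typically want $^, pattern compile rules want $<:

```make
# $^ = all prerequisites: main.o util.o
prog: main.o util.o
	$(CC) -o $@ $^

# $< = first prerequisite: the matching .c file
%.o: %.c
	$(CC) -c -o $@ $<
```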

Make is such a horrifically awful thing to work with that I just end up using a regular scripting language for building. Why learn another language with all its eccentricities and footguns when I already know several others?

Because, like many other things in programming, you'll end up with a half-baked and buggy implementation of make anyways.

Incremental builds by looking for changed dependencies, a configuration file with its own significant identifiers (i.e. a build DSL shoehorned into JSON or YAML), generalized target rules, shelling out commands, sub-project builds, dry runs, dependencies for your own script, parallelization, and a unique tool with (making a generalization here) insufficient documentation.

If you're really unlucky, you'll even end up with the equivalent of a configure.sh to transpile one DSL and run environment into the DSL for your custom tool.

> Because, like many other things in programming, you'll end up with a half-baked and buggy implementation of make anyways.

I'd argue that make is a half-baked and buggy implementation of make - so that's not really a drawback so much as the status quo.

E.g. I have scripts that exist mainly to carefully select the "correct" version of make for a given project to deal with path normalization and bintools selection issues on windows - and none of these ~3 versions of make work on all our Makefiles. One of those versions appears to have some kind of IO bug - it'll invoke commands with random characters missing for sufficiently large Makefiles, which I've already gone over with a hex editor to make sure there weren't weird control characters or invisible whitespace to blame. So, buggy and brittle.

I'd argue that there are some major concerns with the makefiles if they require the use of 3 different versions of make to get it all working - a situation I've never personally seen before. I'd suggest prioritizing fixing that before attempting tracking down the cause of other issues. As it stands, there are too many points of interaction to attribute any bugs to any one program.

That said, Windows has never been a strong platform on which to run make (or git, gcc, or any other [u/li]nix originated CLI tools). When I hear of folks using make, I tend to make the assumption that they're running on a [u/li]nix or BSD derivative.

> I'd suggest prioritizing fixing that

I've already had one upstream patch rejected on account of the additional complexity fixing it introduces, and would rather not indefinitely support my own fork of other people's build setups.

Or if I am going to indefinitely support my own fork, I might as well rewrite the build config to properly integrate with the rest of whatever build system I happen to be using - at least then I'll get unified build/compilation options etc.

> path normalization and bintools selection issues on windows

make was never intended to be a cross-platform tool. If you face problems using it on Windows, then that's on you.

All the more reason to use a regular scripting language for building, then, since there are several that are plenty cross-platform.

That is, unless the folks who maintain and champion make want me to use make. It's on them to court me, the cross-platform developer, not the other way around.

Again, if you're a cross-platform developer, make is not for you. Do you also expect zsh enthusiasts to court Windows devs?

I kind of disagree, it's quite possible to use gnu make on Windows with msys (though requires some scripting discipline and probably not for everyone).

I'm currently doing this for a cross platform c++ project. Same makefile used for Linux, Mac, and windows+msys+cl.exe. (yes, fair amount of extra variables and ifdefs to support the last one...)

Incremental builds. It's a pain in the ass to write this in a good, generic way yourself. If your build tools don't already understand it, then make (and similar tools) makes for a nice addition versus just a script that invokes everything every time.

EDIT: Oh, and parallel execution, but smart parallel execution. Independent tasks can be allowed to run simultaneously. Very useful when you have a lot of IO bound tasks. Like in compilation, or if you set it up to retrieve or transmit data over a network.

It's not too hard to do that in your custom script, but more care is required because until your custom script reaches make-level internal complexity you will have to manually track dependencies or make sub-optimal assumptions.
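A sketch of the parallel case: because make knows the two downloads below are independent, `make -j2` runs them concurrently, where a straight-line script would need explicit backgrounding and waiting (URLs hypothetical):

```make
all: a.dat b.dat

# independent IO-bound targets; `make -j2 all` overlaps them
a.dat:
	curl -o $@ https://example.com/a
b.dat:
	curl -o $@ https://example.com/b
```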

I very much prefer Rake for orchestrating multiple build systems in a web project or dependency installation or just any sort of scripting. It comes with a simplified interface to shelling out that will print out the commands you are calling.

If for some reason the project is ruby allergic, I'll try to use Invoke [0].

Sometimes I feel like people's usage of Make in web projects is akin to someone taking an axe and hand saw out to break down a tree for firewood when there are multiple perfectly functioning chainsaws in the garage.

[0] http://www.pyinvoke.org/

This is why I like redo; it handles all of the dependency stuff for you, but your build scripts are written in pure sh

I have been meaning to spend some time with redo!


always been curious about redo. which variant do you use, and what do you use it for?

I use apenwarr's because I installed it a long time ago and haven't needed anything more; it does look like it may be abandoned though.

I use it for anything where I need job scheduling around creating output files; even my DVD ripping is managed with redo calling into ffmpeg.

What's your solution for multiple-output processes like yacc/bison (which create both a .h and a .c file)?

I don't run into those too often.

When I do, there are two cases:

1) All dependencies are on only one output file (e.g. link stages that generate .map files, compile phases that generate separate .o and debug files). I just treat these as normal

2) I may need to depend on each of the files (I don't use yacc/bison, but it sounds like this would qualify). I select one file to be the main output and have the .do file for that ensure that the secondary outputs go to a mangled name; I then have the .do files for the secondary outputs rename the mangled file.

quick example for generating foo.c/foo.h

foo.c.do:

    generate-file --cout "$3" --hout "$2-mangled.h"

foo.h.do:

    redo-ifchange "$2.c"
    mv "$2-mangled.h" "$3"

If the h output changes but the c doesn’t, this rule will miss it.

I once solved it by making a tar file of all the outputs and extracting it to both .c and .h, but that's incredibly kludgy; still looking for a better solution.

As long as the timestamp changes on the C file, that's fine, right? At least with the version of redo I use timestamps are used by default, not file contents.

Do you honestly think Make has no advantages over conventional scripting languages when it comes to building software? I suspect you know that it’s designed for that task and has been used for that task for several decades. Presumably you respect the community over those decades sufficiently to have a strong prior belief that there are good arguments for Make (as well as downsides), even if you can’t be bothered to research them.

I honestly think any advantages it has are significantly outweighed by all its disadvantages.

And no, I don't respect the community. The community very often makes "The Way Things Are Done" its personal religion and refuses to ever change anything for the better.

Another advantage of using a scripting language is that the hard work of portability will already have been done for you by the authors of the scripting language.

Make, in contrast, works within a shell and invokes programs which may be either wildly or subtly incompatible across platforms. Add in the lacking support for conditionals in POSIX make and portability is a nightmare.
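For example, a conditional like this works only in GNU make; POSIX make has no ifeq/else/endif, so strictly portable Makefiles end up pushing such decisions into a configure step or per-platform include files:

```make
# GNU make extension, not POSIX
ifeq ($(shell uname),Linux)
    LDLIBS += -lrt
endif
```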

On the other hand, make invokes well-known shell commands; I already know what they do when I see them used.

If you are invoking your own functions (or functions imported from some random library), I have to check what they do.

it is possible that if you need to do so much in a makefile that you are running into shell incompatibilities, then perhaps you are attempting to cram too much complexity into your build system and should be using something like Docker to decrease incidences of “works on my machine/os/distro/et c”.

(I am aware that this advice does not hold true in all cases; just that for many of them overly complicated build systems is a code smell.)

To be fair, at the core much of what you're going to be doing is invoking command-line tools. I mean, most compilers are invoked as command-line tools.

because then people new to your project can see a makefile and know that “make” and “make test” and “make install” probably work without having to learn your homebrew, one-off build system.

This argument for Make is based exclusively on network effects. Fine, we accept that we're stuck with it. It still sucks.

I disagree. It’s an argument for convention. This is the same reason we have package.json or Dockerfile or Pipfile or Rakefile - it tells us the standard interface for “install this”. It’s not specific to make.

Docker is new and didn’t have any network effect until recently.

How many of those languages are declarative, not imperative? The advantage of Make is that you only need to give it the inputs and the outputs.

Compared to the autotools I'd say make is fairly decent! But yeah, it's a mess. I wonder if it's really possible to improve significantly over the status quo if you're not willing to compromise on flexibility and design the build system alongside the language itself, like Rust does with cargo for instance.

Make, on the other hand, is completely language agnostic: you can use it to compile C, Java, or LaTeX, or to do your taxes. Make is a bit like shell scripts: great when you have a small project and you just want to build a few C files[1], but when the project grows there always comes a point where it becomes hell.

[1] And even then if you want to do it right you need compiler support to figure header dependencies out, like GCC's various -M flags.

It's not like C is famous for having particularly good syntax. If anything, it's the worst thing about it.

They must have got something good though. Else why would there be entire families of C-like languages?

Plus, to me C syntax is particularly good. You're writing real words and the computer does the things you tell it to. To the letter.

> Else why would there be entire families of C-like languages?

C became popular because Unix became popular, and when designing a language that intends to become popular one aims for a ratio of 10% novelty and 90% familiarity. Like, ever wonder why Javascript's date format numbers the months starting from zero and the days starting from one? It's because Eich was told to make JS as much like Java as he could, and java.util.Date numbers the months from zero and the days from one, which Java itself got from C's time.h. (Not coincidentally, Java and JS are both in the C-like language family.)

> You're writing real words

In C? Not compared to ALGOL, COBOL, Pascal, and Ada, you're not. :)

> the computer does the things you tell it to. To the letter.

As long as you're not using a modern compiler, whose optimizations will gleefully translate your code into whatever operations it pleases. And even if one were to bypass C and write assembly code manually, that still doesn't give you complete control over modern CPUs, who are free to do all sorts of opaque silliness in the background for the sake of performance.

To be fair to myself, I was mostly snarking at the OP's comment, which dismissed the C language as if it were some ancient relic.

But even without accounting for compiler optimizations and CPU architecture, the C language just straight-up lies to its user. You could code something entirely with void*.

PS: what I meant by "real words" is that you're naming functions and calling them by their name. Which in itself is very powerful.

> They must have got something good though. Else why would there be entire families of C-like languages?

Network effect. If you wanted to write for unix, you almost had to use C originally.

> Plus, to me C syntax is particularly good.

It's meh. It's not the most straightforward when defining complex types like arrays or function pointers.

> You're writing real words and the computer does the things you tell it to. To the letter.

So yeah, about that...

Everybody I know found it confusing at first, but it's logical and modular. I don't know a language with a better type declaration syntax. It gets impractical when you define functions that return functions that return..., because these expressions grow on the left and right simultaneously. But realistically, you don't do that in C, and other than that, I find it easy to read the type of any expression...

    const int *x[5];
    *x[3];  // const int
    x[3];   // const int *
    x;      // the array decays to a pointer, so: const int **
Is there any other syntax that has this modularity?

> But realistically, you don't do that in C

IMHO, that's a self-fulfilling prophecy. If it were reasonably easy to do that in C, it would be done more.

It's reasonably easy, compared to any real practical programming task. For example:

  // your ordinary function declarations (bool needs <stdbool.h> in C99)
  int plus2(int i) { return i + 2; }
  int times2(int i) { return i * 2; }

  typedef int (*modfun)(int);
  // a very reasonable syntax altogether
  modfun get_modfun(bool mul) { return (mul ? times2 : plus2); }
Nests as well if you ever wanted to:

  modfun get_modfun_no_mul(bool mul) { return plus2; }

  typedef modfun (*modfun_getter)(bool);
  modfun_getter get_getter(bool allow_mul_opt) { return (allow_mul_opt ? get_modfun : get_modfun_no_mul); }

C is a systems programming language. It doesn't have closures or other features from functional programming. In other words, you can't "create functions at runtime". That is why you basically never see a function returning another function.

So, Go is also a "systems" language, which makes the term more or less meaningless now. Assuming you mean a language we can easily compile to an independent binary capable of running directly on a microprocessor with no support, I offer you Rust as a counter-example.

Also, functional programming and returning functions does not mean they are created at runtime.

It's a fact that C doesn't have closures. That is my point. I happen to like that fact, but you don't have to agree with me.

And "creating functions" means: closing over variables ("closures"), or partial application. I think it takes at least that to be able to return interesting functions.

(and whether go is a systems programming language is at least debatable. I think the creators have distanced themselves from that. It depends on your definition of "systems". You can't really write an OS in go).

> It's a fact that C doesn't have closures.


> That is my point.

Not unless your first two sentences have absolutely nothing to do with each other. Your point appears to be that because it's a low-level language it doesn't have these features, which is false.

> I happen to like that fact, but you don't have to agree with me.

Or I just think you don't have any experience using better languages. That isn't to say that another language could supplant C, just that it's difficult for me to imagine actually liking the C type system (or lack thereof) and the lack of first-class functions. It's incredibly limiting and requires a lot of hoops to be jumped through to do anything interesting.

> And "creating functions" means: closing over variables ("closures"), or partial application.

Well, you said at runtime. Of course you can "create" functions at compile or programming time!

> Your point appears to be that because it's a low-level language it doesn't have these features, which is false.

I would think that first class closures do indeed _not_ belong in a low-level language. They hide complexity, and you want to avoid that in low-level programming. Not necessarily for performance reasons, but more from a standpoint of clarity (which in turn can critically affect performance, but in subtler ways).

> Or I just think you don't have any experince using better languages.

Nah, I have experience in many other languages, including Python, C++11, Java, Haskell, Javascript, Postscript. The self-containment, control, robustness and clarity I get from a cleanly designed C architecture is just a lot more appealing to me. The only other language I can stand is Python, but for complex things, it becomes actually more work. For example, because it's so goddamn hard to just copy data as values in most languages (thanks to crazy object graphs).

> It's incredibly limiting and requires a lot of hoops to be jumped through to do anything interesting.

It depends on what you are doing. It's a bad match for domains where you have to fight with short-lived objects and do a lot of uncontrolled allocations and string conversions. My experience in other domains (including some types of enterprise software) is more the opposite, though. Most software projects written in more advanced languages are so damn complicated, but do nothing impressive at all. They are mostly busy fighting the complexity that comes from using the many features of the language. But those features help only a bit (in the small), and when you scale up they come back and bite you!

Here's a nice video from a guy who gets shit done, if you are interested: https://www.youtube.com/watch?v=khmFGThc5TI

> Well, you said at runtime. Of course you can "create" functions at compile or programming time!

Closures and partial application are done at runtime. The values bound are dynamic. So in that sense, the functions are indeed created at runtime. I sense that you were under the impression that a closure would actually have the argument "baked in" at compile time (resulting in different code at a low level) instead of the argument being applied at runtime. That's not the case, unless optimizations are possible. If that was really your impression, it makes my point about avoiding complexity and about closures not belonging in a low-level language. (Look up closure conversion / lambda lifting.)

I'm really not sure what you mean by "modularity". There's a lot of languages with much more readable and composable type declarations than C. For example in OCaml:

  let x : (int list ref, string) result option =
    let x1 : int = 0 in
    let x2 : int list = [ x1 ] in
    let x3 : int list ref = ref x2 in
    let x4 : (int list ref, string) result = Ok x3 in
    Some x4
The declaration of List.map:

  val map : ('a -> 'b) -> 'a list -> 'b list

I've got no issue with C syntax either, but to be clear, syntax != semantics.

To cut to the chase:

* syntax = structure

* semantics = meaning

C syntax would include things like curly braces and semicolons, whereas C semantics would include things like the functionality of the reserved keywords.

This SO answer gives a more detailed explanation:


The nice thing about C is that it's a great cross-platform assembly language. I wouldn't call its syntax good or bad.

> The nice thing about C is that it's a great cross-platform assembly language. I wouldn't call its syntax good or bad.

Is it though? It's definitely available on a huge number of platforms, but are the implementations compatible?

And even if they are, there is so much undefined behavior that taking advantage of the cross-platform nature is not nearly as easy as it should be.

Exactly. If C hadn't been invented then someone would have invented it later under a different name.

Because it's obvious people need a minimalistic portable assembler.

Before C was invented there were already other companies writing OSes in high-level languages, but thanks to its victory it now gets all the credit.

History is re-written by winners as usual.

I wasn't thinking in terms of winners.

But what were those other OSs and languages? Sounds interesting!

You can start here,



Some examples, off the top of my head:

- Burroughs, now being sold as Unisys ClearPath, used ESPOL, later replaced by NEWP, which already had the concept of UNSAFE code blocks;

- IBM used PL/8 for their RISC research, before switching to C when they decided to go commercial selling RISC hardware for UNIX workstations

- VAX/VMS used Bliss

- Xerox PARC started their research in BCPL, eventually moved to Mesa (later upgraded to Mesa/Cedar), these languages are the inspiration for Wirth's Modula-2 and Oberon languages

- OS/400, nowadays known as IBM i, was developed in PL/S. New code started to be replaced by C++. It was probably the first OS to use a kernel level JIT with a portable bytecode for its executables.

BitSavers and Archive web sites are full of scanned papers and manuals from these and other systems.

Most languages become popular because they have a reputation of necessity. Which is to say, they are popular due to marketing, not due to quality. (As most things are.)

Isn't C just following the syntax of ALGOL? Or are we referring to the parts specific to C?

C is a simpler Algol, yes. It messed up the dangling-else problem, and lost nested procedures, among other things.

It also lost call-by-name parameters, which should be a considered a good thing.

Sort of related -- I'm sure this made it to HN when it was new -- is https://beebo.org/haycorn/2015-04-20_tabs-and-makefiles.html.

I'm really glad software development has largely moved away from the terse names and symbols that used to be so common. I mean patsubst? What a terrible function name! At the very least I would have added the 'h' in path.

> I mean patsubst? What a terrible function name! At the very least I would have added the 'h' in path.

The "pat" in "patsubst" means pattern, not path, so adding an h would be incorrect. (https://www.gnu.org/software/make/manual/html_node/Text-Func...)

Why would you add an 'h' to 'pattern'?

I think that just supports my argument, really. I've never really used makefiles. The blog post talked about using patsubst to change a path. The name of the function essentially gives no clue at all what it does.

"the blog post talked"

Isn't that the problem with using blog posts as educational sources?

(I'm happy I was learning all my toolkit long before blogs became a thing. It's hard to open and read Physics book, when someone on YT talks funny about its second chapter.)

I smell an 8-character limitation somewhere :)

>and a different dependency-based-programming build tool could replace it...

Make one that's generic and provides significant enough improvements that people care.

> Significant tabs

The .RECIPEPREFIX option has been available for 7 years. Stop whining about non-issues.
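For reference, a minimal sketch (assumes GNU make 3.82 or later; the Makefile is generated with printf just to keep literal tabs out of the example):

```shell
# .RECIPEPREFIX swaps the required hard tab for a character of your choice.
mkdir -p /tmp/prefixdemo && cd /tmp/prefixdemo
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'hello:' \
  '> @echo no tabs here' > Makefile
make hello     # prints "no tabs here"
```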

After going through a few build systems for Javascript, I realized they were all reinventing the wheel in one way or the other, and pulled out venerable Make from the closet. It turned out to be way more expressive and easy to read.

One target to build (prod), another to run with fsevents doing auto-rebuild when a file is saved (instant gratification during dev), then a few targets for cleanup & housekeeping. All told, the file is a quarter the size of any of the JS-based equivalents.

The reason I don't like doing this is portability. Since the steps within the makefile are going to be run through a shell, it is going to behave differently on different systems.

If your makefile fixes up a file using sed and your system has gnu sed, your makefile may fail on a system with BSD sed (e.g., a mac). If you rely on bash-isms, your makefile may not work on a debian system where it will be run with dash instead of bash. And so on.
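To make the sed example concrete (a sketch; the file names are made up): GNU sed accepts a bare `-i` with an optional suffix glued on, while BSD/macOS sed demands a separate, possibly empty, suffix argument (`-i ''`), so the same recipe line can't satisfy both. The portable workaround writes to a temp file and renames:

```shell
# Portable in-place edit: avoid "sed -i" entirely (its suffix-argument
# handling differs between GNU and BSD sed) and rename a temp file instead.
printf 'foo\n' > /tmp/seddemo.txt
sed 's/foo/bar/' /tmp/seddemo.txt > /tmp/seddemo.tmp && mv /tmp/seddemo.tmp /tmp/seddemo.txt
cat /tmp/seddemo.txt   # prints "bar"
```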

I say, you JS guys doth protest a bit too much.

If you look in your package.json, you'll surely see a dozen or so "scripts" lines that run through the same shell that make does and have all the problems you just mentioned.

I'd also like to point out that Linux and almost certainly your production environment (because it's most likely *ix) will be case sensitive. Your macOS or Windows file system? Not so much. Point is, you're already up to your neck in portability issues. My macOS coworkers often forget this detail.

If an external package has scripts in it, those scripts have very likely been run by somebody on a mac and somebody on linux and worked in both cases. That is totally different than writing a line of script that only you have ever run and assuming that because it works on your laptop, it will run everywhere.

> That is totally different than writing a line of script that only you have ever run and assuming that because it works on your laptop, it will run everywhere.

huh?? No it's not. That's the whole discussion we are having now. I even specifically mentioned my macOS coworkers who develop on their Mac and are oblivious to case-sensitivity issues. Because "it worked for me."

> If an external package has scripts in it, those scripts have very likely been run by somebody on a mac and somebody on linux and worked in both cases.

Oh, I'd love for that to be true. But also not what I was referring to. Most large projects have a "scripts" section and those are no different than Makefile commands. If you're paranoid about your own Makefiles then you're going to need to be paranoid about your own package.json.

True but I should specify that my targets do the job in 1 to 3 lines. No magic bash.

Also, I use javascript where it makes sense: I rely on Browserify to resolve the web of `require`d files.

You don't shell out complex crap in a makefile; you create a configure script and pass vars in.

Doesn’t autoconf address these issues?

Wow if you consider Makefiles easy to read...

Simple makefiles couldn't be more simple.

    task: dependency
        list of commands

That's about as easy as it gets.

Real world makefiles are not simple like this. The simplest makefile I can think of is still hundreds of lines long:


Maybe this isn't "real-world" but I've made this cookiecutter template which generates projects with Makefiles less than 100 lines. It works decently well for my personal projects and for playing around. https://github.com/MatanSilver/cookiecutter-cproj

Until a stray space finds its way into your command list indentation...

Every popular text editor handles this already. People manage make, python and other whitespace-specified languages just fine.

They don't though. Vanilla Vim/Vi doesn't for instance.

Regardless of whether the editor can or not, I just don't see how insisting on tab and not space is remotely defensible.

> Vanilla Vim/Vi doesn't for instance.

Vim does handle this. Here is Vim (and I've removed my .vimrc file, so this is not a setting I have set personally), on a Makefile where the third line erroneously uses spaces:


> I just don't see how insisting on tab and not space is remotely defensible.

This is like insisting that people haven't been having religious wars over this sort of topic for years. You can see the Wiki[1] page on it (which starts "This is one of the eternal Holy Wars") for arguments as to why someone might prefer tabs. I personally like them because they allow the reader of the code to choose the visual layout of the indentation.

Now, while I love tabs, I work in Python all day. But good editors, like Vim, allow you to customize the indentation settings sufficiently to handle both cases like Python (spaces for indent) and Makefile (tabs) gracefully. So "Indent" and "Dedent" just do The Right Thing™ and it mostly never matters.

[1]: http://wiki.c2.com/?TabsVersusSpaces

If someone is really paranoid about whitespace issues in a file you're editing in Vim, you can always use `:set list` to display those characters.

or, you can write your recipes as

    target : requisites ; recipe
with a semicolon instead of a tab.

However, tabs are beautiful, and every occasion to use them should be cherished as precious.

I'm glad it works for you. It didn't work for me.


> remotely defensible

Feel free to write your own make superset that supports spaces. It's an irrelevant factor to the usability of make as a whole.

It's something that almost everyone trips over.

Multiple times.

Actually writing a preprocessor for Make sounds like a pretty good idea.

ninja, cmake, premake.

FWIW, the more experienced a programmer is, the more likely they are to prefer spaces over tabs.

My pet theory is that the more experience you have, the more likely you are to have worked with Python, which is a lot easier to work with if you just always s/\t/ / (replace tabs with spaces).

I’ve written about Makefiles for the web before, you can’t say they can’t be readable: http://blog.gnclmorais.com/makefiles-are-for-the-web

Something about the Makefile in your post is a bit weird though: You only declare "test" as a PHONY target, but every other target in the file is just as PHONY. And since you don't even explain that line, it could be a bit confusing.

It's certainly better than the 200+ lines of JavaScript something like Webpack spits out when you start a new project. I had to debug some of our project's build files and realized halfway through that my teammates didn't write this Webpack config monstrosity; Webpack generated it by /default/.

Webpack, by itself, doesn't generate configs, unless you're referring to the Webpack CLI tool.

FWIW, Webpack 4 (which just went final a few days ago) now has "zero-config" defaults out of the box. You may want to give that a shot.

Eh, I tend towards “let’s not touch this if it works”. I’ve already had to upgrade from Webpack 1 to 2, and 2 to 3. If it ain’t broke, don’t fix it.

Sure, I can sympathize. That said, I've seen a bunch of "here's my results after upgrading to Webpack 4" tweets going around, and they all indicate much faster builds and somewhat smaller bundle sizes, with very minimal config changes needed.

Next to a non-toy webpack or gulp configuration, makefiles are pretty clean and simple.

Not to mention that gulp, grunt (do people still use this?) and webpack configs are written in JavaScript, so they're likely to have very hard to debug errors in them on the first 5 revisions.

The errors shouldn't be any harder to debug than the program itself after 5 revisions.

The author forgot some great features of make:

* Parallel execution of build rules comes for free in a lot of implementations. This is really noticeable when you do heavy asset pre-processing.

* Cleanly written build rules are re-usable across projects as long as those projects have the same structure (directory layout).

* Cleanly written build rules provide incremental compilation/assembly for free: You express intermediate steps as targets and those are "cached". I put the "cached" in quotes here, because you essentially define a target file which is regenerated when it's dependencies are updated. Additional benefit: Inspection of intermediate results is easy - they are sitting there as files right in your build's output tree.
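The parallel-execution point is easy to see with a toy timing test (a sketch, assuming GNU make; the one-line "target: ; recipe" form is used only to dodge literal tabs when generating the Makefile):

```shell
# Two independent 1-second targets finish in ~1s wall time under -j2.
mkdir -p /tmp/pardemo && cd /tmp/pardemo && rm -f a b
printf '%s\n' \
  'all: a b' \
  'a: ; sleep 1; touch a' \
  'b: ; sleep 1; touch b' > Makefile
time make -j2   # both sleeps overlap; compare with a plain "make"
```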

Thank you for these points! I think that parallel execution is especially appealing. I edited the article to mention that.

Yes! I recently used Make on some JS projects and my coworkers looked at me like I was insane. Even if you don't know advanced Make-fu it's a really good way to run all of your build steps in the right order without some crazy JSON config format.

The only build systems that I'm aware of that are monadic are redo, SCons and Shake-inspired build systems (including Shake itself, Jenga in OCaml, and several Haskell alternatives).

One realistic example (from the original Shake paper), is building a .tar file from the list of files contained in a file. Using Shake we can write the Action:

    contents <- readFileLines "list.txt"
    need contents
    cmd "tar -cf" [out] contents
There are at least two aspects I'm aware of that increase the power of Make:

- Using `$(shell cat list.txt)` I can splice the contents of list.txt into the Makefile, reading the contents of list.txt before the dependencies are parsed.

- Using `-include file.d` I can include additional rules that are themselves produced by the build system.

It seems every "applicative" build system contains some mechanism for extending its power. I believe some are strictly less powerful than monadic systems, while others may turn out to be an encoding of monadic rules. However, I think that an explicitly monadic definition provides a clearer foundation.
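For comparison, the `$(shell)` route for the same tar example might look like this sketch (file names made up). Note that it reads list.txt at parse time, on every run of make, which is exactly the escape hatch described above:

```shell
# Splice list.txt's contents into the rule at Makefile parse time.
mkdir -p /tmp/tardemo && cd /tmp/tardemo
printf 'a.txt\nb.txt\n' > list.txt
touch a.txt b.txt
printf '%s\n' \
  'out.tar: list.txt $(shell cat list.txt) ; tar -cf $@ $(shell cat list.txt)' > Makefile
make
tar -tf out.tar   # lists a.txt and b.txt
```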


The concepts behind make are quite good, but the interface it provides is not decidedly not. It reminds me of Git in that respect. I'd think replacing the opaque symbols used everywhere with more descriptive words would be helpful. I don't suppose anyone knows if modern Make versions support alternatives to $@, $%, $?, etc. that can be read rather than memorized?

You _could_ do something like this:

    sed 's/MAKE_TARGET/$@/' Makefile.descriptive | make -f -
But for the sake of compatibility I wouldn't.

No need for ugly, error-prone sed trickery.

Use on the Makefile:

    MAKE_TARGET := /some/path
And invoke

    $ make MAKE_TARGET=/other/path

His point was that all of $@, $%, $? have more descriptive names as well, you don't need to use these single character macros.

Also you probably meant ?= instead of := as the command you gave would still use /some/path instead of /other/path if you use :=.

I have used make for years and am very familiar with it. My two major complaints are:

1. The assumptions it makes. Everything in and out is a file (phony targets notwithstanding). It is hard and painful to make outputs depend on, and rebuild from, configuration in the makefile itself. It's not impossible, but it's difficult to implement precisely: often I've seen systems that just rebuild everything after any configuration change.

2. The mix of declarative and imperative styles, while useful for quickly throwing a build together, gets difficult to deal with as things scale up. The make language itself is pretty restricted, too (without using $(eval)).
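One common (partial) workaround for point 1, sketched here with made-up names: serialize the configuration into a stamp file that is rewritten only when its contents change, and have targets depend on the stamp rather than on the Makefile (git's own Makefile uses this idiom for its flag tracking):

```shell
# The stamp's recipe runs every time (via the force target) but the file is
# only rewritten when CFLAGS actually changed, so dependents rebuild
# precisely then. One-line recipes avoid literal tabs in the printf.
mkdir -p /tmp/cfgdemo && cd /tmp/cfgdemo && rm -f out cflags.stamp
printf '%s\n' \
  'out: cflags.stamp ; @echo building with $(CFLAGS) > $@' \
  'cflags.stamp: force ; echo "$(CFLAGS)" | cmp -s - $@ || echo "$(CFLAGS)" > $@' \
  '.PHONY: force' \
  'force: ;' > Makefile
make out CFLAGS=-O0   # builds
make out CFLAGS=-O0   # stamp unchanged, out not rebuilt
make out CFLAGS=-O3   # stamp rewritten, out rebuilt with the new flags
```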

I know that recent versions support guile extensions and even C(/C++?) extensions, but at that point it's not giving you all that much. I have often wished the make functionality was exposed in some "libmake" for me to extend.

For this reason (and others), I've recently refactored a huge build system to use shake[1] instead. Now builds are precise and correct, and properly depend on configuration and build environment.

[1]: https://shakebuild.com/

That is not portable make, rather GNU Make.

Very true, thanks for clarifying. FWIW most of my stuff is portable make...

I use Automake and Autoconf (which generates the Makefile) for GNU ease.js:

https://git.savannah.gnu.org/cgit/easejs.git/tree/Makefile.a... https://git.savannah.gnu.org/cgit/easejs.git/tree/configure....

The nice thing with using Automake is that it gives all the standard build targets with little additional effort (for example, `make dist` for producing the distribution tarball, and `make distcheck` for verifying that it's good).

I use a much simpler one for a project at work:

https://gitlab.com/lovullo/liza/blob/master/Makefile.am https://gitlab.com/lovullo/liza/blob/master/configure.ac

> The target name `all` is special: When you run make with no target specified it will evaluate the `all` target by default.

This doesn't appear to be true, at least on GNU Make 3.81 (MacOS). Rather, the first target listed is the one that gets built on `make` with no arguments.

I'm baffled at the amount of debate in this thread. Make is good for tasks that can be expressed as a directed acyclic graph of steps, where the steps' inputs and outputs are files, and steps can be expressed in a few lines of shell script. It works pretty well for such tasks in my opinion. Yes, it will look contorted for tasks that don't fit this model.

I use Makefiles extensively for Docker, all sorts of individual projects (it's much easier to "make serve" than remember the specific invocation for getting a dev server up when you use multiple programming languages) and, of late, for Azure infrastructure deployments:

- https://github.com/rcarmo/azure-docker-swarm-cluster/blob/ma...

- https://github.com/rcarmo/azure-acme-foundation/blob/master/...

i (and people at work) do the same thing! i find it really easy to work with docker/k8s/helm/compose with Makefiles.

one tip -- remember to use .PHONY on your targets -- https://www.gnu.org/software/make/manual/html_node/Phony-Tar...
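A quick sketch of why that tip matters (made-up target name; one-line recipe form to avoid literal tabs): without .PHONY, a file or directory that happens to share the target's name makes the target look permanently up to date and the recipe never runs.

```shell
mkdir -p /tmp/phonydemo && cd /tmp/phonydemo
printf '%s\n' \
  '.PHONY: build' \
  'build: ; @echo running docker build' > Makefile
mkdir -p build   # a directory named like the target
make build       # still runs, because the target is declared phony
```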

I started my career with C++ development and never used a direct makefile in any of my projects. When I write C/C++, I use CMake. My current job has me programming in Go where people seem to love makefiles, but I consistently find bugs in the implementation (usually has too many phony targets, etc.), Why don't people use makefile generators outside of the C/C++ community?

I don't use CMake for embedded systems because the syntax and options are even more obtuse for what I need to do with it. When I get down to it, every make-based build system I've ever used boils down to using the macro language to generate all the targets, and then building your output board image by dependencies. But it's the details of how to do this that vary widely depending on your application for the board.

CMake does a great job of compiling executable code and linking it using your compiler of choice, but where it falls flat is in giving me a convenient mechanism for fitting that executable into the system image.

I'd need to know more about the details to answer completely. I think CMake does have a complicated syntax, yet I think it is worth the nuisance in most cases. Many tools can use CMake's compilation database for configuration, such as clang-format and clang-tidy.

It sounds like you are looking for something like Yocto/OE, which you can bake pretty much any build script into, cross compile and deploy.

What exactly are you looking for with regards to deployment, and what tool do use, instead of CMake, to give you that ability?

Can Yocto or CMake build QNX systems or bare-metal binaries with TI's compiler? The tool I'm looking for is Make because Make gives me a ton of flexibility to build whatever freaky combination of binaries might go into whatever product I'm working on, and then combine those binaries into a system image.

The point I'm trying to make is that every build system except Make solves a really specific class of build problem (building applications, or building Linux systems using GCC) and then pretentiously claims to support everything that matters. What you're actually getting is a small sliver of what you might need in my world.

This article is another example of how Make can do something really unexpected by providing really simple features and letting you decide what matters to you. I've yet to see another build system that is as generally useful.

Why is it that each time a JS dev discovers something most other fields have been using for decades, it has to be "The Lost Art", "Superpower" (https://medium.com/@wesharehoodies/typescript-javascript-wit...) or something similarly tacky?

There's no Lost Art, no superpowers. It's just the JS scene _finally_ slowly getting up to speed.

Why is it that some commenters can't understand that all of humanity isn't at the exact same level of knowledge on everything?

People learn things every day; acting like it's "slow" or that you are better than them is a very toxic, elitist point of view that is better left unsaid. If you think your way of doing things is better, then you should encourage others to do it that way, not put them down when they just start using it! I'm glad the ALGOL programmers of the '60s and '70s didn't spend their time laughing at and putting down that new-fangled C language when it inevitably made some of the same mistakes.

And besides, very rarely is something in life a complete upgrade. Make is nice, but it has some very real pain-points (as evidenced by the handful of utilities that are makefile-generators, because getting make to do certain things or work on all platforms is so difficult). Gulp is great too, but it also has some very big issues in some areas. There is no universal "right" answer to any of this, and assuming that everyone that's not doing it your way just doesn't know any better isn't just naive, it's also wrong.

> People learn things every day; acting like it's "slow", or like you're better than them, is a very toxic, elitist point of view that is better left unsaid.

You must've misunderstood me. I don't have any problem at all with people discovering things late. I am often a slow learner myself. What irritates me is when people make spectacular "discoveries" with tacky headlines. If the article were written in a bit more casual and technical tone, I'd be happy, much more so than with this "let me re-introduce you this amazing but forgotten lore you don't know about" nonsense...

> And besides, very rarely is something in life a complete upgrade. Make is nice, but it has some very real pain-points

I agree, but that's all the more reason to not make such "discoveries" ...

We use make for standardizing the way docker containers are built, pushed, tested, and run (debug and production modes). I even prefer it to docker-compose at this point, because it is more programmable.

Do you handle dependencies (images, networks, volumes), and if so, how?

I'm of an opinion that Makefiles with Docker are mostly a fancy way to write `case "$1" in build) ... run) ... esac` except that one needs to have make(1) installed to run them (in my experience, Bourne shell is much more commonly available than any make variant)

I've generally had to only do this for more-or-less stand alone webservices, but there's nothing preventing making extra targets for setting up networks. To date, I'm still using docker compose for that.

I started doing makefiles as a reaction to having a bunch of bash scripts cluttering up my directory. More or less just a collection of useful commands. But as I continue to do this, I find it encourages making consistent build routines/versioning. Also, being able to set up dependencies ( like "run relies on build") can also be helpful. Finally, variable substitution is useful for versioning.

I think what makes makefiles great is that you can do as much or as little as you want with them. I put an example of a common one in another reply.

If you're able, I'd be very interested in seeing some examples.

  build: clean
  	docker build -t webservice:$(VERSION).$$(date +"%y-%m-%d") .

  shell:
  	docker run -it -v $$(pwd)/src:/opt/app \
  	-p 8888:8888 \
  	--name webservice \
  	-e SECRET1=$$SECRET1 \
  	-e SECRET2=$$SECRET2 \
  	webservice:$(VERSION).$$(date +"%y-%m-%d") \
  	/bin/bash

  run:
  	docker run -it \
  	-p 8888:8888 \
  	--name webservice \
  	-e SECRET1=$$SECRET1 \
  	-e SECRET2=$$SECRET2 \
  	webservice:$(VERSION).$$(date +"%y-%m-%d")

  tag:
  	docker tag webservice:$(VERSION).$$(date +"%y-%m-%d") webservice:latest

  push: tag
  	docker push webservice:$(VERSION).$$(date +"%y-%m-%d")
  	docker push webservice:latest

  test:
  	-docker kill webservice
  	-docker rm --force webservice
  	docker run -it \
  	--name webservice \
  	webservice:$(VERSION).$$(date +"%y-%m-%d") \
  	python3 tests.py
Couple of notes:

- I do major.minor.yy-mm-dd versioning, so this handles that automatically

- shell is for dropping you into a prompt with your present directory mapped to the working directory. I find this super helpful for debugging and testing containers

- run is for testing your CMD and entrypoint scripts

Tidy and to the point.

Thanks for sharing! I'll be using this as inspiration in the future.

Not the OP, and doing a bit less than it sounds like the OP is doing, but we built out a relatively handy make-based build system for building a set of images in correct dependency order.

Another member of the team subsequently taught make about the reverse dependencies so that you can split jobs across Travis nodes by the top-ish level image they depend on.

My favorite addition was the ability to generate a dependency graph using graphviz straight from the Dockerfiles.

N.B. Project is now moribund, the team was disbanded. May not build at all. Don't know if any of the forks are active.

That's pretty interesting about Travis. Make is great because you can do as much or as little with it as you want, but it generally always improves organization.

Thanks for chiming in. That all sounds very interesting and this thread has got my gears turning.

I regret not considering Make for Docker administration months/years ago. I've taken to using bash scripts or, worse, tagged bash comments to recall commonly used complex commands.

I’ve had a lot of fun recently writing a Makefile for a Go and Lambda app:


It started very verbose, one target for every Go program, until I figured out target patterns.

I’m also enjoying the -j flag to do things in parallel.

Now it’s 3 lines of Make to build 10 Go programs in parallel, in seconds.

Parallel mode also provides enough job control to run the development server and to have watchexec rebuild all the Go programs on code change.

Looks good. I think you need to add .PHONY targets, though.
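For anyone unfamiliar: .PHONY marks targets that don't produce a file of that name. A minimal sketch (the target names here are only examples):

    .PHONY: build push run clean
    # Without this, a stray file named "build" in the repo would make
    # `make build` report "Nothing to be done" and silently skip the recipe.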

One thing I've found useful about gulp and webpack is the fact that it's multiplatform.

What is the best approach to make sure your Makefile will work as expected on every platform, Windows included?

For example: Can we write file paths using forward-slashes, or is it still an issue? I imagine running it via git bash (bash provided by the git installer on windows).

Good point - cross platform issues are a concern. I've noticed in the past that OS X machines tend to run older versions of Make (unless the owner has installed a newer version with Homebrew). It is a good idea to pay attention to the Make features that you use, understand which features were added in recent versions, and be aware of which features are specific to GNU Make. In my projects I recommend that users make sure that they are in fact using GNU Make.

Some systems might not support the `-p` flag in `mkdir -p`, or the `-rf` flags in `rm -rf`. There are scripts distributed on NPM called `mkdirp` and `rimraf` to work around such issues.

You can use forward slashes for paths on Windows in your Makefile. They are actually required if you want to use wildcards.

Make works on Windows. You can get it standalone, or as part of MinGW (and then even other Unix tools will work).

Ah yes, the horror of autoconf and automake...

Btw, Windows has supported forward slashes forever. Just don't mix the two.

CMake! No, I'm kidding, don't really use CMake...

Downvote?! Looks like we have a CMake fan...

A long time ago, in a developer's paradise called the 1970's there was an automated build tool called "make".

It had this and only this syntax:

If a line begins at character 0, that is a list of files whose timestamps should be checked, if any are 'old', then execute the series of command lines beneath, identifying them as starting with a tab character.
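So a complete makefile in that original language looked something like this (file names invented for illustration):

    prog: main.c util.c
    	cc -o prog main.c util.c

If either .c file has a newer timestamp than prog, the cc line runs; otherwise nothing happens.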

That was it, the entire syntax of "make", and it was complete. Then some smart people fucked it up and we have that piece of shit we call "make" now.

>some smart people fucked it up

This does seem to happen in our industry over and over, another example is the way we turn every small, well designed language into Java.

You clearly were not there at the start. Java was a perfectly nice, well designed language.

Then some other people fucked it up...

That's true, but I guess it was only after Java that people started to notice this effect.

I still remember how Java was hailed as a perfect replacement for C++ as the language of choice when writing applications - it truly was.

Today though I was faced with a class name that was 46 characters long. There's a reason why "Fizz Buzz Enterprise Edition" is written in Java.

> Then some smart people ...

not sure which make you are criticizing here..

BSD Make (aka PMake) is imho light years beyond GnuMake in terms of coherence of the extensions to the base language and usability..

unfortunately most people in linux land think make==gnumake, and so it is hardly used outside of the base build system of BSD systems..

I did not use Make at this time (I wasn't alive!), but it seems that you'd have to declare dependencies too. How did you determine if it was "old"?

"If a line begins at character 0, that is a list of files whose timestamps should be checked"

The list of files would be the declared dependencies.

Ah I see.

The first file in the list is the one whose timestamp is compared against all the other files in the list after it. If the timestamp of any file is newer than that first file's, the following lines starting with a tab are executed.

That almost describes how I still use "make" -- and smart people pretty much always fuck up my makefiles sooner or later.

I once (back in the mid-1980's) spent several weeks hunting down a makefile bug. In the end it turned out to be a SPACE character that preceded a TAB character.

It's some weeks of my life I will never get back.

I hate make and its 'entire syntax'.

Back in that 1970's developer's paradise, it was not possible to lose a space next to a tab, because we did not have WYSIWYG editors; we had "vi", which has an easy toggle (:set list) to show whitespace characters. "Back in the day" everything had a command line interface, and non-developers were afraid of computers. Ah, the good old daze...

Having been a (moderate) power vi user since the early '80s, Today I Learned about this "show whitespace" toggle. Thanks!

Nowadays there are build tools that can determine dependencies automatically, by running the compilers inside a sandbox (e.g. Linux strace).

strace? I was under the impression strace is used to monitor system calls; how does this relate to running compilers inside a sandbox?

It's not technically a sandbox, but monitoring syscalls is very closely related.
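The trick, roughly (a sketch; exact flags vary across strace versions):

    # log every file the compiler opens while building
    strace -f -e trace=openat -o trace.log cc -c main.c
    # the headers it touched are the true dependencies
    grep -oE '"[^"]+\.h"' trace.log | sort -u

The build tool runs each compile step under the tracer once, records which files were read, and uses that list as the dependency set for later incremental builds.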

Working somewhere that really embraced Makefiles is actually very nice. If you keep your dependencies simple, the Makefiles to build stuff are pretty simple too, and it works for all the languages you might write software in. You can use it to deploy to servers too.

For me, the problem with all make replacements is that they don't just solve makefile's issues in a knowledge-compatible way, but reinvent their own cryptic syntax and structure from scratch. Minor issues that you know about (and know how to avoid) are never worth fixing this way, since most people don't have time to learn infinite variations of superior tool #1 that has its own flaws. They are fine with #2, which works for them.

When someone says "there are numerous alternatives", I just look out the window and smile.

Almost all issues mentioned in this thread can be solved in a barely-compatible but easily transitionable way. If such a tool exists, its name is welcome.

Does anyone here use makefiles for anything other than compilation and builds? One can use them to add some level of concurrency to shell scripts.

Anything that gets automated, and can also be shortcutted depending on how much of it has been cached already. That really is the beauty of Make to me - it's very easy to express what parts of a job need and need not be done, because the last run produced a state that was good enough to use again.

I once built a World of Warcraft addon downloader & manager (including patching) with Make. My friends thought I was insane. Personally, I wondered how else you'd do it :-)

(actually, looking back at it, I apparently built it on plan9's mk (https://9fans.github.io/plan9port/man/man1/mk.html), but it's close enough)

I use it for almost everything. Take ETL type tasks. It's almost impossible to remember where I downloaded certain data files six months after initially creating a project. I put that into the Makefile. Everything I do on the command line, I put into a Makefile. That way, 6 months later when I need to download new data, do some transformation on that data and load it into the database, 'make' re-traces my steps from 6 months ago. Broadly, my ETL makefiles look like this:

    all: data.loaded data2.loaded

    data.csv:
    	wget 'some url'

    %.sql: %.csv
    	sed/awk/custom-python-script $< > $@

    %.loaded: %.sql
    	psql < $< && touch $@

At one point I had a makefile to build large latex documents. The use case was still essentially compilation, but it got me thinking outside the box with make which was neat.

You can look at data reduction tasks as dependency networks. Generate/retrieve data, reduce data, plot data. If the intermediate artifacts are files, you can use make to run the data pipeline.

If part of the data changes, or new data arrives, you just run make again, and the whole pipeline re-does whatever is new. Very slick.

It has free concurrency (make -j N). I've used this effectively several times for medium-scale (~GBs) science data processing.

I've built a batch job scheduling system out of 'make' and a handful of supporting tools.

It works surprisingly well, but it is definitely not the right tool for the job. After a number of years of adding/removing/changing jobs, maintaining it is turning into a nightmare.

I use it for programming microcontrollers, FPGAs, and doing additional signal testing on those devices. I also use it to control some processes that keep some hobby IoT devices in sync. It's just a really handy way to do parallel jobs like that.

Only really makes sense when you need to generate targets from a series of inputs. The targets don't have to be files, but it helps if they are to preserve the timestamp.

A few decades ago, while at the university, I used it for automating generation from LaTeX documents.

However expressive or standard on other platforms, Make is not the correct tool for building web projects. Putting aside the fundamental technical differences between Make and, say, Webpack (of which there are many), only one argument is needed: Make is not idiomatic on web projects. Most web developers are not going to be productive authoring or maintaining a Make build process. If this guy wrote a Make build process while working for me, I'd ask him to rewrite it using Webpack.

I have used Makefiles for a lot of little things over the years. One of the latest things I have found it useful for is automating deployment of websites via Jenkins jobs.

It is a lot easier and more manageable if your Jenkins job is set up to just run a series of generic Make commands for a project, where any specific steps for a particular project are defined in the Makefile.

This way I do not need to know or care about how any particular project or site is built when configuring the job to deploy it. The Makefile takes care of all that.

There's actually a much better general purpose build tool: https://bazel.build/ I consider it Make 2.0.

The thing I like about Makefiles is the declarative structure you get by defining the targets. It really can serve as a form of documentation about your different stages of development.

CMake is so nice compared to vanilla makefiles. I wonder if it can be used this way with Javascript.

Make may seem crude these days but has manageable complexity, I've never run across a build issue I could not debug. Its model is so simple, I can have it fully in my mind and be confident this is how it works. Moreover, this model simplicity allows me to build reliable systems on top of it.

CMake is a nightmare of implied complexity. I've run into multiple situations (usually but not always involving cross-compilation) where it was simply incomprehensible. No amount of time invested would let me figure out why CMake blew up. There is no way I can build something a little out of the ordinary on top of CMake and be confident that it's solid. It's just too complicated.

This sort of balancing, upfront ease of use vs hidden complexity comes up often and I have been burned enough in the past that experience dictates to pretty much always go for model simplicity and pay the upfront costs. I don't see this often however. A lot of the time people will go for what feels or looks right, superficially, without bothering to look beneath the surface or think about model complexity costs. It's an attitude that has led to many disasters in this space.

Fair enough but it could be that CMake isn't the right tool across the board. It works well for the projects I've used it for. Maybe one day I'll come across a case that makes me hate CMake though who knows ;)

I do embedded work. I agree CMake is horrible when cross-compiling and runs into trouble. The errors make no sense. It took me days to get OSG cross-compiling. It was a horrible experience.

I'd much rather a Makefile where I can override the compiler and flags for my configuration. I can easily figure out what I need to do based on compiler and linker errors for missing headers and libraries.

Maybe I'm wrong, but I always assumed CMake was optimized for the C language (thus the name)? That's at least what I've always seen it used for.

What would you gain if you're using it for other, less supported languages, like Javascript?

It lets you generate makefiles in a less verbose way. If you have a lot of files in a project, or a bunch of targets, Makefiles can get a bit unwieldy by themselves.

I've only ever used it for C++, but I don't see why it (or something like it) couldn't be used for other languages too.

CMake is mostly dedicated to C/C++ projects, but also has built-in support for assembly, Fortran, and Java. CMake has templates for adding a new language as well. The benefits are: cross-platform "shell" commands (you can copy files without using an explicit cp), standard lookup for programs, and easy target generation from a collection of files. Most languages are pretty similar to C/C++ when it comes to the build cycle. Portability (i.e. testing for existence of headers) is less of a concern for something like Javascript, but everything else still applies.
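For instance, the easy target generation looks roughly like this (project layout assumed):

    cmake_minimum_required(VERSION 3.0)
    project(app C)
    # collect the C files in src/ and turn them into one target
    file(GLOB SRCS src/*.c)
    add_executable(app ${SRCS})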

I'll second this. I had to use CMake for the first time recently and was pleasantly surprised by the experience.

This tutorial was making the rounds (on HN and elsewhere) last week, but in case anyone missed it, here's a link: https://github.com/pyk/cmake-tutorial I found it to be a great primer/refresher.

I like this idea a lot. I personally don't like writing makefiles and so having nice build tools like cmake generate them for me is about as close to writing them as I want to get. Letting make stay as a low level building block for high level build tools seems like the right way to go.

As a distro packager, Autotools may be a mess, but debugging a package that won't build is fairly straight-forward. If the package uses CMake though, and there's a weird build issue, debugging that can be a nightmare! Everything is so opaque, with tons of implied complexity! CMake may be nice to write, but it's hell to debug.

Gotta love all caps options with a prefix(?) that only shows up on the command line...

cmake is good but it is quite heavily geared towards C/C++ development. I would love to have some modern makefile with a bit nicer syntax. Fitting into 80 columns is no longer a must.

In case you didn't know, the 80-column limitation has its origins in the 80-column punch card.


Interesting. Why would the size of the card affect the size of the screen? I had assumed that 80 was simply a good number for screen sizes at the time.

It most probably wasn't a limitation so much as a convention carried over from one generation of technology to the next in order to ensure the adoption of the latter.

I actually usually have a general-use Makefile that I work from as a starting point [1]. It takes all C files from a src/ folder, builds their objects into build/, and then links them all in one go. No need for me to specify the files. It also works recursively.

[1]: https://pastebin.com/b1tr9th3
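The general shape of that kind of Makefile, for anyone who doesn't want to click through (a sketch of the pattern, not the pastebin's exact contents):

    # every .c file under src/, recursively
    SRCS := $(shell find src -name '*.c')
    # mirror them as objects under build/
    OBJS := $(patsubst src/%.c,build/%.o,$(SRCS))

    app: $(OBJS)
    	$(CC) $^ -o $@

    build/%.o: src/%.c
    	mkdir -p $(dir $@)
    	$(CC) -c $< -o $@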

I searched and didn't find a single soul using Makefiles to orchestrate a set of build steps. I have been doing that for a long time: `make test`, `make build_docker`, `make push_docker`, `make deploy`. Now suddenly everyone shows up in one place here. Long live HN.

Apache Cordova's site and docs, although currently using Gulp, have a Makefile that mirrors the Gulpfile, and which nobody uses. :(

I learned a lot from this, and whether or not one should be doing this, I simplified:


I opted out of checking for modification dates on node_modules and yarn.lock, because that seems like exactly what Yarn itself is for. I let it manage itself.

I also let Webpack do the heavy lifting.

So in short: I don't really need the Makefile at all and could just add the clean and dev-server commands to a script block in package.json

I still like it though

Make has dated syntax and conventions. In an era when developers use emoji on the command line, its poor support for spaces and other quirks looks worse than it did a decade ago. Despite all of its expressive power and speed, Make is not going to attract many frontend developers.

If the concepts behind Make were repackaged in a more hipster way, the resulting tool might get far more appeal. Shake https://shakebuild.com is a build-system library that can naturally express even the dependencies that would require Makefiles to go meta and generate new Makefiles. It could be a robust backend for any build-system DSL.

I like functional programming languages and Haskell is something that interests me.

But you can't be serious when you say that make is out-of-touch with the average frontend developer, and then link to this:


I suggested making a new build system that has the power of Make or Shake and is still oriented toward frontend development.

You are right that Shake and most general-purpose build tools are out of touch with frontend. Even for people who are comfortable with these tools, they are not necessarily better than Webpack for typical frontend tasks. Shake's place in this hypothetical tool would be at its backend, like LLVM is the backend of many compilers.

I find npm scripts to be the most reasonable build tool. You can write any kind of shell command, or invoke shell scripts or JavaScript scripts, without learning the strange syntax of Make.
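For example, a scripts block covering roughly the same ground as the article's Makefile might look like this (package and file names are illustrative):

    {
      "scripts": {
        "clean": "rimraf lib",
        "build": "babel src --out-dir lib --source-maps",
        "test": "node tests.js"
      }
    }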

make is scary because people use autogenerated Makefiles.

After you've tried to naïvely read a Makefile for a medium-sized C project you'll never want to write a Makefile yourself.

That said, there's no tool that works so simply and so beautifully as make when you have a lot of build steps that generate lots of intermediate files with complex dependency links between them. That situation may happen everywhere, even if you're generating a bunch of PDFs or rendering HTML templates or making images from .dot files or whatever. Use make.

Or is there an alternative tool for these situations also?

I believe $< is only the first dependency and $^ is all of them
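That's right, and $@ is the target itself. A quick illustration (file names invented):

    combined.txt: a.txt b.txt
    	cat $< > first-only.txt    # just a.txt, the first prerequisite
    	cat $^ > $@                # a.txt and b.txt, all prerequisites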

With such an easy-to-remember intuitive syntax how could you forget?

We remember it the same way we remember System.out.println(), undefined == null, and other trivia in other languages.

As a non-programmer, the only experience I have with makefiles is the unpleasant one of running ./configure and then make, only to go down an endless rabbit hole of missing dependencies. I am very grateful that yum, apt-get, etc. have come such a long way: they grab all your dependencies, and you don't have to wait long periods of time for the code to compile.

I am sure Make still has its uses, but I am sure glad package managers have made makefiles irrelevant to me.

This is not really a fair comparison though. Makefiles and package managers are orthogonal and the latter has not really replaced the former. Makefiles are like a recipe, package managers are food delivery. Somebody has still to cook the food.

> This is not really a fair comparison though.

I'd argue it's entirely fair with other examples (e.g. compare and contrast to, say, Rust's cargo build.) You can write a Makefile which fetches and builds your dependencies. Makefiles might be like a recipe - but these days we're building recipes for building entire OS images, including grabbing the dependencies in the first place. A single software project by comparison is trivial.

The problem is Makefiles are a jack of all trades, and master of none. You have to write or copy a distressing number of rules yourself, per project, per subproject, and know the underlying build chain in relative depth to do even some rather basic things. I'd argue makefile authors not automating their dependency fetching is a symptom of this.

As a programmer, I avoid them, because I'll end up stuck maintaining them and lowering the build system's bus factor. With perhaps the exception of a couple of simple rules that forward to a "proper" build system, because I've written enough of them in the past that "make" still feels like the right default action that should generally work. But if at all possible, never for the meat of the build system.

Of course. But I would not compare cargo build or CMake to package managers because again, they are developer tools and unless you are a developer you should not need them.

I definitely think that we can do better than Makefiles, but again they have the benefit to be extremely versatile so they can be bent to many uses. (personally I do not use them, I use CMake and generate ninja files)

And the dependency tree (kudzu?) can be blamed on the "chef" preparing said recipe.

Autotools (the ./configure) is generally a pain to work with. It essentially spits out a massive script and incomprehensible makefile that allows your program to be used by the three people who still run HP-UX or AIX.

Good makefiles are simple, and should Just Work™. The dependencies should (theoretically) be listed in the project documentation, though it often isn't.

Package managers are definitely better for non-developers, though. Build systems generally aren't geared toward regular users, so users will find them confusing.

Make is awesome. I often used it for things like this in the past; however, now when I want to do something with NodeJS I generally use npm's scripts block. And when that gets tedious, I use nps [0]. If I need something big, then I go with Webpack or Parcel.

[0]: https://github.com/kentcdodds/nps

A tiny improvement that could be added - instead of locating Babel by adding your project's `node_modules/.bin` to your path or directly linking to it, you can always write:

`npx babel`

This will use any installed version of babel in your node_modules, or, if not installed, will temporarily install it for the duration of the command.
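So the article's Babel rule could be written with no PATH tweaking at all, e.g.:

    lib/index.js: src/index.js
    	mkdir -p $(dir $@)
    	npx babel $< --out-file $@ --source-maps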

> or, if not installed, will temporarily install it for the duration of the command.

Wait... doesn't that make typosquatting even more of a danger? People get used to typing "npx <command>", they mistype the command once in a while, an attacker uploads packages named after the most likely typos, and done - the attacker wins.

I question the wisdom of that shortcut.

A lot of good arguments about using and not using make. Count me in the not using make camp. IMO it's just simply overkill. JS work nowadays is so modular if we were talking about configuring a monolith service at build time... sure yeah. Meanwhile I just wanna make this div purple.

I presume you're running simple small js files. This post is about larger projects that need to transpile newer js code down to browser-compatible code, combine multiple files into 2 or 3 http requests, and squeeze every possible byte out of the js resources we end up sending to clients.

Yup, I'm a JS dev and I use webpack (3). I get the entire thing. However, I think this is all still so complicated for such a small benefit. I'm currently looking for a tool to find out how much is seen/used vs how much is delivered.

I went through a Makefile phase but ultimately settled on Jakefile[1] for more approachable syntax and type-checkability via TypeScript.

[1] https://github.com/jakejs/jake

I like this a lot. We preach the mantra of reuse all the time, but we practice it so rarely - writing everything from the ground up every few years rather than trying to fit the existing tools to the new(er) processes.

Make works great for JavaScript, but it seems very few people use it that way.


Wish this article were just about make, rather than about using it in tandem with JavaScript.

Noted! I wanted to provide concrete examples, and I did not want the post to become too long. So I focused on one kind of project.

IMO there's a particularly dire need to free the JS world from the tyranny of the ever-shifting Build System of the Day(TM).

If you need to write in C or C++, use CMake. Otherwise, use whatever modern build tool your language provides. It's 2018, and Make should have died in the 70s.

A lot of "modern" build tools lack necessary features.

The problem with make is that people build makefiles such that it is later referred to as a Lost Art.

People do this with all build systems. The difference is that Grunt/Gulp/Broccoli files are a Lost Art in 3 years, and Makefiles have been around for decades.

I'm not a web dev, but this article has uncovered yet another thing in the js ecosystem that just seems crazy to me.

This makefile snippet:

    lib/index.js: src/index.js
        mkdir -p $(dir $@)
        babel $< --out-file $@ --source-maps
The source and the output are both called index.js? Why, God, WHY???

Because `require` and `import` look at your package's `main`, which (if it's a directory) implies `dirname/index.js`. Building to an `index.js` entry file is, then, the most standard name that allows `require 'modulename'` to work.

The capitalized rending-of-garments is silly. This is transpiling; file names should remain the same from input to output.

Why not? They're both entry points (hence 'index'), both JavaScript (hence '.js') but in different directories. Do you have difficulty distinguishing between /home/me/.vimrc and /home/somebodyelse/.vimrc too?

It's because there is always lag between the latest .js spec and what is supported by browsers. So transpilers (like Babel) have to dumb it down from fancyNew.js to somethingIE6MightRun.js.

It is not realistic/optimal to try to write your javascript to be supported by all browsers or runtime clients. It's better to write using the latest spec, and just have it dumbed down for you by transpilers at build time

Some more explanation here: https://www.excella.com/insights/typescript-vs-es6-vs-es2015

Because babel compiles javascript to javascript, so you'd expect the input to be a javascript file and the output to be a javascript file.

Of course that sentence contains its own level of craziness, but the problem does not lie in the build step.

I find the editorialized title a little bit harsh on author's intentions...

Used make for building javascript bundles back in 2013. Worked incredibly well.

Tangential, but I cringe every time I see the word "transpiler". A compiler is a compiler is a compiler.

Previous discussion on this: https://news.ycombinator.com/item?id=15154994

I think it’s useful to distinguish compilation whose target is another language that was originally intended to be human-readable (transpilation) from compilation to a form intended exclusively for machines (compilation to machine code or bytecode).

Compilation is the act of going from one language to another, machine or not. This is what I was taught at least. "Machine code" or "bytecode" holds no special distinction.

In this article's case, there's a reasonable exception to be made when JavaScript is being "compiled" into JavaScript itself. For that, I don't know what the term is (or if there even is one).

> [...] there's a reasonable exception to be made when JavaScript is being "compiled" into JavaScript itself. For that, I don't know what the term is (or if there even is one).

Well, given

trans : (+ acc.) across.


ipse ipsa ipsum : himself, herself, itself.

I propose we call that ipsumpilation ;)

More seriously though if the input is js and the output is js, and it’s not different versions of the language — e.g. es6 -> es5 (because imo that qualifies as different languages) — I’d call it an optimizer or a packer depending on what it does.

> For that, I don't know what the term is (or if there even is one).


Transpiler has been around as a term in CS for a while now. Initially it was used for compilers that compiled from one dialect of assembly to another.

That was retconned. They didn't call it a 'transpiler' at the time.

"Transcompiler" was the older term for assembly -> assembly.

I don't think it's too much of a stretch to shorten that to "transpiler". I think people hate the term because they associate it with JavaScript hipsters who have no sense of CS history and presume they've invented something new. The same as how you cringe when someone refers to the "#" character as "hashtag".

But "transpiler" is a useful term. We're in a world now where source-to-source compilers are much more prevalent than they were ten years ago, and there are real workflow differences between working with a compiler that targets a low-level language versus a high-level one. Things like how you debug the output are very different.

> there are real workflow differences between working with a compiler that targets a low-level language versus a high-level one. Things like how you debug the output are very different.

Like what? When I've had to debug GCC, I just dumped out the GIMPLE et al.; what do you do differently with a 'transpiler'?

Sorry, "debug the output" was a really confusing way to say what I meant. What I meant was users will want to debug their code, and the way they do that is often affected by what their code is compiled to.

With a transpiler, it's fairly common to actually debug using the generated output. That pushes many transpilers to place a premium on readable output that structurally resembles the original source code, sometimes at the expense of performance or code size.

Other times, users will rely on things like source maps in browsers. That gives more flexibility to the transpiler author but can make users' debugging experience weirder, because things like single-stepping can get fuzzy when a "step" in their source program may correspond to something very different in the transpiled output.

In compilers to lower-level languages (machine code, bytecode, etc.) there is a strong implication that most users can't debug the output, so the compilation pipeline and tools are obligated to have pretty sophisticated debugging support.

In other words, different tools, priorities, constraints, etc. Having a separate word to help tease these apart seems useful to me.

All of that I view more as examples of how immature the tooling is on the frontend, rather than an inherent distinction between source->source and source->machine compilation.

I remember crappy C toolchains that only gave you a .map file for debugging. It was just a table of where the linker put the symbols from the intermediate objects. Hell, there were a lot of really crappy C compilers then that looked way more like babel than gcc. They basically just parsed into an AST and walked it to spit out asm, hardly doing any optimizations.

Admittedly, translator and transcompiler were probably the more popular terms, but the term transpiler can definitely be found in the literature of the '80s. I know last time this came up someone posted a link to an article from the '70s using the term in that context.

> I think it’s useful to distinguish

How and why? And what do you make of compilers which can do either, based simply on the backend you select?

Personally, I think of transpilation as a subset of compilation. So a transpiler is also a compiler, and a compiler that has a readable language backend can do transpilation, which is a type of compilation.

u/masklinn still asks a good question though: there are tools that act on the source code for both "compilation" and "transpilation" (eg Kotlin), depending on the target platform. They do not distinguish; why should we?

For the same reasons you might ever want a more specific term?

What about assemblers or disassemblers? Given a suitably broad definition of compiler that includes transpilers, are they not also included? Wouldn't the same arguments apply?

> For the same reasons you might ever want a more specific term?

I asked why you might want this here and there's been no answer yet. Having a term for something you don't need or want to know isn't actually useful.

> What about assemblers or disassemblers? Given a suitably broad definition of compiler that includes transpilers, are they not also included? Wouldn't the same arguments apply?

Because these are actually useful qualifiers, in the same way that "a C compiler" is a useful qualifier.

Do you also disapprove of the word "assembler"? (Honest question.)

Completely agree. Even the Babel project has the good taste to simply call themselves a compiler.


> Babel is a JavaScript compiler.

A square is a square is a square. But it's also a rectangle.


Compiler: collects sources from various places, assembles the result into machine code.

Transpiler: collects sources from various places, converts the source to another language.

Personally I cringe at the newspeak you seem to imply we should all be applying to our lives. These words mean things - quite different things, it turns out. There's a difference between human-readable source code language and machine-executable binary code. These tools function in different ways entirely; your optimization to the language is not only unwarranted, but leads to a desultory effect: programmers get stupider when they don't know what their tools are actually doing.

A compiler converts code in one language to another language. That’s the literal definition. In the case of machine code, the “collection” step would be performed by the linker.

gcc compiling some C and Babel compiling some JavaScript are both performing the same role. So much so the similarities in how they carry it out are striking.

I'm OK with using the word transpiler, but don't try to modify existing words. A compiler translates from language to language. If you want to say that the compilers whose target is a high level language should be called transpilers, that's ok, but that makes them a subset of compilers, not mutually exclusive.

> newspeak you seem to imply we should all be applying to our lives.


^ no match.

transpiler is the newspeak.

The original C++ is probably the most famous 'transpiler'



oh hey, no one calls it a 'transpiler'.

I'm not so angry about the word transpiler; perhaps it might improve things... but to imply that using compiler as a common term for all things here is a regression/neologism is patently wrong - transpiler is the newcomer.

But we called it lots of other things.

foldoc.org is the product of a single individual's construction of definitions of computing terms and is by no means a complete and authoritative source of technical terms. It says so right there on the site...

However, this is a fun game, so let's play:





please feel free to attribute or posit another source w/r/t etymology within computing, I'm all ears.

I doubt you'll find 'transpiler' at the origins of computing history; see also C++/cfront post

4 refs in my post, check it. You won't find "full stack development" in the roots of computing history either; language is malleable and useful. Get over it.

    lib/%: src/%
	mkdir -p $(dir $@)
	babel $< --out-file $@ --source-maps
Just look at all those magic things. The percent signs! $<! $@!

Well, I know they are not magic ;). But why would I want them when I can actually use normal names like "deps"/"entries" and "target"?
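(For the record, here's a self-contained sketch of what that pattern rule actually does, with cp standing in for babel so it runs without any JS toolchain; the file names are made up for illustration:)

```shell
# Build a toy source tree.
mkdir -p demo/src/util
echo 'console.log(1)' > demo/src/app.js
echo 'console.log(2)' > demo/src/util/helper.js

# The pattern rule: % matches the same stem on both sides, so
# lib/util/helper.js is built from src/util/helper.js. In the recipe,
# $@ expands to the target, $< to the first prerequisite, and
# $(dir $@) to the directory part of the target.
printf 'lib/%%: src/%%\n\tmkdir -p $(dir $@)\n\tcp $< $@\n' > demo/Makefile

# Ask make for two targets; it applies the one rule to both.
(cd demo && make lib/app.js lib/util/helper.js)
```

After this, demo/lib mirrors demo/src, which is all the "magic" amounts to.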

It gets substantially worse as we go down the rabbit hole. Where webpack can easily walk the entire dependency tree by itself, we have to invoke

   src_files := $(shell find src/ -name '*.js')
Where we can use the same webpack to seamlessly output the resulting file into an output directory, we need to do the (very unintuitive) pattern substitution:

   transpiled_files := $(patsubst src/%,lib/%,$(src_files))
or even

   flow_files := $(patsubst %.js,%.js.flow,$(transpiled_files))
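To be fair, the substitutions are mechanical once spelled out; a plain-shell sketch of what those two patsubst calls compute (file names invented for illustration):

```shell
# Stand-in for $(src_files); in the Makefile this comes from `find`.
src_files="src/app.js src/util/helper.js"

# $(patsubst src/%,lib/%,$(src_files)): swap the src/ prefix for lib/.
transpiled_files=""
for f in $src_files; do
  transpiled_files="$transpiled_files lib/${f#src/}"
done

# $(patsubst %.js,%.js.flow,$(transpiled_files)): tack .flow onto each name.
flow_files=""
for f in $transpiled_files; do
  flow_files="$flow_files ${f%.js}.js.flow"
done

echo $transpiled_files   # lib/app.js lib/util/helper.js
echo $flow_files         # lib/app.js.flow lib/util/helper.js.flow
```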
And when we want to watch for changes? Well, we need an external program anyway.

   $ yarn global add watch
   $ watch make src/
The art of Makefiles is often lost for good reason: Makefiles don't really cut it anymore.

"Doesn't cut it anymore" isn't the same as "has a different way of doing it". Make has many other (necessary) features that Webpack doesn't.
