The Makefile I use with JavaScript projects (olioapps.com)
544 points by theothershoe 5 months ago | 512 comments



I used make heavily in the 80s and 90s, but haven't used it much since then. Recently I started a project that had source files getting processed into PDF files, for use by humans. Since this is the 21st century, those files have spaces in their names. At a certain point, I realized that I should be managing this processing somehow, so I thought of using a simple Makefile. A little searching reveals that the consensus on using make with files that have spaces in their names is simply "don't even try." In the 21st century, this is not an acceptable answer.


Demanding support for spaces in filenames significantly complicates code, as simple space delimiting no longer works and other delimiting schemes are much more error prone -- forgetting to balance quotes, anyone? While you are allowing spaces, you are probably allowing all possible code points, or maybe even a null byte? Thinking about it gives me headaches.

I hate hearing people use "21st century" or "modern" as reasons for inflating complexity. Without reining in complexity (whether it is hidden or not), the future is doomed, whatever you are building. While I am not saying we should avoid complexity at all costs, I am insisting that all complexity should be balanced with merits.

The merit of filenames with spaces is that they read better in a GUI explorer. Whether that merit balances out all the complexity it brings depends on the individual. For me, that merit ranks very low and I avoid spaces in my filenames at every opportunity. For some, they need those filenames to be readable. And there are solutions. One solution, from those who don't code, seems to be to demand that "developers" (paid or not) make every tool that deals with files handle this additional complexity, regardless of their context and what they think. Another solution would be to add an additional pre-step of copying/renaming/linking/aliasing. With the latter solution, the complexity is confined.

I guess for some, all that matters is "I do work" or "they do work" rather than the big picture. That is fine. However, given that you are working with Makefiles, you are a developer at some level; you are supposed to do some work.


> And there are solutions. One solution, from those who don't code, seems to be to demand that "developers" (paid or not) make every tool that deals with files handle this additional complexity, regardless of their context and what they think.

Users expect computers to work in non-surprising ways.

It isn't natural to use dashes or underscores in file names. Training users to be afraid of spaces is just teaching them one more way that computers are scary and unpredictable.

Meanwhile over in Windows land, all tools have been expected to deal with spaces in them for what is approaching 20 years.


> Meanwhile over in Windows land, all tools have been expected to deal with spaces in them for what is approaching 20 years.

That is an excellent example that deserves a second look from a different aspect.

... it has trained a crop of computer users who are afraid of command lines and hold the attitude that anything beneath the GUI is owned by, and the problem of, someone else. They are scared of computers more than ever, and having the computer heavily disguised as an appliance is mandatory.


> it has trained a crop of computer users who are afraid of command lines and hold the attitude that anything beneath the GUI is owned by, and the problem of, someone else.

There is nothing beneath the Windows UI. The GUI is the primary subsystem of Windows; the command line is a separate subsystem. Windows has traditionally been built up around the Win32 API, which is GUI first.

It is of course possible to do things from the command line, and some advanced stuff may need the command line, but users should never need to drop out of the UI unless something has gone horribly wrong.

The Windows UI is incredibly powerful, with advanced diagnostic and logging capabilities, incredible performance monitoring tools, and system hooks everywhere that allow the entire UI to be modified and changed in almost any way imaginable.

The way Services work on Windows is not through some (hotly debated) config files. It is through a UI. Management of hardware takes place through a UI. Pretty much everything short of running Ping has a UI around it.

I love the command line; if I am developing, it is how I primarily work. And while doing so, younger devs look at me like I am crazy, because they have customized the living daylights out of their UI tools (VS Code, Atom) to do amazing things that exceed what a command line can do, and of course those editors have a command line built in for when that is the best paradigm!

> and having it heavily disguised as an appliance is mandatory.

Something working so well that it becomes an appliance isn't a bad thing. It means the engineers who made it have put in so many fail-safes, and made the code of high enough quality, that it doesn't fall apart all the time.

Heaven forbid I can upgrade my installed software and not have some random package break my entire system.


> There is nothing beneath the Windows UI.

To clarify: beneath the GUI is the actual code that implements that interface.

> Something working so well that it becomes an appliance isn't a bad thing.

Not at all. I don't attempt to call or think of my phone as a computer. Windows users, on the other hand, still call their PCs computers. I guess that is ok if computers are appliances. It is just that there is still a big camp of people who use a computer as a computer. That causes some confusion in communication between the two.


> Windows users, on the other hand, still call their PCs computers. I guess that is ok if computers are appliances.

It seems that 90% of computer use has moved into the web browser. Heck, outside of writing code, almost everything I do is in a browser, and my code editor of choice happens to be built as a fancy-skinned web browser...

> To clarify: beneath the GUI is the actual code that implements that interface.

I'd say that everything moving onto the web has once again made the code underneath it all accessible to the end user, if they so choose.

(Ignoring server side)


> It seems that 90% of computer use has moved into the web browser.

This is an extremely (web) developer-centric viewpoint IMHO.

Try telling 3D modellers/sculptors, games programmers, or audio engineers that 90% of their computer use has moved into a browser. They will look at you with a blank face, since they all require traditional "fat" desktop apps to get their work done at a professional level.

And those are just the examples I can find off the top of my head, I'm sure I could think of more.


Do all of those computer users constitute more than 10% of computer users? I don't think there's even 5% of computer users there.

Talking about 3D modellers, games programmers and producers like they're the majority is a technologist-centric view.

Most computer use is by people who think the web is the internet and use 'inbox' as a verb.


> Do all of those computer users constitute more than 10% of computer users? I don't think there's even 5% of computer users there.

So, as stated, that was the list off the top of my head. I just pulled from my personal list of hobbies: things that I use a computer for other than programming or automation.

Within 50 feet of me at work there are a whole bunch of electrical engineers who spend 90% of their time in some fat app for designing, I dunno, telco electrical stuff.

In the fishbowl next to that, are 50 network operations staff who spend 90% of their day in "fat" network monitoring applications.

I'm just pointing out if you look far enough there are plenty of people using apps outside a web browser for their daily work and hobbies.

In my nearly 25-year career in IT I have never heard people use 'inbox' as a verb (as in 'inbox me'). Sure, some people must say it sometimes, but I think this is overstated, and another example of programmer cliche or hyperbole.


> Most computer use is by people who think the web is the internet and use 'inbox' as a verb.

I think most of those people don't use general-purpose computers anymore.


> Try telling 3D modellers/sculptors, games programmers, or audio engineers that 90% of their computer use has moved into a browser. They will look at you with a blank face, since they all require traditional "fat" desktop apps to get their work done at a professional level.

I'm talking about overall computer use. For certain professional fields, yes, apps still matter. But I'd take a guess that the majority of screen time with computers nowadays involves a web browser.

Heck as a programmer (not web), I am probably 50% in browser looking things up.

At most offices, computers are used for web browsing, and Microsoft Office.


> Heck as a programmer (not web), I am probably 50% in browser looking things up.

I should have put web in square brackets eg. [web] to indicate "optional". You seem to fall into the category I was describing.

Use an IDE (other than Atom), eg. Visual Studio, Eclipse, IntelliJ? All fat apps.

Use VMware workstation or Virtualbox? Fat apps.

> At most offices, computers are used for web browsing, and Microsoft Office.

So is that 90% web browsing and 10% MS Office? BTW, Office 365 is still a fat application (or a suite of them), last time I looked.


Those PC-as-appliance people are better served in the mobile space and it would be best if they left us and our computers alone


>There is nothing beneath the Windows UI.

Yes there is... Microsoft! For instance, if you have a dir named "user" it will show up in your GUI as "User". In fact User, USER, and user are all the same for you, because MS transliterates on the fly. Did you know that you cannot access some directories because MS prevents it, even with the 'dir' command from the DOS prompt? Only a non-MS 'ls' command can do that.


> There is nothing beneath the Windows UI.

> Windows has traditionally been built up around the Win32 API, which is GUI first.

What did I just read?


Which is not a problem. The command line was only one, historical UI. Not the be-all and end-all of UIs, and there's no reason it should be of any real interest to modern desktop users (non-devs).

And I cut my teeth as a developer on DOS, Sun OS (pre-Solaris), and HP-UX, and early Linux back in the day.


Almost every CLI program I've ever used in Windows has no problem with spaces in filenames, so I don't exactly see why he's fixated on the GUI... But I had forgotten: computers aren't useful as tools to accomplish work, but as mechanisms to assuage intellectual inferiority complexes. He should advocate for punch cards again, since that would certainly stop morons from using computers.


> Almost every CLI program I've ever used in Windows has no problem with spaces in filenames, so I don't exactly see ...

Just to clarify on what we think as problem could differ:

    C:\Users\hzhou>ls *.txt
    new  2.txt

    C:\Users\hzhou>ls new  2.txt
    ls: new: No such file or directory
    ls: 2.txt: No such file or directory


dir works fine for that.

:)

I actually didn't know that dir supported multiple globs for filenames! I've never had a need for that.

Super cool.


Um... no it doesn't. It takes each space-delimited name as a new name. You will need to add "" and quote the names -- but the Windows shell only has one level of quoting (") which means you can't easily type the command you need. The Unix shell is a bit better. Unix only appears worse because people do attempt scripting.

     Directory of C:\Users\fred

    12/14/2017  04:44 PM             1,556 new 2.txt
                   1 File(s)          1,556 bytes
                   0 Dir(s)  75,989,876,736 bytes free

    C:\Users\fred>dir new 2.txt
     Volume in drive C has no label.
     Volume Serial Number is BA05-C445

     Directory of C:\Users\fred

     Directory of C:\Users\fred

    File Not Found

    C:\Users\fred>


> Um... no it doesn't. It takes each space-delimited name as a new name. You will need to add "" and quote the names -- but the Windows shell only has one level of quoting (") which means you can't easily type the command you need. The Unix shell is a bit better. Unix only appears worse because people do attempt scripting.

Ah I see, you have a file "new 2.txt", I was a bit confused.

Not sure what you mean by only 1 level of quoting being a problem, sorry.


> Ah I see, you have a file "new 2.txt", I was a bit confused.

This is highly ironic, given this thread.

Some people seem to advocate for programs to be better than humans at globbing filenames.


That's a great point. Computers sure have the potential to deal with spaces just fine. But if textual interaction is a requirement, we can have only one of arbitrary filenames and clutter-free syntax.


Windows and Linux shells have the same ideas about spaces in file names, which is that if they appear they need to be quoted, or the space character needs to be escaped.

Outside of Make which has long and boring historical reasons for not supporting spaces well, just about every program is fine with spaces.


In the UNIX shell, the IFS variable can be set to other characters, e.g. newline and tab, or to an empty string.
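For example (bash syntax, hypothetical file names), restricting word splitting to newline and tab lets values containing spaces survive an unquoted expansion:

    IFS=$'\n\t'                              # split on newline and tab only
    list=$(printf '%s\n' 'new 2.txt' 'other.txt')
    for f in $list; do                       # unquoted on purpose
        echo "got: $f"                       # two iterations, not three
    done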


To be fair to modern desktop users, a command line that doesn't redefine the word "command" to fit the constraints of the system would look something like this:

$ take this list of transactions and for each group of duplicate names show me the name and total dollar amount spent, sorted from highest to lowest

$ I think I have a SQL command for that. Are you feeling lucky? [y/n] y

Foo $25 Bar $20 Blah $15

$ Was this what you were looking for? [y/n]


> Which is not a problem. The command line was only one, historical UI. Not the be-all and end-all of UIs,

A similar trend is happening in electronic communication: from letters to email, to texts, to emojis. Text is just one historical way of communicating, and there's no reason it should be of any real interest to the modern communicator as long as we have emojis... wait, we've been there before.


Except that text can express things that emojis cannot. The same is not true of GUI vs. command line.

The GUI, which can include an arbitrary number of text boxes for command-line-style entry, is a superset of what the command line can do.


This kind of interface is horrific for the use-case:

https://www.powergrep.com/screens/replace.png

There is such a thing as a cluttered graphical interface, and some complexity cannot just be abstracted away.

> The GUI, which can include an arbitrary number of text boxes for command-line-style entry, is a superset of what the command line can do.

That would be like saying that, because I am able to write "First open this app by clicking on the upper-left icon, then click File, then New File", text is a superset of the GUI, since it can describe visual cues. An abstraction can always replace another; that does not mean it is a superset of it.

Take the source code for your GUI application: it is pure text. In the early days, there was shell code that would describe a terminal GUI of sorts to configure the Linux kernel. That shell code is still commands read token by token with matching behavior. That an abstraction can be traded for another (which could be argued to be a founding principle of computer science) does not mean that one is a superset of the other.


> Users expect computers to work in non-surprising ways.

"Users" is a very broad category with a bunch of partitioning.

The particular subgroup in question are authors of build automation systems for medium to large scale software projects. They (we) have a rather different perspective on "surprising".


> authors of build automation systems for medium to large scale software projects

For desktop software, a build automation system needs to handle files that are included in the installer, installed on users' machines, and in some cases discoverable by end users of the software.

I would be very surprised if my build system wouldn’t support spaces in file names.


Windows land also had https://en.m.wikipedia.org/wiki/8.3_filename which was never confusing.


"users" of Makefiles are actually developers, not secretaries, pensioners or architects. They should at least understand tradeoffs and choose accordingly.

Developers are used to not using spaces: oneStupidClass or AnotherStillStupidClass or others_do_it_like_this or bIhatehungariannotation or daxpy .

As you can see it is not that bad and it has been a widely used practice for decades.

Since you are comparing specifically to Windows land, perhaps it is their focus on using spaces in file names that made them always be worse than Unix-like systems in terms of stability, security or performance.


Users aren't programmers and programmers write things no user should ever see. Only the machines do.


Human beings who name things are going to use spaces. That spaces were used as delimiters for computers is somewhere between unfortunate and a colossal mistake.

But to use that as evidence of why spaces should not be supported in filenames is putting the cart before the horse. The goal of software is not to perpetuate whatever mistakes have been made in the past. It's to solve problems for human beings.

And human beings have been using spaces to delimit words since long before computers existed.


On the command line, using spaces to delimit words is also quite natural; that's why they get used as a delimiter:

    mv old-file new-file
Spaces separate the verb, the direct object, and the indirect object. Using commas or colons instead would be painfully artificial.

So the question is whether you will favor naturalness on the command line or in the GUI. It's no surprise that a Unix build tool favors the command line.


True, but on the command line I don't have to worry about the spaces. I'm just going to tab-complete to simultaneously ensure I'm in the context I expect to be in, ensure I'm not making any typos, and save some time to boot.


What if your file system didn't distinguish between underscores and spaces, your GUI displayed them as spaces, and your command line displayed underscores?

Then this problem goes away entirely and you instead have the problem of not being allowed to have "this_file" and "this file" in the same directory. A problem which probably doesn't matter.


That hides the problem from the user, and will consequently lead to command line users typing spaces where they should be typing underscores.


Not all languages have (or had) spaces.


I apologize if I implied that they did, because that has nothing to do with my intended message.


To clarify our views:

The words and sentences are inside the file. The filenames are identifiers to the files. Identifiers' purpose is mainly to identify, rather than to communicate.

On the other hand, we could imagine a user interface in which users don't see (actual) filenames at all.


Identification is a form of communication.

The identifier assigned to me at birth contains two spaces. Most people use one of the shorthand forms, but still.


I'm intrigued. Your first name has two spaces in it?

That is, I think it is safe to say that we expect we can include spaces in the names of things. The movie is named "Star Wars", after all. That seems natural.

For people, though, we have grown used to someone having multiple parts of their name separated by a space. If given out of order, a comma.

Even in the "naming things" category. We are used to having to put quotes around names that have spaces. Or otherwise using a convention like Capitalized Identifiers to show something goes together. Probably isn't a clearcut rule.


> I'm intrigued. Your first name has two spaces in it?

No. But my name does.

> For people, though, we have grown used to someone having multiple parts of their name separated by a space. If given out of order, a comma.

https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-...


Ok. Something about the way I read the first post, I thought you meant your given name. No reason it couldn't have spaces. My wife knew of someone named 1/2. Insisted it be a single character, too. It was amusing how hard that was for most payroll systems.

And I wasn't saying this isn't allowed. What we are used to is not the total of what is allowed. The fringes are full of mistakes, though. At some point, you make a statistical choice.


What does the merit of spaces in file names matter? You still will need to deal with files with spaces in the real world. If your tooling doesn't support it, it's a non-starter for many.


Your tools support filenames with tabs? With any unicode char?

A real world possibility.


Every project I develop is unit tested with strings containing invalid unicode text, containing null bytes, and other random stuff.

If a user can’t paste any byte sequence, and expect it to work, then the tool is broken. I handle these cases.


You can do it. The question is whether you should if you can avoid it.

When writing a shell script or makefile to deal with repetitive tasks, it is easier to assume some things.


I'd prefer it if the writer of the shell or make didn't make too many assumptions about my assumptions.

Even bash lets you escape unusual filenames for when you need them. Make will always explode your strings, and doesn't even attempt to let you escape them. I don't think it's unreasonable to expect a time-tested Unix tool dealing with files to actually handle all possible files.
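For instance, bash's printf %q builtin will render any awkward name in a form you can paste straight back into the shell:

    $ f=$'with\ttab and spaces'
    $ printf '%q\n' "$f"
    $'with\ttab and spaces'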


So in order to deal with the real world possibility of spaces in filenames you choose to ditch a real world, useful tool?

Perfectly reasonable if spaces in filenames are a hard req.

In lots of scenarios they are not, and I prefer to assume that filenames have no spaces, and stick to make.


works in bash:

> touch "with<Ctrl+v><Tab>tab"


Exactly as easy as without tabs. Now do that the whole day, sometimes with tabs, sometimes without.


The thing about the real world is that it is not context-free. There are real-world situations -- such as selling a word processor to the general public, (especially) including the clueless -- where you need to deal with files with spaces, period. But there are real-world situations -- such as Makefiles -- that require users to learn the tool, expect them to handle certain tricky situations themselves, and exclude the incompetent. And then there are real-world situations where the application only concerns one company, or one office, or oneself.

The thing about the real world is, it is complicated.


I'm bemused by this repeated insistence that supporting spaces in file names is somehow difficult.


The word was complicated, not difficult. It is not difficult to handle spaces, it is more complicated.


That's a worthwhile clarification - thank you. While that's certainly true in the strict sense, I remain bemused.


I'm bemused by this repeated insistence that supporting colons in file names is somehow difficult.


Who on earth insists that? Only Windows programmers, I expect! But it's easy: make sure they're expressible in your syntax, then pass them through as part of the file name. Just like spaces. If there's a problem, you'll get an error back from the OS.


Yes, of course you're right. My apologies, it was a stupid comparison.

Though I still feel it's a bit of "choose your poison". In any language, certain symbols will have special meaning. You pretty much have to pick which ones, and come up with rules about how to let their natural meaning bleed through. With Unix shells, one such character is the space, and there are a number of rules about how to let its natural meaning bleed through - but they're all somewhat awkward, and they all need special care.
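For reference, the shell's rules for letting a space bleed through, all of which need that special care:

    cat 'file with spaces'      # single quotes: everything literal
    cat "file with spaces"      # double quotes: literal except $ ` \
    cat file\ with\ spaces      # backslash: escape one character at a time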


And nulls? And quotes? And dollars? And any unicode char?


I expect you mean NUL (ASCII 0) - but that's typically not a good choice, because POSIX decrees it invalid, along with '/'. But anything else is fair game, sure. Handling every Unicode char might be a pain, but that's the filing system's job, not yours!

Most programming languages manage to get this right. You have a quoting syntax with an escape character that lets you express the quotes, the escape character, and any arbitrary character (as a byte sequence, as a Unicode codepoint in some encoding, etc.) as well. Now you can do everything. Why not do this?

I'm not going to say this won't be a pain if you decide to write out every file name possible, because it will be, inevitably. But you can supply alternative syntaxes by way of more convenient (if limiting) shorthand - Python has its r"""...""" notation, for example, allowing you to express the majority of interesting strings without needing to escape anything.

You might argue that I've just punted the problem on to the text editor and the filing system. You'd be quite right.


Depends very much on your programming environment: in Python I don't care, but when doing bash scripting or makefiles (very often) I very much care.

I have recently decided to stop using $ in passwords (defined by me) for that same reason: of course it is a valid char, but it is such a big pain to support in the usual contexts that it is simply not worth it.


And regarding slashes: are you bemused that unix does not support them in filenames?

(Thanks for correcting the nul reference)


No, not really - I have no particular opinion about what POSIX chooses to support or not.

But I'm going to go back to my original point. What I do have an opinion about is how reasonable it is for tools not to support a character that is valid in file names, when that character is straightforward to support with everyday, well known syntax.

And when that character is ' ', the standard English word separator, as straightforwardly supported by approximately every kind of quoting or escaping syntax ever, my opinion about a lack of support is: it's crap.


Please write a makefile that uses an environment variable as a mysql password to connect to a mysql server and perform an arbitrary administrative task.

Now do the same, assuming that the password can have a dollar in it.

You can do it. It is not as easy, readable or maintainable. Avoid it if you can.
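To make the pain concrete, here is a minimal sketch (hypothetical task; assume MYSQL_PWD is exported with a value containing a dollar, like pa$word):

    broken:
        mysql -u root -p"$(MYSQL_PWD)" -e "SELECT 1"
        # make re-expands the imported value, eating the dollar sign
        # (and the variable name after it) before the shell ever runs

    works:
        mysql -u root -p"$$MYSQL_PWD" -e "SELECT 1"
        # $$ becomes a literal $ for the shell, which substitutes the
        # environment variable once and does not re-expand its value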


Why do it from a makefile instead of a script called by the makefile? This plays well to the strengths of both.


Just because of the dollar? Exactly my point.


No, not just because of the dollar. Because it's easier to test in isolation and allows you to use things like here docs which are useful for dealing with sql.


?

Somehow we started with a single dollar in a string, and now I am dealing with sql, here docs, and TDD?

There is a place for everything in life: for small helper makefiles, which I write simply to support me in my workflow, and even to remember some interesting commands that I need to run for a certain project, I can assure you that making some simple assumptions helps me stay halfway sane.


This represents a problem with make, not any other part of the system.


Not a problem if you use a my.cnf file...


And I need that because of ... a dollar!

Extra complication to avoid if possible.


You’re trying to “sell” make. It is lacking a feature many of its competitors have. That’s a tough sell :)


Your solution is a "boil the oceans" one, typically proposed by engineers. You can re-program computers. You can't re-program millions of people. Every natural language uses spaces to separate things.

You can accept that or you can keep tilting at windmills.

In every branch of science, reality wins. If your model can’t accomodate reality it’s either completely wrong or it needs adjustments, at least.


> Every natural language uses spaces to separate things.

Exactly. To separate things. Which, incidentally, happens to also be precisely what make, and the traditional UNIX shells do :-)

The problem isn't that space in itself is a particularly difficult character. The problem is that its meaning is overloaded and ambiguous. No matter what you do, computers will have difficulty with ambiguity. You'll always have the problem that the separator is special, but hey, I'd be all for using 0x1C instead ;)


How exactly is this different to, say, maths, where there is an assumed precedence and when you need to either make that clear or change the order you use parentheses to encapsulate the inner calculation?

What it sounds like is that the Unix shell syntax was established how it was, everyone built on it with all of its syntactical conveniences, and suddenly there's 100% buy-in to the idea that a computer just can't handle a filename with spaces in a shell.

    ./cmd do-something-with --force file with spaces


    ./cmd do-something-with --force 'file with spaces'
That's one of the main problems solved. If you're expecting to run an executable with spaces in it, like this:

    ./do something with cmd --force 'file with spaces'
Then it's another problem but one that can be solved by convention. A GUI can happily execute `./do\ something\ else` but if you're in the shell you've got completions, aliases, functions, symbolic links...

And if that's not ideal, then `./'do something with' cmd …` should be good enough right?


You can prevent your tools from having to deal with the fact, by preprocessing.


> Every natural language uses spaces to separate things.

Nope. You don't space out each word when speaking. If you were speaking about writing systems, not all writing systems use spaces as word delimiters. See: https://en.wikipedia.org/wiki/Space_(punctuation)


Not handling spaces is a symptom of a much deeper problem common to a lot of 'unix' utilities: not actually structuring data, apart from in ad-hoc string encodings. The fact that data can be so easily conflated with program structure is the cause of so many obscure bugs, and so much overhead in using the utilities that suffer from it, that it's a wonder anyone is defending this approach going forward.


>Demanding support for spaces in filenames significantly complicates code, as simple space delimiting no longer works

Wow.

So human beings should stop using allowed file names because it's too hard for you?


The fact that Unix tools have trouble with spaces in filenames is absolutely a problem with Unix. If the Unix ecosystem had better support for this, then it wouldn't be a problem.


I don’t quite see how Unix tools have trouble with spaces in filenames. Could you detail some cases where the space handling is not due to the shell, as opposed to the program being invoked?


There's this program called make...


My question was aimed at the OP's generic statement. As for Make, it originally wasn't clear to me where the issue was supposed to lie. Lines in rule bodies are handed off to the shell, and that whitespace in rule dependencies needs escaping didn't seem surprising, since it's a list (though it's probably a bug that whitespace in target names must be escaped, since it's just one token that ends in a colon). But I see now that the expansion of list-valued automatic variables is probably a real Make-endemic issue.
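A sketch of that issue (hypothetical converter): the backslash gets the prerequisite parsed, but the escaping is gone by the time the automatic variable expands:

    out.pdf: input\ file.src other.src
        convert $^ -o $@

$^ expands to `input file.src other.src` with no backslash left: unquoted, the shell sees three words; quoted as "$^", it sees one. Either way the two real names are unrecoverable.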


> though it’s probably a bug that whitespace in target names must be escaped, since it’s just one token that ends in a colon

It's perfectly fine for a rule to have multiple targets, e.g.

  output.txt error.txt: source
    build source > output.txt 2> error.txt


Interesting! So in that case one could defend the need for escaped whitespace. Still leaves automatic variables, I guess.


Well, it could be rewritten to use 0x1f (i.e. the unit separator) to separate items. I mean, it already has significant tabs. Though invariably people would be like "How is it acceptable for make to not support filenames which contain unit separators? It's 2020 guys, get with the program!"


Unix tools have no trouble with spaces in file names. "-", "\0", "\n" and bad Unicode are troublesome. Lazy programmers can forget to put quotes around variables in shell scripts, but that's not a problem of the tool.
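The classic example of that programmer error, and its fix:

    f="new 2.txt"
    rm $f        # word-split: tries to remove "new" and "2.txt"
    rm "$f"      # removes the one file actually named "new 2.txt"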


Additionally, not every shell expands variables the way Bourne-based shells do.


You're talking about implementation complexity. The only argument you can throw at the user is feature complexity. Handling spaces in filename isn't a complex feature at all, and users don't care that the simplistic implementation you're using makes it a problem.


I've been working with strings with spaces since before the 21st century. I've also worked with strings with special characters.

I've even worked with variables with spaces and special characters.

I don't see why filenames are so much more special, except that a lot of old tools never got updated for a world beyond ASCII.


I am old-school and hate spaces. They do occasionally show up on my computer. But never in anything I'd be touching with a Makefile.


> At a certain point, I realized that I should be managing this processing somehow, so I thought of using a simple Makefile.

I don't understand why one would think that make would be a good tool for this...

> Demanding support for spaces in filenames significantly complicates code, as simple space delimiting no longer works and other delimiting schemes are much more error prone -- forgetting to balance quotes, anyone?

Yes this is a limitation of make, but not a million other tools out there.

Make is not a panacea; no tool is. Don't press a tool into a job it is unsuited for just because you understand it - that is "Not Good" (trademark and copyright pending).

This is the classic case of having a hammer and a screw.


> I don't understand why one would think that make would be a good tool for this...

Why not? To me, it sounds like a perfect use of Make. You have a number of files that should be processed somehow (presumably by invoking a certain tool for each and every file) and produce another set of files.

Whether it's C files to compiled binaries, or some source files to PDFs, make seems very well suited for the job. Except, yeah, perhaps, spaces.


Yeah, make seems like a great first tool to grab in this case. But apparently some of the requirements of this current use case don't play well with make. Too bad, but it doesn't mean make is somehow a bad tool... it just doesn't fit all problems.

Given that make isn't working well in this case due to spaces, I would personally probably use something like find and sed in a bash script to just get all the files and convert them to pdf. (Obviously this won't work if your system doesn't have these tools though...)


Nothing an intake script and a little gsub() can't handle


> The merit of filenames with spaces is that they read better in a GUI explorer.

Even this merit is debatable. Is foo bar two things or one? I know foo-bar is one.


yeah, and then is 田中 one or two things? It all depends on the domain. To me 'é' and 'ê' mean different things, and both are outside the make domain. In French you also have the half space (a diacritic that LaTeX can handle) before terminal '?!'. Those are simply outside the make domain... so bothering about encoding the space as ' ', '+', "%20" or '_' seems futile.

It all depends on domain convention. Make aligns with variable naming conventions... that is all.

> I hate hearing people use "21st century" or "modern" as reasons for inflating complexity.

No. He's right, you're wrong, hzhou321. We want spaces in filenames. We even want UTF-8 if possible. We don't want crude tools that cannot handle the most basic names. You can argue all you want, this is a very very basic demand that could be met with very very basic tools but make is just too crude.

People like you are exactly the cancer in the developer community that argues away reasonable demands like spaces in filenames and perpetuates the garbage legacy tools we have.


get out of here spaces.


I was able to do it. Here's the source code, "hello world.c":

    #include <stdio.h>
    
    int main(void)
    {
      puts("Hello, world!");
      return 0;
    }

And here's the minimal Makefile to generate the output:

    hello world:
Of course, I did have to swap out the ASCII SP (character 32) for the Unicode non-breaking space (code 160) to get this to work, but hey, spaces!


>I did have to swap out the ASCII SP (character 32) for the Unicode non-breaking space (code 160)

How?

EDIT: Ok, now I got it. Boy that was a wild ride.


I did not do the method below. Instead, I wrote a script to generate the filename with the non-breaking space where I needed it.

Edit: rewording and typos.


Any way I can see the script? Looks like I am still learning and ended up taking a longer route.


It's Lua 5.3:

    h = "hello" .. utf8.char(160) .. "world"
    
    f = io.open("Makefile","w")
    f:write(h,":\n")
    f:close()
    
    f = io.open(h .. ".c","w")
    f:write([[
    #include <stdio.h>
    
    int main(void)
    {
      puts("Hello, world!");
      return 0;
    }
    ]])
    f:close()
Pretty straightforward.


Could you explain how you did it?


Replacing the breaking space with a non-breaking space:

1. Set up your compose key on Linux (top-right corner: Settings -> System Settings -> Keyboards -> Shortcut -> Typing -> Compose key; choose an appropriate one).

2. Whenever you need to type a breaking space, instead type (compose key + spacebar + spacebar). This puts a non-breaking space there instead. That's it.

3. Create file -> sudo nano hello(compose key + spacebar + spacebar)world.c

4. Paste the code to execute.

5. Makefile-> hello(compose key + space + space)world:

6. make

7. ./hello world (pressing tab will recognise the executable automatically).

NOTE: Don't ever do this in any production code, or really ever. This hack can take hours to resolve and cause a lot of frustration which could better be spent fixing something meaningful. This is a frowned-upon practice. The only cool part is that your directory can have two files whose names look exactly the same.


Solution: create the PDFs with spaces replaced by underscores. Then, as the very last command in the relevant makefile section, insert a bash command to replace those underscores with spaces.
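A sketch of that approach, assuming the sources already use underscores and a hypothetical src2pdf converter (note the final rename defeats make's up-to-date check on the next run):

    PDFS := $(patsubst %.src,%.pdf,$(wildcard *.src))

    all: $(PDFS)
        for f in $(PDFS); do mv "$$f" "$$(echo "$$f" | tr '_' ' ')"; done

    %.pdf: %.src
        src2pdf "$<" -o "$@"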


What if the filename is a mixture of underscores and spaces?


Escaping an escape character is not a new problem in programming. There are solutions :-)


"Premature generalization is the root of all evil."


Is there any clone of make that aims to address the issues with spaces in file names?

Being incapable of handling spaces is a bug that's been marked Minor since 2002 - https://savannah.gnu.org/bugs/?712

(plus I find double quotation marks easier to read and write than escaping every space in a path)

Edit: https://stackoverflow.com/questions/66800/promising-alternat... (they aren't really clones tho)


I deal with this shape of problem quite a bit. After using scons and make in the past I recently tried using ninja, and it really works well.

Specifically, a python configure script using the ninja_syntax.py module. This seems like it's a bit more complicated, but has a lot of nice attributes.

File names with spaces should just work (unlike make). The amount of hidden complexity is very low (unlike make or SCons); all the complexity lives in your configure script. It's driven by a real, non-arcane language (unlike make). Targets are automatically rebuilt when their rules/dependencies change (unlike make).

It's more difficult to install than make, but only marginally.


Perhaps you just chose the wrong tool for the job. Just because you are able to do similar things with make doesn't mean it has to be suited to your chosen use case. It's a tool that was created with a specific purpose in mind, with specific constraints, and it works fine for thousands (I assume) of people every day. You can't blame it for not being a general-purpose programming language. Nothing stops you from writing other tools yourself, however - and using them in the very same makefiles - to assist in handling cases like this.


Spaces in filenames create trouble with almost every command line tool: cut(1), awk(1), find(1), xargs(1), what not. How do you quote them? Do you need to use \0 as a separator instead? Do the other commands in the pipeline support \0 separators? What happens after a couple of expansions, passing stuff from one script to another?
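The usual shape of that dance, for the tools that do support it:

    # find and xargs can hand NUL-separated names around safely...
    find . -name '*.pdf' -print0 | xargs -0 rm --
    # ...but a cut(1) or awk(1) stage in the middle of such a pipeline
    # still wants newline-separated records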

And what the heck happened on 31 dec 1999 that the world became a different place where suddenly people realised: there were these space things, quite useful they were, why don't we tuck them into every file name and URL and who knows what?

People have better things to do than dealing with these things.


> Spaces in filenames create trouble with almost every command line tool

Then that's a shortcoming which should be addressed with the tools, because humans everywhere use spaces in filenames.

For every command-line tool I make (Windows & Linux), I ensure it handles such trivial use-cases. I can't see why such a simple task is seemingly impossible to get done in GNU coreutils.


Very good of you indeed, but it's a hard task to retroactively change how a system and the greater community around it behaves and how standards like POSIX have defined field separation syntax for decades. I wouldn't mind if make supported spaces in filenames, but the thing is it's a bit late now and the problem is too unimportant to bother solving, frankly.


I think the reason make is both so controversial and also long-lived is that despite how everyone thinks of it, it isn't really a build tool. It actually doesn't know anything at all about how to build C, C++, or any other kind of code. (I know this is obvious to those of us that know make, but I often get the impression that a lot of people think of make as gradle or maven for C, which it really isn't.) It's really a workflow automation tool, and the UX for that is actually pretty close to what you would want. You can pretty trivially just copy tiresome sequences of shell commands that you started out typing manually into a Makefile and automate your workflow really easily without thinking too much. Of course that's what shell scripts are for too, but make has an understanding of file based dependencies that lets you much more naturally express the automated steps in a way that's a lot more efficient to run. A lot of more modern build tools mix up the workflow element with the build element (and in some cases with packaging and distribution as well), and so they are "better than make", but only for a specific language and a specific workflow.
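For instance, three lines of Makefile (hypothetical files, with pandoc standing in for any converter) already buy you incremental, dependency-aware workflow automation with no C in sight:

    reports := $(patsubst %.md,%.pdf,$(wildcard *.md))

    all: $(reports)

    %.pdf: %.md
        pandoc $< -o $@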


> It's really a workflow automation tool,

That's true.

> and the UX for that is actually pretty close to what you would want.

That is so not true. Make has deeply woven into it the assumption that the product of workflows are files, and that the way you can tell the state of a file is by its last modification date. That's often true for builds (which is why make works reasonably well for builds), but often not true for other kinds of workflows.

But regardless of that, a tool that makes a semantic distinction between tabs and spaces is NEVER the UX you want unless you're a masochist.


> Make has deeply woven into it the assumption that the product of workflows are files, and that the way you can tell the state of a file is by its last modification date.

I've always wondered whether Make would be seen as less of a grudging necessity, and more of an elegant panacea, if operating systems had gone the route of Plan 9, where everything is—symbolically—a file, even if it's not a file in the sense of "a byte-stream persisted on disk."

Or, to put that another way: have you ever considered writing a FUSE filesystem to expose workflow inputs as readable files, and expect outputs as file creation/write calls—and then just throw Make at that?


> everything is—symbolically—a file

How are you going to make the result of a join in a relational database into a file, symbolically or otherwise?


On plan 9, you'd do something like:

     ctlfd = open("/mnt/sql/ctl", ORDWR);
     write(ctlfd, "your query", strlen("your query"));
     read(ctlfd, resultpath, sizeof resultpath);


     resultfd = open(resultpath, OREAD);
     read(resultfd, result, sizeof result);
     close(resultfd);
This is similar to the patterns used to open network connections or create new windows.


And how would you use that in a makefile?


Something like this would be ideal.

    /mnt/sql/myjoin:
      echo "<sql query>" > /mnt/sql/myjoin
It's just representing writing to and reading from a database as file operations; they map pretty cleanly. Keep in mind that Plan 9 has per-process views of the namespace, so you don't have to worry about other processes messing up your /mnt/sql.


I think you're missing the point. I have a workflow where I have to perform some action if the result of a join on a DB meets some criterion. How does "make" help in that case?


In that case, it doesn't help much. For Plan 9 mk, there's a way to use a custom condition to decide if the action should be executed:

    rule:P check_query.rc: prereq rules
        doaction
Where check_query may be a small shell script:

     #!/bin/rc

     # redirect stdin/stdout to /mnt/sql/ctl
     <> /mnt/sql/ctl {
            # send the query to the DB
            echo query
            # print the response, check it for
            # your condition.
            cat `{sed 1q} | awk '$2 != "condition"{exit(1)}'
     }
But I'm not familiar with an alternative using Make. You'd have to do something like:

    .PHONY: rule
    rule: prereq rules
        check_query.sh && doaction


> You'd have to do something like:

That's exactly right, but notice that you are not actually using make at all any more at this point, except as a vehicle to run

check_query.sh && doaction

which is doing all the work.


It's being used to manage the ordering of that with other rules, and to run independent steps in parallel.


OK. So here's a scenario: I have a DB table that keeps track of email notifications sent out. There is a column for the address that the email was sent to, and another for the time at which the email was sent. A second table keeps track of replies (e.g. clicks on an embedded link in the email). Feel free to assume additional columns (e.g. unique ids) as needed. When some particular event occurs, I want the following to happen:

1. An email gets sent to a user

2. If there is no reply within a certain time frame, the email gets sent again

3. The above is repeated 3 times. If there is still no reply, a separate notification is sent to an admin account.

That is a common scenario, and trivial to implement as code. Show me how make would help here.


I wouldn't; I don't think that make is a great fit for that kind of long running job. It's a great tool for managing DAGs of dependent, non-interactive, idempotent actions.

You have no DAG, and no actions that can be considered fresh/stale, so there's nothing for make to help with. SQL doesn't have much to do with that.


> How are you going to make the result of a join in a relational database into a file, symbolically or otherwise?

A file that represents the temporary table that has been created. Naming it is harder, unless the SQL query writer was feeling nice and verbose.


You would probably need another query language, but that would come with time, after people had gotten used to the idea.

With that said, there are NoSQL databases these days whose query language is easily expressed as file paths. CouchDB, for example.


A very simple approach is to use empty marker files to make such changes visible in the filesystem. Say,

    SHELL := bash   # the <<< here-string below is bash syntax, not POSIX sh

    dbjoin.done: database.db
        sqlite3 $< <<< "your query"
        touch $@


you could mount the database as a filesystem


With respect to Make, does the database (mounted as a filesystem) retain accurate information that Make needs to operate as designed (primarily the timestamps). To what level of granularity is this data present within the database, and what is the performance of the database accessed in this way? Will it tell you that the table was updated at 08:40:33.7777, or will it tell you only that the whole database was altered at a specific time?


You're talking about a theoretical implementation of a filesystem with a back-end in a relational database. The question is only whether the information is available.

Say directories map to databases and files map to tables and views. You can create new tables and views by either writing data or an appropriate query. Views and result files would be read-only while data files would be writable. Writing to a data file would be done with a query which modifies the table and the result could be retrieved by then reading the file -- the modification time would be the time of the last update which is known.

Views and queries could be cached results from the last time they were run, which could be updated/rerun by touching them, or they could be dynamic and update whenever a table they reference is updated.


> but often not true for other kinds of workflows.

Examples? I mean, there are some broken tools (EDA toolchains are famous for this) that generate multiple files with a single program run, which make can handle only with subtlety and care.

But actual tasks that make manages are things that are "expensive" and require checkpointing of state in some sense (if the build was cheap, no one would bother with build tooling). And the filesystem, with its monotonic date stamping of modifications, is the way we checkpoint state in almost all cases.

That's an argument that only makes sense when you state it in the abstract as you did. When it comes down to naming a real world tool or problem that has requirements that can't be solved with files, it's a much harder sell (and one not treated by most "make replacements", FWIW).


> Examples?

Anything where the relevant state lives in a database, or is part of a config file, or is an event that doesn't leave a file behind (like sending a notification).


Like, for example?

To be serious, those are sort of contrived. "Sending a notification" isn't something you want to be managing as state at all. What you probably mean is that you want to send that notification once, on an "official" build. And that requires storing the fact that the notification was sent and a timestamp somewhere (like, heh, a file).

And as for building into a database... that just seems weird to me. I'd be very curious to hear about systems that have successfully done this. As just a general design point, storing clearly derived data (it's build output from "source" files!) in a database is generally considered bad form. It also introduces the idea of an outside dependency on a build, which is also bad form (the "source" code isn't enough anymore, you need a deployed system out there somewhere also).


I need to send an email every time a log file updates, just the tail, simple make file:

  send: foo.log
          tail foo.log | email

  watch make send
Crap, it keeps sending it. Ok, so you work out some scheme involving temporary files which act as guards against duplicate processing. Or you write a script which conditionally sends the email by storing the hash of the previous transmission and comparing it against the hash of the new one.

That last option actually makes sense and can work well and solves a lot of problems, but you've left Make's features to pull this off. For a full workflow system you'll end up needing something more than files and timestamps to control actions, though Make can work very well to prototype it or if you only care about those timestamps.

================

Another issue with Make is that it's not smart enough to know that intermediate files may change without those changes being important. Consider that I change the comments in foo.c, or reformat it for some reason. This generates a new foo.o, because the foo.c timestamp is updated. Now make wants to rebuild everything that uses foo.o, because foo.o is newer than those targets. Problem: foo.o didn't actually change, and a check of its hash would reveal that. Make doesn't know about this. So you end up making a trivial change to a source file and could spend the afternoon rebuilding the whole system, because your build system doesn't understand that nothing in the binaries is actually changing.
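The classic (if clunky) workaround within make is the move-if-changed trick, sketched here: the compile still reruns, but downstream targets keep their timestamps when the output is byte-identical:

    foo.o: foo.c
        $(CC) -c foo.c -o foo.o.tmp
        cmp -s foo.o.tmp foo.o && rm foo.o.tmp || mv foo.o.tmp foo.o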


How would you fix that with your preferred make replacement? None of that has anything to do with make, you're trying to solve a stateful problem ("did I send this or not?") without using any state. That just doesn't work. It's not a make thing at all.


Lisper was replying to the OP who suggested using Make for general workflows. Make falls apart when your workflow doesn't naturally involve file modification tasks.

With regard to my last comment (the problem with small changes in a file resulting in full-system recompilation), see Tup. It maintains a database of what's happened. So when foo.c is altered it will regenerate foo.o. But if foo.o is not changed, you can set it up to not do anything else. The database is updated to reflect that the current foo.c maps to the current foo.o, and no tasks depending on foo.o will be executed. Tup also handles the case of multiple outputs from a task. There are probably others that do this, it's the one I found that worked well for my (filesystem-based) workflows.

With regard to general workflows (that involve non-filesystem activities), you have to have a workflow system that registers when events happened and other traits to determine whether or not to reexecute all or part of the workflow.


I mean you're just describing make but with hashes instead of file modification times. It's probably the most common criticism of make that its database is the filesystem. If file modification times aren't meaningful to your workflow then of course make won't meet your needs. But saying the solution is 'make with a different back-end' seems a little silly, not because it's not useful, but because they're not really that different.

GNU make handles multiple outputs alright, but I will admit that if you want something portable it's pretty hairy.


I love Tup, and have used it in production builds. It is the optimal solution for the problem that it solves, viz, a deterministic file-based build describable with static rules. To start using it, you probably have to "clean up" your existing build.

I don't use it anymore, for several reasons. One is that it would be too off-the-wall for my current work environment. The deeper reason is that it demands a very static view of the world. What I really want is not fast incremental builds, but a live programming environment. We're building custom tooling for that (using tsserver), and it's been very interesting. It's challenging, but one tradeoff is that you don't really care how long a build takes, incremental or otherwise.


    send: foo.log
          tail foo.log | email
          touch send


Correct, that works for this example. But if you have a lot of tasks that involve non-filesystem activities you'll end up littering your filesystem with these empty files for every one of them. This can lead to its own problems (fragility, you forgot that `task_x` doesn't generate a file, or it used to generate one but no longer does, etc.).


> you'll end up littering your filesystem with these empty files for every one of them

These files are information just like files that are not empty.


You're misusing make here. This should be a shell script or a program that uses inotify/kqueue, or a loop with sleeps and stat calls.


just make "send" not be a phony target.

How about "touch send"?

Now "touch -t" will allow you to control the timestamp.

md5sum, diff would be your friends.

Anyway, my C compiler doesn't provide that info, anyway.


What about, for example, a source file that needs to be downloaded and diffed from the web? What about when you need to pull stuff from a database? You can hack your way around but it's not the most fun.


WRT the web file, curl can be run so that it only downloads the file if it has been modified more recently than the copy on disk.

DBs are harder (yet possible), but not a common request that I've seen.
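Something like this, using curl's -z/--time-cond (which accepts a reference file) plus -R to keep the server's timestamp:

    # re-download only if the remote copy is newer than the local one
    curl -z page.html -R -o page.html https://example.com/page.html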


curl -z only works if the server has the proper headers set - good luck with that. The point is, it's great to be able to have custom conditions for determining "needs to be updated", for example.


You can always download to a temporary location and only copy to the destination file if there is a difference. You don't need direct support from curl or whatever other tool generates the data.


A language that uses lots of parens to delimit expressions is incredibly bad UX, especially when you try to balance a complex expression, but happily there are tools like Paredit to deal with that, so that I can write my Emacs Lisp with pleasure just about every day. Similarly, any decent editor will help you out with using correct indentation in Makefiles.

Last modification date is not always a correct heuristic to use, but it's quite cheap compared to hashing things all the time.

Make is a tool for transforming files. Why wouldn't it be natural and correct for it to assume it's working with files?


> Make has deeply woven into it the assumption that the product of workflows are files

You're referring to a standard tool of Unix, an operating system where EVERYTHING is a file.


Sometimes things in workflows are sending/retrieving data over a network. It may be turning on a light. It could be changing a database. Make has no way of recognizing those events unless you've tied them to your file system. Do you really want an extra file for every entry or table in a database? It becomes fragile and error prone. A real workflow system should use a database, and not the filesystem-as-database.


> Sometimes things in workflows are sending/retrieving data over a network. It may be turning on a light. It could be changing a database. Make has no way of recognizing those events

Why should Make violate basic software design rules and fundamental Unix principles? Do you want your build system to tweak lights? Set up a file interface and add it to your makefile. Do you want your build system to receive data through a network? Well, just go get it. Hell, the whole point of REST is to access data as a glorified file.


The filesystem is, and has always been, a database.


But it's not true that everything is a file. A row in a relational database, for example, is not a file, even in unix.


> A row in a relational database, for example, is not a file, even in unix.

Says who? Nothing stops you from creating an interface that maps that row to a file.

That's the whole point of Unix.

Heck, look at the /proc filesystem tree. Even cpu sensor data is available as a file.


Ha, even eth0 is a file! You can open a network connection by opening this file! Erm... no, that doesn't work.

Then a process! You spawn a process by opening a file! Erm... again, no.

You want me to continue?


In 9front you can. You can even import /net from another machine. Bam, instant NAT.


> Nothing stops you from creating an interface that maps that row to a file.

That's true, nothing stops you, though it is worth noting that no one actually does this, and there's a reason for that. So suppose you did this; how are you going to use that in a makefile?


[flagged]


This seems to be downvoted, but I would second the opinion. If you're capable of representing a dependency graph, you should be able to handle the tabs. If `make` does your job and the only problem is the tabs, it's not masochism, just pragmatism.


HN has a pretty strong anti-make bias. People here would much rather use build tools that are restricted to specific languages or not available on most systems. Using some obscure hipster build tool means it's a dependency. Yet these people, used to language-specific package managers, seem to take adding dependencies extremely lightly.


> It actually doesn't know anything at all about how to build C, C++, or any other kind of code.

I guess it depends on how you define "know", but there are implicit rules.

    $ cat foo.c
    #include <stdio.h>
    int main() {
      printf("Hello\n");
      return 0;
    }
    $ cat Makefile
    foo: foo.c
    $ make
    cc -O2 -pipe    foo.c  -o foo
    $ ./foo
    Hello


Fun fact: your Makefile above is redundant. You can delete it entirely, and the implicit rules you're using here continue to work just fine.


Not quite: it does declare "foo" as the default target. Without the Makefile, it would be necessary to type `make foo` instead of just `make`.


The built-in rule to copy 'build.sh' to 'build' and make it executable is also interesting... confused the hell out of me.


That doesn't work for me. Tried with empty Makefile, no Makefile, with make (PMake) and gmake (GNU Make).


Don't know why that would be. I'm using GNU Make 4.1, but this has worked for years and years as far as I knew. Not a particularly useful feature, so it doesn't really matter, but you messed up my fun fact.

  dima@fatty:/tmp$ mkdir dir
  dima@fatty:/tmp$ cd dir
  dima@fatty:/tmp/dir$ touch foo.c
  dima@fatty:/tmp/dir$ make -n foo
  cc     foo.c   -o foo
  dima@fatty:/tmp/dir$ make --version
  GNU Make 4.1
  Built for x86_64-pc-linux-gnu
  Copyright (C) 1988-2014 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software: you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.


I had fun learning about that.


Try "make foo" instead of "make"


Ah, that works!


Then you did something wrong — it definitely works with GNU make (can’t speak for PMake):

https://asciinema.org/a/zVu7sYyh7lQZTNAgAsbKUmocr


Yeah

   make foo.c

Should just work. No makefile needed.


No, `make foo`. You need to state the target, not the input.


You're correct.

   make foo

or

   make foo.o


Yeah. There is a metric crap-ton of the design of Make that is solely for the purpose of compiling, linking, and document processing. That's actually part of what makes it annoying to use for projects other than C or C++, when you don't need to compile or transform or depend on different formats.


The core of make is really just a control-flow model that understands file dependencies as a first-class thing and permits arbitrary user-supplied actions to be specified to update those files. All those default rules for handling C files are really more like a standard library and can easily be overridden as desired.

IMHO what makes it annoying for projects other than C or C++ is that there isn't an equivalent portion of make's "standard library" that applies to e.g. Java, but this is largely because Java went down a different path to develop its build ecosystem.

In an alternate reality, Java tooling might have been designed to work well with make, and make would then have a substantial built-in knowledge base for working with Java artifacts, on top of a really nice UX for automating custom workflows. Instead, Java went down the road of creating monolithic build tooling, and for a long time Java build tooling really sucked at being extensible for custom workflows.


The thing about Java is that it has its own dependency system embedded in the compiler. This design decision made it difficult to integrate with a tool like make.


I don't think having dependencies built into the language and/or compiler means it needs to be difficult to integrate with something like make. In fact gcc has dependency analysis built into it. It just knows how to output that information in a simple format that make can then consume.
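
A minimal sketch of that handshake, with hypothetical source names: gcc's -MMD/-MP flags emit per-object .d fragments in make syntax, which make then includes.

    SRCS := main.c util.c
    OBJS := $(SRCS:.c=.o)

    app: $(OBJS)
            $(CC) $(OBJS) -o $@

    # -MMD writes foo.d listing foo.o's header dependencies;
    # -MP adds stub rules so a deleted header doesn't break the build.
    %.o: %.c
            $(CC) -MMD -MP -c $< -o $@

    -include $(OBJS:.o=.d)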

I feel like this choice has more to do with early java culture and/or constraints as compared to unix/linux. With "the unix way" it is really common to solve a problem by writing two separate programs that are loosely coupled by a simple text based file format. When done well, this approach has a lot of the benefits of well done microservices-style applications built today. By contrast, (and probably for a variety of reasons) this approach was always very rare in early java days. It seemed for a while like the norm was to rewrite everything in java and run it all in one giant JVM in order to avoid JVM startup overhead. ;-) The upshot being you often ended up with a lot more monolithic/tightly coupled designs in Java. (I think this is less true about Java today.)


> There is a metric crap-ton of the design of Make that is solely for the purpose of compiling and linking and document processing.

Not really. The bit being pointed out here certainly isn't. There's no special design going on; it's just a built-in library of rules and variables for C/C++/Pascal/Fortran/Modula-2/Assembler/TeX. These rules are no different than if you had typed them into the Makefile yourself. And if you don't like them, you can say --no-builtin-rules --no-builtin-variables.

The only actual bit of C-specific design I can think of is .LIBPATTERNS library searching.


Thank you. It's nice to know that I'm not alone in this dark, dark world.

It's so depressing when people use arguments like "it's old", "it uses tabs", and "it's hard to learn". As Peter Miller described it, "Make is an expert system". If a tool is the most powerful, standard, and expressive among its peers, its age or the fact that it uses tabs should be inconsequential.

If anything, the fact that it's decades old and used in every major software project is a testament to its effectiveness, not a drawback.

And if "learning Make" is a barrier, that to me is a sign that someone cares more about complaining than about their project. The same way people learn Git when it's clear that it's the best tool, people learn Make. It really isn't that hard. Even the basics are enough to reap huge benefits immediately.


> If anything, the fact that it's decades old and used in every major software project is a testament to its effectiveness, not a drawback.

Part of the reason is that people see the superficial issues (like the discussion regarding spaces) before they see the value of years (or decades) of work. It doesn't help that when folks bring this up, they often get "you're holding it wrong"-type responses.

I sympathise with both sides of the argument. I don't know the solution but it's unfortunate seeing folks reinventing the wheel and struggling with problems solved in the past.


I definitely think Gulp and Webpack have a place. Where you have to write a Makefile fresh every time for every type of project, Gulp and Webpack come prepared with JS-specific stuff. That's perfect for a throwaway project or a small codebase where build performance and maintenance really don't matter.

My issue is that people who need serious solutions forgo Make because "tabs, man", or because Webpack has pretty console output.


The tools are not the same on every platform. That’s reason enough for me to not use it with my JavaScript projects.

The bigger reason though is that it’s not very idiomatic for JavaScript projects to use Make. It sounds like the only reason that some people go out of their way to use it is because they actually don’t want to learn something.


> "The tools are not the same on every platform."

Any common examples?

> "It’s not very idiomatic for JavaScript projects to use Make."

While I agree that popularity is a factor in picking a tool, it shouldn't be a deciding factor. Going by popularity is precisely how we end up with a new Build System of the Year(TM) every few years. The fact that we've gone through 4 fairly prominent tools (Gulp, Grunt, Broccoli, Webpack), each contending to "fix" the previous, and none of which has proper DAG or incremental builds (which Make has had for decades), is damning evidence.

In other words, I think Make could be (and I wish it was) idiomatic for JS.


This comment points out the kind of problems that can occur just on Unix systems - https://news.ycombinator.com/item?id=16485637

And then there’s Windows...

Anyway, the fact that things change quickly in JS-land is more of a testament to how popular it is than anything else IMO. If C were used in the same environments as JS, I’m sure that you'd see just as much churn.


C is used on a lot of platforms.

JS only in the browser (thankfully mostly converged) and in Node.


JavaScript is in the server, browser, mobile apps, desktop apps and embedded devices.

Basically, it’s used everywhere that C is used and then some.


> "Basically, it’s used everywhere that C is used and then some."

It's the other way around. Every JavaScript program is run by a C/C++ program.


But if you consider browsers to be a "platform", which I think they are in fairness, then the GP comment could still apply. Whether the browser is written in C is pretty irrelevant to whether it can host JS.


I conjecture that an upper-level "platform" that's on top of a lower-level "platform", for some definition of "platform", will be less varied and fragmented than the lower-level one.


What is the runtime? Always the same.

More than C? Ha! The platform where your JS is running is probably written in C!


Pffffft. Nobody wants to touch C these days.

That's why many, many, many, many, many more people are writing server, browser, desktop and mobile apps with JS and not C.




There's way more to software engineering than GitHub. Example: every company that hosts their own code repositories.

The TIOBE index, while not entirely accurate either, does reflect usage by a much more broad set of engineers.


Here's another survey that shows what kinds of languages programmers actually want to use: JS tops the list, while C comes in below even Bash/Shell programming and PHP.


Incorrect. TIOBE shows you what companies are making their employees do. Github numbers show you what programmers actually want to do.

However, looking at job ads these days I hardly see any for C relative to JavaScript.


My point was that TIOBE is simply more all-encompassing than GitHut. It gives you a clearer picture of the most-searched programming languages, covering both GitHub and non-GitHub stats.

Also, GitHut counts repositories, not lines of code. Not sure which of C or JS has more lines of code being written every year, but my guess is that it's C (Java might have more than both though).


Yes and my point is that GitHub numbers show you a much better picture of programming languages that programmers actually want to use.

Are we just going to go back and forth saying the same thing over and over though at this point?


Nah, then it's pretty simple. My rebuttal is that I think "programming languages that programmers actually want to use" (besides still being a debatable claim) is an insignificant metric. The industry doesn't pay for what you want to use.


When you are responding to a comment that says that programmers don’t want to use C...

K, thx bye now.


I think the comment was that JS is more popular than C. ;)


Yes, popular = the thing people want to use. And it was my comment, and on re-reading it my implication is quite clear. That's why GitHub numbers alone are better: they show what programmers actually want to use.

It's the same thing I've been saying to you for our last 20 interactions here. But I'll keep going because I'm not going to let you get the last word.

So let's keep going, I guess.


The GitHut numbers show repository count. I'm sure there are all sorts of small JS repos with 50 lines of code, boosting the project portfolios of people everywhere. Of course lines of code aren't a great metric either: some languages are a bit more verbose than others. The TIOBE index shows what programmers search for, which is likely unbiased by LoC or repository count. My goal really is to help show you that JS is popular, but not nearly as widely used (or desired) as people make it out to be. As uncool as Java and C are, they still dominate software development. If you're dead-set on your position, however, and don't wish to believe those data, then by all means let's disagree. :)


My position is that JavaScript is way more popular than C with programmers and that the popularity of JavaScript on GitHub proves my point.

Nothing that you have said has refuted that.

However, I do find your comment about "JS repos with 50 lines of code" to be particularly humorous since you can do more with 50 lines of JS than you could possibly dream of doing with 50 lines of C. LOL :) Thanks for a good laugh.

Care to try again?


You can use GNU make on platforms where the usual make is not GNU. For example, I install it from ports on *BSD and have an MSYS GNU make on Windows.


What is the build tool used in the javascript world this week? Broccoli? Grunt? Gulp? Talp?

Just dealing with breaking changes in any one of those is a full time job!


Pro-tip: you don’t have to switch tools the minute that something new comes out.


You do, since different communities use different build tools. And they even switch tools for the new kid on the block.


Bullshit. You're arguing as if every JS developer has to interact with every JS community.

Stop presenting these ridiculous false dichotomies.


You often do though. For example if you want to do open source development on JS projects, this means that you might have to learn many different build systems.


Yeah, where “often” is defined as “this one example that I just conjured up”.


Yep, you're right. It's anecdotal evidence, of which you have 2 data points on this thread. We're extrapolating here from what we've seen, for sure. If you have better data, please do share.


I know I'm right. Thanks.


Aw, I was hoping you'd actually follow up with some meaningful data. :(


Same


Our anecdotal evidence trumps your no evidence. ;)


Haha yes you do, because your manager does, every time


Make's interface is horrible. Significant tabs. Syntax which relies on bizarre punctuation... If only whoever authored Make 40 years ago had had the design acumen of a Ken Thompson or a Dennis Ritchie!

We're stuck with Make because of network effects. I wish that it could just become "lost" forever and a different dependency-based-programming build tool could replace it... but that's just wishful thinking. The pace of our progress is doomed to be held back by the legacy of those poor design decisions for a long time to come.


Maybe I'm in the minority, but I've always found its syntax to be quite nice (though admittedly a departure from most modern languages). Then again, I find using JSON or not-quite-ruby to configure a build incredibly bizarre and confusing, so I guess I'm just set in my ways...

In all seriousness, what's wrong with it? Significant tabs aren't great, but I feel like that's a relatively minor wart. The simple things are _very_ simple and straightforward. The more complex things are more complex, but usually still manageable...

I've seen plenty of unmanageable Makefiles, but I haven't seen another system that would make them inherently cleaner. (I love CMake, but it's a beast, and even harder to debug than make. If it weren't for its nice cross-platform capabilities, I'm not sure it would see much use. It's also too specialized for a generic build tool. Then again, I definitely prefer it to raw Makefiles for a large C++ project.)


"In all seriousness, what's wrong with it?"

1. A rule that claims to make a target but then fails to make it ought to be a fatal runtime error in the makefile. I can hardly even guess at how much time this one change alone would have saved people.

2. String concatenation as the fundamental composition method is a cute hack for the 1970s... no sarcasm, it really is... but there are better-known ways to make "templates" nowadays. It's hard to debug template-based code, and it's hard to build a non-trivial system without templates.

3. Debugging makefiles is made much more difficult than necessary by make's default expansion of every target into about 30 different extensions for specific C-based tools (many of which nobody uses anymore), so make -d output is really hard to use. Technically, once you learn to read the output, it tends to have all the details you need to figure out what's going wrong, but they're buried under piles of rules for files that have never been and will never be found in my project.

4. The distinction between runtime variables and template-time variables is really difficult and annoying.

5. I have read the description of what INTERMEDIATE does at least a dozen times and I still don't really get it. I'm pretty sure it's basically a hack around the fact that the underlying model isn't rich enough to do what people want.

6. Sort of related to 2, but the only datatype being strings makes a lot of things harder than they need to be.

7. Make really needs a debugger so I can step through the build, see the final expansions of templates and commands, etc. It's a great example of a place where printf debugging can be very difficult to make work, but it's your only choice.

That said, I'd sort of like "a fixed-up make" myself, but there's an effect I wish I had a name for, where new techs that are merely improvements on an old one almost never succeed, as if they are overshadowed by the original. Make++ is probably impossible to get anybody to buy into, so if you don't want make, you pretty much have to make something substantially different just to get people to look at you at all.

Also, obviously, many of the preceding comments still apply to a lot of other build tools, too.


(I'm making a separate comment from the INTERMEDIATE explanation)

I'm a big fan of Make, but appreciated your detailed criticism, and found myself nodding in agreement.

#3: I like to stick MAKEFLAGS += --no-builtin-rules in my Makefiles, for this reason. This, of course, has the downside that I can't take advantage of any of the builtin rules.

#7: There is a 3rd-party GNU Make debugger called Remake https://bashdb.sourceforge.net/remake/ https://sourceforge.net/projects/bashdb/files/remake/ It comes recommended by Paul Smith, the maintainer of GNU Make.


> I have read the description of what INTERMEDIATE does at least a dozen times and I still don't really get it.

It took many reads, but I think I get it.

As a toy example, consider the dependency chain:

   foo <- foo.o <- foo.c
          ^^^^^
             `- candidate to be considered "intermediate"

Depending on how we wrote the rules, foo.o may automatically be considered "intermediate". If we didn't write the rules in a way that makes foo.o automatically intermediate, we may explicitly mark it as intermediate by writing:

    .INTERMEDIATE: foo.o

So, what does "foo.o" being "intermediate" mean? 2 things:

1. Make will automatically delete "foo.o" after "foo" has been built.

2. "foo.o" doesn't need to exist for "foo" to be considered up-to-date. If "foo" exists and is newer than "foo.c", and "foo.o" doesn't exist, "foo" is considered up-to-date (if "foo" was built by make, then when it was built, "foo.o" must have existed at the time, and must have been up-to-date with foo.c at the time). This is mostly a hack so that property #1 does not break incremental builds.

These seem like useful properties if disk space is at a premium, but is something that I have never wanted Make to do. Rather than characterizing it as "a hack on the fact the underlying model isn't rich enough", I'd characterize it as "a space-saving hack from a time when drives were smaller".

----

If, like me, you don't like this, and want to disable it even on files that Make automatically decides are intermediate, you can write:

    .SECONDARY:

Which tells it 2 things:

a. never apply property #1; never automatically delete intermediate files

b. always apply property #2; always let us hop over missing elements in the dependency tree

I write .SECONDARY: for #a, and don't much care about #b. But, because #1 never triggers, #b/#2 shouldn't ever come up in practice.


Concerning expansion, I guess the article is not claiming that it is good; it doesn't even mention this "feature". I guess it's just saying "make can do everything your gulp can, it's better, faster and more readable", and that's without using any variable expansion features.

As soon as more JavaScript developers start writing very very simple Makefiles, tooling will improve and maybe someone will come up with a better make.

The other option is to let people keep using webpack and gulp until they come up with another JS-based build-system, webpack 5 or 6, and grulpt or whatever comes after grunt/gulp.


I'm happy to see this kind of detailed criticism. I would be happy to use a new tool if it is similarly general, and has a declarative style. Other commenters brought up Bazel, which I am looking forward to learning about.


With the debugging expansion thing you're mentioning, now I'm craving a make built on some minimalist functional programming language like Racket where "expand the call tree" is a basic operation.


I've been writing Makefiles regularly for maybe 15 years and I always end up on this page every time I need to write a new one: https://www.gnu.org/software/make/manual/html_node/Automatic...

$< $@ $* $^ ... Not particularly explicit. You also have the very useful substitution references, like $(SRC:.c=.o), which are probably more arcane than they ought to be. You can make similar complaints about POSIX shell syntax, but at least the shell has the excuse of being used interactively, so it makes sense to save on the typing, I suppose.

That's my major qualm with it however, the rest of the syntax is mostly straightforward in my opinion, at least for basic Makefiles.
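
For what it's worth, here's the crib sheet I wish that page led with (file names hypothetical):

    SRC  := foo.c bar.c
    OBJS := $(SRC:.c=.o)     # substitution ref: foo.c bar.c -> foo.o bar.o

    # $@ = target, $< = first prerequisite, $^ = all prerequisites
    app: $(OBJS)
            $(CC) $^ -o $@

    %.o: %.c common.h
            $(CC) -c $< -o $@    # $< is foo.c here, not common.h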


Give pmake a shot sometime... the syntax/semantics are much more 'shell-like' IMHO, and some things are just much more possible (e.g. looping rather than recursive calls to function definitions).

Manual ('make' on a BSD is pmake):

https://www.freebsd.org/cgi/man.cgi?query=make&apropos=0&sek...

But most Linux flavors have a package somewhere.


> $(SRC:.c=.o)

I use

  $(patsubst %.c,%.o,$(SRC))

instead, which I find easier to remember.


Isn't that a GNU extension? There's another problem right there: figuring out which dialect of Make you're using, and its various quirks and extensions.


I think you can compile gmake for most platforms.


> I've seen plenty of unmanageable Makefiles, but I haven't seen another system that would make them inherently cleaner.

Not to push the particular product, but the approach:

FAKE (https://fake.build/) is an F# "Make" system that takes a fundamentally different tack to handling the complexity. Instead of a new, restricted, purpose-built language, they've implemented a build DSL in F# scripts.

That yields build scripts that are strongly typed, just-in-time compiled, have full access to the .Net ecosystem and all your own code libraries, and are implemented in a first class functional language. That is to say: you can bring the full force of programming and abstraction to handle arbitrarily complex situations using the same competencies that one uses for coding.

As the build file grows in complexity and scope it can be refactored, and use functionality integrated into your infrastructure code, in the same way programs are refactored and improved to make them manageable. The result is something highly accessible, supportive, and aggressively positioned for modern build chains... If you can do it in .Net, you can do it in FAKE.


I don't like the syntax much, but I love the programming model. I think people who are used to imperative languages are put off by the declarative programming model of make.


> If only whoever authored Make 40 years ago

Make was created by Stuart Feldman at Bell Labs in 1976. The fact that it is still in use in any form and still being discussed here is a testament to what an amazingly good job he did at the time. Whether it is the right tool for any given modern use case is up to the people who decide to use it or pass it by. I still work with it almost daily, and it's in wide use by backend system engineers if my experience is any guide. Yes, it's quite clunky, but also quite powerful and reliable at the few things it does. It's also pretty much guaranteed to already be installed and working on every *nix system, and that's not nothing.


Look at the git makefile: https://github.com/git/git/blob/master/Makefile

You know what you have to do to build git? Type make. It's amazing.


If fixing things in this makefile is not your job, then it really is amazing.


Honestly, it's like any other tool we use: once you know the rules governing its behavior, it really isn't that hard to debug issues. Make is very consistent in most cases. There are plenty of traps, and they're made easier to fall into given the archaic syntax, but you don't typically have to fall into them over and over again :).


Yes but some tools have... shall we say, irregular rules. Tell me how CMake's argument quoting works for example.


Actually, you might want to glance at the first few hundred lines of platform specific stuff you are supposed to set manually before the main makefile begins. Literally meant to be set by hand, but the Git developers made educated guesses about the specifics of each platform.


> The fact that it is still in use in any form and still being discussed here is a testament to what an amazingly good job he did at the time.

Not necessarily.

> It’s also pretty much guaranteed to already be installed and working on every *nix system, and that's not nothing.

First mover advantage.

The fact that no modern language, basically nothing outside of C/C++, uses it says a lot. And even those are moving away; see CMake & co.


> basically nothing outside of C/C++ uses it

That's how it has always been, though. In the '90s you didn't need to run Perl or Tcl through it because you weren't compiling anything. The Venn diagram of "Platforms that have make" and "Popular compiled languages" comes up with only asm/C/C++.

Many languages want to do things their way, such as Erlang, Common Lisp, Java, etc. Ruby and Python are interpreted and also don't need a build process. JS, until recently, was interpreted. Lots of NIH going around.

> And even those are moving away, see Cmake & co.

CMake is closer to autoconf/automake/libtool. If you have serious cross-platform needs, then CMake is a fine tool. But it's hardly less archaic than make (and only slightly less so than autoconf), and I'm dubious that many people are really moving away rather than just picking up the newer, shiny tool for newer projects.

If I were doing a small static website or something that required a build with standard *nix tools, vanilla make would be my tool of choice, hands down. Tools like autoconf and, as the author pointed out, webpack serve more specialized needs.


> That's how it has always been, though. In the '90s you didn't need to run Perl or Tcl through it because you weren't compiling anything. The Venn diagram of "Platforms that have make" and "Popular compiled languages" comes up with only asm/C/C++.

Pascal/Delphi were wildly popular in the late '80s and early '90s, though. I don't remember them being built with make.


> Make's interface is horrible. Significant tabs.

True, one should not edit makefiles with notepad. Proper editors have support for editing them, though.

> Syntax which relies on bizarre punctuation...

Well documented, though.[1]

> a different dependency-based-programming build tool could replace it... but that's just wishful thinking

Use Prolog[2]; you should be able to write this in about 42 lines. But you'll end up with the same complaints ("the syntax, the magic variables!") because, in my experience, those are just superficial. The real problem, IMHO, is that declarative and rule-based programming are simply not part of the curriculum, especially not for autodidactic web developers. OTOH, it only takes an hour or two to grok it when somebody "who knows" is around and explains. It really is dead simple.

[1] http://pubs.opengroup.org/onlinepubs/009695399/utilities/mak...

[2] https://en.wikipedia.org/wiki/Prolog


FWIW $< is < from redirecting input... a mnemonic for inputs.

$@ is target, because @ looks sort of like a bullseye.

significant tabs are gross, yes, but this is a one-time edit to your vimrc then you can forget about it forever.

it’s also installed everywhere and has minimal dependencies. it could be a lot worse. (see also: m4, autoconf, sendmail.cf)


But how do you remember when to use $^ and $<?


Make is such a horrifically awful thing to work with that I just end up using a regular scripting language for building. Why learn another language with all its eccentricities and footguns when I already know several others?


Because, like many other things in programming, you'll end up with a half-baked and buggy implementation of make anyways.

Incremental builds by looking for changed dependencies, a configuration file with its own significant identifiers (i.e. a build DSL shoehorned into JSON or YAML), generalized target rules, shelling out to commands, sub-project builds, dry runs, dependencies for your own script, parallelization, and a unique tool with (making a generalization here) insufficient documentation.

If you're really unlucky, you'll even end up with the equivalent of a configure.sh to transpile one DSL and runtime environment into the DSL for your custom tool.


> Because, like many other things in programming, you'll end up with a half-baked and buggy implementation of make anyways.

I'd argue that make is a half-baked and buggy implementation of make - so that's not really a drawback so much as the status quo.

E.g. I have scripts that exist mainly to carefully select the "correct" version of make for a given project, to deal with path normalization and bintools selection issues on Windows - and none of these ~3 versions of make works on all our Makefiles. One of those versions appears to have some kind of IO bug: it'll invoke commands with random characters missing for sufficiently large Makefiles, which I've already gone over with a hex editor to make sure there weren't weird control characters or invisible whitespace to blame. So, buggy and brittle.


I'd argue that there are some major concerns with the makefiles if they require the use of 3 different versions of make to get it all working, a situation I've never personally seen before. I'd suggest prioritizing fixing that before attempting to track down the cause of other issues. As it stands, there are too many points of interaction to attribute any bugs to any one program.

That said, Windows has never been a strong platform on which to run make (or git, gcc, or any other [u/li]nix originated CLI tools). When I hear of folks using make, I tend to make the assumption that they're running on a [u/li]nix or BSD derivative.


> I'd suggest prioritizing fixing that

I've already had one upstream patch rejected on account of the additional complexity fixing it introduces, and would rather not indefinitely support my own fork of other people's build setups.

Or if I am going to indefinitely support my own fork, I might as well rewrite the build config to properly integrate with the rest of whatever build system I happen to be using - at least then I'll get unified build/compilation options etc.


> path normalization and bintools selection issues on windows

make was never intended to be a cross-platform tool. If you face problems using it on Windows, then that's on you.


All the more reason to use a regular scripting language for building, then, since there are several that are plenty cross-platform.


That is, unless the folks who maintain and champion make want me to use make. It's on them to court me, the cross-platform developer, not the other way around.


Again, if you're a cross-platform developer, make is not for you. Do you also expect zsh enthusiasts to court Windows devs?


I kind of disagree; it's quite possible to use GNU make on Windows with MSYS (though it requires some scripting discipline and probably isn't for everyone).

I'm currently doing this for a cross-platform C++ project. Same makefile used for Linux, Mac, and Windows+MSYS+cl.exe (yes, a fair amount of extra variables and ifdefs to support the last one...).


Incremental builds. It's a pain in the ass to write this in a good, generic way yourself. If your build tools don't already understand it, then make (and similar tools) makes for a nice addition versus just a script that invokes everything every time.

EDIT: Oh, and parallel execution, but smart parallel execution. Independent tasks can be allowed to run simultaneously. Very useful when you have a lot of IO bound tasks. Like in compilation, or if you set it up to retrieve or transmit data over a network.

It's not too hard to do that in your custom script, but more care is required, because until your custom script reaches make-level internal complexity you will have to manually track dependencies or make sub-optimal assumptions.
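
To make that concrete: in the sketch below (targets hypothetical), `make -j2 report.csv` fetches both inputs in parallel, because nothing orders them relative to each other, and only then runs the merge.

    report.csv: a.csv b.csv
            ./merge a.csv b.csv > report.csv

    a.csv:
            curl -sf -o a.csv https://example.com/a.csv

    b.csv:
            curl -sf -o b.csv https://example.com/b.csv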


I very much prefer Rake for orchestrating multiple build systems in a web project or dependency installation or just any sort of scripting. It comes with a simplified interface to shelling out that will print out the commands you are calling.

If for some reason the project is Ruby-allergic, I'll try to use Invoke [0].

Sometimes I feel like people's usage of Make in web projects is akin to someone taking an axe and hand saw out to break down a tree for firewood when there are multiple perfectly functioning chainsaws in the garage.

[0] http://www.pyinvoke.org/


This is why I like redo; it handles all of the dependency stuff for you, but your build scripts are written in pure sh
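
For a taste, a minimal sketch following apenwarr's conventions (file names hypothetical; redo runs the .do script with sh and atomically renames the temp file $3 into place on success):

    # hello.do -- builds the target "hello"
    redo-ifchange hello.c
    cc -o "$3" hello.c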


I have been meaning to spend some time with redo!

https://cr.yp.to/redo.html


always been curious about redo. which variant do you use, and what do you use it for?


I use apenwarr's because I installed it a long time ago and haven't needed anything more; it does look like it may be abandoned, though.

I use it for anything where I need job scheduling around creating output files; even my DVD ripping is managed with redo calling into ffmpeg.


What's your solution for multiple-output processes like yacc/bison (which create both a .h and a .c file)?


I don't run into those too often.

When I do, there are two cases:

1) All dependencies are on only one output file (e.g. link stages that generate .map files, compile phases that generate separate .o and debug files). I just treat these as normal

2) I may need to depend on each of the files (I don't use yacc/bison, but it sounds like this would qualify). I select one file to be the main output and have the .do file for that ensure that the secondary outputs go to a mangled name; I then have the .do files for the secondary outputs rename the mangled file.

quick example for generating foo.c/foo.h

foo.c.do:

    generate-file --cout "$3" --hout "$2-mangled.h"

foo.h.do:

    redo-ifchange "$2.c"
    mv "$2-mangled.h" "$3"


If the .h output changes but the .c doesn't, this rule will miss it.

I once solved it by making a tar file of all the outputs and extracting it to both .c and .h, but that's incredibly kludgy; still looking for a better solution.


As long as the timestamp changes on the C file, that's fine, right? At least with the version of redo I use, timestamps are used by default, not file contents.


Do you honestly think Make has no advantages over conventional scripting languages when it comes to building software? I suspect you know that it’s designed for that task and has been used for that task for several decades. Presumably you respect the community over those decades sufficiently to have a strong prior belief that there are good arguments for Make (as well as downsides), even if you can’t be bothered to research them.


I honestly think any advantages it has are significantly outweighed by all its disadvantages.

And no, I don't respect the community. The community very often makes "The Way Things Are Done" its personal religion and refuses to ever change anything for the better.


Another advantage of using a scripting language is that the hard work of portability will already have been done for you by the authors of the scripting language.

Make, in contrast, works within a shell and invokes programs which may be either wildly or subtly incompatible across platforms. Add in POSIX make's lack of support for conditionals, and portability is a nightmare.
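
As a concrete example, the everyday conditional below is GNU-only; POSIX make has no conditional directives at all, so you're left generating makefiles or duplicating them per platform:

    # GNU make only -- POSIX make has no ifeq/else/endif
    ifeq ($(OS),Windows_NT)
        RM_CMD = del /q
    else
        RM_CMD = rm -f
    endif

    clean:
            $(RM_CMD) *.o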


On the other hand, make invokes well known shell commands; I already know what they do, when I see them used.

If you are invoking your own functions (or functions imported from some random library), I have to check what they do.


It is possible that, if you need to do so much in a makefile that you are running into shell incompatibilities, you are attempting to cram too much complexity into your build system and perhaps should be using something like Docker to decrease incidences of "works on my machine/os/distro/etc."

(I am aware that this advice does not hold true in all cases; just that in many of them an overly complicated build system is a code smell.)


To be fair, at the core much of what you're going to be doing is invoking command-line tools. I mean, most compilers are invoked as command-line tools.


because then people new to your project can see a makefile and know that “make” and “make test” and “make install” probably work without having to learn your homebrew, one-off build system.


This argument for Make is based exclusively on network effects. Fine, we accept that we're stuck with it. It still sucks.


I disagree. It’s an argument for convention. This is the same reason we have package.json or Dockerfile or Pipfile or Rakefile - it tells us the standard interface for “install this”. It’s not specific to make.

Docker is new and didn’t have any network effect until recently.


How many of those languages are declarative, not imperative? The advantage of Make is that you only need to give it the inputs and the outputs.



Compared to the autotools I'd say make is fairly decent! But yeah, it's a mess. I wonder if it's really possible to improve significantly over the status quo if you're not willing to compromise on flexibility and design the build system alongside the language itself, like Rust does with cargo for instance.

Make, on the other hand, is completely language agnostic; you can use it to compile C, Java, or LaTeX, or to do your taxes. Make is a bit like shell scripts: great when you have a small project and you just want to build a few C files, for instance[1], but as the project grows there always comes a point where it becomes hell.

[1] And even then if you want to do it right you need compiler support to figure header dependencies out, like GCC's various -M flags.


It's not like C is famous for having particularly good syntax. If anything, it's the worst thing about it.


They must have gotten something right, though. Else why would there be entire families of C-like languages?

Plus, to me C syntax is particularly good. You're writing real words and the computer does the things you tell it to. To the letter.


> Else why would there entire families of C-like languages ?

C became popular because Unix became popular, and when designing a language that intends to become popular one aims for a ratio of 10% novelty and 90% familiarity. Like, ever wonder why Javascript's date format numbers the months starting from zero and the days starting from one? It's because Eich was told to make JS as much like Java as he could, and java.util.Date numbers the months from zero and the days from one, which Java itself got from C's time.h. (Not coincidentally, Java and JS are both in the C-like language family.)

> You're writing real words

In C? Not compared to ALGOL, COBOL, Pascal, and Ada, you're not. :)

> the computers does the things you tell it to. To the letter.

As long as you're not using a modern compiler, whose optimizations will gleefully translate your code into whatever operations it pleases. And even if one were to bypass C and write assembly code manually, that still doesn't give you complete control over modern CPUs, who are free to do all sorts of opaque silliness in the background for the sake of performance.


To be fair to myself, I was mostly snarking at the OP comment, which dismissed the C language as if it were some ancient relic.

But even without accounting for compiler optimizations and CPU architecture, the C language just straight up lies to its user. You could code something entirely with void*.

PS: what I meant by "real words" is that you're naming functions and calling them by their name. Which in itself is very powerful.


> They must have got something good though. Else why would there entire families of C-like languages ?

Network effect. If you wanted to write for unix, you almost had to use C originally.

> Plus, to me C syntax is particularly good.

It's meh. It's not the most straightforward when defining complex types like arrays or function pointers.

> You're writing real words and the computers does the things you tell it to. To the letter.

So yeah, about that...


Everybody I know found it confusing at first, but it's logical and modular. I don't know a language with a better type declaration syntax. It gets impractical when you define functions that return functions that return... because these expressions grow on the left and right simultaneously. But realistically, you don't do that in C, and other than that, I find it easy to read the type of any expression...

    const int *x[5];
    *x[3];  // const int
    x[3];   // const int *
    x;      // the array decays to a pointer, so: const int **
Is there any other syntax that has this modularity?


> But realistically, you don't do that in C

IMHO, that's a self-fulfilling prophecy. If it were reasonably easy to do that in C, it would be done more.


It is reasonably easy, compared to any real practical programming task. For example:

  #include <stdbool.h>  /* for bool */

  // your ordinary function declarations
  int plus2(int i) { return i + 2; }
  int times2(int i) { return i * 2; }

  typedef int (*modfun)(int);
  // a very reasonable syntax altogether
  modfun get_modfun(bool mul) { return (mul ? times2 : plus2); }
Nests as well if you ever wanted to:

  modfun get_modfun_no_mul(bool mul) { return plus2; }

  typedef modfun (*modfun_getter)(bool);
  modfun_getter get_getter(bool allow_mul_opt) { return (allow_mul_opt ? get_modfun : get_modfun_no_mul); }


C is a systems programming language. It doesn't have closures or other features from functional programming. In other words, you can't "create functions at runtime". That is why you basically never see a function returning another function.


So, Go is also a "systems" language, so the term is more or less meaningless now. Assuming you mean a language that can easily compile to an independent binary capable of being run directly on a microprocessor with no support, I offer Rust as a counter-example.

Also, functional programming and returning functions does not mean they are created at runtime.


It's a fact that C doesn't have closures. That is my point. I happen to like that fact, but you don't have to agree with me.

And "creating functions" means: closing over variables ("closures"), or partial application. I think it takes at least that to be able to return interesting functions.

(and whether go is a systems programming language is at least debatable. I think the creators have distanced themselves from that. It depends on your definition of "systems". You can't really write an OS in go).


> It's a fact that C doesn't have closures.

OK.

> That is my point.

Not unless your first two sentences have absolutely nothing to do with each other. Your point appears to be that because it's a low-level language it doesn't have these features, which is false.

> I happen to like that fact, but you don't have to agree with me.

Or I just think you don't have any experience using better languages. That isn't to say that another language could supplant C, just that it's difficult for me to imagine actually liking the C type system (or lack thereof) and lack of first-class functions. It's incredibly limiting and requires a lot of hoops to be jumped through to do anything interesting.

> And "creating functions" means: closing over variables ("closures"), or partial application.

Well, you said at runtime. Of course you can "create" functions at compile or programming time!


> Your point appears to be that because it's a low-level language it doesn't have these features, which is false.

I would think that first class closures do indeed _not_ belong in a low-level language. They hide complexity, and you want to avoid that in low-level programming. Not necessarily for performance reasons, but more from a standpoint of clarity (which in turn can critically affect performance, but in subtler ways).

> Or I just think you don't have any experince using better languages.

Nah, I have experience in many other languages, including Python, C++11, Java, Haskell, Javascript, Postscript. The self-containment, control, robustness and clarity I get from a cleanly designed C architecture is just a lot more appealing to me. The only other language I can stand is Python, but for complex things, it becomes actually more work. For example, because it's so goddamn hard to just copy data as values in most languages (thanks to crazy object graphs).

> It's incredibly limiting and requires a lot of hoops to be jumped through to do anything interesting.

It depends on what you are doing. It's a bad match for domains where you have to fight with short-lived objects and do a lot of uncontrolled allocations and string conversions. My experience in other domains (including some types of enterprise software) is more the opposite, though. Most software projects written in more advanced languages are so damn complicated, but do nothing impressive at all. They are mostly busy fighting the complexity that comes from using the many features of the language. But those features help only a bit (in the small), and when you scale up they come back and bite you!

Here's a nice video from a guy who gets shit done, if you are interested: https://www.youtube.com/watch?v=khmFGThc5TI

> Well, you said at runtime. Of course you can "create" functions at compile or programming time!

Closures and partial application are done at runtime. The values bound are dynamic. So in that sense, the functions are indeed created at runtime. I sense that you were of the impression that a closure would actually have the argument "baked" in at compile time (resulting in a different code, at a low level) instead of the argument being applied at runtime. That's not the case, unless optimizations are possible. If that was really your impression, this makes my point regarding avoiding complexity and that closures do not belong in a low-level language. (Look up closure conversion / lambda lifting)


I'm really not sure what you mean by "modularity". There are a lot of languages with much more readable and composable type declarations than C. For example, in OCaml:

  let x : (int list ref, string) result option =
    let x1 : int = 0 in
    let x2 : int list = [ x1 ] in
    let x3 : int list ref = ref x2 in
    let x4 : (int list ref, string) result = Ok x3 in
    Some x4
The declaration of List.map:

  val map : ('a -> 'b) -> 'a list -> 'b list


I've got no issue with C syntax either, but to be clear, syntax != semantics.

To cut to the chase:

* syntax = structure

* semantics = meaning

C syntax would include things like curly braces and semicolons, whereas C semantics would include things like the functionality of the reserved keywords.

This SO answer gives a more detailed explanation:

https://stackoverflow.com/a/17931183/1863924


The nice thing about C is that it's a great cross-platform assembly language. I wouldn't call its syntax good or bad.


> The nice thing about C is that it's a great cross-platform assembly language. I wouldn't call its syntax good or bad.

Is it though? It's definitely available on a huge number of platforms, but are the implementations compatible?

And even if they are, there is so much undefined behavior that taking advantage of the cross-platform nature is not nearly as easy as it should be.


Exactly. If C hadn't been invented then someone would have invented it later under a different name.

Because it's obvious people need a minimalistic portable assembler.


Before C was invented, companies were already writing OSes in high-level languages, but yeah, thanks to its victory it now gets all the credit.

History is re-written by winners as usual.


I wasn't thinking in terms of winners.

But what were those other OSs and languages? Sounds interesting!


You can start here,

https://en.wikipedia.org/wiki/System_programming_language

https://en.wikipedia.org/wiki/Category:Systems_programming_l...

Some examples, off the top of my head.

- Burroughs, now sold as Unisys ClearPath, used ESPOL, later replaced by NEWP, which already had the concept of UNSAFE code blocks;

- IBM used PL/8 for their RISC research, before switching to C when they decided to go commercial, selling RISC hardware for UNIX workstations

- VAX/VMS used Bliss

- Xerox PARC started its research in BCPL and eventually moved to Mesa (later upgraded to Mesa/Cedar); these languages were the inspiration for Wirth's Modula-2 and Oberon languages

- OS/400, nowadays known as IBM i, was developed in PL/S; newer code started being written in C++. It was probably the first OS to use a kernel-level JIT with a portable bytecode for its executables.

The Bitsavers and Internet Archive web sites are full of scanned papers and manuals from these and other systems.


Most languages become popular because they have a reputation of necessity. Which is to say, they are popular due to marketing, not due to quality. (As most things are.)


Isn't C just following the syntax of ALGOL? Or are we referring to the parts specific to C?


C is a simpler Algol, yes. It messed up the dangling-else problem, and lost nested procedures, among other things.


It also lost call-by-name parameters, which should be considered a good thing.


Sort of related -- I'm sure this made it to HN when it was new -- is https://beebo.org/haycorn/2015-04-20_tabs-and-makefiles.html.


I'm really glad software development has largely moved away from the terse names and symbols that used to be so common. I mean patsubst? What a terrible function name! At the very least I would have added the 'h' in path.


> I mean patsubst? What a terrible function name! At the very least I would have added the 'h' in path.

The "pat" in "patsubst" means pattern, not path, so adding an h would be incorrect. (https://www.gnu.org/software/make/manual/html_node/Text-Func...)


Why would you add an 'h' to 'pattern'?


I think that just supports my argument, really. I've never really used makefiles. The blog post talked about using patsubst to change a path. The name of the function essentially gives no clue at all what it does.


"the blog post talked"

Isn't that the problem with using blog posts as educational sources?

(I'm happy I learned my whole toolkit long before blogs became a thing. It's hard to open and read a physics book when someone on YT is talking funny about its second chapter.)


I smell an 8-character limitation somewhere :)


>and a different dependency-based-programming build tool could replace it...

Make one that's generic and provides significant enough improvements that people care.


> Significant tabs

The .RECIPEPREFIX option has been available for 7 years. Stop whining about non-issues.
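
For reference, a minimal sketch (GNU Make 3.82 and later):

    # Use '>' instead of a leading tab in recipes
    .RECIPEPREFIX = >
    hello:
    > echo no tabs required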
