
GNU Make 4.0 Released - jgrahamc
http://lists.gnu.org/archive/html/make-w32/2013-10/msg00021.html
======
beagle3
Congrats to the GNU Make team.

I still use Make occasionally, but redo
([https://github.com/apenwarr/redo](https://github.com/apenwarr/redo)) just
fits my mind better - it's simple, consistent, doesn't require keeping another
language with its dark corners (it's just shell scripts), and works well.

It also solves the bootstrap problem - there's a short script called "do",
which always rebuilds everything, all 177 lines of non-minified bash script.
If you need to distribute anything, you should distribute it with "do" inside -
and guarantee that your users can build without relying on an installed
package (or a specific version of GNU Make).
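
For the curious, a redo rule is just a shell script named after its target. A
hypothetical default.o.do for compiling C files might look like this (the
$1/$2/$3 convention is redo's: target name, target without extension, and a
temporary output file):

```sh
# default.o.do - hypothetical redo rule for building any .o from a .c
# $1 = target, $2 = target without its extension, $3 = temp file that
# redo atomically renames to $1 on success
redo-ifchange "$2.c"      # re-run this rule whenever the source changes
gcc -c -o "$3" "$2.c"
```

Then "redo foo.o" builds foo.o, and the redo-ifchange call records the
dependency so later runs rebuild only what changed.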

p.s: OS X, Linux and BSD are supported. Supposedly Windows works through
Cygwin or busybox - but I've never tried.

~~~
_ak
The attitude "it's just a shell script" can be dangerous when you have software
that is supposed to be built on multiple platforms. You can never be 100%
sure what your /bin/sh is, be it bash, ash, dash, ksh, zsh or even an ancient
Bourne shell.

~~~
mixmastamyk
According to the redo README, it takes care of that problem.
Alternatively, if a specific shell is required, it can be specified with #!...

~~~
LukeShu
To explain why you are being downvoted:

It doesn't take care of the problem, it mitigates it. It tries to find the
most POSIX-like shell. You still don't know which shell that will be. And
then, even if you did, you are still relying on external programs, which may
all work differently than the ones on your box. Writing portable shell scripts
is _hard_.

------
JoshTriplett
Meanwhile, many users are still stuck on 3.81, because 3.82 introduced an
intentionally backward-incompatible change to the handling of pattern rules,
which breaks real makefiles such as those in the Linux kernel:
[https://savannah.gnu.org/bugs/index.php?33034](https://savannah.gnu.org/bugs/index.php?33034)

This bug is the reason why make 3.82, released in 2011, still sits in Debian
experimental and hasn't yet graduated to unstable where it might form part of
a release. Make 4.0 seems likely to suffer the same fate.

~~~
ajross
The kernel patched around that, so kernel developers aren't actually
stuck. It's true that older kernels won't build with current make, though. I
think I remember noticing that Fedora carries a patch that fixes this issue,
but am too lazy to check. It's possible other distros do too.

FWIW: Other issues with 3.82 broke things in Android and openembedded too. It
wasn't their finest release.

~~~
lambda
When will developers start to understand that backwards compatibility is one
of the most important goals of any software system? Randomly introducing
something that is backwards incompatible in a minor point release is simply
unacceptable.

A huge amount of complexity in using and deploying software comes about
because of very narrow version requirements, where due to combinations of
backwards incompatibility and necessary new features, there are very narrow
windows of software which is actually usable. Add in dozens of different
packages, and you are frequently left with a situation in which there is no
single set that will all interoperate.

The Linux kernel has a very reasonable attitude: you don't break userspace.
You just don't do it. I wish more tools would adopt this attitude. You just
don't break your API, ever. No backwards-incompatible revisions, ever. If it's
backwards incompatible, it's a new piece of software, not an update.

I realize that this is fairly idealistic, but having experienced so much
breakage with non-backwards compatible software, and had so much better
experience any time people make an actual honest effort to preserve backwards
compatibility, I really hope that more developers start to consider backwards
compatibility one of their primary goals, only to be broken if absolutely
necessary, and only with a major version change or entire renaming of the
package.

~~~
rwmj
Actually GNU make is not a good target. Historically make has been extremely
careful about backwards compatibility. The "bug" here is that the Makefiles
were relying on an undocumented feature.

 _In general_ , yes, it'd be good if developers paid more attention to backwards
compatibility.

~~~
lambda
Sure, they were relying on an undocumented feature, but it's not like the GNU
Make developers didn't know about this. They explicitly called out the
backwards incompatible change in the release note; they knew they were
breaking compatibility.

My point is that even undocumented features are an important part of backwards
compatibility. If there is software out there relying on it, breaking it and
saying "well, they were relying on an undocumented feature" doesn't actually
help your users at all. Is your goal to ensure that developers do everything
the way you tell them to all the time, and punish them if they don't? Or is
your goal to provide software that your users can use to do their jobs?

~~~
Yaa101
I disagree; when using undocumented features, the burden is fully on the
developer using them.

It's how the world works: when you use undocumented features in any aspect
of life (e.g. not following the rules), it's great when you get away with
it, but don't expect the world to bend its rules to accommodate you. That
only happens when a majority goes down that path, and often not even then.

~~~
jbooth
No, how the world works is that you satisfy your users or your users go
someplace else.

Microsoft gets this. Linus gets this. FSF and GNU, while I love their ideals,
are apparently a bit too ivory tower to actually get it on things like this.

------
cjensen
Back at the dawn of time, Borland C wrote dependency info into its object
files, and Borland Make could read the object files to get the info. That's so
much simpler than makedepend or gcc -M that it's sad.

------
Shish2k
> New feature: GNU Guile integration

Because if there's one thing the GNU software-building toolchain needs, it's
more languages! How many are we up to now?

Grouping output by target for parallel builds sounds very useful though \o/

~~~
Kototama
I think it's great. In the future all GNU tools will probably be extensible
with Scheme.

~~~
fosap
Can't wait for EMACS with scheme support.

~~~
dbaupp
[http://www.emacswiki.org/emacs/GuileEmacs](http://www.emacswiki.org/emacs/GuileEmacs)

------
undoware
Finally! An answer to all the clamour for a Guile-based extension language to
an increasingly nonstandard implementation of an increasingly fragmented
'POSIX' 'standard'.

~~~
ajross
I don't see that the sarcasm is warranted. GNU make, really, _is_ the relevant
standard. It's what virtually everyone doing serious work (as opposed to e.g.
emitting "makefiles" from some other tool) with make uses. GNU Make is simply
more featureful and more performant than the other choices.

One of the features it doesn't have, though, is a first-class language for
doing non-trivial extensions. Some systems (the kernel Makefiles and Android's
build system are really good examples) have stretched make's built-in
environment past its limit already and might have benefited from something
like this.

------
philjackson
I'm being incredibly unimaginative today, but what's a good use-case for
extending Make via scheme?

~~~
vog
Some years ago I designed a Makefile-based build system [1] that uses many GNU
Make features. If you write Makefiles on that level, you quickly notice that
GNU Make's internal structure resembles Lisp quite a lot [2], e.g. you write
$(filter xx,yy) which is very close to (filter xx yy). Also, you have other
Lisp-like stuff like quoting, eval, and so on. Sometimes GNU Make really felt
almost like Lisp, but with a slightly more cumbersome syntax and evaluation
model.
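
The resemblance is easy to see side by side; a small sketch (the variable
names here are made up):

```make
SRCS   := foo.c bar.c baz.h
CFILES := $(filter %.c,$(SRCS))           # ~ (filter pred srcs)
OBJS   := $(patsubst %.c,%.o,$(CFILES))   # ~ (map rename cfiles)
$(eval OBJDIR := build)                   # even eval is there
```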

So I often asked myself why they don't use Lisp directly. Now they almost do,
which I find very fitting.

[1] [http://mxe.cc/](http://mxe.cc/)

[2]
[https://github.com/mxe/mxe/blob/master/Makefile](https://github.com/mxe/mxe/blob/master/Makefile)

~~~
mzs
That's funny: I see what you are talking about, and I had a moment like that
with make too, but in my head one evening I likened it to Prolog instead!

~~~
mturmon
I think Prolog is a much better analogy than Lisp. Like Prolog, the Makefile
declares relationships, which go into a rule database, and then when you need
to make something, the runtime scans the rules looking for what matches. This
will imply other dependencies, which are satisfied recursively.

~~~
opk
There are two separate stages with GNU make. The first is very lisp like with
the functions like $(filter) and you use it to create all the rules for the
second, more prolog like, stage. A simple Makefile would do nothing for the
first stage because it'd all be static text instead of function calls.
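
A sketch of the two stages (the names here are invented for illustration):

```make
# Stage 1, Lisp-like: these function calls run when the Makefile is
# read, and their expanded output *becomes* rules for stage 2.
MODULES := alpha beta

define ARCHIVE_RULE
$(1).tar: $(1)/
	tar cf $$@ $(1)
endef
$(foreach m,$(MODULES),$(eval $(call ARCHIVE_RULE,$(m))))

# Stage 2, Prolog-like: "make alpha.tar" now matches a generated rule
# in the database and recursively satisfies its prerequisites.
```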

~~~
vog
Not only that. In the above-mentioned MXE project, the $(...) functions also
play a vital role in the "second stage": they create a huge, automatic set of
rules and actions, based on a higher-level description that is given for each
package via a bunch of Make variables (some of which are multiline and contain
actual commands for that package).

------
Roboprog
It would be nice if gmake had some predefined, portable, macros for working
with Java, so XML clutter like Ant and Maven could be shown the door.

I so prefer reading makefiles to a sea of XML gibberish.

    
    
        mytarget: source1 source2
    
            do-something -o mytarget source1 source2
    
    

so much easier to read than Ant tasks. (and, yes, there are ways to get the
file lists into symbols without explicitly enumerating them)

Ant, and by extension, Maven, was a huge step backward. Sure, have a way to
make portable commands, BUT, was XML really needed??? And, does every task
have to be built such that the task checks dependency timestamps, rather than
the framework that calls the task???

~~~
lsd5you
Isn't what you're saying that if Make were better, then it would be better
than Ant?

Ant and Maven both have plenty of issues, but being dissimilar from make is
not one of them. At the end of the day, 'file based' tools like Make are not
really suitable for Java, which is more concerned with source directories -
rightly so in my opinion. There is no program that cannot be written under a
consistent source layout.

There are ways to enumerate files, but nothing good (in my experience) and by
the time you are using them the make files are no longer easy to read (never
mind debug).

Make also has this Faustian pact thing going on. Access to the shell and all
of its power and familiarity, but at the (significant) expense of portability.

------
Aardwolf
Does make have any way to detect that a .h file changed, so that all C/C++
source files that directly or indirectly include it are recompiled, but
others aren't?

~~~
majika
Another commenter already pointed out the `-M` flag on GCC (and Clang), but I
thought I'd share the rule I use to provide automatic dependencies. It's a bit
simpler than the one suggested in the GNU Make Manual:

[https://github.com/mcinglis/c-style#use-gccs-and-clangs--m-to-automatically-generate-object-file-dependencies](https://github.com/mcinglis/c-style#use-gccs-and-clangs--m-to-automatically-generate-object-file-dependencies)

~~~
jedbrown
I suggest just building with "-MMD -MP", using "-MT" if for some reason you
want the .d files to reside elsewhere. The "-MP" option prevents build errors
when a file is removed. No need for crazy redirects or an actual rule to
create the .d files.
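
A minimal sketch of that approach (the file and target names here are
assumptions):

```make
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

CFLAGS += -MMD -MP        # each compile drops a matching .d file as a
                          # side effect; -MP adds phony targets so a
                          # deleted header doesn't break the build

prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS)

-include $(OBJS:.o=.d)    # missing on the first build, hence the '-'
```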

~~~
majika
`-MP` makes sense, but I don't see how Make could work out when to regenerate
the dependency files themselves without the `-MT` value as in my rule, or the
`sed` expression given in the GNU Make Manual. Maybe I'm missing something
about `-MMD` - I don't understand everything the GCC manpage says about it.

Generally it's my style to prefer I/O redirection in the shell to programs
taking output parameters and managing their own files. Thus, using `> $@`
rather than `-MF` or `-MD`.

~~~
opk
You seem to be invoking the compiler separately for the sole purpose of
building .d files (with a %.d: %.c rule). Don't do it that way. With the -MMD
option, you create the .d file as a side-effect of the real compilation. Any
change that would require the .d file to be rebuilt has to involve a change to
the old dependencies as listed in the old .d file. So the source file will be
recompiled. In order for a new dependency on a new header file to be added, an
existing and remaining dependency needs to change to add '#include ...'. So
the .c file will get recompiled because that file has changed and then you
have a new .d file.

For C/C++, I have to subvert GNU make's attempts to rebuild .d files by using
$(wildcard) on them and inserting a /./ in the middle of the path.

Note that there are other cases where I do have a rule for building .d files
and have them rebuilt, in particular for Fortran 90 modules. This occurs
wherever you need to build the .d files first for the initial build to happen
in the correct order. This is only an issue for C/C++ if you are invoking a
program to generate C sources (e.g. rpcgen). In practice, that is often best
handled with a few dependencies listed explicitly in your Makefile.

------
the_mitsuhiko
Considering that one of the most popular operating systems among developers
has an 8 year old version of make, I wonder how many people will start using
new features.

~~~
k-mcgrady
What OS are you referring to?

~~~
jeremiep
That would be OS X. The make that comes with Xcode is rather old, although
it's easy to install a newer one using Homebrew.

------
madmax96
A few of the features mentioned are ones I thought make already had. At
least now I might actually start using them.

------
Siecje
When would you use make over a bash script?

~~~
michael_h
Well, for starters: make does dependency resolution. If you change a single
file and recompile, it will figure out what needs to be rebuilt instead of
rebuilding _everything_. It will also figure out what can be built in parallel
if you give it the -j option. It also has implicit build rules so you don't
have to write a line in a bash script for every single file.

EDIT: To answer your question more directly: when you have a large project, or
a project that needs to build on many platforms. Probably a whole load of
other situations as well, but those are the two that stick out.
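
As a tiny illustration of all three points (dependency tracking, -j
parallelism, implicit rules), assuming a two-file C project:

```make
app: main.o util.o          # relink only when an object changed
	$(CC) -o $@ main.o util.o

# No compile rules needed: make's built-in implicit rule turns each
# .c into a .o, and `make -j2` builds main.o and util.o in parallel.
# Touch util.c and only util.o (and then app) gets rebuilt.
```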

