
Why Not Make? - entelechy
https://hackernoon.com/why-not-make-db142ccb2081
======
gibba999
I don't think Make is ideal, but:

1) It's simple, standard, everywhere, and everyone knows it

2) I haven't seen anything better. Most of the other systems are disastrously
complex.

I certainly would never, in a million years, use an untested, untrusted,
proprietary build system from a little startup that might go away tomorrow.

I do think there is room for Make 2.0. Not Make reinvented; just cleaned up,
better for distributed builds, etc.

~~~
AndrewGaspar
Make may be simple, but most real world uses of Make I've seen are NOT simple.
The primary benefit of CMake is it imposes a modicum more consistency on the
build system. I'll admit CMake still has this problem, though, due to how
flexible it is.

------
moreentropy
I just had to deal with a large, handwritten Makefile when trying to package
and cross compile something for buildroot.

Turns out it was easier to learn to use cmake and rewrite the relevant parts
of the Makefile than fix everything in the Makefile that didn't work in a
cross compilation scenario.

~~~
orev
If I had a nickel for every time a developer said “it was just easier to
rewrite...” I’d be a rich man. It always seems easier to start from a clean
slate and do a rewrite than to try to understand someone else’s code, but in
the end you just wind up with a different set of hacks and complexities.

------
dahart
I feel like every single point against make in this article is considered a
feature of make by many other people. But I guess more than why not, I’d like
to hear more about what the alternatives are. I see this is an ad, and it
implies but doesn’t say explicitly whether this product fixes all these
issues, or what it actually does at all. People have been trying to redo make
for decades, and despite all its warts (and it does have warts) so far it
seems like nobody has been able to supplant it.

No sandboxing? This complaint amounts to “it isn’t pure”. There are reasons to
allow reading a file in a build rule that isn’t a dependency. Allowing temp
files in a multi-step rule without complicating or obfuscating the dependency
graph is one of them. You can do anything you want in a build rule. How often
is this flexibility actually a problem? If someone reads from a file in a rule
without declaring it to make as a dependency, they already know what they'll
get: no rebuild when that file changes.
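The temp-file case can be sketched in a few lines (a minimal illustration; the file names are made up):

```make
# report.txt depends only on data.csv; the sorted intermediate is a
# private temp file, so the dependency graph stays a single edge.
report.txt: data.csv
	sort data.csv > $@.tmp
	uniq -c $@.tmp > $@
	rm -f $@.tmp
```

Make neither knows nor cares about report.txt.tmp; declaring it as a node would only obfuscate the graph.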

Unportable caches is asking make to solve a problem that make wasn’t built to
solve. And one could argue this isn’t technically a build system problem at
all. Wanting network caching for multiple hardware configurations is
reasonable, but isn’t what make is for.

No language abstraction is something I like about make. I use make for
automating analytics and image resizing and all kinds of things that aren’t
C++ specific. Adding language abstractions would bloat and complicate make.

Timestamps are faster to compute than hashes. Occasionally this matters a lot.
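The tradeoff is easy to see in miniature (a sketch, not a benchmark; the file is a throwaway): an up-to-date check via timestamps is a single `stat()` call that never touches the file's contents, while a hash check has to read every byte.

```python
import hashlib
import os
import tempfile

# A 10 MB scratch file to compare the two up-to-date checks against.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(10 * 1024 * 1024))
    path = f.name

# Timestamp check: one stat() call, reads only the inode metadata.
mtime = os.stat(path).st_mtime

# Hash check: must read all 10 MB to compute the digest.
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

os.unlink(path)
```

Hashing a small tree is cheap in absolute terms, but over hundreds of thousands of files the difference between "stat everything" and "read everything" is exactly where it occasionally matters a lot.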

~~~
loa-in-backup
Sandboxing issue can be solved by extending cmake with a directive specifying
which paths are accessible up front.

I might be willing to do a PR soon

~~~
dahart
Yes. This isn’t quite what the author was talking about though. They used the
term “sandboxing” to mean that dependency rules in the build system can act on
things that they don’t depend on. You’re talking about a higher level
sandboxed view of the file system.

The kind of sandboxing you’re talking about can be mostly solved for any build
system with some wrapper shell scripts, or an environment shell script. That’s
how most people are doing it.

At my work, there’s a crazy sandboxing system for Linux written as a C program
that actually creates an environment with a modified file system, and access
to certain paths is strictly enforced... no process launched from inside this
environment can access anything but the whitelisted paths, by any means. This
is a very heavy hammer, and I suspect over the top and unnecessary. OTOH, this
definitely prevents mistakes, and my company is large. I don’t know the
history here, but the existence of a system like this leads me to assume there
must have been some large and real pains that led to the development of a
heavy-handed sandbox.

------
russtrotter
While make, for me, fell out of day-to-day use once C/C++ projects waned, it's
still a great and powerful tool. I really appreciate its consistency with the
Unix philosophy: it provides the essential feature of building targets through
dependencies but leaves so much of the heavy lifting to good ole /bin/sh.

------
zelly
Make is trash, but some in house tool made 2 weeks ago is guaranteed to be a
worse choice. Just generate Makefiles with CMake like everyone else.

~~~
dfgdghdf
Some alternatives to consider:
[https://buckbuild.com/](https://buckbuild.com/)
[https://bazel.build/](https://bazel.build/)
[https://please.build/](https://please.build/)

These have been deployed for _years_ in major corporations.

~~~
jcelerier
> These have been deployed for years in major corporations.

CMake is the 6th-fastest-growing language on GitHub:
[https://octoverse.github.com/projects.html](https://octoverse.github.com/projects.html)

Boost is switching to it, Qt may be switching to it for Qt 6, and it's the
most-used build tool in the JetBrains survey
([https://www.jetbrains.com/research/devecosystem-2018/cpp/](https://www.jetbrains.com/research/devecosystem-2018/cpp/))...

For the sake of everything holy, this is _the_ chance for the C++ community to
standardize on a build tool. Once this is done and settled, it will be much
easier to improve CMake incrementally, since there will automatically be more
people interested in improving it, and it will also be much easier to switch
from CMake to next-gen-buildsystem-which-solves-all-cmake-problems in 10 years
than it is in the current fragmented situation.

------
xyzzy_plugh
TFA makes a bunch of claims about the functionality of make being bad or
dangerous but doesn't really provide any backing scenarios to demonstrate why.
E.g. where timestamps can bite you, how you can miss compiler flag changes,
etc.

Without this, it reads as FUD.

~~~
zeroxfe
Not sure if you read the whole thing, but it seemed to me like it made pretty
good points.

From the article: "In determining when to rebuild, Make uses the timestamps of
build inputs rather than their hashes. This results in many unnecessary
rebuilds when a file has been touched, but has not actually changed. If this
happens deep in the dependency tree, then the rebuild will take a long time!"

That's kinda clear to me -- why would this read as FUD?
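The quoted behavior is easy to demonstrate (a sketch; the "source file" is a throwaway temp file): a `touch` moves the mtime, which is make's rebuild signal, while the content hash that a modern tool would check is unchanged.

```python
import hashlib
import os
import tempfile

def fingerprint(path):
    """(mtime_ns, content hash): the two signals a build tool could use."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return os.stat(path).st_mtime_ns, digest

with tempfile.NamedTemporaryFile(delete=False, suffix=".c") as f:
    f.write(b"int main(void) { return 0; }\n")
    path = f.name

before_mtime, before_hash = fingerprint(path)

# Simulate `touch`: bump the mtime by one second, content untouched.
st = os.stat(path)
os.utime(path, ns=(st.st_atime_ns, st.st_mtime_ns + 10**9))

after_mtime, after_hash = fingerprint(path)
os.unlink(path)

# make's view: the mtime moved, so everything downstream is out of date.
# A hash-based tool's view: the digest is identical, nothing to do.
```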

~~~
rusk
Make compares timestamp of object-file with source-file amiright? This is nice
and cheap and stateless. Do modern build-systems maintain a separate tree of
hashes? I haven't seen this in my day-to-day but I don't tend to work on the
bleeding edge ... I'd have thought the cost of calculating hashes for trivial
source-files would be maybe only an order of magnitude cheaper than compiling
one? Over a large source-tree I'd expect these costs to mount - I think I'd
prefer to take my chance with timestamps!

~~~
dfgdghdf
Modern build systems (Buck, Bazel, Pants, etc) will compute a tree of hashes
and manage it for you. They will also do this in the background (e.g. using
Watchman). This is how Google, Facebook, Uber, Amazon etc. build their code,
so the approach scales well.
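The core of the idea fits in a few lines (a toy sketch of the hash-manifest approach, not how any of those tools actually implement it):

```python
import hashlib

def file_hash(path):
    """Content digest of one file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(paths):
    """Manifest mapping each source file to its content hash."""
    return {p: file_hash(p) for p in paths}

def dirty(paths, manifest):
    """Files whose *content* changed since the last snapshot."""
    return [p for p in paths if manifest.get(p) != file_hash(p)]
```

A touched-but-unchanged file never shows up in `dirty()`, which is the whole point; the real systems just keep the manifest warm in a daemon instead of recomputing it on every invocation.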

~~~
rusk
So I guess this is for building at scale?

In my experience this kind of incremental build is a matter of developer
convenience so that small changes can be made and checked quickly, but builds
for test and release are always built clean.

I guess the codebases we're talking about here are at such a scale that the
incremental approach is actually used for releases too?

It still seems to me as though this is something that should be solved
architecturally, though, with an appropriate composition of modules ...

~~~
tom_
It's for your day-to-day builds, so when you switch between build types while
working you don't have to suffer a full rebuild.

~~~
rusk
by build-types you mean release to test vs. build for my own work? All my code
gets released off a build server, and I'd consider it bad practice for a
developer to be releasing their own code from their own workspace ...

~~~
tom_
Same build, sure, but you don't release it - you just run it on your own
computer, as opposed to the development build you might ordinarily run, to
make sure you didn't break anything.

~~~
rusk
I still can't help but feel this could be obviated by a more top-down
approach, like decomposing into modules.

Even with a well-specified dependency tree there are areas where an
incremental build can still bite you in the ass. Things like inlining and
transitive dependencies ... I think you still need to build from scratch to be
sure.

------
moring
What if the objections in the article were solved directly?

> Writing a Large, Correct Makefile is Really Hard

> No Sandboxing

Would it be possible to track dependencies on a file level, simply declaring
everything that gets read as a dependency? In the end, it is very likely that
the generated artifact is different if any of the files touched to build it
changes.

So, why not track open() syscalls made by make and its sub-processes? This
obviously has some requirements towards the kernel to allow such tracking. It
would also report helper files used by the toolchain and maybe even the code
(executables and .so files) used by the toolchain, but one may argue that they
are actually dependencies -- the artifact must be regenerated if the toolchain
gets updated.
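Python's audit hooks give a toy in-process analogue of that syscall-tracking idea (a sketch; `build_step` is a made-up stand-in for a compiler invocation):

```python
import sys

opened = []

def audit(event, args):
    # The "open" audit event fires for every file the interpreter opens,
    # analogous to tracing open() syscalls under a build.
    if event == "open" and isinstance(args[0], str):
        opened.append(args[0])

sys.addaudithook(audit)

def build_step(src):
    # A stand-in "compiler": reads the source, produces an artifact.
    with open(src) as f:
        text = f.read()
    return text.upper()
```

Running `build_step` records the source path in `opened` -- along with any helper files the interpreter itself touches, which is exactly the over-reporting (toolchain files as dependencies) described above.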

> Caches are Not Portable

First, it should be considered whether rebuilding on each node is really that bad.
Next, caches are non-portable by nature if they use different formats, or if
dependencies that were used to build the artifacts are actually different on
the various nodes. So this sets limits on cache sharing in principle -- but
this has nothing to do with make.

> No Language Abstractions are Provided

Not going to argue against that.

> Timestamps, Not Hashes

Is there anything fundamental to make that requires the use of timestamps?
Otherwise this sounds more like "nobody has bothered yet to change make to use
hashes".

------
jypepin
I've recently started to build my own static website generator (I know, I
know, there are already millions of them). I really just wanted something
simple that would allow me to generate a few haml pages together and deploy on
my DO droplet.

I thought about Make because I remembered reading a nice article about how
great it is for this kind of stuff. During my research, I also looked at Rake,
which is Make in Ruby. Since I was doing my project in Ruby, I felt it would
be nicer.

I really enjoyed using Rake; it's very nice and I will definitely re-use it in
the future. Martin Fowler has a great article[1] about Rake and how it
compares to Make.

[1]
[https://martinfowler.com/articles/rake.html](https://martinfowler.com/articles/rake.html)

------
jenscow
I haven't used make for years - but I miss the simplicity. It just does what
it's supposed to do, in plain text.

~~~
vbezhenar
GNU make has many obscure features. Don't know about other implementations.

------
gbumakefan
Make is fine: [https://agottem.com/weld](https://agottem.com/weld)

Stop inventing build systems.

~~~
rusk
_" Makefiles are inherently evil"_ [0]

While I agree with this sentiment, it's hard to know whether it's more or less
evil than munging imperative and declarative build instructions in an XML file
...

Has anyone ever thought about just replacing the tabs with something else and
leaving it at that?
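GNU Make did eventually add exactly that: since version 3.82, `.RECIPEPREFIX` lets you pick a different recipe character (a minimal sketch):

```make
.RECIPEPREFIX = >
hello:
> @echo no tabs required
```

Few projects seem to use it, presumably because a Makefile that relies on it won't build with older makes.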

My own experiences with Make show there's a bunch of other things that make it
difficult to manage and maintain beyond a certain point.

[0]
[https://ant.apache.org/manual/intro.html](https://ant.apache.org/manual/intro.html)

