I use one of the earliest Make versions, one that barely does anything beyond recognizing rules, rule actions, and simple macros. I've seen this version of Make used to build a feature-length animated film. Witnessing the versatility of that film's multitude of software and renders and composites, all built through Make, taught me that Make is the only tool you need to build anything.
I remember when MS introduced a Make with extensions, and many developers ran over and fucked up their build environments, starting them down the path to the complex, manual-intervention-required build mess most people have today.
I write my Make films by hand, still. It is that easy. And you really should have a build environment that is that easy. When you need it easy, as in some major deadline and crap is broken, you will thank yourself.
I suspect the second statement might be a typo -- accidentally using "films" in place of "files" -- but in case it's not, I wonder if you could detail the practice of "Make films" -- it seems like it might be interesting.
Awesome, tell us more! Which film was this? Or if you can't say, what year was it made? I have to imagine The Makefile Architecture is still pretty popular in VFX pipelines.
I run a SaaS that's made end credits for hundreds of feature films (including "Moonlight"). Our render pipeline uses The Makefile Architecture.
They get the things done they've been doing all along. But build metadata is very important and very hard to extract from Makefiles, especially as builds increase in complexity.
You need things like search paths to be accessible in order to get IDEs, static analyzers, software packaging systems, and other third-party tools working with your code base.
So make works great if the only things you're interested in doing are the top-level make commands you define, but it doesn't do you any favors when it comes to other things.
In particular, I write a lot of C++. I want targets for gcov, g++, gtest, dpkg, rpm, clang-format, clang-tidy, cppcheck, clang, CLion, Eclipse, ctags, and include-what-you-use. Am I supposed to maintain each of those targets on each of my repos?
It's simpler to have a standard place with all the needed source files, binaries, search paths, definitions, etc., and just wire that data up to each extra target.
The point is that you can write reusable cmake modules you can ship with your package manager or even git and a tweaked cmake search path. You can get there with make, but it's more work and a bit of a pain to replicate and maintain. In fact I often see teams with those setups put too many things in their repos to avoid build tool pain. Or, perhaps more often, they just give up on ever getting clang builds to work (for instance).
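For what it's worth, the closest plain-Make equivalent I've used is a shared rules file that every repo includes; a rough sketch, with made-up file names and variables:

# common.mk -- hypothetical shared rules, versioned and distributed separately
SOURCES ?= $(wildcard src/*.cpp)
format: ; clang-format -i $(SOURCES)
tidy: ; clang-tidy $(SOURCES) -- $(CXXFLAGS)
.PHONY: format tidy

Each repo's Makefile then just sets SOURCES and does `include common.mk`. It works, but you're on your own for distributing and versioning common.mk, which is exactly the part cmake's module search path handles for you.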
Instead, the new tools came and each one would buck any preceding trend and introduce a new way of doing things. Often poorly. All the while introducing yet another new way of executing the actual build.
Let me preface this by saying that Make is not perfect. I genuinely feel that many things could have been improved. However, I also question how many lessons learned in Make were carried forward, versus how many mistakes were simply reintroduced and hit again and again.
I wrote a book on make and I fully recognize that there are people who look at what I've written there and think I'm completely mad: https://www.nostarch.com/gnumake
Make's greatest irony is that it's a system for building files based on dependencies, yet it has no way of actually discovering those dependencies. I wrote about this for Dr. Dobb's a while back: http://www.drdobbs.com/architecture-and-design/dependency-ma...
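The usual workaround, for C at least, is to have the compiler emit the dependency information and include it back in; a minimal sketch with GNU Make and gcc:

# Sketch: gcc writes a foo.d makefile fragment next to each foo.o,
# and the fragments get included back into the build.
OBJS = foo.o bar.o
%.o: %.c ; gcc -MMD -MP -c $< -o $@
-include $(OBJS:.o=.d)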
And make has deficiencies that are a nightmare to work around and get right (e.g. spaces in filenames, recursion, ...)
If anyone else out there is a make hacker like myself they might enjoy the GNU Make Standard Library project: http://gmsl.sourceforge.net/
> And make has deficiencies that are a nightmare to work around and get right (e.g. spaces in filenames, recursion, ...)
Yes! We seem to be stuck in a local maximum with make. I wrote my own make replacement, as have many before me, and learned the folly of my ways.
Nowadays I use (and love) make despite its terrible handicaps, but I really wish there were something fundamentally similar but a) less C-focused and b) containing many fewer footguns.
Make is a good build system, but it is a shitty deployment system. Yes, you can do everything you want if you put enough effort into it, as it is a complete scripting system, something many alternatives are not.
It does not mean that it is a good idea to do so.
Recently a client made me begrudgingly try Maven. 'Yet another make system,' I thought. Yup, a pretty standard one: 'mvn install' will compile everything and will also do something make has never done for me: install dependencies.
Sure, it is doable with a makefile. With a lot of effort, multiplied by the number of platforms you target. In Maven you just add repos and dependencies to your pom file.
So yes, there are recent build systems that are worse than Make, but that does not mean Make is the perfect system with all the features we need.
It is minimalistic. It works. But it should not be your only option.
It's also possible to do some truly awful things with make. I once worked on an embedded device whose OEM-provided SDK was built around a truly horrific makefile system. There were no fewer than 80 makefiles all tied together to build an entire system image, including the building of the linux kernel, all the bintools, and a bunch of other custom stuff the OEM provided. Trying to understand, much less modify, any part of that build system was an absolute nightmare, and it is one of the few times I've actually wished they had used shell scripts instead of make for some of that stuff.
As for Maven, it was quite a pioneer in its day, but it hasn't aged well, in particular its being written in XML, which just adds needless verbosity. For a more modern take, look at Gradle, which provides everything that Maven does (and uses Maven's dependency system) but is a lot more flexible, because it's a DSL for project building instead of a declarative system like Maven.
One of the things that Daniel J. Bernstein did in his packages is merge steps two and three. There would be makefile/redo rules for detecting the presence of operating-system-specific stuff, and linking the appropriate source/header files into place.
Here's an example from ucspi-tcp that auto-configures whether the platform supports waitpid():
When I extended redo to the whole of djbwares, this system was fairly simple to convert. Here is haswaitp.h.do, which is what redo ends up running whenever something reveals a dependency on haswaitp.h:
redo-ifchange trywaitp.c compile link
if ( ./compile trywaitp.o trywaitp.c trywaitp.d && ./link trywaitp trywaitp.o ) >/dev/null 2>&1
then
    echo \#define HASWAITPID 1 > "$3"
else
    echo '/* sysdep: -waitpid */' > "$3"
fi
The BSD Ports tree provides exactly this functionality. All package dependencies are declared via standard make variables and rules, enabling recursive resolution of every package's dependencies.
For generating those configure files, Autoconf is extremely powerful (and really not that complicated once you have a basic understanding). If you use Automake, you get all the standard targets (including `make dist`, which will build your distribution archive, as is being done manually in this article) generated for you.
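As a minimal sketch (project and file names invented), a Makefile.am like this, plus a few lines of configure.ac, is all Automake needs to generate `make dist`, `make install`, and the rest of the standard targets:

# Makefile.am -- hypothetical minimal example
bin_PROGRAMS = hello
hello_SOURCES = hello.c
EXTRA_DIST = README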
I use GNU Make with npm. Some use e.g. Grunt. Etc.
> More importantly, there's also been a push towards wherever possible installing dependencies into sandboxes to try to prevent things like DLL hell, which isn't possible if you install dependencies at the OS level.
Make has been used this way for quite some time. Debian has its reproducible builds project. GNU Guix builds everything in an isolated environment, and even handles multiple versions of multiple packages---if you have five different packages that each depend on five different versions of the same library, then it'll install five different versions of the library and work just fine.
> I much rather just check out a project and run its build system to fetch all the dependencies instead of what we used to have to do with C projects
Make is just _part_ of a build system---it isn't necessarily one in itself. Some projects might use it exclusively in their build system, but others use a suite of tools. I use Autoconf (which generates configure) and Automake, which in turn generates a Makefile with all standard targets. If you check out a git repository for one of my JS projects, you run `npm install` to get the dependencies before building. Usually they're runtime dependencies, though, so they're not needed for the build. Other dependencies are detected via the configure script (e.g. I use graphviz for certain projects, and the script makes sure it's installed and supports the feature I need).
I'm curious how you would use Make to install dependencies. I'm aware of Guix, but that's probably an extreme example, most people probably aren't willing to sandbox literally their entire OS install, plus it's entirely unreasonable to require a particular OS in order to handle dependencies of your project. It shouldn't matter what OS I'm using (within reason), the project should have an automated way to install any dependencies it requires.
Here's the thing with Make, even if used only as part of a build system, it's both too complicated, and not complicated enough at the same time. The makefile format when used very sparingly isn't too bad, but once you pass needing more than 2 or 3 rules it gets unwieldy, and on any significantly sized project it will need more than 2 or 3 rules. Using autoconf and automake just proves the point, now you're using a build system to build your build system. Autoconf in particular is way too complicated, and an absolute nightmare to try to extend or customize.
Instead of using a tiny little sliver of make as essentially build-system glue to connect the actual build tools together, why not just use slightly more powerful build tools and get rid of make altogether?
> I'm aware of Guix, but that's probably an extreme example, most people probably aren't willing to sandbox literally their entire OS install, plus it's entirely unreasonable to require a particular OS in order to handle dependencies of your project. It shouldn't matter what OS I'm using (within reason), the project should have an automated way to install any dependencies it requires.
Guix is just a package manager, not an operating system. The Guix System Distribution, on the other hand, is a GNU system. You can use Guix on any GNU+Linux system; you do not need GuixSD for that.
However, big front-end applications I think are starting to become more and more complex and can benefit from this. Part of the reason is perhaps more because of the lack-of-functionality around incremental builds in the compilers themselves (thinking less/sass/minify/uglify/etc., this is where I've seen people usually fall back to using Gulp). As the application becomes larger, I think the lack of incremental build becomes more and more of a tax on the developer because the builds become longer and longer (I've worked on several projects like that).
When the article talks about dependencies, it is talking more about dependencies within the project itself (due to the modular architecture). Before I switched it over to Make, it was using a combination of preinstall, postinstall, and custom "prestart" scripts to do everything in the right order, because it wasn't as simple as using `npm link` or doing a single npm install in the root of the project. These various scripts became a total mishmash of different approaches, and it became increasingly difficult to understand and visualize what was being built when and in what order.
It also wasn't uncommon to run into issues where one project would be `npm install`'d before its dependencies were actually ready to be used (since they hadn't been compiled and packaged yet).
With that said, Make in this project doesn't install dependencies in the sense that I think you are discussing. This project is most definitely a Node.js project and uses npm for all that stuff. It's just within the project itself, there are sub-projects that are NPM packages themselves that can be built and deployed separately. That's where it began to fall down. But the project still most definitely uses "npm install", "npm start", and such. It's just the build component of issuing "npm run build" just shells out to "make -r" to build everything in the right order, in an incremental fashion, and to get it where it needs to be.
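To give a rough flavour of that (the directory and package names here are invented, not the real project's), the Makefile mostly just records which sub-package depends on which, and `make -r` works out the order and skips whatever is already up to date:

# Sketch: a stamp file per sub-package; dependents list the stamps of
# the packages they need, so make builds them in the right order.
all: web-app/.built
shared-lib/.built: $(wildcard shared-lib/src/*) ; cd shared-lib && npm install && npm run build && touch .built
web-app/.built: shared-lib/.built $(wildcard web-app/src/*) ; cd web-app && npm install && npm run build && touch .built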
There are many reasons behind this modular architecture that I never went into in the post, for good reasons. And like I said, I would be sad if lots of projects suddenly thought it was a good idea to do it :P.
All that said, over the last 10+ years, I've grown increasingly tired of new frameworks coming out every 18 months (or less) that simply re-invent the wheel, but do it in a different way. I think there are plenty of situations where Make would perfectly suffice yet people immediately pull in a code-based build tool with tons of dependencies because it's what "everybody is using now."
Just today I had to compile an older version of gimp, just a 2-year-old one. Configure. Fails. Apt-get this. Configure. Fails. Apt-get that. Oh, no, configure doesn't want obscure-package4, it wants obscure-package2, which has been removed from your distribution's repos. Arrrrrr! Truth be told, I just rebooted into Windows and downloaded a binary from an archive.
With maven, I can just tell it which release I want and it just fetches it and all its dependencies, compiles them and installs them. Full auto. If I add a lib in my project, it adds it automatically and the intellisense works, the source navigation works.
Make is years behind.
Yes, this is a fair criticism. Fortunately, many projects state their dependencies. But not all do. If I'm already on a Debian system that has another version of the package, `apt-get build-dep' is very helpful to get most of the way there. Of course this is completely outside Autoconf.
Whether it fails immediately or not depends on how the author writes `configure.ac'. Usually `AC_MSG_ERROR' is used right away, because it's convenient. Instead, some authors choose to set a flag and fail at the end.
apt-get build-dep gimp
aptitude build-depends gimp
Having said that, if you really want to manage dependencies with Make, it's still trivial to do a curl/yum/rpm install when some component is missing. I've done that in the form of a `make deps` pseudo-target, to prevent doing anything that the user doesn't expect.
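Something along these lines has worked for me (the tool and package names are just an example):

# `make deps`: only fetch what's missing, and only when explicitly asked.
deps: ; command -v cmake >/dev/null 2>&1 || sudo apt-get install -y cmake
.PHONY: deps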
Oh yeah? Which one? curl, yum, rpm, apt-get? All of them? Which revisions are you targeting? Which package names? And that only covers Linux. A lot of apps need to be deployable under Windows and OSX too.
If you start being even slightly cross-platform, you soon have your own build system.
Because you really should not deploy things with your build system. It's today's tooling that got this backward. For deployment you have much saner tools, like package systems.
Makefiles just seem good to their users because of experience. They work, but let's not pretend they're easy to work with.
It’s not hard or complex to get working initially, or maintain, which is not true of makefiles
I use Makefiles in a variety of projects:
- building Vagrant base boxes
- building environment-specific configuration for LAMP-ish CRUD apps
- testing and building a range of shell-script based packages
Does Make have some warts? Sure, everything does. I'm not claiming it's perfect.
But it's a far sight clearer - to me at least - what's going on with a Makefile.
Makefiles also don't need to be replaced by something new in 6 months because the entire community has decided that it's time for a new tool to do basically the same thing, solving 1 problem with the old tool and introducing 5 new ones.
PS. Copying files specifically is such a huge PITA that cmake implements cross-platform copy for you; just run "cmake -E copy $in $out" (it can be added as an add_custom_command in CMakeLists.txt).
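You can lean on that from a plain Makefile too, if CMake is already around; a sketch with invented paths:

# Portable copy: the same rule works on Windows and Unix because
# cmake -E does the copying instead of cp/copy.
build/config.json: config.json ; cmake -E make_directory build && cmake -E copy config.json build/config.json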
Mixing make with a modern scripting language would make this possible.
Make is simultaneously powerful enough to seem pretty good for use, but not powerful enough to do anything outside its very narrow problem domain without really ugly hacks.
It's designed by Dan Bernstein of crypto fame and implemented by Avery Pennarun now at Google.
I will contribute $100 to any Rust leader who wants to start coding "do" in Rust.
If you're curious, take a look at Do. It's like all of the above, yet flipped on its head. The value of Do is in the design, which is essentially the opposite of Make. Do leverages the composability of items, files, scripts, artifacts, and typical Unix pipes.
I would be very interested in your opinions about Do, and if you want the $100 to jumpstart working on Do in Rust, let me know how to donate to you.
How do you use redo for go projects?
https://bitbucket.org/rjp/can-x-win/src is a good example - basically, apart from `all.do` containing the targets, I have a `default.do` which looks for a matching `*.od` which has the DEPS and then passes those to `go build`.
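Roughly, such a default.do might look like the following (this is my sketch from the description above, not the actual file):

# default.do (sketch): build target $1 from the Go sources listed in $1.od
redo-ifchange "$1.od"
. ./"$1.od"              # expected to set DEPS="main.go helpers.go ..."
redo-ifchange $DEPS
go build -o "$3" $DEPS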
Not some "hype of the day" build system of course as that by definition makes it a short lived maintenance burden rather than universal.
While make is universal, the situation with makefiles now is so bad that even if you write pure C90 code with zero dependencies (which should work everywhere), users will still find reasons why you should add all kinds of platform-dependent stuff, or things for a particular distro, to your makefile...
Most build tools are stuck in the 70s-80s, when a single library weighing hundreds of kilobytes was a big thing. Saving a few kilobytes versus spending hours of a developer's time hunting down old versions of libraries is a no-brainer, especially when you take security into account. I've seen .NET developers just give up in frustration and download DLL files from those dodgy download sites and put them into production distributions.
I personally loathe make. I've never had a good experience when using it.
Like not handling file paths that have spaces?  That's one hell of an edge case.
For what it's worth, I am pretty sure one of my first BSD systems forced me into using snakecase, so that would explain why I never hit that limitation. That said, with make being designed originally for C, C being also known for snakecase, and early Linux also being known for discouraging spaces in names, I can see how this happened in the first place. What I don't understand is the lack of an implemented fix.
So it's agreed that the build pipeline of npm calling some packer does replace any-other-language calling any-other-packer, but what is the point in replacing the role of `npm build` with a makefile that just calls what npm would have called anyway?
replacing make with npm with make-calling-npm?
You're right, replacing "npm build" with "make build" doesn't win you anything. But that's only true on the small scale. In a service world, there are lots of projects, each with different requirements. Some will have front-end, some won't. Others will require a totally different build process. There will probably be different languages involved.
Using make, you can standardise a lot of this. If you set up a coding standard that after you clone a repo, "make dep" should grab anything the project requires, then developers don't initially need to know whether that's calling out to npm or composer or pip or whatever - it's just "working".
This is a much bigger win when you have a number of projects. The process is standardised, so developers know it's only a two or three step build (or whatever you've setup). They know how to do it, and when they need to look under the covers, they can see how it works, and this knowledge is transferable from one project to another because Make is universal.
I don't advocate doing big Makefiles - that's why I call this approach ATOM; only use a touch of Make. But used judiciously, it smooths out projects very nicely. Not everyone needs that, though.
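As a made-up example of what I mean by a touch of Make, the whole per-project Makefile can be a handful of thin, standard entry points:

# Every repo exposes the same targets; what they delegate to varies.
dep: ; npm install
build: ; npm run build
test: ; npm test
.PHONY: dep build test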
So the JVM folks created a crappy Makefile that accomplished some of the basic stuff to appease the Pythonistas. Meanwhile the JVM folks continued to use Gradle directly and the Python folks who had to work on the JVM projects got a shittier experience through the Makefile. As new tasks were added to build process the JVM folks would just write custom Gradle tasks to accomplish the job which would drive the Python folks nuts because they wanted shell scripts or make targets invoking shell scripts or whatever.
My only takeaway from this experience was that developer tooling sucks and there is no one-size-fits-all approach to build systems. Try to stick to a convention within an ecosystem (e.g. all JVM projects should use one of Gradle/Maven/Sbt), but don't try to get cute with making stuff more common than it has to be.
Isn't that the problem? Isn't Make more general than Gradle? If the project was primarily JVM, why couldn't the Pythonistas be forced to use whatever build system was mandated?
e.g. a pip target that builds a Python dep folder. You can do the same for Java with Apache Ivy.
Is Gradle not a make like system with a lot of JVM-specific functionality built in?
The Makefile then acts as documentation of exactly what to do. All of the setup stuff that people often put in a README is right there.
All of the complex stuff gets put in other tools or separate scripts. But there's probably going to be a make target that tells people which tool and where to look if they need to change it, or that'll let them run it without having to know the details.
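For instance (the names here are purely illustrative), the targets can stay thin and just point at the real tooling:

# Thin targets that double as documentation of where the real work lives.
deploy: ; ./scripts/deploy.sh      # see scripts/deploy.sh for the details
docs: ; npx typedoc src/           # generated API docs land in ./docs
.PHONY: deploy docs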
In my experience you will end up with a dozen coding "standards" and project structures. Especially in C and C++, where header dependencies have to be generated by the compiler, even the most basic makefile will take you at least an hour to create, and that's if you already have experience writing makefiles.
The point is to have a known starting place, so that when you pull down a new project you don't have to spend ages reading about how it works.
Even if you can't grab the deps automatically, doing 'make build' and it saying "Hey, you don't have cmake and a bunch of other things you need, but go look at http://whatever.. to set yourself up" is a much better experience.
I can do huge refactors and it cleans up files without any additional work from me. The filesystem watch automatically builds things so I never have to run it directly. I use it on my production server when deploying since it's 100% accurate at incrementally moving between different project states.
There are definitely some disadvantages, you have to work around its syntax and limitations. For complicated build functions I've written the config in Lua and it works fine.
The project I tried to use it on was a before/after comparison of the results of some development work to an energy model. So, a couple of my make targets cloned old and new versions of the repo.
Tup doesn't like you to create directories, and wants to track every file that comes into existence and check that it is either a) a declared output or b) deleted by the end of the job. It really didn't enjoy trying to process a whole Git clone operation.
Although I suspect the main complaint (slow, due to checking directories where nothing has changed) has mostly been swept under the rug by increased CPU speeds, some of the other items (managing dependencies, ordering, ...) are still worth considering today.
I may live in a bubble, but my last use of it was… maybe 2 hours ago (to build someone else's project) and yesterday (to build one of my own projects).
It’s not been adopted (with good reason) in many newer language communities, so depending on your tooling and platform you may see it less.
Makefiles don't scale, and they're hard to learn because of the awkward syntax.
GNUstep's Makefiles are actually really nice.
Tab echo it certainly is not greater than side_effect.txt.
SOURCES = index.pug layout.pug style.styl main.coffee privacy.md
OBJECTS = index.html style.css main.js privacy.html
all: $(OBJECTS)
watch: ; while true; do inotifywait -qq $(SOURCES); $(MAKE) all; done
%.html: %.pug ; pug -P $^
%.css: %.styl ; stylus $^
%.js: %.coffee ; coffee -c $^
%.html: %.md ; markdown $^ > $@
clean: ; rm -rfv $(OBJECTS)
But most of us want to require() modules also in the browser, so you'll anyway need browserify or other bundler, which already have cross-platform watch mode & plugin ecosystem, and are expected to be run continously, and suddenly the Makefile feels rather redundant, especially with npm scripting.
In the appropriate places.
A Makefile consists of separate commands and is heavily file-based, which makes it rather slow and doesn't allow keeping state (in memory) between re-runs. Traditionally this hasn't been a bottleneck, because compiling C code was a relatively slow operation, but modern web development tools prefer to work with in-memory streams instead of invoking executables for tiny files on disk.
I'm not saying you cannot do things with make, but in NodeJS/web ecosystem it just doesn't feel as natural/flexible as the "native" NodeJS-based toolchain.
Great for interoperability, but sometimes a more tailored solution is worth it.
while ! inotifywait -e modify $(SRC) ; do time -p make all; done
or just use tup
pip install when-changed
when-changed *.tex -c "make all && echo 'done'"
Until your build chain is f----- by the tab VS space issue of makefiles. Then abandon makefiles again, for good.
* Written in JS and installable via npm (i.e. runs on Windows, Linux and the other brand)
* 'makefiles' are just JS code.
* Supports using Shelljs for the command scripting parts and being cross platform.
* Supports calling tools in "node_modules/.bin/"
* Supports parallel builds for big modules projects.
* No plugins, promises, streams, async or any other nonsense like most other JS/web related build tools or "task runners".