Here's a wonderfully useful open source project: Google gperftools. It includes TCMalloc amongst other things. The Makefile.in is 6,390 lines long. The configure script is 20,767 lines long. The libtool script is 10,247 lines long. That's fucking insane.
Compiling open source software is such a pain in the ass, particularly if you're trying to use it in a cross-platform Windows + OS X + Linux environment. One of my favorite things about iOS middleware is that one of their #1 goals is to make it as easy as possible to integrate into your project. Usually it's as simple as adding a lib, including a header, and calling one single line of code to initialize the system (in the case of crash report generation and gathering).
I work on a professional project. We have our own content and code build pipeline. It supports many platforms. I don't want anyone's god damn make bullshit. I want to drop some source code and library dependencies into my source tree that already exists and click my build button that already exists.
Autoconf... Well, I can't disagree. It's a hack built on top of a hack and should probably be rethought. But once autoconf is done generating Makefiles, make itself is generally trouble-free.
Automake, on the other hand...
Now, to be fair, I am sacrificing some options here. The biggest is that autotools runs tests on the installation environment, probing the available functions and standards compliance, which in theory allows compiling the source on any system that autotools supports; that is why the generated scripts are so huge. You can't do that with standard Makefiles. I just stick close to the standard and avoid any non-standard extensions that I don't need.
I've written my own generic make library. The library itself is 95 lines of script, and handles all the dependency sniffing, library and binary building, and so on that I need in a reusable way.
The makefiles themselves just list the sources and object files. They're typically only a few lines, like so:
BIN = mybinary
OBJ = foo.o bar.o baz.o
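For illustration, a shared include along these lines can carry all the generic machinery, with each per-project makefile ending in something like `include rules.mk`. This is only a sketch of the idea (the name rules.mk and the flags are my invention, not the poster's actual 95-line library):

```make
# rules.mk -- hypothetical shared make library included by every project.
# -MMD/-MP make the compiler emit .d dependency files as a side effect,
# which covers the "dependency sniffing" mentioned above.
CFLAGS += -O2 -Wall -MMD -MP

$(BIN): $(OBJ)
	$(CC) $(LDFLAGS) -o $@ $(OBJ) $(LDLIBS)

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# pull in the generated dependency files, if they exist yet
-include $(OBJ:.o=.d)

clean:
	rm -f $(BIN) $(OBJ) $(OBJ:.o=.d)
.PHONY: clean
```

The per-project makefile then really is just the two variable lines plus the include.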
Are all things made since 1977 hipster?
Get off my lawn, hippie.
With a non-standard mess of its own ...
CMake is pretty bad at doing things (standard paths, install targets etc.) that the GNU folks solved a long time ago. Yes, the Autotools are a royal PITA but at least a pain that one knows how to deal with.
Perhaps, but not any that I have had problems with.
E.g. an application that we distribute uses Qt, Boost, Berkeley DB XML, libxml2, libxslt, etc. Producing signed application bundles for OS X, MSI installers for Windows, and packages for Ubuntu has been nearly painless. And that's with clang on OS X, Visual C++ on Windows, and gcc on Linux. If it's easy to produce binaries on the three most popular platforms, with three different compilers, I don't see the problem.
We have tried autotools before, but it's a pain on Windows with Visual Studio. Not to mention that this way I can quickly generate a Visual Studio project to do some debugging.
But when it comes to the differences between all those Linux distributions, the respective packager will be very glad to see that he can customize install prefixes (no, CMAKE_INSTALL_PREFIX is not enough) and use standard make targets.
Your list of dependencies shows libraries that are well covered by the stock CMake modules but try getting a build variable that is not LIBS or CFLAGS from a library that can only be queried with pkg-config. Impossible.
That's a fair point. However, most often, I am more interested in accommodating the 99.8% of the population that uses Windows, OS X, or one of the major Linux distributions, than the tiny group that runs Sabayon and is able to get things compiled themselves if necessary.
Creating portable software and then distributing it with a POSIX-only build system seems wasteful.
Yet when I download some random project's source code, I groan at any sophistication in the build process at all. I'm not interested in your build system - I'm interested in the application itself. Maybe I want to try and fix a bug, or have a half-baked idea for a new feature. I don't need dependency checking, incremental rebuilding, parallel building, and all that stuff you get from a fully operational build system at this point. I only need to build the project - once - as I decide whether to stick around. Sure, if I start working on it in earnest, rebuilding over and over - then I'll bother to learn the native build system, and read any complicated scripts. Build systems are an optimization for active developers. They're a utility that is supposed to save time.
Of course, you're never going to get everyone in the world to agree on the same build system. We all have different desires and needs for what machines it should run on, how automated, how much general system administration it should wrap up, how abstractly the build should be described, etc. It's a bit like one's dot files or choice of text editor - my ideal build is tailored just for me but I wouldn't expect it to satisfy anyone else.
So now I wish that everyone who distributes software as source code would do this: include a shell script that builds the project. Just the list of commands, in the order that they are executed, that carries out a full build on the author's system. That's what it comes down to, in the end, isn't it? Your fancy build system should be able to log this out automatically. (Of course then you still include all the fancy build stuff as well, for those interested.)
Of course it's extremely unlikely that your shell script will work on my system without modification. There are probably machine-specific pathnames in there for a start. We might not even use the same shell! It's basically pseudocode. But compare the two situations. On one hand, a straight list of imperative shell commands that doesn't work. On the other, a program of some sort with its own idiosyncratic syntax and logic, a hundred-page manual, and the requirement that I install something, which also doesn't "just work". As long as you know how to call your compilers and linkers and so on, which you should, the former is going to be easier to tweak into submission, to get that first successful build. After all, if I need much more than that I'll probably just recreate the build in my favourite system anyway.
For me, the makefile itself wasn't the problem, I've been rather aggressive to keep it as pretty much just a dependency enumeration with flag lists and (arrogantly) it is rather clean, but the runtime flags/things I need to wrap the executable around pushed me to the script.
(I honestly worried that this was sloppy since it indicated exactly what it did, mask really ugly complexity behind a shiny frontend, which always makes me wonder if that complexity wasn't undue, but it does give the advantage that your last paragraph mentions, that it gives a more modular pseudocode of the various components of building/running.)
Make is good at two things:
1) mapping a source pattern to an output pattern
2) managing dependencies between rules
To be fair it's good at those, and often the sorts of things you can do with a rule are quite complex (being basically shell scripts).
However, the problem is that 1) it's an obscure DSL and 2) that it is really rubbish at doing more complicated things.
For example, grunt-contrib-clean lets you delete any files that match one pattern while leaving alone any files that match a different pattern. Grunt also has a built-in templating language that can be used to expand configuration files from submodules into local build scripts without copying the entire Gruntfile. grunt-open launches a browser to a dev URL, cross-platform. The list goes on and on and on.
Make is terribly terribly bad at complex tasks like this, that's the problem.
You can write a custom shell script / ruby script / python script for these tasks, but why would you? Someone else already has. Don't repeat all the things every time with your own code.
If all you need to do is map .c to .o, or .scss to .css and .coffee to .js, use make, totally. It's good at that.
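For instance, the whole of that mapping fits in a few pattern rules; this is a sketch, and the sass/coffee invocations are assumptions about a typical setup, not any particular project's:

```make
# the kind of thing make is genuinely good at: pattern-to-pattern mapping
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

%.css: %.scss
	sass $< $@

# coffee -c writes foo.js next to foo.coffee
%.js: %.coffee
	coffee -c $<
```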
Otherwise, stay away.
I'd rather rewrite a 4-LOC shell script in my new project, instead of depending on a build tool that depends on a non-standard, infant runtime itself, and also depends on third party libraries for deleting files.
rather than having a unix development environment for the browser platform, thanks to lighttable, grunt, and others, it's possible to use node/web-browsers as a development environment as well. i guess node isn't as venerable as unix or even the jvm for that matter, but is "infant" really a fair characterization? i've heard that node can actually do some types of string processing faster than the jvm.
i totally agree on starting with shell scripts, though. i tend to put short scripts all throughout repos, even if they just run a single command with a few arguments. sometimes they grow into longer scripts, and sometimes they get changed to run different programs (e.g. grunt<->make), but having a few consistent names for tasks like build, run, test, and deploy (.sh) across projects and languages goes a long way to cut back on cognitive load.
The point I wanted to make was that one needs to install node to run grunt; I don't believe there's any OS that distributes node in the base distribution, whereas make and sh are part of the POSIX standard, and perl/python are available by default on most OS's.
Also, node is indeed young; don't take "infant" as a bad characterisation, I do enjoy playing with node. My plea is that one should not need to install extra software just to run the build scripts in a project they download.
grunt uses a JSON file
I'd rather rewrite a 4-LOC shell script in my new project...
...just maybe don't tell who ever is paying you that you're doing it that way.
(or wait, you can reuse code with make can't you? It's called autoconf or something like that...)
You can break complex tasks like "has my software been delivered through the app store" down into goals that have to be fulfilled by carrying out (recursively) many layers of rules, like "did it pass human evaluation?".
It's a generalization of make / build systems.
make is great for a lot of build tasks.
One was a news recommendation engine. We pulled down and parsed RSS feeds, crawled every new link they referred to, crawled thumbnails for each page, identified and scraped out textual content from pages, ran search indexing on the content, ran NLP analysis, added them to a document corpus, ran classifiers and statistical models, etc.
Every step of the way took some input files and produced an output file. We used programs written in many different languages -- whatever was best for the job.
So a build system was the obvious way to structure all of this, and we needed a build system we could push pretty hard. Our first version used make and quickly ran into some limitations (essentially, we needed more control over the dependency graph than was possible with the static, declarative approach) so we turned to redo, which lets you write build scripts in the language of your choice.
One thing we needed almost immediately was more powerful pattern matching rules than make's % expansion. No problem: invent a pattern syntax and a special mode where every .do script simply announces what patterns it can handle. Collect patterns, iteratively match against what's actually in the filesystem, and then you've got the list of target files you can build. (This already differs from make, which wants you to either specify the targets explicitly up front as "goals," or enumerate their dependencies via a $(shell ...) expansion and then string transform them into a list of targets which are ALSO matched by some pattern rule somewhere...okay you get it, it's make, it's really disgusting.)
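For the curious, redo rules are just scripts, which is what makes this kind of extension possible. A minimal rule in the usual redo convention might look like this (the flags are illustrative, and this is the stock convention rather than the pipeline's custom pattern mode described above):

```shell
# default.o.do -- redo runs this to build any *.o target, passing:
#   $1 = the target name, $2 = the target minus its extension,
#   $3 = a temp file to write output to (renamed into place on success)
redo-ifchange "$2.c"
cc -O2 -Wall -c -o "$3" "$2.c"
```

Because the rule body is an ordinary script, "announce what patterns you handle" is just another thing a script can print.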
Another thing we needed was to say, here's a list of target files that appear in the dependency graph, give me them in topologically sorted order. This allowed us to "compact" datasets as they became fragmented, without disturbing things downstream from them in the dependency graph. Again, this was not difficult with redo once we had some basic infrastructure.
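Plain POSIX tools get you surprisingly far on that particular sub-problem: tsort produces a topological order from prerequisite/target pairs. A sketch, with made-up filenames rather than the actual pipeline's:

```shell
# each line is "prerequisite target"; tsort prints prerequisites first.
# with a linear chain there is only one valid order:
# feeds.xml, pages.txt, corpus.idx, model.dat
printf '%s\n' \
  'feeds.xml pages.txt' \
  'pages.txt corpus.idx' \
  'corpus.idx model.dat' | tsort
```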
Now, was all of this maintainable, or was it just kind of insane? I think in the end it ended up somewhat insane, and most importantly, it was an unfamiliar kind of insane. The insanity that you encounter in traditional Makefiles is at least well understood. And treatable.
With redo, you can do almost anything with your build. You can sail the seven seas of your dependency graph. It's awesome. It's also terrifying, because there is very little to guide you, and you may very well be in uncharted waters.
But give it a shot anyway. YMMV.
(I don't like how makefiles have so many features that reimplement what you can do in the shell. I also don't care for big languages with build-tool DSLs--though you could say credo is a build-tool DSL for the shell, like git is a version-control DSL for the shell. With only language directives, no constructs.)
I wrote it in the Inferno shell to take advantage of some nice OS features and its cleaner shell language. One of these days I should port it to bash, so other people might use it.
I've now tweaked my makefiles so they are almost unrecognisable from the Twitter ones; they can run PHP and JS unit tests, fire up PhantomJS to test individual modules, and release minified for production, or unminified for debugging. I can't stress enough how useful it is to be able to just add a line and get such powerful support.
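As a sketch of the shape this takes (the tool names and file paths here are assumptions, not their actual setup):

```make
.PHONY: test release debug

test:
	phpunit tests/
	phantomjs test/run-modules.js

release:
	uglifyjs src/app.js -o dist/app.min.js

debug:
	cp src/app.js dist/app.js
```

Adding "a line" of support really is adding a line to one of these recipes.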
I haven't had time to add git hooks yet, but that's the next stage. I plan to set the hooks to run tests and clamp down on poor-quality code (I work with interns quite often... sometimes I cry for just average-quality code coming in).
For a story of the production gains I've had: I moved the whole company's CSS into a CSS preprocessor and cleaned up all the existing structures to fit the makefile release procedure. It paid off a thousandfold when a rebranding came our way and I had everything done in two days. I was blown away by that alone; I've been involved in so many rebrands over the years that go on for months... not hours.
Do it, if you're bit unsure where to start check the twitter bootstrap build file and muck around with what they have done.
I don't have anything against Make, I've never used it; judging by the code samples in the article, however, and contrasting that with my gulpfile (which, it's worth noting, didn't require me to learn a new language/DSL, just a dead-simple API), I feel much more empowered by gulp than make.
Somewhat relevant: I've also found gulp much easier to use and maintain than Grunt.
With that being said, it's miles above Grunt.
I use make together with a markdown compiler, and the m4 preprocessor, to keep devdocs up to date, in one huge document, where everything is included, and the various sections as stand-alone docs. The markdown version of the section files is almost uncluttered from m4 and html. I link from any external doc to any other via links in the toc in the main doc index.html, to keep everything as simple as possible. It's sweet.
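A minimal sketch of that kind of docs pipeline, assuming an m4 macro file and a markdown compiler that reads stdin and writes HTML to stdout (all the names here are illustrative):

```make
SECTIONS := $(wildcard sections/*.md)

# each section doubles as a stand-alone doc
%.html: %.md macros.m4
	m4 macros.m4 $< | markdown > $@

# the one huge document that includes everything
index.html: main.md $(SECTIONS) macros.m4
	m4 macros.m4 main.md | markdown > $@
```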
It's a trade-off between simplicity and convenience.
The thing I don't like about make is the poor debuggability when something goes wrong. I have been using make for 15 years and I'm ready for something that gives me more traceability.
A poster above had it right - it's perfect for mapping input files to output, but in between and above that, it stumbles pitifully.
I liked how security-centric FreeBSD was, but they seem unfriendly to enterprise software. The process to install Oracle Java was painful, and EnterpriseDB didn't even bother creating a Postgres installer for it. Maybe the freedom that the BSD license offers is making their environment too stagnant compared to GNU and Linux.
As with many other utilities, you can always run "gmake" for GNU make. If you can get what you need to do done with portable make, by all means do so, but if you need to depend on GNU make, it's widely available.
This enables you to put a lot of your configuration in a JSON file, such as actual lists of files (which make is genuinely terrible at), gives you a decent way of getting JSON data out into make processes, and lets you build files templated out of that JSON configuration, such as SSI files that set whether to load built JS or separate script tags (also generated out of the same JSON config).
So you get the declarative stuff in the declarative JSON format, and the stuff that make is good at, incremental builds, stays in make.
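As a sketch of that division of labour (assuming python3 is available and a build.json with a top-level "sources" array; the variable names are made up):

```make
# declarative bits live in build.json; make only consumes the result
SRCS := $(shell python3 -c 'import json; print(" ".join(json.load(open("build.json"))["sources"]))')
MIN  := $(SRCS:.js=.min.js)
```

The file lists stay in one declarative place, and the dependency-driven rebuilding stays in make, where it belongs.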
My hope is that we can just carry on with our dev work and Windows will slowly fade away as a platform for developing anything but SharePoint intranets...
[Note: I don't have a neckbeard, but I do run Debian testing.]
For my cross-platform game development I've moved to CMake, as it takes care of some annoying bits for me. Not a massive fan of it though, and am tempted to go back to Make for my C++ game dev stuff. Any recommendations for it?
I have used http://www.finalbuilder.com/; it's a great tool that got me into the automated-build mindset.
I would say the documentation and Google results for a decades-old tool are more plentiful when you try to do something a bit complex.
Yes -- and then I proceed to use the thing every day. I wouldn't be adjusting a Makefile very often so I would keep forgetting whatever I "learned" every time I changed it.
If you are using Grunt everyday, doesn't that mean you are modifying it? Then you would be in the same exact place as with a Makefile.
So this guy re-implemented the most useful grunt tasks in his own language. Grats. You've wasted time instead of using Grunt. Saving time and not re-implementing things was the point all along.
Very few people want to write "small programs" to chain together build tasks. They should be readily available, just work, be continuously updated, and work cross-platform. I don't want to go look into the intricacies of compiling Handlebars templates just to do that shit.
Also, a makefile syntactically looks like garbage compared to a well-honed Gulp / Broccoli file.
So, well, your argument can be reverted: why try to install a big "let's do it all" machinery which depends on the latest versions of very recent tools, while a simple Makefile, which will run everywhere, would do the task?
Grunt comes with a set of existing (and maintained by the grunt authors) plugins for many common tasks that lets you get away with having someone else do all the complex shell-script-custom-language-scripting you need to do with make for the 'complex logic bits'.
Writing those by hand is a terrible and tedious burden in using make.
I recently had the need to automate batch-converting XCF files to PNG. Writing and publishing a Grunt module just for that seems a bit over the top, and it would end up a wrapper around a few lines of bash anyway.
In my use case, I was using a CLI utility called xcf2png. Writing a wrapper module seems like loads more work than a bash one-liner that calls xcf2png, no?
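Something like this is the entire "module", wrapped in a function only so the sketch is reusable (the exact xcf2png calling convention is my assumption; check your copy's flags):

```shell
# convert every .xcf in the given directory to a .png alongside it,
# assuming an `xcf2png input.xcf output.png` command-line interface
convert_all() {
  for f in "$1"/*.xcf; do
    xcf2png "$f" "${f%.xcf}.png"
  done
}
```

Usage would just be `convert_all assets`, which is hard to beat for a one-off batch job.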
And that's a good thing.