I was going to say that this is brittle because UTF-8 multi-byte sequences might contain bytes such as 0x2F (forward slash) without actually encoding a slash... but it turns out that's wrong. All bytes in multi-byte sequences always have the high bit set, so you can look for ASCII-7 characters in UTF-8 strings without having to worry about getting false positives. That's a very useful property of UTF-8 I wasn't (consciously) aware of before.
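A quick way to convince yourself at a shell prompt (the string is arbitrary; the é is written out as its UTF-8 bytes):

    # U+00E9 (é) encodes as 0xC3 0xA9 -- both bytes have the high bit set,
    # so the only 0x2f in the dump is the genuine slash
    printf 'caf\303\251/menu' | od -An -tx1
    #   63 61 66 c3 a9 2f 6d 65 6e 75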
The only disadvantage I can really think of is that the implementations must be fairly complex. That, and the character count can no longer be determined from the file size.
Almost always, these things are junk (Ant, I'm looking at you) which at best implement a subset of make's features in a "pure" way (and thus look good to people who don't understand make but like Java-or-whatever).
This one actually seems to understand what make needs (cleaner variable semantics, multiple outputs for a target) and what parts need to be tossed (all the default rules).
I'm impressed. But I still won't likely use it. The world doesn't need it.
That's the direction we've taken with ElectricAccelerator (http://www.electric-cloud.com/products/electricaccelerator-d...): fix make, rather than cooking up Y.A.M.R. It hasn't (yet) fixed all of the issues with make, but it's hit some of the biggest.
(disclaimer: I'm the architect of ElectricAccelerator)
But all you need to do is look at the Linux kernel or Android AOSP build systems (both implemented almost entirely in make) to see what it's capable of. Do we really need something better, given the friction that variant build systems cause?
Null build [^1] with cold cache & make: 40s
Null build with cold cache & ninja: 12s
Null build with hot cache & make: 20s
Null build with hot cache & ninja: < 1s
Ninja saves me 20 seconds every single time I build something. Let's say I kick off about 30-40 builds a day, that's 10-15 minutes each day.
[^1]: I.e. nothing changed
Edit: Important info here might be that those times are for an OSX build. I haven't measured, but it seems builds on Linux are faster.
I just had to figure out how to do this today with GNU make. It was something like this:
    %Parser.c %Parser.h %Lexer.c %Lexer.h %.tokens: %.g
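(The recipe was left out above; a complete rule would be shaped roughly like the sketch below, where $(ANTLR) is just a stand-in for whatever tool turns the grammar into the parser and lexer. The point is that GNU make treats a pattern rule with several %-targets as one recipe that produces all of them.)

    # hypothetical completion; $(ANTLR) is a placeholder for the grammar tool
    %Parser.c %Parser.h %Lexer.c %Lexer.h %.tokens: %.g
    	$(ANTLR) $<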
For completeness it should be noted that GNU make is able to express rules with multiple output files using pattern rules.
The problem with this approach (aside from the fact that it's specific to GNU make) is that you need a shared (non-empty) stem in all output files; however, the following workaround appears to work:

    foo bar baz: %: .dummy/../%

    %/../foo %/../bar %/../baz: quux
    	touch foo bar baz

(The static pattern rule redirects foo, bar and baz through names under .dummy/../, which all share the stem ".dummy"; the second rule is a pattern rule with multiple targets, so make knows a single run of its recipe produces all three files.)
yet another make replacement the world really doesn't need
Makefiles solve a different problem: local build dependencies, detecting changes, packaging and suchlike.
CMake, as it happens, was written because Autotools were particularly inefficient at the configuration problem. Make happens to be very efficient.
Compared to make, redo is extremely simple, yet more versatile, more robust, and potentially very efficient. djb only released the spec (not working code). apenwarr implemented it in Python, which means it's a lot slower than it could be (which you'd mostly feel on no-op builds).
ninja config files, as I understand it, are designed to be produced by some other tool, because purely-declarative languages are typically a pain for humans to write by hand. So it's one layer in a multi-layer system, hence the integration with cmake.
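To make that concrete, here's what a minimal hand-written build.ninja might look like (file names invented); every edge is spelled out explicitly, which is tedious for a human but trivial for a generator to emit:

    # two explicit rules and three explicit edges; no wildcards, no conditionals
    rule cc
      command = gcc -MMD -MF $out.d -c $in -o $out
      depfile = $out.d

    rule link
      command = gcc $in -o $out

    build foo.o: cc foo.c
    build bar.o: cc bar.c
    build app: link foo.o bar.o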
redo removes layers; you can quite easily write your whole build system in redo, without first translating your configuration from one file type to another. The down side of that design is it's hard to guarantee your build system is "hygienic"; since every .do script is a program, the program might go do things it shouldn't be doing or which might be insecure. In ninja, that sort of thing would be easier to detect/prevent, and in turn it ought to be easy to implement shared caching, distributed builds, etc in a transparent way. It can be very powerful to manipulate declarative structures like ninja's. (Not that you couldn't do those things with redo, but it would be trickier.)
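For instance, a typical compile step is nothing more than a small shell script. A minimal sketch, assuming the usual .do conventions ($1 = target, $2 = target minus its extension, $3 = temporary output file):

    # default.o.do -- builds any foo.o from the matching foo.c
    redo-ifchange "$2.c"
    gcc -c -o "$3" "$2.c"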
For similar reasons, ninja is probably more portable to Windows than redo is. (redo can run on Windows, but you need a Unix-compatible sh to do it with, which is obviously rather un-Windowsy.)
But apparently, these are not the goals for ninja. The goals for ninja appear to be speed, speed and more speed, especially for a no-op or one-file-change build.
I wonder if anyone has converted the build system of a project roughly the size of Chrome to redo and can compare build speeds with the ninja version.
Furthermore, if speed is your major optimization point, the approach taken by http://gittup.org/tup seems impossible to beat, and as a bonus you get perfect dependency information with no additional work (and see http://gittup.org/gittup - they ported quite a few projects to it)
For full builds, I don't see any reason tup would be particularly fast, in fact. Auto-calculation of dependencies sounds nice, but I personally don't trust it; "perfect" is harder to attain than it sounds. For example, what do you do if one of my build rules retrieves a list of files using wget? redo can handle this, but tup could never automate it "perfectly" (since there are so many possible definitions of perfect), so you will always have weird tradeoffs. I don't really believe in the concept of perfect automated dependencies. Of course, if it works for you, then go for it; that's kind of an edge case.
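(A sketch of what "handling it" looks like in redo, with a made-up URL: opt in with redo-always so the fetch runs every time, then redo-stamp the result so dependents only rebuild when the contents actually change.)

    # filelist.do -- hypothetical: re-fetch on every build, but only trigger
    # downstream rebuilds when the downloaded contents differ
    redo-always
    wget -q -O "$3" http://example.com/filelist.txt
    redo-stamp < "$3"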
> "perfect" is harder to attain than it sounds. For example, what do you do if one of my build rules retrieves a list of files using wget?
I wholeheartedly believe that, in that case, you deserve all the suffering your build process calls for :). But seriously, this should be a never-satisfied phony target in any build system.
> I don't really believe in the concept of perfect automated dependencies.
Yes, there is an implicit definition of "perfect" in my writing, and that is: "If any file was consulted in the building of an object previously, and that file has changed, then the dependent object will be rebuilt again".
I don't believe in perfect manual dependencies. So if we are both right, guaranteeing robust builds is not possible :) (which is not an unreasonable conclusion, IMO)
Note that tup makes the implicit dependency on tools explicit (oh, you executed /usr/bin/gcc - that's a dependency. It changed? we need to rebuild). I have never seen any explicit build script do that.
redo can obviously do that -- but have you ever written something like "redo-ifchange /usr/bin/gcc"?
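(For illustration, making that dependency explicit would be a one-word addition to a .do script; hypothetical fragment:)

    # name the compiler itself as an input, alongside the source file
    redo-ifchange /usr/bin/gcc "$2.c"
    gcc -c -o "$3" "$2.c"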
> Of course, if it works for you, then go for it; that's kind of an edge case.
I'll be switching from Makefiles to redo for my next big non-Windows project. I like the idea of tup, but redo's pragmatism is a win for me.
And ... my ideal tool would be a mode for redo which would track execution and file access, and give warnings like "your build depends on /usr/bin/gcc and /usr/include/stdio.h but does not mention them", letting me either make that explicit or ignore it -- but not be ignorant of it.
All that said, I'm not exactly opposed to auto-calculation of dependencies, I just prefer it be optional. Your last paragraph suggests you're okay with such an approach. With that in mind, I think it would be fine to extract out the parts of tup that calculate dependencies, for example, and wrap your .do scripts in that. Then you have the choice.
(Saw this in two places while reading mailing lists.) LOL, and I'm brewing my own solution for brewing other people's solutions :)
Does Ninja offer any of that? It doesn't seem to, judging from a skim of the docs.
Ninja was designed to work within a specific pragmatic context: a very large project (Chrome) that had existing constraints on how the build works. (This design also makes Ninja suitable for use from CMake, which means you can use Ninja to build e.g. LLVM in a manner faster than the existing alternatives.)
Here is a longer post on the subject.
Most projects should probably not use Ninja. A previous iteration of the home page tried to scare users away. I could probably improve that.
Back then I generated .ninja files with shell scripts, so I'm happy to hear about the CMake support now. I'm still sorry that the ninja build system isn't included in most Linux distros; I believe that would significantly increase its popularity among developers.
Premake, on the other hand, is a full-blown build system that can make decisions, knows how to perform specific code-related tasks, and can then generate rules for other tools to follow. Unless I'm mistaken, Premake could hypothetically output a build.ninja file.
For those already using CMake, Ninja will slot right in.
Are there any large projects using premake? Ninja+gyp is used by Chromium.