Ninja is mostly encoding agnostic, as long as the bytes Ninja cares about (like slashes in paths) are ASCII. This means e.g. UTF-8 or ISO-8859-1 input files ought to work.
I was going to say that this is brittle because UTF-8 multi-byte sequences might contain bytes such as 0x2F (forward slash) without actually encoding a slash... but it turns out that's wrong. All bytes in multi-byte sequences always have the high bit set, so you can look for ASCII-7 characters in UTF-8 strings without having to worry about getting false positives. That's a very useful property of UTF-8 I wasn't (consciously) aware of before.
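A quick sketch of that property: UTF-8 lead bytes look like 0b11xxxxxx and continuation bytes like 0b10xxxxxx, so every byte of a multi-byte sequence is >= 0x80 and can never collide with an ASCII byte like 0x2F. (Example path invented for illustration.)

```python
# Scanning raw UTF-8 bytes for an ASCII character such as '/' (0x2F)
# cannot produce false positives, because every byte of a multi-byte
# sequence has the high bit set.
path = "caf\u00e9/\u65e5\u672c\u8a9e.txt"   # "café/日本語.txt"
data = path.encode("utf-8")

# Every byte belonging to a multi-byte character has bit 7 set.
assert all(b & 0x80 for b in data if b >= 0x80)

# The only 0x2F byte in the encoded form is the genuine separator.
slash_positions = [i for i, b in enumerate(data) if b == 0x2F]
print(slash_positions)   # -> [5]
```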
Bravo! Granted, it's yet another make replacement the world really doesn't need. But that said, and unlike all the other attempts, this actually seems to be better than make.
Almost always, these things are junk (Ant, I'm looking at you) which at best implement a subset of make's features in a "pure" way (and thus look good to people who don't understand make but like Java-or-whatever).
This one actually seems to understand what make needs (cleaner variable semantics, multiple outputs for a target) and what parts need to be tossed (all the default rules).
I'm impressed. But I still won't likely use it. The world doesn't need it.
What surprises me is that nobody seems to take the approach of fixing make, as if it is somehow beyond redemption. With a little effort you could address some of the things that people most dislike about make (tabs-vs-spaces, multiple outputs per target). With a little more effort you can have up-to-date checks that don't rely solely on timestamps.
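As a hedged sketch of what a non-timestamp up-to-date check could look like (file names and the state-file layout here are invented for illustration): record a content hash per input and treat a target as out of date only when some input's hash actually changes, so a bare `touch` no longer forces a rebuild.

```python
import hashlib
import json
import os

STATE_FILE = ".build-state.json"  # hypothetical cache of input hashes

def content_hash(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_rebuild(inputs, state):
    # Stale when any input's content hash differs from the recorded
    # one; a timestamp bump with unchanged contents is ignored.
    return any(state.get(p) != content_hash(p) for p in inputs)

def record(inputs, state):
    # Remember the hashes of the inputs we just built from.
    state.update({p: content_hash(p) for p in inputs})
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
```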
I'll definitely take a look. I guess my feeling is that make, like some other tools (C++ comes to mind) happens to be Good Enough for what it does, despite its warts. Lots of new programmers get scared off by its weirdness and never really learn it.
But all you need to do is look at the Linux kernel or Android AOSP build systems (both implemented almost entirely in make) to see what it's capable of. Do we really need something better, given the friction that variant build systems cause?
That's a valid concern. One of our design goals has been 100% compatibility with GNU make, specifically to reduce that friction, so you don't have to fiddle with your makefiles to switch from gmake to ElectricMake (emake). The converse is also true -- you can switch back to gmake if you like. You lose the benefits of emake, but the makefiles still work with gmake.
Have you ever considered tup? http://gittup.org/tup seems like it is going to be as fast as ninja, without sacrificing anything. (I have no project where a null make takes more than 0.1 seconds, so I can't really tell.)
I'm not sure I completely followed your example, but note that the straightforward syntax for multiple outputs from Make does the wrong thing. Here's a discussion of how to make it do what you really want.
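A sketch of the pitfall being referred to (file names invented for illustration): GNU make reads a plain rule with two targets as two independent rules that happen to share a recipe, so the recipe can run twice under `make -j`; the multi-target pattern-rule form instead tells make that a single invocation produces both files. (Recipe lines must start with a tab.)

```make
# Risky: treated as two rules sharing a recipe; under parallel
# builds, bison may be invoked twice, once per target.
parser.c parser.h: parser.y
	bison -d -o parser.c parser.y

# Better: a multi-target pattern rule means one invocation of the
# recipe yields both %.c and %.h.
%.c %.h: %.y
	bison -d -o $*.c $<
```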
I don't quite think autotools were invented to replace Makefiles. Autoconf and friends _configure_ the build environment by discovering locations of dependencies, feature sets, available compilers and build tools and other build conditions. This information is then used to generate the appropriate Makefile.
Makefiles solve a different problem; which is of local build dependencies, looking for changes, packaging and suchlike.
CMake, as it happens, was written because Autotools were particularly inefficient at the configuration problem. Make happens to be very efficient.
Does anyone have an idea of how this compares to apenwarr's implementation of djb's "redo" concept?
Compared to make, redo is extremely simple, yet more versatile, more robust - and potentially very efficient. djb only released the spec (not working code). apenwarr implemented it in Python, which means it's a lot slower than it could be (which you'd mostly feel on no-op builds).
The short version is that ninja is declarative while redo is imperative. And I'm not sure that battle will be resolved in my lifetime :) Personally, I prefer imperative stuff most of the time, but lots of smart people disagree.
ninja config files, as I understand it, are designed to be produced by some other tool, because purely-declarative languages are typically a pain for humans to write by hand. So it's one layer in a multi-layer system, hence the integration with cmake.
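For illustration, here is roughly what that declarative layer looks like (file names invented; normally a generator like CMake or gyp would emit this). Everything is plain data: rules, inputs, outputs, with no control flow, which is exactly what makes it easy for tools to produce and to analyze.

```
# build.ninja -- typically machine-generated, not hand-written
rule cc
  command = gcc -MMD -MF $out.d -c $in -o $out
  depfile = $out.d

rule link
  command = gcc -o $out $in

build foo.o: cc foo.c
build app: link foo.o
```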
redo removes layers; you can quite easily write your whole build system in redo, without first translating your configuration from one file type to another. The down side of that design is it's hard to guarantee your build system is "hygienic"; since every .do script is a program, the program might go do things it shouldn't be doing or which might be insecure. In ninja, that sort of thing would be easier to detect/prevent, and in turn it ought to be easy to implement shared caching, distributed builds, etc in a transparent way. It can be very powerful to manipulate declarative structures like ninja's. (Not that you couldn't do those things with redo, but it would be trickier.)
For similar reasons, ninja is probably more portable to Windows than redo is. (redo can run on Windows, but you need a Unix-compatible sh to do it with, which is obviously rather un-Windowsy.)
> In ninja, that sort of thing would be easier to detect/prevent, and in turn it ought to be easy to implement shared caching, distributed builds, etc in a transparent way.
But apparently, these are not the goals for ninja. The goals for ninja appear to be speed, speed and more speed, especially for a no-op or one-file-change build.
I wonder if anyone has converted the build system of a project the size of, say, Chrome to redo and can compare build speeds to the ninja version.
Furthermore, if speed is your major optimization point, the approach taken by http://gittup.org/tup seems impossible to beat, and as a bonus you get perfect dependency information with no additional work (and see http://gittup.org/gittup - they ported quite a few projects to it)
Any project that maintains a purely-declarative dependency tree after the first build should be able to do incremental builds equally fast. That includes ninja, redo, or tup. (Of course, there would be optimization details in the implementation, and ninja is likely to be fastest at present. But the design itself doesn't preclude any of them being fast.)
For full builds, I don't see any reason tup would be particularly fast, in fact. Auto-calculation of dependencies sounds nice, but I personally don't trust it; "perfect" is harder to attain than it sounds. For example, what do you do if one of my build rules retrieves a list of files using wget? redo can handle this, but tup could never automate it "perfectly" (since there are so many possible definitions of perfect), so you will always have weird tradeoffs. I don't really believe in the concept of perfect automated dependencies. Of course, if it works for you, then go for it; that's kind of an edge case.
> For full builds, I don't see any reason tup would be particularly fast
> "perfect" is harder to attain than it sounds. For example, what do you do if one of my build rules retrieves a list of files using wget?
I wholeheartedly believe that, in that case, you deserve all the suffering your build process calls for :). But seriously, this should be a never-satisfied phony target in any build system.
> I don't really believe in the concept of perfect automated dependencies.
Yes, there is an implicit definition of "perfect" in my writing, and that is: "If any file was consulted in the building of an object previously, and that file has changed, then the dependent object will be rebuilt again".
I don't believe in perfect manual dependencies. So if we are both right, guaranteeing robust builds is not possible :) (which is not an unreasonable conclusion, IMO)
Note that tup makes the implicit dependency on tools explicit (oh, you executed /usr/bin/gcc - that's a dependency. It changed? We need to rebuild). I have never seen any explicit build script do that.
redo can obviously do that -- but have you ever written something like "redo-if-change /usr/bin/gcc"?
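For concreteness, a hedged sketch of what that would look like (names illustrative): in apenwarr's redo, a `default.o.do` script builds any `*.o`, with `$1` the target, `$2` the target minus its extension, and `$3` the temporary output file.

```sh
# default.o.do -- builds any *.o from the matching *.c
# Declare the source AND the compiler binary as dependencies:
redo-if-change "$2.c" /usr/bin/gcc
gcc -c -o "$3" "$2.c"
```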
> Of course, if it works for you, then go for it; that's kind of an edge case.
I'll be switching from Makefile to redo for my next big non-windows project. I like the idea of tup, but redo's pragmatism is a win for me.
And ... my ideal tool would be a mode for redo which would track execution and file access, and give warnings like "your build depends on /usr/bin/gcc and /usr/include/stdio.h but does not mention them", letting me either make them explicit or ignore them -- but not be ignorant of them.
I actually have seen build systems that depend on the gcc version (see the 'buildroot' project for example); it's virtually never what you want, because stupid things like installing a tiny bugfix to libc or gcc causes millions of lines of code to rebuild unnecessarily. "Perfect" is not so perfect in that model. For the same reason, a lot of people, myself included, prefer not to include system headers (/usr/include) in their .o file dependencies (and gcc offers a way to decide which you want in your autodeps files).
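The gcc mechanism referred to is the -MD/-MMD family of dependency-output flags: -MD records every header the compile touched in the depfile, while -MMD omits system headers, so (for instance) a small libc update doesn't dirty every object file. (File names invented for illustration.)

```sh
# Record user headers only -- <stdio.h> and friends are skipped:
gcc -MMD -MF foo.d -c -o foo.o foo.c

# Record everything, system headers included:
gcc -MD -MF foo.d -c -o foo.o foo.c
```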
All that said, I'm not exactly opposed to auto-calculation of dependencies, I just prefer it be optional. Your last paragraph suggests you're okay with such an approach. With that in mind, I think it would be fine to extract out the parts of tup that calculate dependencies, for example, and wrap your .do scripts in that. Then you have the choice.
I also use and enjoy tup, which has the significant features of automatically constructing most of the dependency graph, and proven optimality and correctness for incremental rebuilds. Not to mention that it's highly usable standalone.
Does Ninja offer any of that? It doesn't seem to, judging from a skim of the docs.
I (the author of Ninja) think tup is a fine choice for your project.
Ninja was designed to work within a specific pragmatic context: a very large project (Chrome) that had existing constraints on how the build works. (This design also makes Ninja suitable for use from CMake, which means you can use Ninja to build e.g. LLVM in a manner faster than the existing alternatives.)
Ninja is the build system of my dreams - minimalistic, clean and simple. I tried it on some small projects about a year ago, and it seemed very fast and stable.
In those days I generated .ninja files with shell scripts, so now I'm happy to hear of CMake support. I'm still sorry that ninja isn't packaged in most Linux distros; I believe being in the repositories would significantly increase its popularity among developers.
I've been looking to move a project's build system from Ant to something else, and this might just be it. Now to see if I can fiddle with Qt to get it to generate .ninja files rather than Makefiles.
Wrong question; they're working on different levels of abstraction. Ninja is designed to be a very fast and lightweight tool for doing work based on a dependency graph: actually knowing how to build anything on its own is an explicit non-goal.
Premake, on the other hand, is a full-blown build system that can make decisions, knows how to perform specific code-related tasks, and can then generate rules for other tools to follow. Unless I'm mistaken, Premake could hypothetically output a build.ninja file.
I don't know the design goals of premake, but ninja is very, very fast at deciding what to build (a no-op build of a large project that might take a traditional make system several seconds is, allegedly, almost instantaneous with ninja).
For those already using CMake, ninja will slot right in.
Are there any large projects using premake? Ninja+gyp is used by Chromium.