
Ninja, a small build system with a focus on speed - vgnet
http://martine.github.com/ninja/
======
julian37
_Ninja is mostly encoding agnostic, as long as the bytes Ninja cares about
(like slashes in paths) are ASCII. This means e.g. UTF-8 or ISO-8859-1 input
files ought to work._

I was going to say that this is brittle because UTF-8 multi-byte sequences
might contain bytes such as 0x2F (forward slash) without actually encoding a
slash... but it turns out that's wrong. All bytes in multi-byte sequences
always have the high bit set, so you _can_ look for ASCII-7 characters in
UTF-8 strings without having to worry about getting false positives. That's a
very useful property of UTF-8 I wasn't (consciously) aware of before.

<http://en.wikipedia.org/wiki/UTF-8#Description>
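
A quick Python sketch of the property (the string literal is just an arbitrary
example):

```python
# In UTF-8, every byte of a multi-byte sequence has the high bit set
# (0x80-0xFF), so a byte-level scan for an ASCII character like '/'
# (0x2F) can never produce a false positive inside a multi-byte character.
s = "päth/ディレクトリ/file"   # mixes multi-byte characters with real slashes
raw = s.encode("utf-8")

# Counting 0x2F bytes finds exactly the real slashes.
assert raw.count(b"/") == s.count("/")

# More generally, the sub-0x80 bytes of the stream are exactly the
# ASCII characters of the string, in order.
assert bytes(b for b in raw if b < 0x80).decode("ascii") == \
       "".join(c for c in s if ord(c) < 128)
```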

~~~
groby_b
It was actually one of the design goals of UTF-8. As much of a pain as dealing
with it is, it certainly had a lot of forethought put in.

------
ajross
Bravo! Granted, it's _yet another make replacement the world really doesn't
need_. But that said, and unlike all the other attempts, this actually seems to
be better than make.

Almost always, these things are junk (Ant, I'm looking at you) which at best
implement a subset of make's features in a "pure" way (and thus look good to
people who don't understand make but like Java-or-whatever).

This one actually seems to understand what make needs (cleaner variable
semantics, multiple outputs for a target) and what parts need to be tossed
(all the default rules).

I'm impressed. But I still won't likely use it. The world doesn't need it.

~~~
groby_b
Speaking as somebody who works on a reasonably large project, yes, the world
needs it.

Null build [^1] with cold cache & make: 40s
Null build with cold cache & ninja: 12s
Null build with hot cache & make: 20s
Null build with hot cache & ninja: < 1s

Ninja saves me 20 seconds every single time I build something. Let's say I
kick off about 30-40 builds a day; that's 10-15 minutes _each day_.

[^1]: I.e. nothing changed

~~~
ajross
Are your make and ninja configurations 1:1? Note that a null build on the
kernel (larger than most "reasonably large" projects) is well under 20
seconds.

~~~
groby_b
Both generated from the same gyp files, yes. And by "reasonably large", I mean
about 9.5MLOC, 30K source files excluding headers :)

Edit: Important info here might be that those times are for an OSX build. I
haven't measured, but it _seems_ builds on Linux are faster.

------
beagle3
Does anyone have an idea of how this compares to apenwarr's implementation of
djb's "redo" concept?

Compared to make, redo is extremely simple, yet more versatile, more robust -
and potentially very efficient. djb only released the spec (not working code).
apenwarr implemented it in Python, which means it's a lot slower than it could
be (which you'd mostly feel on nop builds).

~~~
apenwarr
The short version is that ninja is declarative while redo is imperative. And
I'm not sure _that_ battle will be resolved in my lifetime :) Personally, I
prefer imperative stuff most of the time, but lots of smart people disagree.

ninja config files, as I understand it, are designed to be produced by some
other tool, because purely-declarative languages are typically a pain for
humans to write by hand. So it's one layer in a multi-layer system, hence the
integration with cmake.
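
To make that concrete, here's a rough hand-written sketch of the kind of file
a generator emits (file names are hypothetical; see the Ninja manual for the
exact syntax):

```
# build.ninja -- a purely declarative graph: rules plus edges,
# normally emitted by a generator like gyp or CMake.
cc = gcc

rule compile
  command = $cc -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  description = CC $out

rule link
  command = $cc $in -o $out
  description = LINK $out

build foo.o: compile foo.c
build app: link foo.o
```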

redo removes layers; you can quite easily write your whole build system in
redo, without first translating your configuration from one file type to
another. The down side of that design is it's hard to guarantee your build
system is "hygienic"; since every .do script is a program, the program might
go do things it shouldn't be doing or which might be insecure. In ninja, that
sort of thing would be easier to detect/prevent, and in turn it ought to be
easy to implement shared caching, distributed builds, etc in a transparent
way. It can be very powerful to manipulate declarative structures like
ninja's. (Not that you couldn't do those things with redo, but it would be
trickier.)
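
For comparison, a .do script is just a program that redo runs under sh --
e.g. this hypothetical hello.o.do:

```sh
# hello.o.do -- run by redo via sh.
# $1 = target, $2 = target basename, $3 = temporary output file.
redo-ifchange hello.c    # declare a dependency imperatively, as a side effect
gcc -c hello.c -o "$3"   # build into the temp file; redo moves it into place
```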

For similar reasons, ninja is probably more portable to Windows than redo is.
(redo can run on Windows, but you need a Unix-compatible sh to do it with,
which is obviously rather un-Windowsy.)

~~~
beagle3
> In ninja, that sort of thing would be easier to detect/prevent, and in turn
> it ought to be easy to implement shared caching, distributed builds, etc in
> a transparent way.

But apparently, these are not the goals for ninja. The goals for ninja appear
to be speed, speed and more speed, especially for a no-op or one-file-change
build.

I wonder if anyone has converted the build system of a project the size of,
say, Chrome to redo and can compare build speeds to the ninja version.

Furthermore, if speed is your major optimization point, the approach taken by
<http://gittup.org/tup> seems impossible to beat, and as a bonus you get
perfect dependency information with no additional work (and see
<http://gittup.org/gittup> - they ported quite a few projects to it)

~~~
apenwarr
Any project that maintains a purely-declarative dependency tree _after_ the
first build should be able to do incremental builds equally fast. That
includes ninja, redo, or tup. (Of course, there would be optimization details
in the implementation, and ninja is likely to be fastest at present. But the
design itself doesn't preclude any of them being fast.)

For full builds, I don't see any reason tup would be particularly fast, in
fact. Auto-calculation of dependencies sounds nice, but I personally
don't trust it; "perfect" is harder to attain than it sounds. For example,
what do you do if one of my build rules retrieves a list of files using wget?
redo can handle this, but tup could never automate it "perfectly" (since there
are so many possible definitions of perfect), so you will always have weird
tradeoffs. I don't really believe in the concept of perfect automated
dependencies. Of course, if it works for you, then go for it; that's kind of
an edge case.

~~~
beagle3
> For full builds, I don't see any reason tup would be particularly fast

Indeed.

> "perfect" is harder to attain than it sounds. For example, what do you do if
> one of my build rules retrieves a list of files using wget?

I wholeheartedly believe that, in that case, you deserve all the suffering
your build process calls for :). But seriously, this should be a never-
satisfied phony target in any build system.

> I don't really believe in the concept of perfect automated dependencies.

Yes, there is an implicit definition of "perfect" in my writing, and that is:
"If any file was consulted in the building of an object previously, and that
file has changed, then the dependent object will be rebuilt again".

I don't believe in perfect manual dependencies. So if we are both right,
guaranteeing robust builds is not possible :) (which is not an unreasonable
conclusion, IMO)

Note that tup makes the implicit dependency on tools explicit (oh, you
executed /usr/bin/gcc - that's a dependency. It changed? we need to rebuild).
I have never seen any explicit build script do that.

redo can obviously do that -- but have you ever written something like
"redo-ifchange /usr/bin/gcc"?
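
To be fair, it's only a one-line addition -- a hypothetical default.o.do that
also depends on the compiler:

```sh
# default.o.do -- $2 is the target name without the .o extension.
redo-ifchange "$2.c" /usr/bin/gcc  # rebuild if the source OR the compiler changes
gcc -c "$2.c" -o "$3"
```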

> Of course, if it works for you, then go for it; that's kind of an edge case.

I'll be switching from Makefiles to redo for my next big non-Windows project.
I like the idea of tup, but redo's pragmatism is a win for me.

And ... my ideal tool would be a mode for redo which would track execution and
file access, and give warnings like "your build depends on /usr/bin/gcc and
/usr/include/stdio.h but does not mention them", letting me either make the
dependency explicit or ignore it -- but not be ignorant of it.

~~~
apenwarr
I actually have seen build systems that depend on the gcc version (see the
'buildroot' project for example); it's virtually never what you want, because
stupid things like installing a tiny bugfix to libc or gcc cause millions of
lines of code to rebuild unnecessarily. "Perfect" is not so perfect in that
model. For the same reason, a lot of people, myself included, prefer not to
include system headers (/usr/include) in their .o file dependencies (and gcc
offers a way to decide which you want in your autodeps files).
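
(Concretely, that's gcc's -MD, which records system headers in the generated
.d file, versus -MMD, which leaves them out; file names in this sketch are
hypothetical:)

```sh
# -MMD writes a depfile listing only non-system headers; -MD would
# also list /usr/include and friends.
gcc -MMD -MF foo.d -c foo.c -o foo.o
cat foo.d    # e.g.: foo.o: foo.c foo.h
```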

All that said, I'm not exactly opposed to auto-calculation of dependencies, I
just prefer it be optional. Your last paragraph suggests you're okay with such
an approach. With that in mind, I think it would be fine to extract out the
parts of tup that calculate dependencies, for example, and wrap your .do
scripts in that. Then you have the choice.

------
Meai
I use tup: <http://gittup.org/tup/> but I'd like to hear from the author of
Ninja what his opinion is on tup.

~~~
Ralith
I also use and enjoy tup, which has the significant features of automatically
constructing most of the dependency graph, and proven optimality and
correctness for incremental rebuilds. Not to mention that it's highly usable
standalone.

Does Ninja offer any of that? It doesn't seem to, judging from a skim of the
docs.

~~~
evmar
I (the author of Ninja) think tup is a fine choice for your project.

Ninja was designed to work within a specific pragmatic context: a very large
project (Chrome) that had existing constraints on how the build works. (This
design also makes Ninja suitable for use from CMake, which means you can use
Ninja to build e.g. LLVM in a manner faster than the existing alternatives.)
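
For instance, assuming a CMake new enough to ship the Ninja generator (paths
hypothetical):

```sh
mkdir build && cd build
cmake -G Ninja ../llvm   # emit build.ninja instead of Makefiles
ninja                    # and build with it
```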

Here is a longer post on the subject:
<https://groups.google.com/group/ninja-build/msg/b52e7d3b77bb2e2c>

Most projects should probably _not_ use Ninja. A previous iteration of the
home page tried to scare users away. I could probably improve that.

------
zserge
Ninja is the build system of my dreams - minimalistic, clean, and simple. I
tried it on some small projects about a year ago, and it seemed very fast and
stable.

Back then I generated .ninja files with shell scripts, so now I'm happy to
hear of CMake support. It's still a pity that ninja is not included in most
Linux distros; I believe that would significantly increase its popularity
among developers.

------
bla2
More ninja benchmark numbers:
<https://plus.google.com/101038813433650812235/posts/irc26fhRtPC>

------
nicholassmith
I've been looking to replace a project's build system, moving from Ant to
something else, and this might just be it. Now to see if I can fiddle with Qt
to get it to generate .ninja files rather than Makefiles.

------
daphoenix
How is this better than <http://industriousone.com/premake> ?

~~~
i80and
Wrong question; they're working on different levels of abstraction. Ninja is
designed to be a very fast and lightweight tool for doing work based on a
dependency graph: actually knowing how to build anything on its own is an
explicit non-goal.

Premake, on the other hand, is a full-blown build system that can make
decisions and knows how to perform specific code-related tasks, and can then
generate rules for other tools to follow. Unless I'm mistaken, Premake
could hypothetically output a build.ninja file.

