Ninja, a small build system with a focus on speed (github.com)
36 points by vgnet 1218 days ago | 35 comments



Ninja is mostly encoding agnostic, as long as the bytes Ninja cares about (like slashes in paths) are ASCII. This means e.g. UTF-8 or ISO-8859-1 input files ought to work.

I was going to say that this is brittle because UTF-8 multi-byte sequences might contain bytes such as 0x2F (forward slash) without actually encoding a slash... but it turns out that's wrong. All bytes in multi-byte sequences always have the high bit set, so you can look for ASCII-7 characters in UTF-8 strings without having to worry about getting false positives. That's a very useful property of UTF-8 I wasn't (consciously) aware of before.

http://en.wikipedia.org/wiki/UTF-8#Description
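A minimal sketch of why that byte-level scan is safe (my own illustration, not Ninja's actual code): every lead byte and continuation byte of a UTF-8 multi-byte sequence is >= 0x80, so comparing raw bytes against an ASCII character like '/' (0x2F) can never match inside a multi-byte character.

    /* Count path separators in a UTF-8 byte string. The comparison can only
     * match genuine ASCII slashes, never a fragment of a multi-byte
     * character, because those bytes all have the high bit set. */
    #include <stddef.h>

    size_t count_slashes(const char *utf8, size_t len) {
        size_t n = 0;
        for (size_t i = 0; i < len; i++)
            if (utf8[i] == '/')
                n++;
        return n;
    }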

-----


It was actually one of the design goals of UTF-8. As much of a pain as dealing with it is, it certainly had a lot of forethought put in.

-----


UTF-8 is just... really good.

The only disadvantage I can really think of is that implementations must be fairly complex. That, and character count can no longer be determined from file size alone.

-----


Bravo! Granted, it's yet another make replacement the world really doesn't need. But that said, and unlike all the other attempts, this actually seems to be better than make.

Almost always, these things are junk (Ant, I'm looking at you) which at best implement a subset of make's features in a "pure" way (and thus look good to people who don't understand make but like Java-or-whatever).

This one actually seems to understand what make needs (cleaner variable semantics, multiple outputs for a target) and what parts need to be tossed (all the default rules).

I'm impressed. But I still won't likely use it. The world doesn't need it.

-----


What surprises me is that nobody seems to take the approach of fixing make, as if it is somehow beyond redemption. With a little effort you could address some of the things that people most dislike about make (tabs-vs-spaces, multiple outputs per target). With a little more effort you can have up-to-date checks that don't rely solely on timestamps.

That's the direction we've taken with ElectricAccelerator ( http://www.electric-cloud.com/products/electricaccelerator-d...) -- fix make, rather than cooking up Y.A.M.R. It hasn't (yet) fixed all of the issues with make, but it's hit some of the biggest.

(disclaimer: I'm the architect of ElectricAccelerator)

-----


I'll definitely take a look. I guess my feeling is that make, like some other tools (C++ comes to mind), happens to be Good Enough for what it does, despite its warts. Lots of new programmers get scared off by its weirdness and never really learn it.

But all you need to do is look at the Linux kernel or Android AOSP build systems (both implemented almost entirely in make) to see what it's capable of. Do we really need something better, given the friction that variant build systems cause?

-----


That's a valid concern. One of our design goals has been 100% compatibility with GNU make, specifically to reduce that friction, so you don't have to fiddle with your makefiles to switch from gmake to ElectricMake (emake). The converse is also true -- you can switch back to gmake if you like. You lose the benefits of emake, but the makefiles still work with gmake.

-----


Speaking as somebody who works on a reasonably large project, yes, the world needs it.

    Null build [^1] with cold cache & make:  40s
    Null build with cold cache & ninja:      12s
    Null build with hot cache & make:        20s
    Null build with hot cache & ninja:       < 1s

Ninja saves me 20 seconds every single time I build something. Let's say I kick off about 30-40 builds a day; that's 10-15 minutes each day.

[^1]: I.e. nothing changed

-----


Are your make and ninja configurations 1:1? Note that a null build on the kernel (larger than most "reasonably large" projects) is well under 20 seconds.

-----


Both generated from the same gyp files, yes. And by "reasonably large", I mean about 9.5MLOC, 30K source files excluding headers :)

Edit: Important info here might be that those times are for an OSX build. I haven't measured, but it seems builds on Linux are faster.

-----


Have you ever considered tup? http://gittup.org/tup seems like it is going to be as fast as ninja, without sacrificing anything. (I don't have a project where a null make takes more than 0.1 seconds myself, so I can't really tell.)

-----


> multiple outputs for a target

I just had to figure out how to do this today with GNU make. It was something like this:

  # A single pattern rule covering all of antlr's outputs, so make runs
  # antlr only once per grammar, even in a parallel build:
  %Parser.c %Parser.h %Lexer.c %Lexer.h %.tokens: %.g
      antlr $<

with the object files required by the main project depending on the generated files. It worked like a charm and solved the problem of running antlr multiple times for the produced files during a parallel build.

-----


I'm not sure I completely followed your example, but note that the straightforward syntax for multiple outputs from Make does the wrong thing. Here's a discussion of how to make it do what you really want.

http://www.gnu.org/software/automake/manual/html_node/Multip...
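To illustrate the pitfall (a hypothetical yacc-style rule, not taken from the linked page): listing several targets on one explicit rule does not declare one recipe with multiple outputs; GNU make treats it as separate rules that merely share a recipe, so a parallel build can run the recipe once per listed output.

    # Looks like one rule producing two files, but make treats it as two
    # independent rules sharing a recipe; under -j the recipe may run twice.
    parse.c parse.h: parse.y
        yacc -d parse.y
        mv y.tab.c parse.c
        mv y.tab.h parse.h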

-----


Look at the last paragraph of the linked site:

For completeness it should be noted that GNU make is able to express rules with multiple output files using pattern rules

The problem with this approach (aside from the fact that it's specific to GNU make) is that you need a shared (non-empty) stem in all output files; however, the following workaround appears to work:

    # Route the real targets through a dummy path prefix so the pattern
    # rule below has a shared, non-empty stem (".dummy"); make then treats
    # foo, bar and baz as outputs of a single rule invocation.
    foo bar baz : % : .dummy/../%
    %/../foo %/../bar %/../baz : quux
    	@sleep 2
    	touch foo bar baz

-----


    yet another make replacement the world really doesn't need
Didn't it start exactly like that with CMake? Turns out, the world needed it.

-----


cmake is a replacement for autotools, not make. It generates makefiles as output.

-----


Good point, although if autotools was invented to replace makefiles by generating them, I'd argue CMake was too, even if indirectly.

-----


I don't quite think autotools were invented to replace Makefiles. Autoconf and friends _configure_ the build environment by discovering locations of dependencies, feature sets, available compilers and build tools and other build conditions. This information is then used to generate the appropriate Makefile.

Makefiles solve a different problem: local build dependencies, detecting changes, packaging and suchlike.

CMake, as it happens, was written because Autotools were particularly inefficient at the configuration problem. Make happens to be very efficient.

-----


Does anyone have an idea of how this compares to apenwarr's implementation of djb's "redo" concept?

Compared to make, redo is extremely simple, yet more versatile, more robust - and potentially very efficient. djb only released the spec (not working code). apenwarr implemented it in Python, which means it's a lot slower than it could be (which you'd mostly feel on nop builds).

-----


The short version is that ninja is declarative while redo is imperative. And I'm not sure that battle will be resolved in my lifetime :) Personally, I prefer imperative stuff most of the time, but lots of smart people disagree.

ninja config files, as I understand it, are designed to be produced by some other tool, because purely-declarative languages are typically a pain for humans to write by hand. So it's one layer in a multi-layer system, hence the integration with cmake.
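For a sense of what that generated layer looks like, here is a minimal hand-written build.ninja sketch (the file names are made up; a real project would have gyp or CMake emit thousands of build statements like these):

    # Minimal build.ninja sketch with hypothetical file names.
    rule cc
      command = gcc -MMD -MF $out.d -c $in -o $out
      depfile = $out.d
      description = CC $out

    rule link
      command = gcc -o $out $in

    build hello.o: cc hello.c
    build hello: link hello.o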

redo removes layers; you can quite easily write your whole build system in redo, without first translating your configuration from one file type to another. The downside of that design is that it's hard to guarantee your build system is "hygienic": since every .do script is a program, it might go do things it shouldn't, or things that are insecure. In ninja, that sort of thing would be easier to detect/prevent, and in turn it ought to be easy to implement shared caching, distributed builds, etc in a transparent way. It can be very powerful to manipulate declarative structures like ninja's. (Not that you couldn't do those things with redo, but it would be trickier.)
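By contrast, a redo build is just a tree of shell scripts; here is a minimal default.o.do sketch (hypothetical, following apenwarr's conventions where $1 is the target, $2 the target without its extension, and $3 a temporary output file):

    # default.o.do -- build any .o from the matching .c file (sketch)
    redo-ifchange "$2.c"
    gcc -c -o "$3" "$2.c"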

For similar reasons, ninja is probably more portable to Windows than redo is. (redo can run on Windows, but you need a Unix-compatible sh to do it with, which is obviously rather un-Windowsy.)

-----


> In ninja, that sort of thing would be easier to detect/prevent, and in turn it ought to be easy to implement shared caching, distributed builds, etc in a transparent way.

But apparently, these are not the goals for ninja. The goals for ninja appear to be speed, speed and more speed, especially for a no-op or one-file-change build.

I wonder if anyone has converted the build of a project the size of, e.g., Chrome to redo and can compare build speeds to the ninja version.

Furthermore, if speed is your major optimization point, the approach taken by http://gittup.org/tup seems impossible to beat, and as a bonus you get perfect dependency information with no additional work (and see http://gittup.org/gittup - they ported quite a few projects to it).

-----


Any project that maintains a purely-declarative dependency tree after the first build should be able to do incremental builds equally fast. That includes ninja, redo, or tup. (Of course, there would be optimization details in the implementation, and ninja is likely to be fastest at present. But the design itself doesn't preclude any of them being fast.)

For full builds, I don't see any reason tup would be particularly fast for speed, in fact. Auto-calculation of dependencies sounds nice, but I personally don't trust it; "perfect" is harder to attain than it sounds. For example, what do you do if one of my build rules retrieves a list of files using wget? redo can handle this, but tup could never automate it "perfectly" (since there are so many possible definitions of perfect), so you will always have weird tradeoffs. I don't really believe in the concept of perfect automated dependencies. Of course, if it works for you, then go for it; that's kind of an edge case.

-----


> For full builds, I don't see any reason tup would be particularly fast

Indeed.

> "perfect" is harder to attain than it sounds. For example, what do you do if one of my build rules retrieves a list of files using wget?

I wholeheartedly believe that, in that case, you deserve all the suffering your build process calls for :). But seriously, this should be a never-satisfied phony target in any build system.

> I don't really believe in the concept of perfect automated dependencies.

Yes, there is an implicit definition of "perfect" in my writing, and that is: "If any file was consulted in the building of an object previously, and that file has changed, then the dependent object will be rebuilt again".

I don't believe in perfect manual dependencies. So if we are both right, guaranteeing robust builds is not possible :) (which is not an unreasonable conclusion, IMO)

Note that tup makes the implicit dependency on tools explicit (oh, you executed /usr/bin/gcc - that's a dependency. It changed? we need to rebuild). I have never seen any explicit build script do that.

redo can obviously do that -- but have you ever written something like "redo-ifchange /usr/bin/gcc"?

> Of course, if it works for you, then go for it; that's kind of an edge case.

I'll be switching from Makefiles to redo for my next big non-Windows project. I like the idea of tup, but redo's pragmatism is a win for me.

And ... my ideal tool would be a mode for redo which would track execution and file access, and give warnings like "your build depends on /usr/bin/gcc and /usr/include/stdio.h but does not mention them", letting me either make the dependency explicit or ignore it -- but not be ignorant of it.

-----


I actually have seen build systems that depend on the gcc version (see the 'buildroot' project for example); it's virtually never what you want, because stupid things like installing a tiny bugfix to libc or gcc cause millions of lines of code to rebuild unnecessarily. "Perfect" is not so perfect in that model. For the same reason, a lot of people, myself included, prefer not to include system headers (/usr/include) in their .o file dependencies (and gcc offers a way to decide which you want in your autodeps files).
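That choice comes down to a compiler flag; as a generic sketch (not from any particular project's makefile), gcc's -MMD writes dependency files that omit system headers, while -MD includes them:

    # Sketch: -MMD leaves system headers (e.g. /usr/include) out of the
    # generated .d files; use -MD instead if you do want them tracked.
    %.o: %.c
        $(CC) -MMD -MF $*.d -c -o $@ $<

    -include $(wildcard *.d)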

All that said, I'm not exactly opposed to auto-calculation of dependencies, I just prefer it be optional. Your last paragraph suggests you're okay with such an approach. With that in mind, I think it would be fine to extract out the parts of tup that calculate dependencies, for example, and wrap your .do scripts in that. Then you have the choice.

-----


I use tup: http://gittup.org/tup/ but I'd like to hear from the author of Ninja what his opinion is on tup.

-----


This always comes in threes: someone mentions ninja, then comes redo, and lastly tup :)

(I've seen it in two places while reading mailing lists.) LOL, and I'm brewing my own solution for brewing other people's solutions :)

-----


I also use and enjoy tup, which has the significant features of automatically constructing most of the dependency graph, and proven optimality and correctness for incremental rebuilds. Not to mention that it's highly usable standalone.
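For context, standalone use is quite lightweight; a hypothetical Tupfile for a tiny C program might look like the sketch below, with tup tracing what each command actually reads to fill in the rest of the dependency graph:

    # Hypothetical Tupfile: compile every .c file, then link the objects.
    : foreach *.c |> gcc -Wall -c %f -o %o |> %B.o
    : *.o |> gcc %f -o %o |> hello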

Does Ninja offer any of that? It doesn't seem to, judging from a skim of the docs.

-----


I (the author of Ninja) think tup is a fine choice for your project.

Ninja was designed to work within a specific pragmatic context: a very large project (Chrome) that had existing constraints on how the build works. (This design also makes Ninja suitable for use from CMake, which means you can use Ninja to build e.g. LLVM faster than with the existing alternatives.)
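As a rough sketch of what that looks like in practice for a CMake-based project (generic paths, out-of-tree build):

    # Out-of-tree build using CMake's Ninja generator.
    mkdir build && cd build
    cmake -G Ninja /path/to/source   # writes build.ninja instead of Makefiles
    ninja                            # builds from the generated file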

Here is a longer post on the subject. https://groups.google.com/group/ninja-build/msg/b52e7d3b77bb...

Most projects should probably not use Ninja. A previous iteration of the home page tried to scare users away. I could probably improve that.

-----


Ninja is the build system of my dreams: minimalistic, clean and simple. I tried it on some small projects about a year ago, and it seemed very fast and stable.

In those days I generated .ninja files with shell scripts, so I'm happy to hear of the CMake support now. It's still a pity that the ninja build system is not included in most Linux distros; I believe that would significantly increase its popularity among developers.

-----


More ninja benchmark numbers: https://plus.google.com/101038813433650812235/posts/irc26fhR...

-----


I've been looking to move a project's build system from Ant to something else, and this might just be it. Now to see if I can fiddle with Qt to get it to generate .ninja files rather than Makefiles.

-----


How is this better than http://industriousone.com/premake ?

-----


Wrong question; they're working on different levels of abstraction. Ninja is designed to be a very fast and lightweight tool for doing work based on a dependency graph: actually knowing how to build anything on its own is an explicit non-goal.

Premake, on the other hand, is a full-blown build system that can make decisions, knows how to perform specific code-related tasks, and can then generate rules for other tools to follow. Unless I'm mistaken, Premake could hypothetically output a build.ninja file.

-----


It seems to be faster (starts building faster) and gives progress information make lacks. See http://neugierig.org/software/chromium/notes/2011/02/ninja.h... (for motivation) and https://plus.google.com/108996039294665965197/posts/SfhrFAhR... (for benchmarks). TL;DR: Firefox takes 14m to fully build with make and a no-op build takes 57.9s; Chrome takes 30m with ninja, but a no-op build takes only 0.74s.

-----


I don't know the design goals of premake, but ninja is very, very fast at deciding what to build (a no-op build of a large project that might take a traditional make system several seconds is almost instantaneous with ninja, allegedly).

For those already using CMake, ninja will slot right in.

Are there any large projects using premake? Ninja+gyp is used by Chromium.

-----



