
Chromium Notes: Ninja, a new build system - mattyb
http://neugierig.org/software/chromium/notes/2011/02/ninja.html
======
thristian
I think it's worth mentioning another alternative build system I came across
recently, redo:

<https://github.com/apenwarr/redo#readme>
Rather than yet another custom syntax, build scripts are ordinary shell
scripts (or, at your option, scripts written in any other language that can be
invoked from a shebang line). And yet, redo makes it much, much easier than
make to record dependencies and track changes, and hence rebuild the exact
minimum number of files necessary.
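As a taste of the approach, a redo build script for object files might look
like the sketch below. It is not standalone - it assumes redo is installed,
the file names and flags are hypothetical, and by redo's convention the script
is invoked with $1 = target name, $2 = target basename, and $3 = a temporary
output file:

```sh
# default.o.do: builds any foo.o from foo.c.  Invoked by redo itself, so
# this is an ordinary /bin/sh script, not a custom build language.
redo-ifchange "$2.c"        # record the dependency as part of the build
gcc -O2 -c -o "$3" "$2.c"   # write to redo's temp file; redo renames it on success
```

Because the dependency is declared while the target is built, redo never needs
a separate dependency-scanning pass.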

(previously: <http://news.ycombinator.com/item?id=2104803>)

~~~
kkowalczyk
This approach is not portable: it'll only work on Unixy OSes.

~~~
Stormbringer
_"This approach is not portable: it'll only work on Unixy OSes."_

I weep for the lost souls of those poor benighted developers still stuck on
Windows. No, no wait: why should we care about Windows developers? If they're
still stuck there, most likely they are deeply committed to the Microsoft
Visual Studio toolset, in which case they don't (or can't) care about what
goes on in the rest of the universe, and they really don't matter. (By which
of course I mean that as _fellow human beings_ they matter, and of course I
have sympathy for their suffering, but by their own choice they are locked in
such an impregnable ivory tower that it is neither practical nor economic to
try to break in and rescue them)

Other than that, who else uses Windows? Oh yes, you have corporate teams doing
'enterprise' Java. They'll typically be using some colossally retarded Ant
build system that takes 7-30 _minutes_ to run (sadly, I'm not even
exaggerating for dramatic effect). The problem with Ant is it is so easy to
just bolt on 'one more thing' that it rapidly evolves to some horrible beast
of a thing that lumbers around sucking everything into it, Katamari Damacy
style. Really, there's no help for them either. You can't for example suggest
that blowing away the entire database and recreating it from scratch in order
to run the entire set of automated unit tests is something best left to the
continuous integration server, or that it should be done once a day - maybe out
of hours or at lunchtime. No no, it has to be done every time you do a build.
And that doesn't even touch on the ginormous mess that is the enterprise
components, where each application server has its own arcane and unholy
rituals to create its beans from the blood of unicorns and tears of virgin
developers... try messing with that abomination and you're in for a world of
pain. There's just no helping them either, though in their case usually they
want to be helped, but they are captive to the primary problem of enterprise
development, which is that the people who choose technologies and mandate
tools and processes are usually so far removed from the actual use of those
tools as to be completely immune to the pain, and unable or unwilling to hear
the wailing and gnashing of teeth of the programmers.

Don't even get me started on checkstyle with rules like 'no line can be longer
than 80 characters' (despite there being absolutely no good reason for this
other than the horrible, horrible UI of Eclipse - in its absence you can
easily fit way more than 80 characters on the screen).

 _twitch_

~~~
bzbarsky
> Other than that, who else uses Windows?

Anyone writing cross-platform software of any kind. In the context of this
article, say, the Chrome team.

~~~
Stormbringer
In the context of this article?

The same article which specifically mentions that most of them are running
Linux? Or a different article?

------
jedbrown
He should probably look at tup (<http://gittup.org/tup/make_vs_tup.html>),
which, when using inotify (otherwise stat must be called O(n) times), pretty
much always starts building (or reports nothing to do) within a few milliseconds.

~~~
eru
Thanks for reminding me of tup again. I read about it earlier, and had always
meant to go back to it and give it a try.

And I wonder whether we can make a version of git that uses inotify.

If he's willing to endure a long-lived server process, he can probably have
no-op builds with a tup-like system in less than a few milliseconds.
(Basically as long as it takes to run through a single `if` and return to the
shell; since no news is good news.)

~~~
tonfa
You can definitely make a version of git that uses inotify (mercurial has been
doing it for a couple of years).

~~~
eru
Thanks. I wasn't aware of that mercurial extension.

------
cdavid
His point about scons is unfortunately well known. You can find an explanation
from Steven Knight (head of the scons project) about why scons failed for
chromium on the scons mailing list ([http://old.nabble.com/why-Chromium-stop-
using-SCons-for-buil...](http://old.nabble.com/why-Chromium-stop-using-SCons-
for-building--td29482303.html)).

It would be interesting to see what would happen if they were using waf
instead of scons. Waf is also in python, and started as a fork of scons (but
it is so different that it can now be considered a totally different design
and codebase). Waf is much faster than scons (easily one order of magnitude),
to the point that I think it would be hard to be much faster without losing
features and/or giving up system-specific tricks (file-change notification,
checksummed file systems, etc.).

Samba has been using waf for > 6 months now, and they seem quite happy with
it. As a former user/contributor of scons, I much prefer waf now, and anyone
interested in complete build systems should look at it IMO.

~~~
DTrejo
_I wish I had not used the WAF build system, it works - it’s okay, but it
introduces more WTFs than necessary. I can perhaps dig out from under WAF at
some point but it would be a monumental undertaking._

— Ryan Dahl, in reference to Node.js

[http://bostinnovation.com/2011/01/31/node-js-
interview-4-que...](http://bostinnovation.com/2011/01/31/node-js-
interview-4-questions-with-creator-ryan-dahl/)

~~~
cdavid
I cannot find a link, but the samba team seems happy with their choice of waf,
and samba is closer to chromium in terms of what it needs from a build tool
(large multi-platform compiled codebase). Your link is not really informative -
he does not like it, but we don't know why. I am not surprised it has WTFs for
something like nodejs (which, let's be honest, has rather simplistic needs for
a build tool compared to samba or chromium). Waf is far from perfect, for sure.

I have experience with quite a few build tools, from autoconf/make to waf,
along with custom ones, and waf is by far the one with the fewest WTFs so far
if you want to do something hard. It gives you the power of a real language,
which is needed for complex builds IMHO. It looks like node.js is now using
cmake, and its macro language is quite weak and error-prone IMO, although it
definitely works for non-trivial projects. Waf is also fast, small enough that
you can hack on it if you want (compared to cmake, with its C++ plus an
architecture based on autogenerated makefiles...), and has just enough usage
by non-trivial projects for a tool I may depend on (samba and ardour are two
quite big, multi-year, >100-kloc, cross-platform, multi-language projects).

------
dilap
I'll second the suggestion to take a look at tup -- it is based on some really
good, clear-headed foundational thinking about how to make incremental builds
fast, plus the implementation looks good (though I have only tried it out on
experimental toy setups, and it is still pretty new, so who knows).

Regarding the specifically cited point of including dependencies on compilation
flags: unless I am confused, I believe it can be done much more quickly in
standard make, in one of two ways:

First way: make the build path of the object file dependent on the build
flags. This has zero performance penalty, and also has the nice side-effect
that when changing flags (e.g., from release to debug build and back again),
you don't have to recompile everything, because you still have the previous
build sitting around.

Second way: store the build flags in a separate makefile snippet (which you
can either include or get the value of using $(shell)), and add that as a
dependency of the object files. This has minimal performance impact since it's
just another normal dependency for the object files. (This second trick is
from one of the articles linked to about redo posted a few days ago; sadly I
don't recall exactly which.)
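The second way above can be sketched as a small runnable demo. In the snippet
below all file names (flags.mk, hello.c, hello.o) are hypothetical, the recipe
merely touches the object file so no compiler is required, and GNU make is
assumed:

```shell
#!/bin/sh
# Store the build flags in a separate makefile snippet (flags.mk) and list
# that snippet as a prerequisite of the object file, so that editing the
# flags forces a rebuild - an ordinary dependency, with no extra cost.
set -e
dir=$(mktemp -d)
cd "$dir"

echo 'CFLAGS = -O2' > flags.mk
echo 'int main(void) { return 0; }' > hello.c

# The one-line recipe after ';' avoids the usual tab requirement.
cat > Makefile <<'EOF'
include flags.mk
hello.o: hello.c flags.mk ; @echo "building with $(CFLAGS)" && touch hello.o
EOF

first=$(make hello.o)    # builds: hello.o does not exist yet
second=$(make hello.o)   # no-op: nothing changed
sleep 1                  # ensure a newer mtime on coarse-grained filesystems
touch flags.mk           # simulate editing the flags
third=$(make hello.o)    # rebuilds, because flags.mk is a prerequisite
echo "$first"; echo "$second"; echo "$third"
```

Running it shows a build, then an up-to-date no-op, then a rebuild triggered
purely by the flags file changing.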

------
edsrzf
Direct link to the GitHub project: <https://github.com/martine/ninja>

I'm always interested in alternatives to Make because I just find it so
painful. However, I'd say that only about half of Make-related pain comes from
its dependency management. The other half, to me, is in using its language,
and Ninja doesn't seem to do anything to ease that pain. Its manual says: "You
should generate your ninja files using another program." That seems like a bad
sign to me.
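That said, the generated files are at least easy to read. A hand-written
build.ninja sketch (file names, rule names, and flags all hypothetical) looks
roughly like:

```
cflags = -O2 -Wall

rule cc
  command = gcc $cflags -c $in -o $out

build hello.o: cc hello.c
```

There is deliberately no conditional logic or wildcard syntax here; the
manual's point is that a generator script, not a human, normally emits this
file.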

Tools like CMake can be helpful when there are lots of configurations
available and dependencies to check, but on a small project I want to write a
quick script that will just work. CMake and its ilk add another layer of
complication that I don't want to have to deal with most of the time.

~~~
pyre
Well, Ruby has Rake (<http://rake.rubyforge.org/>), and Python has SCons
(<http://www.scons.org/>) or Waf (<http://code.google.com/p/waf/>), and as
others here have mentioned there is redo (<https://github.com/apenwarr/redo>
or <http://apenwarr.ca/log/?m=201012#14>).

------
TimothyFitz
This is the exact reason why Chad Austin started working on ibb (I/O-Bound-
Build): [http://chadaustin.me/2010/03/your-version-control-and-
build-...](http://chadaustin.me/2010/03/your-version-control-and-build-
systems-dont-scale-introducing-ibb/)

------
samuel1604
He says that git is used a lot by googlers and he's using github for his
project; it would be nice, then, to see git integrated on code.google.com.

~~~
andreaja
It really would, but last I heard they'd decided on (and implemented) hg as
their DVCS on the serverside. Fortunately, "clientside" git integrates with
more serverside VCSes than any other system I'm aware of :)

~~~
newman314
I was under the impression that they were still using Perforce or has that
changed?

~~~
durin42
Nearly everything internal is still Perforce. The Android group (and maybe
bits of Chrome? not sure) is nearly the only user of git.

~~~
js2
Chromium uses svn, ChromiumOS uses git.

------
listic
Wow, I haven't been working on projects of such scale, but I assumed that the
existing infrastructure (make, gcc, etc.) was good enough for large projects.

Does it mean that there's something wrong with the current state of affairs,
that you have to rebuild your infrastructure for a large project? Or does it
mean that Google is so unbelievably great that nothing is good enough for
them, so if it's important they have to redo it from scratch?

~~~
kkowalczyk
The post answers your question: he did this because existing solutions were
too slow.

~~~
theFco
Chromium still uses a gnu make based build system. He did this "for fun" to
investigate a potentially faster way. I am very envious; my "for fun"
activities never result in something so cool.

~~~
MikeGerott
IMHO it would be much more productive to use/improve existing build tools,
such as tup: <http://gittup.org/tup/>

------
Aegean
Why do new build systems have to use some clunky old make-style syntax? For
me, speed is hardly a primary goal. A build system must be understandable,
readable and easy to debug. For starters, it should have an easy to read
syntax.

If you have a build system which your users are also concerned about,
readability and maintainability are a lot more important. SCons managed to
achieve most of this by using a Python syntax. But its behavior can be quite
unpredictable at times.

------
krakensden
It's interesting, but probably not that surprising, that the Linux port has
faster build times, given that building the Linux kernel is the primary metric
that kernel developers are interested in, and one they try to improve
constantly.

~~~
rg3
I don't think you can take the merit away from the build system author so
easily. As the article states and the manual mentions, a clever build system
based on make, without using recursive Makefiles, took 10 seconds to start
compiling after modifying a file, and the new build system takes less than one
second, on the same Linux system. No doubt the OS has many things to say
regarding process start time, filesystem access, etc. But the big differences
come from the build system, entirely on the userspace side.

~~~
pyre
I think that he meant that the Linux tools were faster because the Linux
kernel devs also use them and contribute back to those projects in an effort
to improve their own build times. (i.e. if make was slowing down the kernel
build times, then some kernel devs would either build something else or
improve make)

------
MikeGerott
Why not reuse redo/CMake/tup/Waf or a myriad of other build tools? Why does
Google reinvent the wheel one more time?

Is it because of its Not-Invented-Here culture?

~~~
Stormbringer
I had a similar initial reaction, perhaps even stronger than yours (mine was
of the 'good grief, does the world really need another build system' variety),
but the article is well written enough and interesting enough to justify
itself, I think.

