Ninja: A simple way to do builds (jvns.ca)
122 points by Bella-Xiang on Oct 27, 2020 | 90 comments



For comparison, here’s a makefile for the trivial example (though you’ll have to fix the indentation):

  all: $(patsubst %.svg,%.pdf,${things_to_convert})

  %.pdf: %.svg
      inkscape $< --export-text-to-path --export-pdf=$@
${things_to_convert} is left undefined, as in the source article. It might be something like $(wildcard *.svg) or $(shell find src -name '*.svg').

I like makefiles in substantial part because there’s a very high probability of Make already being installed. I’m comfortable enough with them that I didn’t need to look anything up to write any of what I wrote above. It’s a bit of a pity about the sigils; though would people really find $first_prerequisite and $target better than $< and $@? No idea. Ninja uses $in and $out, but it’s simple enough that it can reasonably get away with that. Make, on the other hand, is complex in a way that is pretty powerful if you know what you’re doing, but generally more manual (e.g. Ninja looks like it tracks dependencies roughly automatically, whereas in Make dependencies are manual, as many have found to their sorrow), and it makes it fairly easy to shoot yourself in the foot if you do the wrong thing, which is easy to do if you’re not an expert, and not many are.
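
And for comparison the other way, the Ninja version of the trivial example looks roughly like this (file names invented; in practice a small generator script writes one build line per file, which is the part Make’s pattern rules give you for free):

  rule svg2pdf
    command = inkscape $in --export-text-to-path --export-pdf=$out

  build foo.pdf: svg2pdf foo.svg
  build bar.pdf: svg2pdf bar.svg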


You are not wrong, but if the world were not full of even worse build systems, it'd be a bit like saying I prefer to drink from puddles because I find them wherever I go.

Make is everywhere, but apart from UX issues such as the dumb syntax or the different flavors you'll encounter, the fundamental problem with it is that it just can't build things properly (the existence of make clean is the dead giveaway here). That's fine for trivial stuff, but for real work you really owe it to yourself (and society at large) to use a build system which can correctly build things.

Some examples applying to the above:

- if your paths have white space in them, you are generally fucked

- if you change your Makefile, nothing will be rebuilt

- if inkscape writes out a partial file because of a full disk and you fix the problem, the partial svg will not be rebuilt

- if you install a new version of inkscape nothing will be rebuilt


> - if your paths have white space in them

This is probably my second biggest complaint about Make.

> - if you change your Makefile, nothing will be rebuilt

Heh, there’s a common trick for that: just slap Makefile on the end of prerequisite lists. I’ve worked with more complex projects that extend this to versioning the makefile with a `build/.makefile-v${MAKEFILE_VERSION}` common prerequisite so that you can bump the version if the change is such that you’ll need to rebuild everything.
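
Spelled out, the two tricks look roughly like this (file and version names made up; recipe indentation is tabs, as ever):

  # rebuild when the Makefile changes, or when MAKEFILE_VERSION is bumped
  MAKEFILE_VERSION := 3
  STAMP := build/.makefile-v$(MAKEFILE_VERSION)

  $(STAMP):
      mkdir -p build && touch $@

  %.pdf: %.svg Makefile $(STAMP)
      inkscape $< --export-text-to-path --export-pdf=$@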

> - if inkscape writes out a partial file because of a full disk and you fix the problem, the partial svg will not be rebuilt

Or the more common form: you run a fallible program and it produces errors but still produces a file (e.g. if you used piping, `> $@`, instead of a `--output=$@` command line argument that only creates the file after processing has succeeded).

I think “build errored but produced a file” is probably my biggest complaint about Make. You can work around it in a few ways; `|| (touch --date=@0 $@; false)` is probably my favourite, which I used in https://chrismorgan.info/blog/make-and-git-diff-test-harness....
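
In rule form, that workaround looks something like this (a sketch; backdating the bad output means the next run will retry it, while still leaving it around to inspect):

  %.pdf: %.svg
      inkscape $< --export-text-to-path --export-pdf=$@ || (touch --date=@0 $@; false)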

> - if you install a new version of inkscape nothing will be rebuilt

I’m curious: would Ninja rebuild on a new version of Inkscape? I’d be pleasantly surprised and view it much more favourably if it does.

In the scope of Make, prerequisite satisfaction is all reckoned with timestamps, which is a thing I’m not always fond of, so you definitely can’t do this properly. I think the closest you could get is generating a file containing the versions of all your prerequisites, and arrange for it to only be updated when they’ve changed. Basically a slight variant of the MAKEFILE_VERSION trick discussed earlier, though definitely much more involved, especially if you want it to have one filename always rather than just being a hash of all the versions.
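
A sketch of the simpler variant, where the stamp’s file name (rather than its content) carries the tool version, so installing a new Inkscape forces a rebuild (names made up):

  INKSCAPE_VERSION := $(shell inkscape --version | tr -cd 'A-Za-z0-9.')
  TOOLSTAMP := build/.inkscape-$(INKSCAPE_VERSION)

  $(TOOLSTAMP):
      mkdir -p build && touch $@

  %.pdf: %.svg $(TOOLSTAMP)
      inkscape $< --export-text-to-path --export-pdf=$@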

And yes, all this is supporting your point that Make is not the best thing in this space, which I quite agree with. But for simple things I reckon it’s often still worth using despite this.


Re a new version of inkscape: Ninja does not force fully-correct builds, so it's on the author of the build.ninja file to either model this or not. (This is part of Ninja's objective of not mandating policy.)

To get this fully correct, note that you would need to model all inputs that Inkscape might read, which includes not only binaries but also configuration files or dynamically-loaded plugins. Some systems (I think tup?) track this by tracing executed binaries to see which file system operations they do.

Even with tracing it can be really subtle: for example if a C file has an #include "foo.h", creating a new file named foo.h earlier in the include search path is a meaningful change, which means a file-access-tracing build system also needs to track file-not-found results from previous executions.

(For C programs, also note that getting header dependencies correct includes similarly tracking all the headers in /usr/include that your program transitively includes. See the -MD vs -MMD flags to gcc.)


> Even with tracing it can be really subtle

The correct way of doing this is not tracing but a hermetic build. If you do a sandboxed build with nix, you will get a specific version of inkscape with a specific hash, which will include any inkscape extensions you use. And since the build will only be able to read files in the sandbox, none of the problems you list above can happen -- you can't pull random stuff from the network either.

https://nixos.wiki/wiki/Nix#Sandboxing


What you wrote is true, but it also seems to me unlikely that the OP is going to set up a sandboxed hermetic build to process a handful of pngs for her zine. I was instead explaining why Ninja doesn't attempt clever tricks to try to better approximate correctness.


> What you wrote is true, but it also seems to me unlikely that the OP is going to set up a sandboxed hermetic build to process a handful of pngs for her zine.

But it's trivial!

Assuming you fix the OP's Makefile (so much for Makefile syntax usability):

     # put into Makefile
     THINGS_TO_CONVERT := $(wildcard *.svg)
     all: $(patsubst %.svg,%.pdf,$(THINGS_TO_CONVERT))

     %.pdf: %.svg
          inkscape $< --export-text-to-path --export-pdf=$@

     # put into default.nix
     with import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/7ad5e816faba3f808e2a8815b14d0f023a4e2160.tar.gz") {};
     stdenv.mkDerivation {
      name = "zine-images";
      src = ./.;
      buildInputs = [ inkscape ];
      installPhase = "mkdir $out && cp *.pdf $out";
    }
Now make sure you have sandbox = true in /etc/nix/nix.conf, run `nix build` in the directory, and boom, done: a hermetic build of zine images. Good thing too: I got a warning about --export-pdf being deprecated when I ran this. Note that make, and the dependency on it, is not explicitly specified; mkDerivation has some basic smarts to figure out from the presence of a Makefile that it should run make.

(I'm hardcoding the version of nixpkgs directly into the default.nix just for demo purposes; it's better to use niv or similar to manage a json file with versions for you.)


Of course we could also get rid of the Makefile and simply add a buildPhase like this:

      buildPhase = "find -maxdepth 1 -name '*.svg' -print0 | xargs --null -I'{}' -n1 inkscape '{}' --export-text-to-path --export-pdf={}.pdf";

This should also handle filenames with spaces etc. correctly (although I have not tested it).


> I’m curious: would Ninja rebuild on a new version of Inkscape? I’d be pleasantly surprised and view it much more favourably if it does.

I haven't used ninja, but nix and (unless memory fails me) bazel derived build systems will. Nix basically has a completely hermetic world, and everything is referred to by hash that includes the recursive dependency tree.


I've never seen the `touch` version of that safeguard against broken output files before. I've only seen the `rm` version. Is `touch` advantageous? Edit: oh, oops, I should've followed the link; the output file in the example seems to be test output, which would be more distinctly useful to inspect after the fact on partial failure. That makes sense.


> > - if your paths have white space in them

>

> This is probably my second biggest complaint about Make.

If only more of the world used rc instead of bash, we'd be in a better place.


> If only more of the world used rc instead of bash,

Make does not use bash. Make uses whatever shell the SHELL variable points to (and, unlike most variables, GNU Make does not pick SHELL up from the environment); if you don't set it, it defaults to /bin/sh.

Meanwhile, you can even use Python as the shell in a Makefile, for what it's worth.

https://www.gnu.org/software/make/manual/html_node/Choosing-...
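
Something like this works in GNU Make (a toy sketch; with .ONESHELL the whole recipe is handed to a single `python3 -c` invocation):

  SHELL := python3
  .SHELLFLAGS := -c
  .ONESHELL:

  hello:
      print("building", "$@")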


rc?


https://en.wikipedia.org/wiki/Rc

An alternate shell, from Plan9.


> - if your paths have white space in them, you are generally fucked

Paths containing ':' are also fun. That make doesn't handle paths - and most likely other things - as a unit, but just substitutes their string representation into the makefile, is just asking for trouble.


> Paths containing ':' are also fun.

To be fair, anyone who uses reserved/special characters in file names is asking for trouble in pretty much any usage.


But Linux filenames basically allow any byte sequence except NUL and /, so I think it's make's fault for not being able to support that. Other proper build tools support it correctly.


> But Linux filenames basically allow any byte sequence except NUL and / (...)

Could/should/would...

Even though UNIX doesn't impose many restrictions on file names, that doesn't mean it's in your best interest to use any weird character in them.

I mean, do you honestly believe it's a good idea to have file names that are 256-characters long, have line breaks in their name, and have substrings such as "sudo rm -rf /"?


> I mean, do you honestly believe it's a good idea to have file names that are 256-characters long, have line breaks in their name, and have substrings such as "sudo rm -rf /"?

Yes. It's usually badly written programs or scripts that choke on spaces, newlines, and other such things.

BTW, I would like to impose a restriction that filenames must be valid UTF-8 strings instead of random bytes.


The point isn’t whether it’s a good idea, but that a program shouldn’t misbehave.


Well, just because make is faulty doesn't mean Linux isn't too ;)

Spaces -- ok. But there is absolutely no use case for allowing near-arbitrary byte sequences in filenames; it causes no end of trouble, and it would have been trivial to fix in a non-disruptive manner ages ago (mount -o allowreallydumbfilenames, off by default).


Having to be aware of which characters in a path are considered special by which program just doesn’t scale very well.

There shouldn’t be a difference between what the OS and a program consider a valid path.


Still, it's not great to trip over them.


> - if inkscape writes out a partial file because of a full disk and you fix the problem, the partial svg will not be rebuilt

You can use the `.DELETE_ON_ERROR` special target to solve this problem. You can read more about this in this short article:

https://innolitics.com/articles/make-delete-on-error/
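
In practice it's a one-liner near the top of the Makefile (sketch):

  # delete a target whose recipe exits non-zero, so the next run retries it
  .DELETE_ON_ERROR:

  %.pdf: %.svg
      inkscape $< --export-text-to-path --export-pdf=$@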


> You are not wrong, but if the world were not full of even worse build systems, it'd be a bit like saying I prefer to drink from puddles because I find them wherever I go.

That assertion is quite wrong. If you really want to go with that metaphor, then the alternative to drinking from all the puddles you have readily available is to ignore them all and instead waste time digging your own hole in the ground each and every time you feel a little thirsty.

Then the only practical and conceivable advantage you get from all the extra work is that you get to enjoy a particular earthy flavour which, in the end, can't objectively be described as an improvement.


> Ninja looks like it tracks dependencies roughly automatically, whereas dependencies are manual in Make

Ninja has built-in support for the ".d" files generated by gcc's "-M" family of flags, but you still have to make your gcc command-line actually pass the right flags, and you have to tell Ninja the name of the ".d" file. See https://lwn.net/Articles/706404/
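
Concretely, a hand-written rule wired up for depfiles looks something like this (a sketch; `deps = gcc` tells Ninja to parse the .d file into its own dependency log and then delete it):

  rule cc
    command = gcc -MD -MF $out.d -c $in -o $out
    depfile = $out.d
    deps = gcc

  build foo.o: cc foo.c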

Another neat feature of Ninja is that it's much better at "pruning" downstream targets when you rebuild something (because of an out-of-date timestamp) but it turns out to be bit-for-bit identical to what it was before. In this case, downstream targets that depend on this intermediate target don't need to be rebuilt.
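
(That pruning is the `restat = 1` rule attribute. A sketch of the usual pattern, with a made-up generator script, where the command only overwrites its output when the content actually changed:)

  rule gen_version_h
    command = ./gen_version.sh $out.tmp && (cmp -s $out.tmp $out && rm $out.tmp || mv $out.tmp $out)
    restat = 1

  build version.h: gen_version_h | gen_version.sh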


> I like makefiles in substantial part because there’s a very high probability of Make already being installed. I’m comfortable enough with them that I didn’t need to look anything up to write any of what I wrote above.

There's an even higher probability of Perl being installed, and I'd much rather use that than Make. I've even been known to write a Perl script that calls Make after doing something complex that would be "not fun" to add to a Makefile.


> There's an even higher probability of Perl being installed

You're showing your ignorance there, champ. Make is one of the utilities that's required by the UNIX specification, which means that each and every UNIX or UNIX-like OS is required to make Make available to users in order to be considered a UNIX.

So to make this quite explicit, for starters each and every install of macOS ships with make.


… sort of. Last I checked, the Make features required by POSIX were very anemic. What winds up all over the Web in tutorials etc. is mainly GNU Make, which is extended way beyond that, including in ways which can become practical necessities pretty fast.

Which would be why my projects use the “GNUmakefile” file name explicitly, even though e.g. I tend to write to POSIX sh where I can rather than following the “bash scripting” trend. I also remember seeing a project in the wild that had a separate GNUmakefile and BSDmakefile, presumably for similar reasons.

If you use only POSIX make syntax, great for you, but there's definitely a barrier compared to what a lot of people are used to thinking of as Make nowadays.

(This is also ignoring the part where “higher probability” at a glance refers more to popularity than to standards compliance, and I don't read the parent as implying being in an environment where POSIX/SUS compliance is directly related to the set of target systems.)


> ... sort of. Last I checked, the Make features required by POSIX were very anemic.

You're trying to move the goal post here.

The question is not which obscure feature is made available only by a specific implementation of Make. That's completely beside the point.

The point is that Make is widely available as it is a required component on all UNIX operating systems, and this means that any standard Makefile you might have is assuredly supported on any UNIX installation out-of-the-box.


They said 'installed', not 'available'.


In your personal opinion why do you believe that installed software is not available?

Because make is indeed installed on all UNIX systems; otherwise they are not UNIX.


macOS is just about the only common modern dev OS I can think of that's "certified Unix". Conversely, most Linux distros don't install dev tools out of the box at all, including make.


The ninja author has a (relatively) recent blog post about design decisions and lessons learned:

* The success and failure of ninja http://neugierig.org/software/blog/2020/05/ninja.html


Thanks for the link, it was a very nice read!

The author wrote Ninja initially as a weekend project. I've written a small build system like that as well, though I think I am the only person using it :D Funny to think that your weekend project could accidentally become something widespread like Ninja (used to build Chrome, Android, Swift, etc).


I wish more people were taking a look at Bazel, and the Bazel family of build systems. I've looked at Meson/Ninja/Make and it works, but it's truly very complicated and very difficult to get hermetic & reproducible builds from these systems. In my opinion, Bazel will be the future once the currently ongoing external dependency management design doc is implemented. I've migrated quite a few builds to Bazel at work and I've seen a dramatic reduction in compile times - not even from compiling source code! Doing things like packaging zips, generating code, generating docs (swagger), etc. all adds up in CI time, and automating, and caching, absolutely all of it is astonishingly useful.


Bazel is a decent idea but poor execution. I think Nix is the future because it already has the "external dependency" part on lock and it's just a matter of making more fine-grained stuff.

Meson + Nix, skipping the Ninja part, is something I would really like to see. I prefer it when the "big dumb builder" and the DSLs are developed a bit separately, too. Agreed, Ninja's lack of purity is a real bummer.


I'm not so sure about Nix, since it seems much more complex in presentation. The meat and potatoes of Bazel is the BUILD language. There may be some warts right now but the community is attempting to address these warts: difficulty around specifying versions of packages, semantics for building containers, etc.

Nix users do not seem to think its build language is complex. I think Nix might be the world's best attempt at something like this, but it's the same reason why Haskell hasn't gotten adoption: "eeh, it looks complicated".

Everyone understands `some_rule(some="argument")` though.


> There may be some warts right now but the community is attempting to address these warts.

Some of Bazel's design problems do feel like the outcome of a poorly executed requirements gathering process. Last time I checked, it assumed for some reason that C/C++ interface headers shared a common root folder with libraries, and if that root folder is not the immediate parent folder then it ends up including all files within that tree branch in its scope.

I also recall that the process of including system libraries that don't support Bazel is needlessly convoluted, and involved adding libraries and interface headers in separate targets just to be able to control visibility. This design oversight was especially disheartening because it made it quite clear that the use case of supporting external packages benefitted from zero attention.


This, 100%. I had a very long chat with the Bazel team about this in a Github issue. They do not want to budge.


Monorepos do have advantages, but Google fundamentally uses them as an excuse to fail to understand modularity and composition. That's a fatal mistake.


I think you are missing something about both tools: the BUILD language is still quite high level, and closer to CMake/Meson than Ninja. The Nix language is also on the higher-level side, though not that high level.

The actual "meat and potatoes" of Nix is the derivation graph. I think/hope Bazel has an equivalent, but I'm not sure how exposed it is (advertise your layering, people!). This derivation graph is simple simple as hell---each derivation is a path to an exe, arvgv, environ, etc., and derivations output paths and can depend on the outputs of derivation. This is what Meson/CMake/etc. should learn to output.


In the general category of Bazel-alike build systems, I'd humbly submit that Pants v2 is worth taking a look at. It's a generic, high performance build system for large projects, implemented in Rust with a Python plugin API. The project had a large announcement today in fact! https://news.ycombinator.com/item?id=24911148

For supported languages (the 2.0.0 release and API is brand new, so only Python and Bash are supported so far), the amount of per-file boilerplate is nearly as low as it goes: https://www.pantsbuild.org/docs/how-does-pants-work#dependen...


In the same vein, no one looks at GYP, GN, and now Bazel. Google buries its build tools just as fast as its other products.

And there's the Java dependency, too.


To be fair, gyp is quite awful. I mean, a low-level language to generate project files for a few IDEs from a definition file where IDE-specific settings had to be added anyway? While the source of truth is left out of the equation, rendering those project files read-only and forcing developers to reconfigure their IDEs each and every single time they updated the project? This concept seems to have been thought out by masochists.


Do you know whether/how it can deal with database tables dependent on source .csv files?

My job involves lots of ETL-type work, where the extracted source file(s) for a particular table might have names that include the extract date or version, and a table might depend on other tables. I've been looking at make for dealing with the process, but automatic dependency management seems hard, and dealing with the variable source file names even harder (possibly because I'm not more than a make novice).


Bazel likely won’t deal with this at all, but you could write your own rules to implement something like “insert to a database”.


Out of curiosity, how difficult did you find the migration process? Especially in terms of having to restructure the project.

I have no direct experience, but from what I've heard Bazel is very opinionated about structure, which makes it particularly challenging to migrate pre-existing projects that may not conform to its view. But again, this is all hearsay, so I'd like to hear from someone with direct experience.


It was pretty easy for the software I was migrating because it was already in a monorepo and we were already using some very similar conventions. Anyone writing Java code that uses Gradle or Maven shouldn't have much pain migrating. I'm actually currently supporting both Gradle and Bazel on the same backend code base to give another team (who depends on my code) a chance to gracefully migrate.

The "hardest" thing to migrate has been C and Python code. For these the package for something is based on an absolute file path from the root of your repo. So `company/foobar/thing.py` is `from company.foobar import thing`.

I find this very useful personally because it's now standardized through the codebase (no one just plops a setup.py somewhere and does whatever they want) but it makes the migration difficult because you need to change the paths.

There's some work being done in rules_python and in cc_library/cc_binary that lets you control this behavior and hack it to align with your current company standards, though, which is really nice for a gradual migration.


For cc_library, I think you can now add include paths, so `#include "somethingUnderYourPath"` will be easier.


do you have a link to the design doc?

EDIT: for those searching, I suspect its https://docs.google.com/document/d/1moQfNcEIttsk6vYanNKIy3Zu...


Zim was inspired by all of these and has drastically cut build times for our team. We use it to build hundreds of Go, Python, and Node artifacts for use in AWS. https://github.com/fugue/zim


If you don't know it, CMake can use ninja instead of make for compilation, so instead of "mkdir build && cd build && cmake .. && make" you can run "mkdir build && cd build && cmake .. -GNinja && ninja".

It feels faster on my side projects (haven't benchmarked it really), but most importantly, when running a parallel build it does not interleave the output of the different steps, making it much easier to read warnings and errors.


A small warning for code with many CMake ExternalProject dependencies: the defaults for cmake+ninja will recursively run `ninja -jN` for each level of the dependency tree, which can spawn a very large number of compile jobs.

There is a proposal to add jobserver support for global process count management [1], but it has not been merged for philosophical reasons (IIRC). The CMake folks maintain a soft fork with this patch applied, available in the `ninja` package on PyPI and probably elsewhere.

[1] https://github.com/ninja-build/ninja/issues/1139


You can also use "cmake --build <dir>" to build regardless of which generator was used.


Ninja is especially beneficial on large projects with lots of source files. Where Make can take 30+ seconds to run a "no-op" build, when nothing needs rebuilding, ninja does it instantly.


The conclusion at https://david.rothlis.net/ninja-benchmark/ (2016) was that this doesn't have an impact until you get to really large projects (assuming you are able to utilise make -j).


If anyone wants to prevent that interleaving with Make, try using `make -O` or adding `GNUMAKEFLAGS += --output-sync` to the top of your Makefile.


For cross platform-ness you can do

    cmake -H. -Bbuild -GNinja
    cd build && ninja


It is a mistake to imagine Ninja just replacing Make.

Ninja is the "rebuild-it" part of Make, made really, really fast and clean, a joy to use. The Ninja config file is not something you write or even look at. It's an intermediate file generated by your build-dependency tool such as CMake, Meson, or even (gods forbid) SCons.

Those are smarter than Make in various ways, but are really not very good at all at actually running builds, so you get them to just make a ninja file, instead, and then get the hell out of your way.

This is good because you don't want to analyze the dependency tree on every build when you haven't changed any of it. So, you just run ninja in your edit-build-test loop.

Ninja is smart enough to rerun Meson or whatever if you do change the build configuration. It understands build directories separate from sources, and ccache compiler wrappers to cache build targets, things you don't want your build-configuration infrastructure to bother with.

Besides being fantastic to use, ninja is worthy of study as a truly superb example of a program that does one thing really, really well, and integrates cleanly with other programs that do other stuff: build analyzers on one side, compilers on the other. It demonstrates a design discipline that we would all be better off if everyone followed.


The ESP32 SDK (ESP-IDF) is now based on CMake and defaults to Ninja for the build target. They still support Makefiles as well, but Ninja is quite noticeably faster.


I use ninja over make with LLVM for this reason.


I've only worked on one project (https://www.dpdk.org/) using meson+ninja. Overall it seems to be an improvement, barring the make idiosyncrasies that need to be forgotten.

That being said, if the project had used make properly from the beginning (i.e. avoided recursive calls), it would not have been plagued by such latency for incremental builds. But then, make should not have made it so easy to fall for such a mistake.

I'm still not a fan of having two steps (meson + ninja), but I guess it's always needed for multi-platform projects.

I'm wondering if there is a make rewrite project that tried to make the recursive make antipattern less painful? Maybe trying to recognize sub-calls and avoid creating a sub-process? I guess it would take creating a small internal shell with only make as a builtin command...


There are other things in make that could be improved. If your rewrite fixes those, too, I'm wondering why not use Ninja at that point?


I've been compiling some libraries and projects on my raspberry pi, and I've been having to invoke meson and ninja quite a bit. If nothing else, they've been getting it right on the first try more often than the cmake && make projects. That may be biased though, with newer/more active projects using newer build systems supporting newer hardware.

Anyways, it's been enough that I'm looking into it, but still working (hacking? copy/pasting?) in make


I don't know; I did a few things in Chromium, and admittedly it's a really big project, BUT ninja and autoninja get really cryptic and complex pretty fast.

To be fair, it's better than make or the older gyp setup, BUT for non-trivial workloads, while ninja is pretty fast, it leaves a lot to be desired in terms of DX.


Are you sure that is a Ninja issue and not an issue with Chromium's build generator GN?


Of note, Meson is mentioned as optionally using Ninja as a backend, but the same is true of CMake.

One thing to note is that the simplicity of ninja means that a lot of things need to be repeated. For a project I'm working on, the input build.ninja file is 142MB. Just generating that file takes a few minutes.

I think an improvement would be to be able to split it into sub-trees and have ninja produce a "mark" file whose date represents the last time the sub-tree was touched, and to not even read any sub-tree files (even the subninja file) if an entire sub-tree's date is older than the mark file.


Are you sure Ninja is the cause of the problems there? I agree that it’s repetitive, but you can at least put common expressions into build variables, so the overall build script size should scale about linearly with the number of files.

142MB is massive! How many files are being built and how much work is being done per file? 142,000 files with 1KB of unique command-line flags each?

Also, even though that’s a very large build script, I would have thought generating it should only take seconds, not minutes. How is the script being generated, and how long is spent generating it versus executing it?


The one thing that's keeping me from using Ninja for the use-case Julia is describing is the lack of support for reading environment variables in build.ninja. I understand why that's not part of Ninja, but it does stunt its usefulness.


There's an obvious fix for that... Mix in some M4... ;-)

https://en.m.wikipedia.org/wiki/M4_(computer_language)


She missed the step where you just say [ "$dst" -nt "$i" ] || svg2pdf "$i" "$dst" in her for-loop. That is about as much extra typing/text as her comment and makes that build script about as fast as alternatives for that simple scenario. { This is not to say there isn't maybe more motivation for grander designs..or various failure modes..just that she missed a step. :-) }
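
That is, something like this (a sketch in shell, since that's the syntax above; the article's loop is Python, but the same newer-than check applies):

  for i in *.svg; do
    dst="${i%.svg}.pdf"
    [ "$dst" -nt "$i" ] || svg2pdf "$i" "$dst"
  done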


> for filename in things_to_convert:

Undeclared variables and random indentation levels. Is it safe to assume these examples are abridged? Or is it a case of magic?


Since the comment above that line says this:

> # some for loop with every file I need to build

It's safe to say that it's not a complete example. That said, the indentation levels are not random. These are multiline strings.


Meson still doesn't get submodules right. So far, CMake gets the closest. The Meson developer is a bit of a jerk about this subject, too - he has a very dogmatic view of how a project should be structured and has designed meson around that.

For example, you cannot have nested subprojects with Meson.


That's just BASH with extra steps.


Ninja wasn't created for small one-liners like in this blog post, but for massive C++ builds with tens or hundreds of thousands of source files arranged in a complex dependency tree.

Figuring out quickly what needs to be rebuilt for massive builds like this isn't trivial, but that's essentially the one important thing that ninja does.

Also see:

https://ninja-build.org/manual.html#_design_goals


... What mechanism is used to perform incremental builds? It is referenced in the article but unexplained from what I can tell. What makes it "quick" to determine what needs to be built? Make simply compares timestamps, I believe.

... What mechanisms/tricks speed up builds? Or is it just the "simplicity" that is the BFD?

Edit: It sure would have been helpful if it were stated somewhere that this is "the build system of Chrome" :-)

That statement indicates a huge difference between some rando build system for someone's personal project, and a build system for an operating-system-sized codebase that is over a decade old.


http://www.aosabook.org/en/posa/ninja.html is a book chapter on why ninja is fast.


I'm not sure it's the responsibility of others to ensure you have enough information not to look silly in a hasty post ;)


Ninja uses time stamps. See the “Comparison to Make” section in the linked manual.


Isn't using mtime a bad idea? https://news.ycombinator.com/item?id=18473744


No? The top comment in that thread is the author of Ninja discussing how he addressed the problems described in the article. And, despite its title, the article itself goes on to discuss why and how to use mtime in a build system.


The advantage is the incremental builds they mention. If just one input file changes and you run ninja again, it'll only re-run one step instead of converting all the files again.


I'm pretty sure make does that too. The problem is that most people don't know how to write a proper Makefile, even for a simple project, so make ends up doing more work than it should. For small projects: I tried ninja a while ago, and frankly the effort seemed unjustified, since (a) I could already do the same things with make, and (b) make is already available on most unix-likes.

Ninja seems to be better suited for big projects, where make's slowness in dependency resolution shows. Ninja files also look easier to generate automatically.


Yes, Ninja files are very easy to generate; that's its big selling point for me.

I have a project with dozens of C/C++/asm/precompiled libraries, a few thousand source files in all.

I would have trouble writing Makefile(s) for everything that would even be correct, let alone performant, either by hand or generated. But I have a set of Python scripts to generate a big Ninja build file and it all works very nicely.
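
(For the curious, generating a workable build.ninja really is just string printing. A toy sketch with made-up paths and flags:)

  # toy generator: one compile edge per .c file under src/, then a link edge
  from pathlib import Path

  sources = sorted(Path("src").rglob("*.c"))
  with open("build.ninja", "w") as f:
      f.write("rule cc\n  command = gcc -MD -MF $out.d -O2 -c $in -o $out\n"
              "  depfile = $out.d\n  deps = gcc\n\n")
      f.write("rule link\n  command = gcc $in -o $out\n\n")
      objs = []
      for src in sources:
          obj = Path("obj") / src.relative_to("src").with_suffix(".o")
          objs.append(str(obj))
          f.write(f"build {obj}: cc {src}\n")
      f.write(f"\nbuild app: link {' '.join(objs)}\n")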


Correct, simple.


The article keeps on saying that make is "complicated"... make is an abomination in many, many ways, but claiming that the Makefile language is complicated is just sheer ignorance.


“esoteric”?





