Reversing the arrows looks pretty silly at first, but apparently the idea is that if you start from the files that changed, you can read partial dependency graphs and start rebuilding anything downstream of those changes before the entire build graph is loaded. If you start from the top down, you have to load and compute the entire dependency graph up front.
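To make the contrast concrete, here's a toy sketch of the "reversed arrows" idea in Python (the data structures and names are made up for illustration, not tup's actual code): once the arrows point from inputs to the outputs that consume them, a rebuild only has to walk the region of the graph reachable from the changed files.

    # Toy dependency graph: output -> inputs it is built from.
    deps = {
        "app":   ["foo.o", "bar.o"],
        "foo.o": ["foo.c"],
        "bar.o": ["bar.c"],
    }

    # "Reversed arrows": input -> outputs that consume it.
    rdeps = {}
    for out, ins in deps.items():
        for i in ins:
            rdeps.setdefault(i, []).append(out)

    def dirty_targets(changed):
        # Walk only what is reachable from the changed files;
        # untouched parts of the graph never need to be loaded.
        work, dirty = list(changed), set()
        while work:
            for out in rdeps.get(work.pop(), []):
                if out not in dirty:
                    dirty.add(out)
                    work.append(out)
        return dirty

    print(dirty_targets({"foo.c"}))  # -> {'foo.o', 'app'}

A top-down build has to materialize all of deps before it can even decide that bar.o is clean; the bottom-up walk never looks at it.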
This will matter for large build systems where just loading the build files takes significant time. Another way to solve it is to make loading the build files very fast so that it's a non-issue, which is what ninja [1] does.
Apparently there's some influence: "There are many other build systems that are more user-friendly or featureful than Ninja itself. For some recommendations: the Ninja author found the tup build system influential in Ninja’s design, and thinks redo's design is quite clever." [2]
Every time tup comes up, there's a question I have that nobody seems to know the answer to: how does tup know what changed? Make does this by starting from what it needs to build (which it knows) and traversing the dependencies. Does tup have some sort of daemon that monitors the file system? What if the file system isn't local (NFS, say)?
You can run a daemon (tup monitor), but it's not required; without it, tup compares files' last-modified times against those recorded from the previous run (stored in the .tup directory).
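For anyone curious what that daemon-less path looks like, here's a minimal sketch of mtime-based change detection (the state file name and JSON format are made up for illustration; tup actually keeps this state in its database under .tup):

    import os, json

    STATE = ".mtime-cache.json"  # stand-in for tup's real database

    def changed_since_last_run(paths):
        # Load the mtimes recorded on the previous run, if any.
        try:
            with open(STATE) as f:
                last = json.load(f)
        except FileNotFoundError:
            last = {}
        now = {p: os.stat(p).st_mtime for p in paths}
        changed = [p for p in paths if last.get(p) != now[p]]
        # Record current mtimes for the next run.
        with open(STATE, "w") as f:
            json.dump(now, f)
        return changed

Presumably this scan-based fallback is also the answer for things like NFS, where a local inotify-style monitor can't see remote changes but stat() still works.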
I used this tool in a very large engineering org (100+ engineers on the same project), and it really held up for our C and C++. I highly recommend it to anyone figuring out what to do about C/C++ builds who doesn't want to hassle with overwrought things like waf or scons.
We had been using redo for the longest time, until parallel builds broke on Mavericks. The project has basically zero maintainers, so we had to switch off of it. It's a shame, because redo was pretty great while it worked.
It's probably not hard to fix; the codebase is tiny. I've been looking for an excuse to use it, but none has surfaced, since the only compiled language I use at work is Go.
Technical merits aside, 'tup'[1][2] is a poor choice for the name, especially since it's hosted on a site that includes 'git'[3][4] in the URL. It would be difficult to talk about the tool professionally without risking offending someone.
The one thing I really miss in Tup is the '**' glob, i.e. '*' that also recurses through subdirectories.
Other than that, it's great. I was able to replace my C++ project's custom-written, brittle makefiles (100-ish lines, handling generated sources and headers and other nontrivial things) with under ten lines of Tup.
I've been using it for a couple of years now, but for simple projects, it's a bit overcomplicated. In other words, it's a great tool for the right projects, but not a silver bullet.
I use tup in my web projects to monitor my folders for things to "compile". This includes CoffeeScript, MoonScript, SASS/SCSS, Less and more. Very fast and painless.
[1] http://martine.github.io/ninja/
[2] http://martine.github.io/ninja/manual.html