
Tup – A file-based build system for Linux, OS X, and Windows - mynameislegion
http://gittup.org/tup/
======
RossBencina
I'd not heard of tup, thought I'd try it out on Windows. Unfortunately I hit a
bug straight away: Tup is not directly compatible with MSVC 2015 (without
disabling VCToolsTelemetry.dat generation in the registry) [0].

I don't fancy adopting a tool that forces me to opt-out of being able to send
compiler debug telemetry to Microsoft the next time I hit a compiler bug.

There is a nice (but a bit dated, 2010) review here [1] which discusses some
other features and shortcomings.

[0]
[https://github.com/gittup/tup/issues/182](https://github.com/gittup/tup/issues/182)

[1] 2010: [https://chadaustin.me/2010/06/scalable-build-systems-an-analysis-of-tup/](https://chadaustin.me/2010/06/scalable-build-systems-an-analysis-of-tup/)

------
bluejekyll
I didn't see any examples of .phony type rules. Can tup do this?

I've recently found myself returning to make for multi-build system
orchestration, e.g. Rust and C and PHP libraries.

Does anyone have examples of using tup for this type of thing?

Also, I've found myself really enjoying declarative build systems more, e.g.
Cargo or Maven. It seems like for C there could be a simple set of standard
tup files that are run by a tool like Cargo over a standard tree layout. I
didn't notice this in there, but could see a simple wrapper to tup to give
this experience to almost any language. In fact, maybe using Cargo as a base
and adding tup as a supported src type or something through a Cargo extension.
It would probably need to be a default for the entire project for sanity's
sake.

~~~
DSMan195276
I heard about tup a while back and finally got around to looking into it
today, trying to replace some of my Makefiles with Tupfiles. Unfortunately, my
googling and researching all seem to indicate that tup simply doesn't support
any type of .PHONY targets. Beyond that, `tup` also doesn't seem to support
even "basic" stuff (from my POV) like `clean` and `install` targets - which is
fine, except that without .PHONY targets you can't add support for such things
in your Tupfiles. Even the tup project's own Tupfile doesn't support any way
of directly installing it; I had to copy the files by hand.

IMO that's just a killer for me, which stinks, because I don't really feel
like such a feature should be that complicated; but the developers seem to
take the stance that phony targets, `clean`, and `install` are unnecessary.

~~~
aidenn0
> ...phony targets, and `clean` and `install` are unnecessary.

For clean, there is a solution if you are using git; tup can generate a
.gitignore for all the output files, and there is a git command to remove all
ignored files. I'm with you on "clean" being something that is both easy to
implement and useful in tup.
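The mechanics, for anyone curious (the `.gitignore` directive is real Tupfile
syntax; the git flags are the standard ones):

```
# In a Tupfile: ask tup to maintain a .gitignore listing every output file
.gitignore

# Then, to "clean", remove everything git ignores in that tree:
#   git clean -fX
```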

The author's stance on phony and install isn't that they are unnecessary, but
rather that they are orthogonal to the problem Tup is trying to solve: install
and phony targets are handled by an external script. I will have either a
Makefile or a shell script that performs the tup invocation as part of the
process.
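A sketch of such a wrapper Makefile (the target names, binary path, and
install destination are my own conventions, not anything tup prescribes;
recipe lines must be tab-indented):

```
.PHONY: all install clean

all:
	tup

install: all
	install -D -m 755 build/myprog $(DESTDIR)/usr/bin/myprog

clean:
	git clean -fX   # relies on a tup-generated .gitignore
```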

The thing that tup does well is prevent you from making certain mistakes in
your build system. If you have a missing dependency, it will error out; I have
actually found bugs in makefiles that I converted to tup.

~~~
DSMan195276
I read about the solution using `git`, but it definitely seems like side-
stepping the problem. If you're going to provide a way to get a list of all
the generated files so you can remove them by other means, why not just allow
you to do it directly? But I don't think we're in disagreement here.

Perhaps I just don't completely understand what problem tup is trying to solve
then. When I read about tup, I picture it being a complete replacement for
something like make (And indeed, it says as much right on the website).
Packaging a Tupfile along with a Makefile seems like an annoying solution to
something that I really don't think should be a problem in the first place. It
seems like taking a stance to a bit of an absurd degree for a feature I really
don't think is that big of a deal. The author is free to do what they want,
but I think they're sacrificing usability for purity.

Tup is appealing to me, but a tup/make combo isn't nearly as appealing.

------
ggambetta
While I can believe the claim it's better than make, it would be way more
interesting to see how it compares to Bazel. After using it (or rather Blaze)
at Google and now Bazel at Improbable, I consider it the gold standard in
build tools.

If anything, I wish Google would open source the rest of the build "ecosystem"
that together with Blaze let you build the whole codebase in seconds. It was
pretty amazing.

~~~
simonhorlick
I've also been using Bazel for a couple of months now and it's fantastic.
Simple, declarative syntax, blazingly fast. I can't recommend it highly
enough.

------
jpgvm
We use Tup to build Flynn [1]; it's a pretty neat build system. The only real
drawback is that it's hard to get working on some operating systems because of
its dependency on FUSE. That said, it still beats the crap out of GNU Make or
CMake etc.

[1] [https://github.com/flynn/flynn](https://github.com/flynn/flynn)

~~~
falcolas
> That said it still beats the crap out GNU Make

How? Their description was useless in describing why it's "obviously" so much
better.

~~~
nerdponx
> See the difference? The arrows go up. This makes it very fast.

I don't think I've seen a more intelligence-insulting description of a
product.

~~~
todd8
Oh, you didn't read far enough; the description is written with a sense of
humor, essentially parodying the very "intelligence-insulting descriptions"
that one sees frequently on the internet.

------
todd8
This reminds me of DJB's ideas for a build system, redo [1]. However, it never
seemed to gain any traction. (or did it? [2])

[1] [http://cr.yp.to/redo.html](http://cr.yp.to/redo.html)

[2] [http://apenwarr.ca/log/?m=201012#14](http://apenwarr.ca/log/?m=201012#14)

~~~
sigil
I can recommend apenwarr's redo implementation [0]. There's occasional
activity on the mailing list [1], which leads me to believe people are using
it, but not promoting it much.

The apenwarr implementation includes a full Python implementation as well as a
minimal version in < 200 lines of sh. The minimal version doesn't support
incremental rebuilds -- the out-of-date-ness tracking in the full Python
version uses sqlite -- but it's good for understanding the redo concept. Also
good for embedded contexts.

I used the full redo implementation for some data processing tasks once, with
mixed results. It was a situation where I couldn't declare the dependency
graph up front. With redo, each target declares its dependencies locally when
it builds, and redo assembles and tracks the dynamic dependency graph. It's
pretty neat, but it became difficult to reason about and debug. Could be that I
never got comfortable with the new paradigm, or could be that essential
tooling was missing, not sure. I still think redo is promising.

Anyway after a decade of messing with shiny new build tools, I finally learned
to stop worrying and love the bomb (make). It's weird and warty but
surprisingly capable. Worth the learning investment. Oh and jgrahamc's "GNU
Make Book" is great. [2]

[0] [https://github.com/apenwarr/redo](https://github.com/apenwarr/redo)

[1] [https://groups.google.com/forum/#!forum/redo-list](https://groups.google.com/forum/#!forum/redo-list)

[2] [https://www.nostarch.com/gnumake](https://www.nostarch.com/gnumake)

------
aidenn0
My $0.02 on Tup:

First of all, I cannot express how much more I like it than make. If Tup is an
option, I will use it.

What it does well:

1) It prevents you from making dependency mistakes: it hooks into the FS layer
using fuse and tracks all input and output files that are inside your build
directory. If you make any mistakes that could cause a future incremental
build to be improper, it errors out rather than continuing.

2) It is opinionated about how your project should be structured. This has
some negatives if you are trying to duplicate a particular structure from
Make, but all in all does guide you in the right direction.

3) There isn't a lot of syntax to learn. This is good because the syntax is
very different from anything else I've used.

#1 is really the killer feature for me; the amount of time I want to spend
debugging makefiles is just slightly less than zero.
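For readers who haven't seen it, a minimal Tupfile for a C program looks
something like this (compiler and flags are illustrative): each rule is
`: inputs |> command |> outputs`, with `%f`, `%o`, and `%B` expanding to the
input file, output file, and input basename.

```
: foreach *.c |> gcc -c %f -o %o |> %B.o
: *.o |> gcc %f -o %o |> hello
```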

What it doesn't do, but I'm not bothered by:

Tup literally only manages commands that have one or more inputs and one or
more outputs, and which must be run if and only if the inputs have changed or
the outputs do not exist; the outputs must be within the hierarchy of the
project.

1) Configuration must be done before Tup is launched

2) Anything you might use a .PHONY for in make needs to be done outside of tup

3) Install commands must be done outside of tup. This means that configuring,
installing &c. must be done outside of tup.

I find that having a Makefile that handles the above 3 steps works fine;
others using tup tend to use a shell script.

What it doesn't do that I wish it did:

1) No clean command; I currently work around this by having it generate a
.gitignore file and running `git clean -X`; still, it's annoying that this
isn't possible directly.

2) It does not handle paths with spaces. This is at least a safe limitation:
since it enforces relative paths, if the build works on your system it should
work everywhere, even if the project is unpacked to a path with spaces.

------
qwertyuiop924
The FUSE dependency is pretty unfortunate. Is there any way to get rid of it?

Other than that, it's pretty cool. And the creator clearly has a sense of
humor, something which is far rarer than it should be.

~~~
adekok
The same functionality should be available in various inotify / dnotify
implementations.

~~~
majewsky
Not really. inotify requires you to set up a watch on every single file or
directory that you want to watch. To see how this escalates, install the
inotify-tools and do

      inotifywatch -d /path/to/directory/with/a/lot/of/files

~~~
qwertyuiop924
What about kqueue or eventports?

~~~
krakensden
Kqueue and OS X's file watcher thing are worse than inotify. They give inexact
reports of changes, so you have to do a bunch of manual scanning afterwards.

I really don't understand the grandparent's gripe. Inotify scales well,
supports race free "watch a whole directory tree", and has a nice API.

~~~
codyps
There is a per-user limit to the number of inotify handles available
(max_user_watches) and the default value is 8192.

The limit exists because there is a ~1KiB kernel memory overhead per watch
(though there should really be a way for them to take part in normal memory
accounting per-process).

If one wants to watch a directory tree, one needs an inotify watch handle per
subdirectory in that tree. On large trees (or if more than 1 process is using
inotify), that number of watches can be exceeded.
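For reference, the limit is tunable on Linux via procfs/sysctl (the raised
value below is arbitrary):

```
# Check the per-user inotify watch limit (default often 8192) with either of:
#   cat /proc/sys/fs/inotify/max_user_watches
#   sysctl fs.inotify.max_user_watches
# and raise it (temporarily, as root) with:
#   sysctl fs.inotify.max_user_watches=524288
```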

Since lots of folks are looking for recursive watches, they aren't happy
about needing to allocate and manage a bunch of handles for what they see as
a single item.

That said, I'm not sure the way the kernel thinks about fs notifications
internally would allow a single handle recursive watch at the moment.

In any case, the amount of info one can obtain by using fuse (or any fs like
nfs or 9p) to intercept filesystem accesses is a bit larger. At the very
least, one can (in most cases) directly observe the ranges of the file that
were modified (though that's not quite so important for tup, afaik). There
also aren't any queue overruns (which can happen in inotify) because one will
just slow the filesystem operations down instead (whether this is desirable or
not depends on the application).

------
brudgers
The related linux distribution, Gittup:
[http://gittup.org/gittup/](http://gittup.org/gittup/)

------
vq
How does it handle building from LaTeX sources where you need to "rebuild" the
document multiple times to get page numbers and references right?

~~~
shoover
I don't see how that could work in tup or any deterministic build system. If
there is any possible way to parameterize explicit filenames unique to the
stages, that will map better to tup. It requires you to be explicit about
every file generated, even temp files. I've searched the mailing list for a
few scenarios and the response is often to wrap tup in a makefile.
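A sketch of that parameterization idea as a Tupfile (entirely hypothetical:
the filenames, the `-jobname` trick, and the declared output lists are
assumptions and would need tuning for a real document):

```
# Pass 1 writes uniquely named outputs so tup can track them.
: doc.tex |> pdflatex -jobname=pass1 doc.tex |> pass1.pdf pass1.aux pass1.log
# Pass 2 consumes the .aux from pass 1 and produces the final PDF.
: doc.tex pass1.aux |> cp pass1.aux doc.aux && pdflatex doc.tex |> doc.pdf doc.aux doc.log
```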

------
ccostes
Been using tup in production for about a year now and absolutely love it. The
speed is nice, but compared to make projects (where you need to 'make clean'
to be sure everything gets properly rebuilt), the confidence that tup will do
the right thing every time is fantastic.

------
ricardobeat
The index page makes it sound like a parody of something; it took me a while
to figure out that it is functional software.

~~~
ubertaco
I do love the quirky sense of humor though:

"In a typical build system, the dependency arrows go down. Although this is
the way they would naturally go due to gravity, it is unfortunately also where
the enemy's gate is. This makes it very inefficient and unfriendly. In tup,
the arrows go up. This is obviously true because it rhymes."

------
sriram_sun
Checkout ninja-build. Generate ninja build files automatically using CMake.
Ninja is really fast.

~~~
wahern
CMake-generated ninja builds aren't fast in an oranges-to-oranges comparison:

      http://www.kaizou.org/2016/09/build-benchmark-large-c-project.html

(TL;DR: scroll to bottom of page to see "The raw results".)

A non-recursive Make build is actually pretty darn peppy for all but the
largest projects. The reason ninja was faster at scale in those tests was
likely because GNU Make has some very inefficient code internally which
theoretically could be improved substantially with some refactoring. ninja had
the benefit of a fresh implementation, avoiding decades of feature creep.

I've been writing non-recursive-style Makefiles for years. But all I ever hear
is whining by project contributors about the unfamiliar syntax (e.g. having to
prefix all targets and sources with a path). Yet in the same breath I'll be
told to use CMake or ninja or tup or whatever the build-system-du-jour, using
vastly different syntax. Ah well....

I keep dreaming of a Makefile generator that will auto-generate a non-
recursive build, or perhaps add the feature into GNU Make directly. The
problem with the latter approach is that one major reason to use Make
(including GNU Make) is portability out-of-the-box, but OS X is stuck at GNU
Make 3.81.

~~~
sriram_sun
Thanks for the info. I just looked into it. You are right about non-recursive
Makefiles. In my own experience, cmake-generated makefiles are actually
comparable with ninja build times. Why would you want to write Makefiles by
hand anyway? With CMake you can generate VS 2015, MinGW, or ninja, pretty much
any project file. QtCreator deals with CMake projects really well. KDevelop
too. I think CMake is the way to go. Then we won't be talking about ninja vs.
make vs. tup.

------
piyush_soni
To the authors: It would be really great if you could compare the SCons build
system to Tup (with some numbers), so that I can convince my managers to
switch :).

------
bleair
I may have missed how but..

CMake solves the problem of "locate the library FOO of version X.Y, add the
compilation flags, link flags, include folder, link folder, static link
options, dynamic link options" and all the other details needed to make use of
another software component. Sometimes that component is found in my operating
system's "default" spot, and other times it's in an install directory that I
explicitly input. How do I tell Tup to find these components/libraries and
then have Tup also add in everything needed for all the commands related to
building things that use that component?

Also, I often have very different components going into different build
targets that my project makes. How do the rules chain and build? In other
words, just because I link one of my libraries with libssl doesn't mean I want
every single source file and library I create in my project to be linked with
ssl.

~~~
aseipp
> How do I tell Tup to find these components/libraries and then have Tup also
> add in everything needed for all the commands related to building things
> that use that component.

You don't do that, because it's not really the job of Tup to do that. It's
better to think of it as an alternative to Make as opposed to CMake, really,
because CMake is more like a generic build system (which is 'compiled' to a
variety of other systems), with a billion built-in rules, utilities, and
libraries for making the common cases easy among all of them.

Tup is really just not in the same design space, although it is still a build
tool. It'd probably be more appropriate to think of Tup as a thing that CMake
would target, like Makefiles, MSVC Projects, or Ninja build files.

------
agumonkey
I can't recall exactly what, but I hit expressiveness problems with tup (after
a proper RTFM). Probably some self-referential issue. For the use cases listed
it's indeed very nice and very fast.

------
wreft
I first saw Tup a few years ago, as it seemed to be the preferred way to
filewatch/automagically-transpile MoonScript to Lua.

------
toolslive
We've been using it for a few years. It's great. The only issue we ever had
was when we tried it inside a docker container. It's related to fuse.
[https://github.com/docker/docker/issues/1916](https://github.com/docker/docker/issues/1916)

------
gravypod
Is tup capable of building out of tree? By this I mean having:

    project_1/src/main.c
    project_1/src/something/a.c
    project_1/src/something/a.h

Is there some way for tup to manage discovering the files to build and
everything else needed or will I need to add every file path in manually like
make?

~~~
codyps
Regarding "out of tree": I'm not quite sure about your explanation here (it
just looks like a list of source files), but presuming you mean "creates
output files in a separate directory from the source", it doesn't really have
complete support for that. You can use "variants" to place output files in a
subdirectory of the source tree, though.

> "some way for tup to manage discovering the files to build"

Well, no. It's not a "convention" build tool like rust's `cargo` where you
just place things in the default locations and it figures it out.

You can use the `run ./script args` mechanism in tup to run your own script
that emits tup rules, though.
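A tiny example of that mechanism (the script name and rule are made up): in a
Tupfile you would write `run ./gen-rules.sh`, and tup parses whatever the
script prints as ordinary rules.

```shell
#!/bin/sh
# gen-rules.sh (hypothetical name): emit one compile rule per source file
# discovered under src/; tup treats each printed line as a Tupfile rule.
for f in src/*.c; do
    echo ": $f |> gcc -c %f -o %o |> %B.o"
done
```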

The manual has details:
[http://gittup.org/tup/manual.html](http://gittup.org/tup/manual.html)

~~~
gravypod
By out of tree I mean discover all the source files from the file tree.

------
RossBencina
Previous HN discussion from Nov 1, 2014:

[https://news.ycombinator.com/item?id=8539564](https://news.ycombinator.com/item?id=8539564)

------
asgardiator
> tup, transitive verb: To have sex with.

[https://en.wiktionary.org/wiki/tup](https://en.wiktionary.org/wiki/tup)

~~~
aidenn0
It's an archaic Britishism. The only time I've ever seen it actually used is
in the works of Morgan Howell.

~~~
jonathonf
Not all that archaic. Male sheep kept, or just added to a field, for breeding
are known as tups (as distinct from rams).

------
xwvvvvwx
How does this compare in speed to cmake + ninja?

------
tf2manu994
Damn this is quick.

Also, the URL made me think it was going to be on some GitHub competitor
called gittup, heh.

------
adekok
Tup's main problem is it's unusual, and it doesn't have a library of build
rules. But it's fast!

On a related note, I've always wondered if it was possible to have a build
system based on dynamic library injection / strace.

The idea would be that you just write your build rules in shell script. Then,
you run it with a special shell that catches open(), etc. in child processes
(via library injection, etc). These system calls get tracked, and stored in a
special build table. One that you _don't_ have to edit.

Then, when you want to run the build again, you just re-run the magic shell.
It catches the various commands, and checks their inputs / outputs, and then
_skips running the command_ if the targets are up to date.

e.g.

      $(CC) -c foo.c -o foo.o

Hmm... "foo.o" is up to date with "foo.c", so I don't need to run the
compiler. I just return "success!"

That would get rid of _all_ magic build systems. All build syntax. All
dependency ordering. The build system would just take care of it itself.

I've played with this before, enough to note that it's likely possible. But I
haven't got far enough to publish it.
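The "skip it if the targets are up to date" check at the heart of that idea is
just an mtime comparison. A minimal Python sketch (the function names and the
hand-written dependency lists are mine; in the real tool they would come from
the traced open() calls of a previous run):

```python
import os

def up_to_date(inputs, outputs):
    """Return True if every output exists and is at least as new as
    every input, i.e. the command can be skipped."""
    try:
        newest_input = max(os.path.getmtime(p) for p in inputs)
        oldest_output = min(os.path.getmtime(p) for p in outputs)
    except OSError:
        return False  # a missing input or output forces a rebuild
    return oldest_output >= newest_input

def maybe_run(cmd, inputs, outputs):
    """Run cmd only when its recorded outputs are stale."""
    if up_to_date(inputs, outputs):
        return  # targets fresh; pretend success without running anything
    os.system(cmd)
```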

~~~
exDM69
There is such a build system, but I can't remember the name right now.

It tracks system calls to see every file opened by the compiler to produce
exact dependency graphs (assuming compiler is deterministic).

The downside is that it's Linux only.

If anyone remembers the name, please do share.

~~~
codemac
Well... that's what tup does, so it's probably what you're thinking of!

~~~
exDM69
Yeah, tup does something similar with FUSE (afaik) but that's not what I'm
thinking about.

