

"I think that djb redo will turn out to be the Git of build systems." - jerf
http://pozorvlak.livejournal.com/159126.html

======
wnoise
This essentially turns "make" inside out. The makefile lists the dependencies
and then has minimal glue to run the build command based on the dependencies.
Instead, redo has the build commands invoke helper-tools to specify the
dependencies.
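Concretely, a rule in redo is just a shell script named after the target. A
hypothetical `default.o.do` (filenames and compiler flags here are
illustrative, not from any particular project) might look like:

```shell
# default.o.do -- hypothetical redo rule for building any foo.o from foo.c.
# redo invokes the script with: $1 = the target, $2 = the target minus its
# extension, $3 = a temp file that redo renames over the target on success.
redo-ifchange "$2.c"   # helper tool: records the dependency, rebuilds it if stale
cc -c -o "$3" "$2.c"
```

The dependency graph is built as a side effect of running the commands, which
is the "inside out" part: the script asks for what it needs, rather than a
makefile declaring it up front.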

Overall, I quite like it. One downside of the current design that I see as
being somewhat hard to fix is that it doesn't handle build commands with
multiple outputs. There are kludgey workarounds, but they require either lying
to redo about the true dependencies (which always fills me with dread) or
making parallel builds unreliable.

~~~
palish
Those downsides sound massively important to fix.

~~~
wnoise
I've been agitating on the mailing list. True cases of this seem to be fairly
rare, and I have come up with a slightly more effective (but ridiculous)
workaround: have the multiple-target command create an archive of its outputs,
and then have each actual target depend on the archive and extract itself
from it.
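A sketch of that workaround, using yacc's two outputs as the example (the
filenames are hypothetical; `$3` is redo's temporary output file):

```shell
# parser.tab.do -- run yacc once and archive both of its outputs:
#   redo-ifchange parser.y
#   yacc -d parser.y
#   tar cf "$3" y.tab.c y.tab.h && rm -f y.tab.c y.tab.h

# y.tab.c.do -- each real target depends on the archive and extracts itself:
#   redo-ifchange parser.tab
#   tar xOf parser.tab y.tab.c > "$3"
# (y.tab.h.do is identical apart from the extracted filename.)
```

Ridiculous, as I said, but it keeps the declared dependencies honest and stays
safe under parallel builds, since yacc only ever runs from the one archive
rule.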

~~~
microtherion
There is yacc, as a fairly common standard case, and I find that quite a few
of my homegrown tools for little languages fall into this category (e.g.
translating a table into a header file, a source file, and a ruby script).

------
DanielRibeiro
Bernstein's work is really underappreciated. On his site you can get a better
glimpse into what he's worked on: <http://cr.yp.to/djb.html>

~~~
aidenn0
I've always found something about djb a bit off-putting. On the other hand, I
have huge respect for the fact that when he sees something he thinks is crap,
he doesn't just bitch and moan, he makes an alternative (he bitches and moans
as well, of course).

------
justinsb
I agree with the blog, in that it seems disappointing that it doesn't
integrate fabricate.py/memoize.py automatic-dependency-detection via strace.
Those projects seem to advance the state of the art a lot more as a make-
replacement. The git analogy is great though; while in general I think there
should be a very high bar for throwing away working systems like make, in this
case there's a potentially worthwhile set of different ideas: hashing for
change detection, strace for dependency detection, and using scripts/whatever
instead of a make-specific DSL.

~~~
unshift
how does automatic dependency detection using strace work?

~~~
pyre
Maybe you get strace output from a running program and it uses that to
evaluate what dependencies/versions you need?

~~~
unshift
how does that fit into a build system? why would you require inspection of a
running program to resolve build-time dependencies? and moreover, it's not
like running a binary requires it to open all of the header and source files
needed at build time, so how would you glean any useful data from it?

and what about systems that don't have strace, like BSD or OSX or solaris or
windows? there's dtrace on those platforms (minus windows), which requires
root, and requiring root access for dependency resolution isn't exactly a
great idea.

~~~
prodigal_erik
It sounds like you don't strace your project, you strace the compilers and
linkers to find out which headers and libraries they actually use. Then you
know exactly which builds are possibly out of date at any point, even after a
platform upgrade. Nobody bothers to note stuff like libc in their makefiles, a
mistake which make can't help you avoid.
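A rough sketch of the idea (Linux-only, and the trace lines below are canned
stand-ins so the parsing step can be shown without a compiler): trace the
compiler, then keep every successfully opened .c/.h file as a dependency.

```shell
# Step 1, on a real build, would be something like:
#   strace -f -e trace=open,openat -o cc.trace cc -c -o hello.o hello.c
# Here we fake a tiny trace file instead:
cat > cc.trace <<'EOF'
openat(AT_FDCWD, "hello.c", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/include/stdio.h", O_RDONLY|O_CLOEXEC) = 4
openat(AT_FDCWD, "/missing/header.h", O_RDONLY) = -1 ENOENT (No such file)
EOF
# Step 2: keep only successful opens (return value >= 0) of .c/.h files:
deps=$(grep ' = [0-9]' cc.trace | grep -o '"[^"]*\.[ch]"' | tr -d '"' | sort -u)
echo "$deps"
```

The failed `ENOENT` open is filtered out, leaving only the files the compiler
actually read — including ones like libc headers that nobody writes into a
makefile by hand.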

------
julian37
Here is a link to the actual project:

<https://github.com/apenwarr/redo#readme>

------
jonhohle
I was enjoying the README and then I hit the part about using hashes to
determine staleness, which is a good idea, but in the Python implementation
those hashes are stored in a sqlite database. That seems a little excessive
for a lightweight build system.

This is something that seems like it could be handled by the file system:

> $ cat .redo/artifact-source/src/path/to/file.c
>
> f572d396fae9206628714fb2ce00f72e94f2258f

Or the reverse (hash to source path).
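A minimal sketch of that scheme (the paths and layout are hypothetical): store
one small file per source, named after the source path, holding the hash
recorded at the last build.

```shell
# Set up a toy source tree and a parallel hash tree:
mkdir -p src/path/to .redo/hashes/src/path/to
printf 'int main(void) { return 0; }\n' > src/path/to/file.c

# Record the hash at "build" time:
sha1sum src/path/to/file.c | cut -d' ' -f1 > .redo/hashes/src/path/to/file.c

# Later, a target is stale iff the stored hash no longer matches:
stored=$(cat .redo/hashes/src/path/to/file.c)
current=$(sha1sum src/path/to/file.c | cut -d' ' -f1)
[ "$stored" = "$current" ] && echo up-to-date || echo stale
```

No database needed; the filesystem itself is the key-value store, at the cost
of one tiny file per tracked source.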

------
epistasis
Reading the title, I thought djb had implemented a build system and I was
extremely excited. Still quite interested, but it doesn't have nearly as much
allure.

~~~
jerf
Yes, there's code. I had to choose between linking this or following further
down the chain, and thought this actually did add something; while I could
have gone to apenwarr's post directly or linked to the design doc, it would
have sunk without a trace.

~~~
epistasis
Yes, and I thank you for linking the way you did! I was mostly commenting on
djb's reputation for creating extremely robust software, and that it's
difficult to refer to this without implying that djb implemented it. However,
I'm sure that apenwarr is also an excellent programmer, and as a regular
abuser of Make, I'm going to give his redo implementation a shot, especially
since djb's design fits my use case much better.

~~~
bostonvaulter2
I would agree that apenwarr is an excellent programmer. I've been using his
sshuttle for some time now.

<https://github.com/apenwarr/sshuttle>

------
wybo
As for the problem of cross-platform portability with new make tools, and the
complaint about having to learn a new language with make replacements (two
complaints raised in the post's comments), it might be possible to have a
simple shell script build the build system, which then builds your code.

I made such a thing for C++ years ago. It adds a header and a footer to your
pure C++ build-script, compiles it, and then runs it (compiling your program).
It also included extension libraries that did some more advanced things, such
as detect local dependencies, but still was very toy-like and unpolished. I
never advertised it widely, especially after I started to use Ruby instead of
C++ for the project it was related to.

Still, the basic idea of including the build system in source form with one's
code might be interesting to some people... it at least makes it easy to fix,
deliver, and extend.

[http://sourceforge.net/project/shownotes.php?release_id=3730...](http://sourceforge.net/project/shownotes.php?release_id=373050)
(unix / linux only)

------
bioh42_2
Anyone know how redo compares to SCons/CMake/Ant?

~~~
beagle3
\- hundreds of times simpler (literally, if you count source code lines)

\- much easier to use, in the sense that if you know the compile command you
need (or the shell command, or whatever), you already know how to use redo

\- strictly a build system (CMake and SCons also do autoconf stuff)
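To illustrate the second point: if the command you'd type by hand is
`cc -o hello hello.c`, then a (hypothetical) `hello.do` is just that command
wrapped in two lines of shell:

```shell
# hello.do -- $3 is the temp output file redo atomically renames on success.
redo-ifchange hello.c
cc -o "$3" hello.c
```

There is no new language to learn beyond the handful of redo-* helper tools.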

------
btipling
The .do files are not even remotely readable. Try scons. At least you can
understand what it's doing even if you know little about it with just a
glance. None of this a.c.c.c.c,$1 gibberish.

~~~
dirtyaura
IMO, a build system for daily work has two very important properties:
correctness and speed. SCons seems to get correctness right, but fails to be
fast enough (see e.g. <http://gamesfromwithin.com/bad-news-for-scons-fans>).

Now, there is a third important property, which is clarity. But clarity for a
newcomer is less important than clarity for a person who uses the build system
daily.

I investigated several alternatives to make for our C++ game framework and
settled on Waf. It's quite complex, and a side effect is that we haven't
integrated many of our tools into the build system, just because doing so
requires a deep understanding of Waf's model. Which I haven't acquired, well,
mainly out of laziness.

Thus, clarity can affect both correctness and speed in practical situations of
lazy people like myself.

What I like about redo is the simplicity. Based on my initial experiences, it
seems that aside from the multiple-output-files problem, it doesn't get in
your way.

------
regularfry
Interesting that git and redo share the problem of providing POSIX utilities
on Windows. Interesting also that nobody seems to have suggested a merge
between the two. If redo is that small, would it not be feasible to package it
as git-redo and not solve the same problem twice?

~~~
bryanlarsen
Interesting thought, except the article specifically says that git does NOT
have this problem.

~~~
krakensden
That's because there is nothing about git that requires you to implement it
with sh, that was just the way it was written. `redo' has you literally
writing shell scripts to describe the construction process, which means you
either need sh, or you need to write entirely new scripts for running on
Windows

------
njharman
> just learned the basics because nontrivial version-control tasks just got so
> complex, so fast, it was usually quicker to go outside the system or even
> redo your work. Even thinking about what was going on was hard.

Is that really true? If so, that's pathetic and my respect for my fellow
developers as largely competent professionals is misplaced.

~~~
mbreese
In the old days (CVS), in some circumstances it was easier to avoid CVS than
actually use it. CVS had a lot of warts that people generally just worked
around.

