
Ekam Build System - ash
https://github.com/sandstorm-io/ekam
======
carapace
Anyone interested in build systems should have a look at Redo (implementation
here: [https://github.com/apenwarr/redo](https://github.com/apenwarr/redo))

------
aikah
Ah, build tools... I dream of a day when they are no longer needed... they
are supposed to be the "dynamic part" of statically typed languages, but even
dynamically typed ones somehow manage to need them (nodejs...)

If there is a problem to solve in development it's definitely that one...

~~~
jb55
nix is starting to replace all my build tools; building C code is as easy as
putting together a small nix file like this:
[https://github.com/jb55/polyadvent/blob/master/default.nix](https://github.com/jb55/polyadvent/blob/master/default.nix)

The nixpkgs community is then responsible for making sure that the build
compiles on all platforms. It's not just for C code either; nix files can be
used to declare any kind of dependency. It's a full-blown declarative,
functional, domain-specific programming language for packages. So your project
can depend on Haskell or Python code and everything just works.

There's a heavy Linux focus right now, but there's a whole group of people
making the packages work nicely on OS X and Cygwin. Check it out:
[http://nixos.org/nix/](http://nixos.org/nix/)

~~~
lmz
But it still looks like you have a Makefile (although a rather simple one).
Are you relying on nix to install the "build-depends" for your app?

~~~
jb55
Yup, it works in conjunction with other build tools. nix knows how to build
make projects, cmake projects, cabal, etc. This is what allows you to depend
on pretty much anything. Then I just use nix-build and it will build all the
dependencies in a virtual environment and symlink the result when it's
finished.

~~~
lmz
So a bit like virtualenv + pip for anything. That sounds fun.

~~~
codygman
Except it also handles system dependencies.

------
pmr_
At first I thought "a build system which automatically figures out what to
build and how to build it purely based on the source code" included figuring
out library dependencies. It looks like I was wrong though and you still have
to set linker and compiler flags yourself. I think the real problem is
actually understanding your dependencies (minimum versions, different build
configurations and all) and handling them in a unified manner without too
much special-casing for their peculiarities. CMake already does this quite
well, but there is still lots of room for improvement.

~~~
hawski
It's something that someday will come with clang modules support:
[http://clang.llvm.org/docs/Modules.html#includes-as-imports](http://clang.llvm.org/docs/Modules.html#includes-as-imports)

With proper caching it could make local build systems obsolete for many
purposes. Maybe it would then be easier to delegate module building in a
style similar to distcc, but less fragile.

~~~
pmr_
I'm not intending to put you, your enthusiasm, or the people behind Modules
down: I've been hearing that story since C++11 was still called C++0x (this
might be an exaggeration) but I don't see anything happening in terms of
standardization. And as long as modules are not standardized they are not
going to see widespread adoption in C++ and certainly not in the open source
community.

Because I like to walk down memory lane I've looked at my HN comments
regarding modules. Here is one from exactly one year ago [1], 855 days ago
[2], and 1100 days ago [3].

[1]: [https://news.ycombinator.com/item?id=7491149](https://news.ycombinator.com/item?id=7491149)
[2]: [https://news.ycombinator.com/item?id=4836499](https://news.ycombinator.com/item?id=4836499)
[3]: [https://news.ycombinator.com/item?id=3613636](https://news.ycombinator.com/item?id=3613636)

------
bmh
Exactly what problem is this solving? Do C/C++ programmers really have trouble
specifying their dependencies? I'm all for less "accidental" work for
programmers, but I just can't imagine this tool ever solving a problem that I
have ever had.

~~~
kentonv
It's automating a task that most of us consider to be tedious busy-work.

Though to be honest, the most valuable feature turned out to be continuous
building (in which Ekam will watch (via inotify) the source code for changes,
immediately rebuild the affected targets, and communicate errors back to my
editor). Instant feedback from my compiler simply makes me more productive.
Other build systems could easily implement this too, and a few have, but
somehow most build systems still require you to manually invoke a shell
command.

Honestly, it would be really great to see some sort of inotify-based
continuous building added to gmake (though don't forget that you also need to
get the error log back into your editor for it to be really useful).

(You may be tempted to suggest `while true; do make; done`, but in practice
this has all sorts of performance and usability problems.)

------
michaelmior
Tup[0] does similar tricks to trap what files the compiler reads to
automatically determine dependencies. Not as magic as Ekam though.

[0] [http://gittup.org/tup/](http://gittup.org/tup/)

~~~
kolev
Tup also relies on FUSE. Do you know if Ekam does as well?

~~~
kentonv
Ekam uses LD_PRELOAD to intercept system calls. I considered FUSE but it would
have involved considerably more overhead and synchronization issues.

- The LD_PRELOAD trick simply rewrites filenames on open() and similar calls,
allowing subsequent I/O to happen directly. FUSE unfortunately requires going
through the userspace FUSE server for every read or write operation.

- FUSE would require either mounting a new FS on-the-fly for each build step
(probably inefficient) or using a pool of FUSEs where each one is reused for
multiple commands (requires carefully making sure all state is purged in
between).

- Something would have to be done to ensure that no other process on the
system (say, a backup or indexing daemon) accidentally wanders into the Ekam
FUSE mounts, performing I/O that Ekam interprets as coming from the build
step. Mount namespaces could solve this, but assuming you don't want to run
Ekam as root, this would require user namespaces, and currently there's no way
to mount FUSE devices inside a user namespace.

LD_PRELOAD is not without its own share of problems, though. On Linux, you
cannot override intra-library calls, so it's not enough to just override the
syscall wrappers; you must also override all libc calls that take paths (e.g.
fopen()). Also, some people don't use the libc syscall wrappers, preferring
syscall(2) or even raw assembly code (especially true of languages that don't
like calling into C), which can't be intercepted by LD_PRELOAD. Oh yes, and
statically-linked programs are a problem. (Luckily I haven't run into these
problems in anything I actually wanted to run as a build tool.)
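
For the curious, the heart of the trick is an interposing wrapper. This is a
stripped-down sketch, not Ekam's actual code: the `/ekam-virtual/` prefix and
the `/tmp/ekam-sandbox` location are made up purely to illustrate the filename
rewriting described above.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical mapping for illustration: rewrite paths under a virtual
   prefix into a real sandbox directory.  Ekam's actual mapping logic is
   more involved than this. */
static const char *map_path(const char *path, char *buf, size_t bufsize) {
    const char *prefix = "/ekam-virtual/";
    if (strncmp(path, prefix, strlen(prefix)) == 0) {
        snprintf(buf, bufsize, "/tmp/ekam-sandbox/%s", path + strlen(prefix));
        return buf;
    }
    return path;  /* not under the virtual prefix; pass through untouched */
}

/* Our open() shadows libc's.  After rewriting the filename we forward to
   the real open() found via RTLD_NEXT, so all subsequent I/O on the fd
   goes straight to the kernel with no interposer in the way. */
int open(const char *path, int flags, ...) {
    static int (*real_open)(const char *, int, ...) = NULL;
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    char buf[4096];
    path = map_path(path, buf, sizeof(buf));

    if (flags & O_CREAT) {   /* a mode argument is present only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        int mode = va_arg(ap, int);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}
```

Compiled with something like `cc -shared -fPIC interpose.c -o interpose.so
-ldl` and run as `LD_PRELOAD=./interpose.so make`, every open() in the build's
child processes passes through this function; per the above, the real thing
has to repeat the exercise for fopen() and every other libc entry point that
takes a path.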

If I were rewriting Ekam today, I'd probably use seccomp to raise SIGSYS on
actual system calls, and use LD_PRELOAD only to install a SIGSYS handler. This
should be cleaner and work for a wider range of programs, though it would be
totally Linux-specific. (Currently Ekam only runs on Linux, but its strategy
has been shown to work at least on FreeBSD and OSX in the past.)
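
A rough sketch of that seccomp approach, trapping getppid() rather than the
file syscalls a real build tool would care about, just to keep it short:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <signal.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static volatile sig_atomic_t trapped_syscall = -1;

/* This handler is where Ekam-style logic would live: si_syscall says which
   call the program attempted, and a real tool would inspect its arguments
   and service it on the program's behalf. */
static void on_sigsys(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)ctx;
    trapped_syscall = info->si_syscall;
}

/* Install a filter that turns getppid() into SIGSYS and allows everything
   else.  A build tracer would trap open/openat/stat/... instead. */
static int install_filter(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = on_sigsys;
    sa.sa_flags = SA_SIGINFO;
    if (sigaction(SIGSYS, &sa, NULL) != 0) return -1;

    struct sock_filter filter[] = {
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getppid, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    /* NO_NEW_PRIVS lets an unprivileged process install the filter. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}
```

Unlike LD_PRELOAD, this catches direct syscall(2) users and static binaries
too, at the cost of being Linux-only, as the comment notes.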

~~~
kolev
Thanks for taking the time to clarify. FUSE was actually a big drawback of
Tup, especially on OS X, so I'm definitely gonna look into Ekam!

------
amelius
Here's a much better idea: write your makefiles in your favorite
(functional/scripting) language (no special syntax required). While making,
memoize calls and data-dependencies, and use ptrace to record dependencies on
files. When re-making, use this recorded information to redo as little as
possible.
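
The "redo as little as possible" step bottoms out in a check like make's:
compare the target against the dependencies the tracer recorded. A toy
version, assuming the recorded dependencies arrive as a plain list of paths
(a real implementation would likely key on content hashes rather than
mtimes):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>
#include <utime.h>

/* Return true if `target` must be re-made: it does not exist yet, or one
   of the recorded dependencies changed after it was built.  The ptrace
   recorder described above would fill `deps` automatically by noting
   which files the build command opened. */
bool needs_rebuild(const char *target, const char *const deps[], int ndeps) {
    struct stat t;
    if (stat(target, &t) != 0) return true;       /* never built */
    for (int i = 0; i < ndeps; i++) {
        struct stat d;
        if (stat(deps[i], &d) != 0) return true;  /* dependency vanished */
        if (d.st_mtime > t.st_mtime) return true; /* dependency is newer */
    }
    return false;  /* everything up to date; skip the command entirely */
}
```

The memoized-calls half is the interesting addition: with inputs recorded per
call, the same check generalizes from files to arbitrary functions in the
build script.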

------
snambi
C/C++ needs this.

