
IMHO, the build system could do with a little work:

* The various bits generated from and added by the autotools shouldn't be committed. autoreconf -i works really well these days. That's INSTALL Makefile.in aclocal.m4 compile config.guess config.h.in config.sub configure depcomp install-sh ltmain.sh missing mkinstalldirs.

configure.ac:

* Needs to call AC_SUBST([LIBTOOL_DEPS]) or else the rule to rebuild libtool in Makefile.am won't work.
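
For reference, the pattern from the libtool manual looks roughly like this (a sketch, not snappy's actual files; the recipe line needs a tab):

    # configure.ac
    LT_INIT
    AC_SUBST([LIBTOOL_DEPS])

    # Makefile.am
    libtool: $(LIBTOOL_DEPS)
            $(SHELL) ./config.status libtool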

* A lot of macro calls are underquoted. It'll probably work fine, but it's poor style.
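
For instance (a generic illustration, not a line taken from snappy's configure.ac):

    dnl underquoted:
    AC_CHECK_HEADERS(stdint.h stddef.h)
    dnl properly quoted:
    AC_CHECK_HEADERS([stdint.h stddef.h])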

* The dance with EXTRA_LIBSNAPPY_LDFLAGS seems odd. It'd be more conventional to do something like:

    SNAPPY_LTVERSION=snappy_version
    AC_SUBST([SNAPPY_LTVERSION])
and set the -version-info flag directly in Makefile.am. If it's to allow the user to provide custom LDFLAGS, it's unnecessary: LDFLAGS is part of libsnappy_la_LINK. Here's the snippet from Makefile.in:

    libsnappy_la_LINK = $(LIBTOOL) --tag=CXX $(AM_LIBTOOLFLAGS) \
            $(LIBTOOLFLAGS) --mode=link $(CXXLD) $(AM_CXXFLAGS) \
            $(CXXFLAGS) $(libsnappy_la_LDFLAGS) $(LDFLAGS) -o $@
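
Spelling out the first suggestion, the Makefile.am side would then be just (reusing the SNAPPY_LTVERSION name from the sketch above):

    libsnappy_la_LDFLAGS = -version-info $(SNAPPY_LTVERSION)
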
* There should be an AC_ARG_WITH for gflags, because automagic dependencies aren't cool: http://www.gentoo.org/proj/en/qa/automagic.xml
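
Something along these lines would do (a hypothetical sketch; the option name and the "check" default are my guesses at sensible choices):

    AC_ARG_WITH([gflags],
      [AS_HELP_STRING([--with-gflags],
        [use gflags for command-line flag handling in the tests])],
      [],
      [with_gflags=check])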

* Shell variables starting with ac_ are in autoconf's namespace. Setting things like ac_have_builtin_ctz is therefore equally uncool. See http://www.gnu.org/s/hello/manual/autoconf/Macro-Names.html :

> To ensure that your macros don't conflict with present or future Autoconf macros, you should prefix your own macro names and any shell variables they use with some other sequence. Possibilities include your initials, or an abbreviation for the name of your organization or software package.
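
Concretely, renaming the variable mentioned above would be enough:

    dnl in autoconf's namespace:
    ac_have_builtin_ctz=yes
    dnl prefixed with the package name instead:
    snappy_have_builtin_ctz=yes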

* Use AS_IF instead of directly using the shell's `if`: http://www.gnu.org/software/hello/manual/autoconf/Limitation... and http://www.gnu.org/s/hello/manual/autoconf/Common-Shell-Cons... .
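
A sketch of what that looks like, reusing the hypothetical with_gflags from the --with-gflags example above (the actions are placeholders):

    AS_IF([test "x$with_gflags" != xno],
      [AC_DEFINE([HAVE_GFLAGS], [1], [Define to 1 if gflags should be used])],
      [AC_MSG_NOTICE([building without gflags])])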

* Consider adding -Wall to either AUTOMAKE_OPTIONS in Makefile.am or as an argument to AM_INIT_AUTOMAKE. If you don't mind using a modern automake (1.11 or later), also call AM_SILENT_RULES([yes]). Even MSYS has automake-1.11 these days.
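
That is (both macros are standard; -Wall here enables automake's own warnings, not the compiler's):

    AM_INIT_AUTOMAKE([-Wall])
    AM_SILENT_RULES([yes])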

Makefile.am:

* Adding $(GTEST_CPPFLAGS) to both snappy_unittest_CPPFLAGS and snappy_unittest_CXXFLAGS is redundant. See this part of Makefile.in:

    snappy_unittest-snappy-test.o: snappy-test.cc
    @am__fastdepCXX_TRUE@   $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(snappy_unittest_CPPFLAGS) $(CPPFLAGS) $(snappy_unittest_CXXFLAGS) $(CXXFLAGS) -MT snappy_unittest-snappy-test.o -MD -MP -MF $(DEPDIR)/snappy_unittest-snappy-test.Tpo -c -o snappy_unittest-snappy-test.o `test -f 'snappy-test.cc' || echo '$(srcdir)/'`snappy-test.cc
    @am__fastdepCXX_TRUE@   $(am__mv) $(DEPDIR)/snappy_unittest-snappy-test.Tpo $(DEPDIR)/snappy_unittest-snappy-test.Po
    @AMDEP_TRUE@@am__fastdepCXX_FALSE@      source='snappy-test.cc' object='snappy_unittest-snappy-test.o' libtool=no @AMDEPBACKSLASH@
    @AMDEP_TRUE@@am__fastdepCXX_FALSE@      DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@
    @am__fastdepCXX_FALSE@  $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(snappy_unittest_CPPFLAGS) $(CPPFLAGS) $(snappy_unittest_CXXFLAGS) $(CXXFLAGS) -c -o snappy_unittest-snappy-test.o `test -f 'snappy-test.cc' || echo '$(srcdir)/'`snappy-test.cc
* snappy_unittest should be in check_PROGRAMS, not noinst_PROGRAMS. That way, it's built as part of `make check`, not `make all`.
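
Putting both Makefile.am points together, roughly (libsnappy.la and GTEST_CPPFLAGS appear in the existing build files; the rest is illustrative):

    check_PROGRAMS = snappy_unittest
    # CPPFLAGS only; no need to repeat $(GTEST_CPPFLAGS) in _CXXFLAGS
    snappy_unittest_CPPFLAGS = $(GTEST_CPPFLAGS)
    snappy_unittest_LDADD = libsnappy.la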



I wouldn't be surprised if this used to be part of a big internal build system, and they hacked up an autoconf/automake setup for the public release.

-----


Definitely - this is the case with pretty much every Google component that gets open-sourced. The only exceptions are projects like Chromium and Android that were designed from the start to be open-sourced.

-----


Please file a bug: http://code.google.com/p/snappy/issues/entry

-----


Done.

-----


Sorry to be a little off-topic, but do you have any pointers for where to learn this kind of knowledge (more "best practices" than "getting started") for autoconf et al.? I just started using them, and I'm sure many of your criticisms would apply to my code; I'd like to do better... Thanks.

-----


In case anyone wants to know, this is my go-to recommendation for "getting started": http://www.lrde.epita.fr/~adl/autotools.html . It's extremely thorough and shows how all the different pieces work together. I'd read it anyway, even though you've already gotten started. It's very, very good.

I've also found Diego Pettenò's "Autotools Mythbuster" to be quite good: http://www.flameeyes.eu/autotools-mythbuster/index.html . His old article "Best practices with autotools" isn't bad, but a little light: http://www.linux.com/archive/articles/114061

You may also want to browse the autoconf and automake tags of Diego's blog: http://blog.flameeyes.eu/tag/autoconf and http://blog.flameeyes.eu/tag/automake

A thorough reading of the autoconf and automake manuals also points out common pitfalls.

-----


My opinion is that many people hate autotools because they don't understand them. They were designed to handle just about any crazy build scenario you can throw at them -- of course they're going to be a little complex.

I started by spending a few hours forcing myself to read the following. It explains how autotools works and (importantly) why.

http://www.freesoftwaremagazine.com/books/autotools_a_guide_...

It is critical to understand what the autotools are actually doing before you just go and copy'n'paste somebody else's configure.ac and/or Makefile.am.

When you are ready to get your feet wet, here is a really good quick summary from the GNOME project:

http://library.gnome.org/users/anjuta-build-tutorial/2.26/cr...

Once you understand what is going on, finding specifics in the GNU manuals (and being able to interpret them) is much easier. Yes, you should understand how m4 works (it's not rocket science).

-----


> The various bits generated from and added by the autotools shouldn't be committed. autoreconf -i works really well these days. That's INSTALL Makefile.in aclocal.m4 compile config.guess config.h.in config.sub configure depcomp install-sh ltmain.sh missing mkinstalldirs.

This is a matter of opinion. When you have dependencies which rely on specialist tools, it is a good idea (and accepted practice, though this obviously could be argued) to commit your generated files too. This means that the files don't change depending on the version of autoconf/bison/etc. that's installed on a user's machine.

As I recall, years ago some folks wrote up all these "version control best practices" for some conference, and this was one of the "rules". But it's common sense too - autotools is deep enough magic that most people won't know to run bootstrap.sh, or autoreconf -i, or whatever.

(Ironically, Google Code doesn't agree with this, and won't let you trim generated code like this from the diffs they send. See http://code.google.com/p/support/issues/detail?id=197 .)

-----


> As I recall, years ago some folks wrote up all these "version control best practices" for some conference, and this was one of the "rules". But it's common sense too - autotools is deep enough magic that most people won't know to run bootstrap.sh, or autoreconf -i, or whatever.

I disagree. At this point in the game, autotooled systems are pretty common, and to assert that they are deep magic is like saying "CMake is deep enough magic that most people won't know to run mkdir build && cd build && cmake .., or whatever".

Developers are not like most people. A release tarball (created with `make distcheck`, say) will absolutely contain configure and other generated files. That's one of the points of the autotools - to have minimal dependencies at build time.

Developers, on the other hand, will need to look up a project's dependencies, install other special tools (parser generators, for instance). This should be documented somewhere, along with the bootstrap instructions.
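
The two paths look something like this in practice (the exact bootstrap command varies by project):

    # from a release tarball: configure is already shipped
    ./configure && make && make install

    # from a VCS checkout: regenerate the autotools output first
    autoreconf -i
    ./configure && make check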

-----


These days (in my experience, which I believe is typical), no-one even uses release tarballs; everyone just uses the repositories.

Maybe you can require your developers to install esoteric tools, but that's no way to solicit contributions. For a compiler I worked on, we had a dependency on libphp5, which was enough to discourage anybody. But we also had dependencies on a particular version of flex, gengetopt, bison, autoconf, automake and _maketea_. The final one was a tool we wrote and had spun out into a separate project, which was written in Haskell, and required exactly ghc 6.10. Do you think we were going to get contributions from people who needed to install all of those just to fix a minor issue?

-----


Bad comparison. Yours was an extreme case. Requiring semi-recent versions of autoconf and automake (in addition to any other dependencies the app has) is not unreasonable. They're not "esoteric tools." Neither is the need to run autogen.sh instead of (or before) configure.

-----


> When you have dependencies which rely on specialist tools, it is a good idea [...] to commit your generated files too.

The problem I encounter regularly is that people have slightly different versions of autotools (e.g., autoconf 2.62 and autoconf 2.63).

When they commit with svn (and are not super careful), several commits have small diffs in the generated files (added newline, '\n' vs. real newline, etc.), making merges painful.

-----


Good point. I generally had the rule to use the version of the tool that was last used (upgrading the tool could be done separately). This was a small chore, but only for the people who touched that portion of the code base (and touching the parser/lexer/IR definitions/build system is individually pretty rare).

-----


Autoconf generates makefiles and such which make assertions about the current state of the machine it ran on. Running a build using those generated files on a different machine (or the same machine in the future!) would seem to be defying reality. I think you have to accept that your build isn't deterministic unless you ensure your build has no dependencies except on tool binaries you check in right along with the code.

-----


I believe you've misunderstood me (I can see how what I said was ambiguous), so let me clarify.

What I mean is that you're using a different version than is being tested by the other authors. So for example, if developer A used flex 2.35, but developer B has flex 2.36 installed on their system, they might get a different result (or a bug, or a security flaw in the resulting binary) etc.

I'll add that this isn't an idle complaint - it has personally cost me many hours of chasing down errors.

Note that you can use autoconf to require exact versions, but then you require that the developer install an exact version of bison, even though they aren't touching the parser (or exact versions of autoconf even though they aren't touching the build system).

-----


As I see it, you should either check in the flex and bison binaries themselves (and whatever runtime dependencies they have) and only use those during the build, or accept that your build will fail and require maintenance as time goes by, doing whatever's simplest without trying hard to avoid that. Autoconf is intended to facilitate dependencies on components outside source control which happen to exist on a single machine, so it's only appropriate to use at all if you go the latter route.

-----


Most people don't commit Makefile, but do commit configure and Makefile.in (both of which are still generated automatically, but it's not quite as bad as committing config.status or Makefile).

-----


Soon, somebody will convert the build system to CMake, move it to github, clean up the insulting directory structure, then nobody will look at their google code page ever again.

-----


People act like CMake is an improvement, but it's not. The language is not very good (lists as semicolon-separated strings, seriously?). For example: its pkg-config support is completely broken. It takes the output of pkg-config, parses it into a list (liberally sprinkling semicolons where the spaces should be) and then the semicolons make it into the compiler command line, causing all manner of cryptic errors.

Stick with automake. Seriously.

-----


CMake is also very difficult to debug (e.g. to find out why a library test is failing), harder to fix once you've debugged it, has strange ways of accepting extra compiler/linker flags from the environment, has poor --help, tries to allow creating Xcode projects but mostly produces nonsense, etc...

I think it might actually be worse than autoconf in every way, which is surprising considering how bad autoconf is. The handwritten non-macro-expanding not-much-autogenerating configure/makefile in ffmpeg/libav/x264/vp8 is easier to deal with than either.

-----


I disagree with you that autoconf is bad. Its design came from a lot of locally-optimal choices that don't look so good in 2011, and there's a lot of legacy code being copied around in people's configure.ac files.

To me, it's not perfect, but it's pretty good. Then again, I'm known to my friends as "that guy who knows automake" :-).

-----


On Windows, an automake-based build is an order of magnitude slower to compile than the projects generated by CMake, not to mention that getting autotools to work with MSVC at all is very difficult. automake just isn't a viable option if your projects need to be portable to Windows.

-----


Speed differences like this are often down to process creation.

A lot of automation routines designed on Unix-like systems involve creating short-lived processes with reckless abandon, because creating and tearing down a process in most Unix environments is relatively cheap. When you transplant these procedures to Windows, you are hit by the fact that creating or forking a process there is relatively expensive. IIRC the minimum per-process memory footprint is higher under Windows too, though this doesn't affect build scripts much, since generally each process (or group of processes, if chaining things together with pipes and redirection) is created and finished with in turn rather than many running concurrently.

This is why a lot of Unix services out there are process-based rather than thread-based, while a Windows programmer would almost never consider a process-based arrangement over a threaded one. Under most Unix-like OSes the difference between thread creation and process creation is pretty small, so (unless you need very efficient communication between things once they are created, or might be running in a very small amount of RAM) a process-based model can be easier to create/debug/maintain, which is worth the small CPU efficiency difference. Under Windows the balance is different: creating processes and tearing them down after use is much more work for the OS than operating with threads, so a threaded approach is generally preferred.

-----


I always used MinGW on windows.

-----


OT: If I understand correctly, CMake was purpose-built to support building Kitware's visualization application. Their app uses Tcl as an embedded language; how they could already be using Tcl in their app and then insist on building an ad hoc language into CMake (versus using Tcl, which already supports looping, conditionals, variable setting/getting, etc.) is an occasional wonder to me.

-----


I hate CMake. I hate autotools, too. But, if there's going to be a replacement for autotools, it's gotta be better than CMake. At least autotools is standard on every Linux/UNIX system these days. CMake is just another build dependency for very little gain.

-----


Don't use them. Makefiles are fine. So far I haven't seen the need for configure magic. On the other hand, I don't work on software that needs to compile on archaic AIX systems.

For a Win/OS X/Linux portable build, a Makefile should suffice. Example: https://github.com/MatzeB/cparser/blob/master/Makefile

-----


That's probably fine as long as you're just compiling .c to .o. I suspect it breaks down in the face of anything more complex. For instance, I've got a problem right now where I need to use objcopy to turn arbitrary binaries into .o files, and that requires knowledge about the toolchain on the user's machine which, as far as I can tell, can only be gathered by compiling a throwaway file and sniffing its output with objdump. That's exactly the sort of task which autotools is good at, but I'm desperately trying to find a different way to get at the information so I don't have to introduce autotools to what is otherwise an already... erm... interesting build chain.
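
For the curious, the invocation is roughly the following, and the output-format/architecture flags are exactly the toolchain-specific part (file names hypothetical):

    objcopy -I binary -O elf64-x86-64 -B i386:x86-64 blob.bin blob.o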

-----


Why not just convert the binary into an array literal in a C source file and compile that?
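
E.g. with xxd, which emits a C array plus a length variable (file name hypothetical):

    xxd -i blob.bin > blob_bin.c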

-----


Whoah! Hold on, now. autotools (and CMake) exist for a reason. In many cases they make your life easier than maintaining makefiles by hand. I do think make is much easier to work with (with some warts) than autotools, and it's certainly more comprehensible, since it's so much smaller and contains far fewer bits of magic. But to just throw out everything autotools were designed to deal with doesn't make sense.

Sure, if your build is simple, make works fine. But, the projects I've worked on where autotools was used, simply using make would have been a horrible experience. And, in most cases, the projects started out using make by itself and then moved to using autotools when the number of platform specific makefiles became too big to maintain.

-----


Configuration magic aside, have fun supporting everything in http://www.gnu.org/prep/standards/html_node/Makefile-Convent... . You never know which features your eventual users will rely on. That's one of the main problems automake solves.

-----


Custom-made build systems tend to break conventions that are useful to packagers (destdir) or home-directory installs (prefix) or developers (ccache, though your example isn't guilty of the last one). I'd rather build something that handles all this and integrates well everywhere than see variations that need custom patching.

-----


What are your thoughts instead on djb's redo, which is being implemented and so far working nicely at https://github.com/apenwarr/redo ?

-----


redo and ninja are replacements for make. They have a dependency graph and build it. Tool integration, configuration, feature detection, lifecycle (install, release, deploy…) have to be handled by something else.

-----


I think that the tup build tool is a wa-a-ay better solution than CMake. http://gittup.org/tup/

-----



