1. Do you want to compile certain source files with different build flags? Hah, no! Automake only supports it through obscene hacks, and those break if you're using Libtool.
2. Do you want to make a plugin? The best way is to ditch Libtool completely, make an executable target in Automake, and add the linker flags yourself. It makes you feel like you're banging two rocks together, and it's not portable, but at least it gets the job done.
3. Do you have any linker flags? Hah, no! Libtool will mess with them and they won't work.
The other bit is that Libtool is basically all magic, and Automake is basically all macros. The amount of magic Libtool does to fool you into thinking you're not writing dynamically linked code is enough to make you puke, and it makes a mess when it breaks; Automake's macro system is terrible.
Just imagine, if you will, that you want to compile one file in your library with the flag -msse3, to produce a dynamic library that has to run on systems both with and without SSE3. You can then use cpuid at runtime to decide whether to call the functions in that file.
All of my searching has led me to the conclusion that this is impossible if you want to use Automake and Libtool, and easy if you ditch both of them.
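For the record, with plain GNU Make the per-file flag is a one-liner using a target-specific variable (the file name here is hypothetical):

```make
# Append -msse3 only when building this one object; every other
# object keeps the project-wide CFLAGS.
sse3_kernels.o: CFLAGS += -msse3

%.o: %.c
	$(CC) $(CFLAGS) -fPIC -c -o $@ $<
```

Target-specific variables are a GNU Make feature, so this sketch assumes GNU Make rather than strictly POSIX make.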
It is vastly inferior from the perspective of the end user. Any autotools package can be handled with "./configure && make && sudo make install", and you can change it up with "./configure --without-gtk --enable-network --prefix=/opt/crazy CFLAGS='-O0'", then make a package with "make install DESTDIR=/tmp/package". Try doing that in CMake and you'll see what I mean.
The syntax is also really terrible, If/Else in particular. The documentation is not much better: it's available online as one big block of text, and they want to sell you a book.
Finally, CMake doesn't solve the one problem I was complaining about the most: setting per-file compilation flags. This is not an optional feature in my eyes -- it's necessary for avoiding cross-contamination by -D_GNU_SOURCE turds picked up from pkg-config, and it's necessary for writing libraries that use processor features that might not be available at runtime, like SSE3.
I think CMake, like Autotools, has a niche in which it does best. "Simple things easy and complex things possible" is a nice motto, but the truth is really boring -- CMake and Autotools make different things easy and different things possible. CMake's niche is medium-large projects with medium amounts of complexity. Projects that are complex, like Firefox or FFmpeg, tend to use Autoconf with custom build systems.
Edit: I should also add that, as an expected corollary, if you use ghc -M to generate your Makefile dependencies, parallel make works fine. I used ghc that way for years. I think there was a corner case if you interrupted a parallel build, but it was easy to deal with.
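A sketch of that setup, with hypothetical module names (ghc -M updates the dependency rules in the Makefile itself, between "# DO NOT DELETE" marker comments, so `make -j` sees real inter-module dependencies):

```make
SRCS = Main.hs Parser.hs Eval.hs   # hypothetical module list

depend:
	ghc -M $(SRCS)

%.o: %.hs
	ghc -c $<

myprog: Main.o Parser.o Eval.o
	ghc -o $@ $^
```

Run `make depend` after adding imports, then `make -j` is safe because every .o knows which .hi files it needs.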
Cabal is probably the #1 thing keeping me from writing more Haskell code. I've had so many issues with conflicting versions of various libraries being required, and incredibly cryptic error messages upon failure.
Which is sad, because Haskell otherwise seems rather attractive.
It's also ironic, because I would have imagined that a package manager for a purely functional programming language would be a bit more robust.
Cabal has recently been causing problems: the fast pace of Haskell library development, together with some big changes in the language and certain core libraries, has produced lots of incompatibility problems.
cabal-dev is a new tool, built on top of cabal, that uses a per-project virtual environment and should "fix" all the version incompatibility issues there have been. It is similar to Python's virtualenv, Ruby's rvm, and other such tools.
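A typical session looks roughly like this (commands as I recall them from cabal-dev's usage notes, so treat this as a sketch rather than gospel):

$ cd myproject
$ cabal install cabal-dev
$ cabal-dev install    # builds into ./cabal-dev instead of ~/.cabal
$ cabal-dev ghci       # a REPL that sees only the sandboxed packages

Because everything lands in the project-local ./cabal-dev directory, two projects can depend on conflicting library versions without stepping on each other.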
If that's becoming standardized, the way pip/virtualenv are becoming a standard part of Python 3, that's good news. Maybe this problem is already on its way to being solved - let's hope so!
People: hand-wavy complaints that do not give concrete examples are hard to fix. Please communicate concrete instances of your problems so that either a) someone can help you solve them now, or b) there is a concrete example of a problem on record that folks can fix.
That said, any serious dev work really should use the cabal-dev tool; it avoids the build problems that new GHC users hit.
Is there ever a negative connotation to asking for concrete examples? (I'd actually like a concrete example if there is one)
A: "I looked up everywhere I could before coming to ask for help"
B: "You looked it up everywhere? Ok then, give me the name of at least one place you looked it up at."
$ sudo apt-get install ghc6 cabal-install
$ sudo cabal update
$ sudo cabal install hakyll
That said, it seems that cabal-dev is not packaged for Debian. Come the wheezy release of Debian, the hakyll project I wished to try out will be available as a .deb, so I'll try it again then.
However, you can do "cabal install cabal-dev" and rely on cabal-dev for the rest. It means you don't have a global package cache, but in this case that's really for the best. Don't forget to add ~/.cabal/bin to your $PATH if you want access to cabal-dev.
We have been at ghc 7 for quite a while now (the Debian package is `ghc`).
You might start to see bit rot.
> Likewise, when you have concrete problems, have you asked for help on #haskell irc or the Haskell-Cafe mailing list?
Yes, and I will say that the Haskell community is one of the most friendly language communities in my experience. I was very impressed with the attitude on #haskell, and I wish other development communities were more like it.
That said, I don't think these problems should be happening in the first place (or at least as often). For contrast, a Python package that is correctly developed inside a virtualenv with a proper requirements.txt should be easily installable via pip on any POSIX machine. Haskell's language design is much stricter than Python's, so it surprises me that something like virthualenv is only in alpha rather than the standard approach.
If I remember correctly, my problem was that I wanted to use Snap for my application, but Snap would only compile against an older version of the same HTTP client library that another API library I needed also depended on. Maybe there's so
A (separate) problem that makes these hard to debug is the way that Haskell modules (and their documentation) are organized. The documentation is highly focused around types, which makes sense, but this causes problems if you (A) aren't sure how to use the relevant constructors, or (B) aren't sure which module provides those types.
My favorite error message was something along the lines of "Data.Text.Lazy expected but found Data.Text.Internal.Lazy". Since I had at the time imported neither Data.Text nor Data.Lazy, I was left to figure out which module relied on those packages, and then which module I needed to import in order to construct the types that this module needed. This is a problem when the package names and the import paths are sufficiently different - maybe not to a veteran, but certainly to a newcomer.
By contrast, certain languages have very strong cultures of documentation-by-example. Go and Python would be two easy examples. It's incredibly helpful when most packages provide a minimum of one single, straightforward example of how to use the package. It makes it much easier
(As for the namespace issues, I really like the Go approach to this: the import path is just the same URL that is used for installing - ie, 'import "github.com/jbarham/go-cdb"'.)
> People: hand wavy complaints that do not give concrete examples are hard to fix.
It's hard to get much more specific when I'm not actively trying to solve the problem (anymore); I can't readily recreate error messages from several months ago.
The possible exception is issues with compiling C extensions, but that's not even the biggest issue I've found with Haskell. And yes, I know it's taken Python 20 years to reach this state, but now other languages are starting to follow in its footsteps - look at npm, for example.
It's bad enough that I start to wish everyone would just use the JVM, where you have Maven. Truly reproducible builds, the ability to depend on whatever range of library versions works for you (and it's no problem if two programs want different versions), a structured and testable plugin system for extending the build system, and you can use it to build any language (there are some benighted fools who write their own tools like SBT, but they at least stay compatible, so they can usually be safely ignored).