I really appreciate the advice from Daniel Pfeifer [0] where you treat all your build-tree sub-projects as out-of-tree dependencies by overriding the `find_package` command with a macro.
This way, you can just find project dependencies using `find_package` as if you were referring to something pre-built. Saves a lot of hassle.
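For anyone who hasn't watched the talk, the core of the trick looks roughly like this (a sketch from memory, not a drop-in snippet; the sub-project names are placeholders):

    # top-level CMakeLists.txt
    set(as_subproject a b)              # sub-projects that live in this build tree

    macro(find_package)
      if(NOT "${ARGV0}" IN_LIST as_subproject)
        _find_package(${ARGV})          # fall through to the real find_package
      endif()
    endmacro()

    add_subdirectory(a)                 # each one defines its targets as usual
    add_subdirectory(b)

Consumers then just call find_package(a) plus target_link_libraries(... a) and don't care whether "a" came from the build tree or from an installed package.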
This requires a lot of extra work and often doesn't work like you'd expect. However, that is a shortcoming of CMake not having a good dependency model aside from the old (and IMO not far from obsolete) system-dependencies model that Linux popularized. It made sense then, but things have changed, and dependency injection is something CMake fails miserably at.
We shouldn't have to do macro/function overrides, but I'm glad they exist in CMake. The current status quo is "it's really annoying and a huge headache to do, but at least CMake allows me to do it, unlike others".
Because you're ignoring how find_package works and how consumers of those dependencies use the variables the find scripts set. Not every find script creates some sort of target.
This is a really weird way of doing things. You can just add_subdirectory from the top level all the way down as intended. Then if you only want to build A, you just do "make A" instead of parameterless "make". Am I missing something that makes this approach bad? IMO this is the idiomatic way to do it.
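To spell it out, a minimal sketch of what I mean (directory and target names made up to match the article's base/a/b layout):

    # root CMakeLists.txt
    cmake_minimum_required(VERSION 3.10)
    project(root)

    add_subdirectory(base)
    add_subdirectory(a)   # inside: target_link_libraries(a PUBLIC base)
    add_subdirectory(b)

Configure once, then "make a" (or cmake --build build --target a) builds only a plus whatever it depends on.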
This doesn't work all the time. The problem is that there's a difference between how targets are built up and how find_package() specifies you should detect system deps, and the two are inherently incompatible.
This is talking about internal dependencies though, not system ones. That said, my strategy for system deps is to just not use them and vendor everything I can, rewriting their build system in cmake if I have to.
Yes, exactly. It doesn't always work like you'd expect. Custom find modules can use custom variables, so you have to define them manually in every case. This macro gets you halfway there, at least.
As long as root has a CMakeLists.txt and includes base, a, and b, cmake will figure it out. After the build tree has been generated with cmake, you can even run make from the build/root/a/ directory and base/ and a/ will be built. The author, for some reason, does not want a CMakeLists.txt in root/ and wants to run cmake from a or b alone, so they have to use a more convoluted approach.
One reason against the CMakeLists.txt at the root: when you generate a Visual Studio "solution", every target becomes a "project". At work this results in a few hundred projects and VS gets slow. Someone built a Python script to trim it down. This is treating symptoms instead of root causes and adds accidental complexity.
Of course, the root CMake also needs to parse all subdirectories, which takes time. However, this has not been a big problem in practice so far, only on the "annoying" level.
That sounds like you've just got too many projects. Why not merge them? You'd also save the time wasted on linking static libraries (at least on Linux this is significant, dunno about Windows).
My normal strategy is to just dump most of their build system and remake it myself in cmake. Even if they use cmake, I often end up commenting out most of it. If you keep things super simple in your cmake, it all works very well, and writing super simple cmake is pretty easy. Regardless, this article was talking about internal dependencies, not external ones.
Other than the fact that it generates VC++ projects, I haven’t been able to find any advantage of cmake over plain make. (I think auto downloading stuff during the build is a bug, not a feature. I’d also rather never use find_package. Many people disagree. That’s OK.)
Are there any reasonable raw make based solutions for VC++?
I agree with the GP post and:
- I do not target nor care about Windows systems (which is in agreement with the GP I guess; cmake may be useful to get project files for VC covered)
- I do target different platforms (Linux, BSDs and Solaris)
- I do target different compilers (GCC, clang and ICC)
- I do cross compile and maintain multiple toolchains (I have 16 GCCs and 8 clangs on my box)
- I do juggle different compiler flags
- I work on a codebase with 1.24M lines
- I have experienced the pain of working with dependencies we need that use cmake.
The only thing cmake has given me is having to learn another way to do all the same things I already know how to do for 10 other build systems.
NB: I do not agree with the GP post that make by itself is a good tool. In most cases, the best tool is simply the most popular one, which - sadly, because it's a pretty bad tool - is autotools for my use cases. I know how to deal with it, my co-developers know how to deal with it, and my users know how to deal with it. Even if I learn 10 other build systems, that still leaves my co-developers and users hanging.
If we disregard this and go purely with technical merits, my personal opinion (= I don't care to argue for this, feel free to ignore) for the best tool becomes Meson.
(I'm aware this post is a bit condescending, but I feel that this is an appropriate response to the parent post's equal condescension. ["You probably feel the way you do because:" - really? What gives you that "insight"?])
GP here: I’m also ambivalent about autotools. It integrates well with distro package managers. It’s ridiculously complicated, but at least it handles all the corner cases of cross-compilation, etc.
It's definitely showing its age & quite shitty in a lot of places, but no argument can be made that it isn't the "standard" :(. If it were showing up to the party right now as a new entry, it'd be laughed out of the room...
[libtool isn't even actively maintained! Like, seriously, I'd guess >90% of packages on any Linux/BSD systems use it in their build and it's essentially abandonware!]
I have years of experience with cmake, and all the other things you mentioned. I work in environments where the correctness of the binaries is important, and cmake fights that at every possible step. The documentation is poorly organized and overly verbose. 99% of the details in the docs are irrelevant 99% of the time, and the important details are missing or relegated to a non-discoverable page elsewhere on their site.
Come on... make is a perfectly good choice (and for me, preferable) for any project that is small to medium sized. However if you get to a stage where you have to compile for 3 or more platforms and need to account for different compiler versions & systems that people may be using, 'make' quickly becomes hell to use. The possible build combinations just get out of hand, and you can't if-then-else your way out of every situation. Ultimately you get to a point where makefiles can't deal with the growing complexity of a big project.
That is where CMake really shines - it takes the headache out of managing complex, sprawling builds once you get it setup, and you won't have to keep tweaking the config to manage every other dev's system. I grant that the documentation is not perfect and there is a significant time investment in getting everything 'just so', but the long-term time savings make up for that completely in my experience.
Most of my career has been in non-mainstream software development. There's always a ton of rare, unique, or broken stuff that I have no control over. In many situations, adding an extra layer via a generator (such as CMake) adds more work than convenience.
I've also found CMake's documentation to be useless most of the time.
By the time I track down and fix all the issues, I might as well have just written a Makefile (or whatever obscure, ancient build tool they use).
Though I agree with your points and do not like CMake either (I tend to much prefer autotools or plain make), there was a time when I saw the use of it: maintaining software with developers both on Windows using VS and on Linux using make.
Please substantiate the claim about how "cmake fights that at every possible step." There's so much hyperbole in your statement that it really does seem like you've spent "years of experience with cmake" futzing around.
So, assume that you need byte-for-byte identical upstream dependencies, and also assume you want to run in a modern CI environment. With make, you build a Docker image with a line like:
RUN tar -C /opt -xvf /…/foo-1.2.3.tgz
RUN <build the thing if needed>
And a few lines like this in make:
INC += $(wildcard /opt/foo-*)/include
LIBS += -L…
This generalizes elegantly to subdirectories (see “Recursive Make Considered Harmful” for the right way to do it), and (crucially!) it won’t build if you don’t have the build environment set up correctly.
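For anyone who hasn’t read it: the short version is one top-level Makefile that includes a per-directory fragment, so the whole tree is a single dependency graph instead of nested make invocations. A sketch (file names illustrative):

    # Makefile (top level)
    include foo/module.mk
    include bar/module.mk

    # foo/module.mk just appends to the shared variables, e.g.
    #   SRCS += foo/foo.c
    #   INC  += -I$(wildcard /opt/foo-*)/include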
It also lets you prevent the CI build container from talking to the internet, so outages of upstream server infrastructure, or package-manager-du-jour serving bitcoin miners can’t directly break production binaries.
It also generalizes to building on other people’s operating systems, etc.
With cmake, you need to read the documentation for find_foo, which invariably has a different calling convention than find_bar (if it exists at all), and it will try to look places it should not for the library. Adding a library this way takes hours.
Also, the docker + make approach works well with languages other than c/c++; cmake does not, unless cmake happens to include built-in support for the language in question.
You can just add_library a filepath in cmake too. You don't have to use the find module. It isn't idiomatic cmake, but it's 100% possible. It sounds like you have specific niche needs, so it's a reasonable approach for your case. IMO it's unfair to criticise cmake for this, since it can do the exact same thing as make in this instance.
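For example, something along these lines (paths made up, untested sketch):

    # point cmake straight at a prebuilt/vendored library, no find module involved
    add_library(foo STATIC IMPORTED)
    set_target_properties(foo PROPERTIES
        IMPORTED_LOCATION              "/opt/foo-1.2.3/lib/libfoo.a"
        INTERFACE_INCLUDE_DIRECTORIES  "/opt/foo-1.2.3/include")
    target_link_libraries(myapp PRIVATE foo)

Or skip the imported target entirely and pass the .a path straight to target_link_libraries.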
I don't know what exactly the poster was dealing with, but I've been trying to get reproducible builds out of cmake and it was a serious pain due to cmake insisting on using absolute file paths.
I can't tell you whether this is/was innate cmake behavior or whether something was wrong with the specific project, but cmake passed all source file names to the compiler using the full file system path rather than a relative path.
This is a serious (though slowly becoming a non-issue) problem since compilers encode source file names into the output binary's debug information and strings (__FILE__). The "-fdebug-prefix-map=old=new" GCC option and its cousins are what is slowly making this a non-issue, but those are a relatively recent addition.
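For what it's worth, with a new enough GCC/Clang you can remap the prefix from cmake itself; a sketch (flag availability depends on the toolchain):

    # strip the absolute source prefix from debug info; newer compilers also
    # accept -ffile-prefix-map, which covers __FILE__ as well
    if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
        add_compile_options(-fdebug-prefix-map=${CMAKE_SOURCE_DIR}=.)
    endif()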
I will say that with well-defined projects you can turn it over to IDE developers, and they can toggle options and generate release and debug builds in the GUI without having to become experts.
But yeah, figuring out what linker options were generated or even how to version a project takes a lot of digging into the bowels of cmake or a couple laps through the documentation.
You can certainly run regular Make in Windows, either natively or in one of the unix-ish environments, and have it call CL.EXE?
Having it emit VS/MSbuild projects is more of a problem, since MSBuild itself is rather like Make with different terminology and a strange pre-existing library.
> (I think auto downloading stuff during the build is a bug, not a feature. I’d also rather never use find_package. Many people disagree. That’s OK.)
You can easily make Make targets that download stuff too; although it gets messy if it needs to work everywhere, because there's no http downloader in posix, so you're at the mercy of the environment.
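e.g. something like this (illustrative only; the URL and downloader are whatever your environment provides, and recipe lines must be indented with a tab):

    # fetch the tarball only if it isn't already present
    foo-1.2.3.tgz:
    	curl -fL -o $@ https://example.org/foo-1.2.3.tgz

    /opt/foo-1.2.3: foo-1.2.3.tgz
    	tar -C /opt -xzf $<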
The C in cmake means cross-platform. That is the reason.
The Windows folks buy into it.
Other than that, it is much more procedural than make and matches the way people write code.
Make is not procedural and is full of footguns for the uninitiated. You need to write one big dependency tree, and it is defined out-of-order with very opaque rules for things like variable expansion.
With cmake, you can just write your CMakeLists.txt from top-to-bottom with if statements, indenting and understandable variables.
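i.e. something in this shape (made-up project, just to show the reading order):

    cmake_minimum_required(VERSION 3.10)
    project(example C)

    option(BUILD_TESTS "Build the test suite" ON)

    add_library(core src/core.c)
    target_include_directories(core PUBLIC include)

    if(BUILD_TESTS)
        add_subdirectory(tests)   # executed top to bottom, like ordinary procedural code
    endif()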
Not to mention that people hate build systems. They have the same popularity as taxes, which everyone wants to get in, get out and forget until next year.
That said, cmake is better for well-defined projects - the time saved on the front end can be eaten up later when you want to do something fancy, like change the compiler or compile an external library with unsupported flags.
I like to have a meta project with the root CMakeLists.txt.
In this meta project I either just add_subdirectory all necessary components manually, or I use Findxyz.cmake scripts for each module with that "header guard": they check whether the target is already defined and, if not, add_subdirectory the required component (rough sketch below).
For each component (i.e. cmake project) I can then just find_package (module mode) all dependencies and target_link them. Like any other dependency.
For consistency I like to write cmake find modules for all external (installed) libraries as well, and let those find modules use find_package in config mode for installed libraries, and add_subdirectory for my own components or for projects I want to include as source and build along with the other stuff.
This way I have one point where I can control the source of all dependencies.
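A stripped-down version of one of those find modules, to make the "header guard" concrete (the xyz name and the path are placeholders):

    # cmake/Findxyz.cmake
    if(NOT TARGET xyz)
        add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/../xyz xyz)   # build the component in-tree
    endif()
    set(xyz_FOUND TRUE)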
Another way would be to just install all components as libraries in the system with proper cmake config files for find_package in config mode.
I don't like to clutter my system with all kinds of application-specific libraries, so that is not the way for me.
"CMake is a conservative and popular build system [...] . Yet, it does not scale well to large projects". CMake was literally funded by NLM back then to handle a large open source project, ITK (then VTK). There are many, many other examples of large projects using CMake (MySQL, KDE, Minecraft, Second Life, Netflix internally, etc).
Initially my draft had the title "CMake and the Diamond". When the article was finished, I decided to make it more clickbaity because I'm not Paul Graham or Scott Alexander.
Rumors are that words like "How to" and "You" make it look more interesting.
I weakened it intentionally by adding the "can" because I don't claim that this approach is the one best solution.
I think the article title is fine, I'm saying that the HN thing that removes numbers and "how to" and so on (to prevent clickbait titles) has done the opposite here. :)