An Introduction to Modern CMake (cliutils.gitlab.io)
267 points by uyoakaoma 23 days ago | 122 comments

Somehow I'm never happy with CMake guides. The official book is, among other things, ridiculously outdated; the official documentation is complete and current but provides no guidance; and none of the blog documentation about it gives a complete picture. Some of it is bad advice. This one doesn't talk about the dependency graph, something I consider very important in a build system. It is also missing a lot of other stuff. (Fair enough, it's provided for free.)

I wrote the CMake training material for KDAB (www.kdab.com) while I worked there. In my biased opinion it's the best CMake guide around, especially after it was extended and polished by coworkers :). I tried to mix explaining the "inner logic" of a build system, and CMake in particular, with examples, resulting in something I'm reasonably happy with. The main difficulty was ordering the topics to avoid too many forward-references while also not starting with a lot of not-obviously-interesting groundwork [1]. (I don't work there anymore btw, so the interest I have is getting my work used)

This intro to modern CMake should be good because Steve is a major CMake contributor and good at explaining. The presentation even contains a dependency graph! https://steveire.wordpress.com/2017/11/05/embracing-modern-c...

[1] Aside: I figured that "There are two types of bad software documentation: math textbooks and cooking recipes. The former does not explain why you are doing things and other helpful context, the latter won't help you if you need to do something differently, which is almost always."

The official documentation is sub-par. Want to know the difference between include_directories and target_include_directories? Well, good luck going through docs with no examples and then parsing several-year-old stackoverflow posts.

That is what I mean by "no guidance". In this case, you have a common best-practices question that you have to answer by diffing the two pieces of documentation yourself. The information is usually there.
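For what it's worth, the difference being asked about comes down to scope and visibility. A minimal sketch (target and path names are made up):

```cmake
# Old style: applies to every target later defined in this directory
# and its subdirectories, with no notion of visibility.
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)

# Modern style: attached to a single target, with explicit visibility.
add_library(mylib src/mylib.cpp)
target_include_directories(mylib
    PUBLIC  ${CMAKE_CURRENT_SOURCE_DIR}/include   # mylib and its consumers
    PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/src)      # mylib only

# Consumers inherit the PUBLIC include dirs through the link:
add_executable(app main.cpp)
target_link_libraries(app PRIVATE mylib)
```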

> the official documentation is complete and current but provides no guidance,

Hmm, I think on the contrary that the documentation actually does a good job of helping you make your first CMake file: https://cmake.org/cmake/help/v3.12/manual/cmake-buildsystem.... - then you delve deeper when what's provided there doesn't cut it anymore because of less common requirements.

It's even available as man pages!

Exactly what I am trying to fix [1] (if I somehow manage to actually finish the book, hopefully soon). I was missing a hands-on guide as well, one that actually guides you through a typical- to large-sized project with many internal/external dependencies. This is what I discuss in the linked book, with working approaches that I have tested and verified in production.

[1] http://effective-cmake.com

The recommended compatibility boilerplate for new projects is tedious. There must be a way to include this knowledge in cmake itself rather than requiring every "properly" configured project to get this right:

    cmake_minimum_required(VERSION 3.1)
    cmake_policy(VERSION 3.12)
This is all before you even start thinking about your own project.

Why do this? Why not choose a minimum version that has the features your build system depends on and just leave the first line at the top?

It's not like you can use features from the newer version anyway, if you seriously want to support the older version. And if the same line of code has two different meanings in two versions, you want the old meaning (which is what you'd get without `cmake_policy(VERSION <new version>)`), because presumably you wrote your CMakeLists assuming that behaviour.

Edit: If I had read TFA before I posted, I would've seen that's where the quoted code comes from. As the first thing it teaches you to put in your CMakeLists.txt, that is a pretty unfortunate failure IMO. If an old version of CMake has a behaviour you don't want (such as the "wrong linking behavior on macOS" quoted in the linked article), then bite the bullet and require a newer minimum version of CMake. You could do this in only the cases where it would cause a problem (e.g. just on macOS, or just when a cache variable is set), but do so unconditionally of the version, so that if someone tries to use an old version of CMake in that configuration they'll get a hard error. That's what you want!
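A sketch of that hard-error approach (the version number and the macOS condition are illustrative, borrowed from the linked article's example):

```cmake
cmake_minimum_required(VERSION 3.1)

# Fail loudly in the one configuration that old CMake gets wrong,
# instead of silently producing a broken build.
if(APPLE AND CMAKE_VERSION VERSION_LESS 3.12)
    message(FATAL_ERROR
        "Building on macOS requires CMake >= 3.12; "
        "older versions have the wrong linking behavior here.")
endif()
```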

This syntax is also uber-ugly. I can't understand why C still doesn't have a proper build system that either uses convention over configuration (like Go or Rust) or something like cmake but with a proper syntax (maybe something in Python?)

The syntax... `list(GET mylist 4 val)` - oh, you mean `val = mylist[4]`. And the strange `function(ARGUMENTNAME argument1 OTHERARG arg2)` keyword-argument style, and that output variable names are always passed into a function by name because the language has no `=` assignment, and the semicolon-separated lists. And worst of all, the weak-to-absent typing: reading from a nonexistent variable never fails.
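For readers who haven't seen these constructs, all of the above is valid CMake (variable and function names invented):

```cmake
set(mylist a b c d e)      # a "list" is really the string "a;b;c;d;e"
list(GET mylist 4 val)     # i.e. val = mylist[4]
message(STATUS "${val}")   # prints "e"

# Output variables are passed in by name; there is no `=` assignment.
function(make_greeting OUTVAR name)
    set(${OUTVAR} "Hello, ${name}" PARENT_SCOPE)
endfunction()
make_greeting(greeting World)          # greeting = "Hello, World"

message(STATUS "${no_such_variable}")  # expands to "", never an error
```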

I hope this is all abstracted enough that one day cmake can become saner to read and write. For now you just have to carry this otherwise useless syntax knowledge with you, or switch to meson or waf. Most IDEs seem to prefer cmake though; especially with the new cmake server mode, it might be here to stay.

And a big thanks to all the saints who want to make cmake understandable for the rest of us.

For a domain-specific language, CMake is a total failure at making things clearer, simpler, and more reliable.

And of course the wizened software engineer in me has to be restrained: "Oh hey, I can make a better build system than that..."

Why restrain oneself from starting an interesting project?

The world is full of interesting projects, and you have to do triage or go crazy with too many things to work on. While I have a fair amount of experience with build systems (starting with a public domain version of make that I wrote in the early 1980s for Vax/VMS and MSDOS), it's not something that I want to spend a couple years on.

On the other hand, I complain enough about CMake, Gradle and similar tools that maybe I should. I've got some ideas to try out . . . . :-)

It'd be nice to see those ideas realized. I have some ideas myself but haven't started anything yet because I've been focusing on my studies. I already have one small project that I work on in my free time.

My problem with most build systems is they're general purpose tools. I need to teach them how my projects work every single time. It'd be so much nicer if I could encode my conventions on the system itself.

Starting a new project should be as easy as initializing a git repository and putting the right files in the right directories. The build system should be able to at least figure out where the sources are based on convention and determine the dependencies between files.
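In CMake you can approximate that convention, though only crudely; a sketch (assumes a `src/`-and-`include/` layout, and note that globbing is widely discouraged because CMake only re-globs on re-configure; `CONFIGURE_DEPENDS` needs CMake 3.12):

```cmake
# Convention: every .cpp under src/ belongs to the main executable,
# headers live under include/.
file(GLOB_RECURSE SOURCES CONFIGURE_DEPENDS src/*.cpp)
add_executable(${PROJECT_NAME} ${SOURCES})
target_include_directories(${PROJECT_NAME} PRIVATE include)
```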

Because making a new build automation system is akin to making a new standard. Or trying to build a better mousetrap.

I’ve used a few different build systems over time. A handful of them I’ve found quite agreeable, others not so much.

I once even went as far as writing a rudimentary build system from scratch myself for use in one of my personal projects. In that particular case, implementing my own build system turned out to be the right choice, but I also gained some insight into just how difficult it is to create a good build system so I would like to echo your sentiment about thanking the people that spend time developing fully fledged build systems.

I’ve never heard of meson before. A cursory glance at the meson docs is telling me that meson is worth looking into further. Thanks!

Meson is likeable, and with Ninja it has a great user experience. I do think it is much easier to understand than cmake (and I have some years of cmake usage under my belt).

Before CMake, the "proper" thing was ./configure scripts generated by autoconf/automake, which are an atrocity from any point of view.

More than "proper", or simply "nice-looking syntax", the thing I like about CMake is that a single file makes my project compile on several versions of Linux and Windows using a variety of compilers.

Any suitable replacement system should also do this.

+1,000. CMake may be bad in many respects, and a bit unhelpful in others, but it makes managing multi-platform projects pretty straightforward. There's not much more cruft than you'd expect, and there's convenient support for the standard tools on each major platform - and in exchange for that, I can forgive it quite a lot.

And it does actually seem to be improving over time, too, so over time I seem to have to forgive it less, even if still measurably more than not at all.

For single-platform projects it's less essential, but I think the PUBLIC/INTERFACE project settings stuff is still a draw, and its support for Ninja build files too.
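For anyone who hasn't met the PUBLIC/INTERFACE settings mentioned here, a small sketch (target and definition names invented):

```cmake
add_library(engine src/engine.cpp)
target_compile_definitions(engine
    PRIVATE   ENGINE_INTERNALS  # seen only when compiling engine itself
    PUBLIC    USE_ENGINE        # seen by engine and everything linking it
    INTERFACE ENGINE_CLIENT)    # seen only by things linking engine
```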

To me autotools have the great advantage that I can make projects using them work relatively easily if they need adjusting in one of the many things I build, and I can write the support for a new project easily. They're pragmatic, and I don't think deserve "atrocity" by any means. In contrast, cmake often causes me to throw up my hands. It's not my experience that cmake projects just work even on GNU/Linux when needing, say, to build shared libraries or install in a different sort of tree. I've used autotools quite painlessly over many years on assorted platforms including in GCC "Canadian cross" (cross-building a cross-compiler -- between GNU/Linux, SunOS, and MS-DOS originally). I did find a new level of horror in bazel when faced with building tensorflow for a different target, despite explicit instructions for a different release.

You should write an article detailing your experience with both systems.

It should make for an interesting read.

There is a Python one, waf: https://en.m.wikipedia.org/wiki/Waf It is awesome but not very popular. I wish it were more popular. As a Python and C++ developer, I find the build files end up so much clearer than anything else. I write and use cmake files every day and I don't like it at all.

Interesting, thanks for that.

Haven't dug into it yet to say thumbs up or down, but the fact that it uses a full-fledged generic programming language instead of a DSL is a promising start.

Here's a waf "makefile":


Have you tried doing anything complex? It's been a long time since I last used Waf, but I found it very difficult to extend.

It has been a while but I wrote an ARM and Atmel cross build system for Arduino boards. It was hard keeping compatibility with the IDE environment but the result worked and was not half as ugly as a cmake setup.

Meson seems increasingly popular. Never tried it myself though.

They also went the cmake route of not trying to be the build system but the meta-buildsystem, which just writes out Ninja files (or, in cmake's case, Makefiles and lots of other obscure formats). This is probably also the reason why cmake 'won' so far.

I think they did it in order not to duplicate make's functionality. Make is useful for partial rebuilds (when you change a single file only it is recompiled). You either use make or have to implement this yourself.

They did it because they didn't want to duplicate Ninja's functionality. It does partial rebuilds, it's been heavily optimized for just doing that, and it's a Google-created project that they're already using for other non-Bazel projects.

I'm loving meson. I have around a hundred thousand lines of code building with it and it's the cleanest build system I've used yet. Much less obscure than make or cmake.

Bazel, the open source sibling of Google's blaze.

I feel like there isn't a community outside of Google that uses Bazel heavily, and therefore it has weak community support.

Maybe we need a language that compiles down to CMake!

Well call it autocmake!

Love it. You start with AutoCMakeLists.m4. Yes, m4 macros are back in the game again!

Using autocmake, you build that into a CMakeLists.txt. After that now you can use cmake to generate your Makefile. Voila.

I'll create the root makefile that invokes AutoCMake, then CMake, then Make, and then handles all the edge cases CMake can't, plus add a much more usable interface... Take out AutoCMake and that's actually what I'm doing right now.

> Yes, m4 macros are back in the game again!

Actually, I did some codegen experiments recently and found M4 to be really nice once you get past the bad documentation and actually understand it (along with ignoring any experiences with autotools, of course). Besides the general substitutions and macros, the divert/undivert primitives are simple, yet this is where the real power of the language is: they easily let you write to a number of named/numbered buffers and then pull them back in where needed. The Wikipedia article probably shows this off more clearly than the documentation: https://en.wikipedia.org/wiki/M4_(computer_language)

> found M4 to be really nice once you get past the bad documentation

The gnu m4 manual is one of my favourite technical documents, what's wrong with it?

Like most GNU projects, the manual is very good; the problem is that it's a manual, just a reference for looking up specific things. It doesn't really explain how to use m4. I'm not sure if this is by design or by lack of effort in that direction.

Take divert, for instance, as I mentioned above: it's one of the foundations that make m4 good, but it's not mentioned until chapter 10, and even then the examples are hard to follow and not particularly good. They could have gone with something more real-world:

    define(INDEX, 1)
    define(BODY, 2)
    define(ADD_CHAPTER, heading, text

    #guts of the generation go here
    ADD_CHAPTER('one', 'paragraph one')
    ADD_CHAPTER('two', 'paragraph two')

The syntax there isn't correct, but I think an example like that shows off how and why you'd use this feature more than the examples in the documentation.

Shake looks promising to me. It has a Haskell front-end, and clearly has had a lot of thought put into it. That said, I have never tried it.

Despite CMake's atrocious syntax, the underlying compilation model has always worked more or less as I expected. I can't say that about most of the alternatives I have tried. I was frequently left confused and frustrated by my build systems until I found CMake.

SCons is a build system for C and C++ (and some other languages) that is configured in Python. I haven't tried it much though.

I've been recommended Boost.Build, aka b2, the build system Boost uses to build itself, but also usable for any other project:


Beware! At a previous job, Boost Build was easily one of the most unpopular aspects of the tech stack. It had 5% of people evangelizing it, and they were also the only ones who “got” the complicated Jam file arrangement. The rest of us couldn’t stand it. Just way too abstruse IMHO.

I used Boost Build for years, but I never figured out how to do anything but copy/paste our existing build files. I found scraps of documentation on Jam here and there, but it never made much sense when I looked at our code. I later discovered CMake and found it much, much easier to learn.

On an unrelated note, the best comment I have ever come across is related to Boost Build: https://github.com/openembedded/openembedded/blob/fabd8e6d07...

Boost.Build is so good that even Boost is moving to CMake

Did they also recommend any documentation or examples? Because I've yet to find anything that comes close to the extensive CMake documentation. Heck, even GNU autotools have better docs and examples than b2.

Scons (https://scons.org/) is one alternative. The configurations (and Scons itself) are Python.

Scons has soooo many problems. You still can't pass in positional linker flags without completely hacking how it calls the linker. :(

Configuration-by-full-language-by-default is IME a relatively big red flag for things like this. Much rather have a limited configuration language, possibly with the option of going full bore if needed.

Case in point, qbs is much more workable. Declarative-glue-predefined-stuff-together most of the way, JavaScript capability if needed.

It's awful. It's not even a build system; it's a framework to make your own custom special snowflake build system that is unique to your project and which contains 1% of the functionality you get for free with cmake.

Take a trivial problem: enabling threading in a cross-platform manner. CMake: "find_package(Threads)" Autotools: "ACX_PTHREAD" SCons: Write it yourself.

Now multiply this by every single other detail of your project. It's wheel reinvention writ large, and no one else will have a clue how to build or work on your project.
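For comparison, the complete modern-CMake side of that threading example (target and file names are placeholders; the `Threads::Threads` imported target has been provided by `find_package(Threads)` since CMake 3.1):

```cmake
find_package(Threads REQUIRED)
add_executable(app main.cpp)
# Threads::Threads expands to -pthread, -lpthread, or nothing,
# whichever this platform needs.
target_link_libraries(app PRIVATE Threads::Threads)
```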

I liked scons but it is what I would consider a legacy project that I would not introduce in new software or actively switch to.

If you like scons check out meson, which feels similar I think.

Worse, if you try to leave those lines out because you're tired of seeing them at the start of every build file, the whole ball of spit and glue pesters you with warnings. Ugh.

It’s better than every other build system, but god I wish they had just used a preexisting language for it. The syntax for CMake just feels so janky and weird, and it’s another set of things I have to remember and (eventually) forget. It’s not like it has these amazing language concepts nothing else can do; and if they’re worried about size or dependencies something like Lua would add like no overhead and be super simple to link.

I agree. CMake's scripting language has a lot of the same pain-points as Bash. For example, the lack of a clear distinction between lists and strings.
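The list/string ambiguity in one screenful (valid CMake; variable names invented):

```cmake
set(as_list a b c)        # stores "a;b;c" - a three-element list
set(as_string "a b c")    # stores "a b c" - a single element

list(LENGTH as_list n1)   # n1 = 3
list(LENGTH as_string n2) # n2 = 1

set(tricky "a;b")         # quoting doesn't protect against semicolons:
list(LENGTH tricky n3)    # n3 = 2 - the "string" was a list all along
```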

I've sometimes wondered if CMake would benefit from getting an additional / replacement scripting language. While retaining the same underlying object model and other code, that is.

My only fear would be that it would be a first step in morphing into Bazel, which I find unbearably complicated.

Clarification: I know that Bash has arrays. But environment variables such as "PATH" use colon-separated strings.

Couldn't agree more.

Any system that designs its own DSL should be suspect: designing languages is bloody hard, and anyone who thinks they can cobble together their own to solve a problem as hard as build systems is doomed to produce something like cmake.

It's kind of a historical accident. Originally, CMake was supposed to be just a list of commands (CMakeLists.txt). Of course, it was heavily extended to what we have now.

I do understand that things change over time, from simple beginnings to complex offspring. But I agree with many commenters here: it is frustrating and a ball-ache to have to learn yet another syntax to manage build dependencies and instructions. On that note, I also detest how YAML, Python and so on make space indentation a core aspect of how the files are interpreted and processed. That idea stinks, imvho.

> it is frustrating and a ball-ache to have to learn yet another syntax to manage build dependencies and instructions.

Oh, I agree as well. I was just providing context.

The problem with modern cmake is that it cannot run on older systems. So if you want to use it, you end up either statically compiling an incredibly hard and tedious dependency tree or having to use a just-released OS.

What I want out of a make system isn't bleeding edge features. I want to be able to use it for more than 3 years.

A nice tip: on most systems you can install a very recent CMake from pip.

Just `pip install cmake` and then you can require version 3.12 and don't have to worry about ancient Ubuntu packages or whatever.

On systems where you don't need to do this (new enough), it'll work. On systems where you do need it, it will break things. And because it's pip, it'll be incredibly hard to figure out what broke and how to fix it.

In general you can avoid pip breaking things by doing so inside a virtualenv. The following should work on at least the last few years' worth of OSes:

    $ virtualenv /tmp/ve
    $ /tmp/ve/bin/pip install -U pip setuptools
    $ /tmp/ve/bin/pip install cmake
    $ /tmp/ve/bin/cmake ...
    $ rm -rf /tmp/ve
On some OSes (e.g. Debian and derivatives) you'll need to install virtualenv itself from the OS first, but that won't break things because it's from the OS.

On sufficiently old OSes, you may need to set PIP_INDEX_URL=https://pypi.org/simple/ and PIP_TRUSTED_HOST="pypi.org files.pythonhosted.org" to disable certificate verification. (I'm not sure of a good way to work around this problem. In theory, Python 3 would solve it, but those same old OSes have an old enough Python 3 that the latest version of setuptools fails, and I can't figure out how to install an old enough setuptools.)

Also - if you need to unbreak your system Python, in general it suffices to ensure that /usr/local/lib and ~/.local/lib have no pythonX.Y directories with anything in them. (Empty directories are fine.) At my last job where we needed to give non-sysadmins root access on certain machines, I added a Nagios check to /usr/local/lib/python*, which was remarkably effective at catching problems before they turned inexplicable.

Well, the first page touches briefly on this:

> You should at least install it locally. It's easy (1-2 lines in many cases), and you'll find that 5 minutes of work will save you hundreds of lines and hours of CMakeLists.txt writing

I agree that it's cumbersome wanting to use tools that are not readily available in most commonly used systems. On the other hand, wanting to keep support for old systems that you may only hypothetically want to use shouldn't be limiting your choice of tools (or better versions of a tool). Achieving an easy and straightforward means of installing should be a goal for the tool itself, as it is the case for latest versions of CMake (assuming that phrase about 1-2 lines is true).

I agree that 3 years, or even 5, is an acceptable amount of time to keep using the same version of a tool. I'm currently at CMake 3.5, the one that comes with Ubuntu 16.04

What systems does modern CMake not support?

The Android NDK only supports CMake 3.6, for example.

There are two different things: CMake support within the NDK (by Google) and NDK support in CMake. The latter is usually easy to use and supports many NDK revisions.

See: https://cmake.org/cmake/help/v3.7/manual/cmake-toolchains.7....

There are plans to eventually update cmake support.


But I guess you already know how slowly things in the NDK progress anyway.

If so, that sounds like a lacking of the Android NDK and not the other way around.

Sure, doesn't change anything for me.

> The problem with modern cmake is that it cannot run on older systems.

Uh? They ship fully static binaries that work all the way back to CentOS 5 and other 10+ year old Linux distros.


I have a 3.something version running on OpenSuSE 11.2 (that thing is ancient). Spent maybe a day disabling various features to make it compile. Still didn't run into anything that needed cmake to support encryption.

The problem with modern cmake is the huge amount of dependencies it requires.

The only required dependency to build CMake is libuv. Are you talking about something else?

This is quite welcoming. I have tried to digest cmake docs and get so lost.

Am I the only one that can’t grok cmake docs?

Edit: spelling fixes

I found the function-level documentation baffling until I carefully read through the 'buildsystem' documentation, which lays out the fundamental model of how cmake works:


It's not handholdingly simple, but it is precise and fairly complete, so you have a chance of understanding what is actually going on. All of the tutorials I've seen are just cookbooks with no explanation of how anything works, and they mostly describe out-of-date or just plain bad approaches. The rash of "modern cmake" stuff avoids the latter, but really assumes you already know cmake. So that page of documentation was really crucial for me.

The 'language' documentation is also helpful:


The only way I've been able to make any sense of CMake docs is in conjunction with examples -- searching on Github/enormous open source projects using it (e.g. LLVM). Just reading CMake docs to understand how to use directive x almost always leads to failure.

Yup, agree. I'm like: I know project X compiles fine, let me look at their cmake file... copy pasta.

I’m not proud of it, but it works.

Yeah cmake docs suck. I dig the concept but figuring out how to use it is/was painful. I have not looked in a while.

It used to be the only chance you had to learn it was a book you could buy.

Oh no, you're not at all alone. I've actually used CMake as a "how not to do docs" anti-example in the past. More than once, actually.

What I find a big shortcoming of CMake is that it has no support for building for multiple architectures at once.


Quote: "The fundamental problem with supporting multiple architectures is that pretty much all of CMake is designed to support one architecture at a time. Modules/*, CMakeCache.txt, etc. are all built around only finding, using, and building one artifact per library (OS X universal binaries work with multiple architectures because they are still only one file). I think even your "toolchain scope" approach would end up being used in practice to wrap the entire CMakeLists.txt file."

CMake replaces `configure` scripts. It would be difficult to imagine what multi-architecture support in a single build directory / command-line invocation would look like.

Instead, what we do is to wrap calls to CMake in a script that makes choices about build directories, which flavours to build by default, which mobile SDKs exist, etc. When producing a release, we notably don't use this, because we are precisely interested in building for each architecture in parallel on different machines.

Yes, for cross-compiling to targets with different configurations we also invoke CMake with lots of options.

However, having to wrap calls to CMake in your own scripts is quite ugly. Are those scripts cross-platform, for starters? As soon as you start writing code to drive CMake, that seems to defeat the purpose of a build generator.

OpenCV includes a python script to build a framework for iOS. I agree, it’s not great.

> It would be difficult to imagine what multi-architecture support in a single build directory / command-line invocation would look like

That's exactly what MacOS fat binaries are. Single binary contains sections for multiple architectures.

Please don't use cmake, the dependency chain it pulls in on a bare machine is massive.

I hate and am deeply suspicious of dependencies but I just did 'brew deps cmake' on my machine and it printed nothing.

Some CMakefiles have many and gratuitous dependencies but that's the fault of those authors.

If dependencies are a problem, you could always use the pre-compiled version from cmake.org. It uses the bundled 3rd party libs.

I prefer software that uses mature 3rd party libs over NIH-driven rewrites. Please also keep in mind that CMake is available for many platforms, some with a fading user base like AIX, HP-UX, or Solaris. Using a well-tested event library like libuv helps keep those supported.

What does it require over and above what you'd need for development anyway? The vagrant provision script for my Ubuntu VM installs this lot before grabbing the CMake source and building it:

    apt-get -y install subversion git make gcc g++ subversion emacs git-gui zip gksu synaptic gdb valgrind
I'm pretty sure that not all of these are actually required. You do need gcc and g++, I'm sure, but I expect Valgrind and GDB are strictly optional...

Gentoo lists:

    emacs? ( virtual/emacs )
    system-jsoncpp? ( >=dev-libs/jsoncpp-0.6.0_rc2:0= )
most of which have dependencies as well, which is honestly ridiculous for a build system.

Those look like exactly the kinds of dependencies I'd go for if I needed to write a new C build system. Nothing ridiculous about them.

The emacs dependency is odd to say the least, but I'm pretty sure CMake doesn't actually depend on it. I don't know much about Gentoo but perhaps the question mark after it means it's just a suggested dependency.

It will be the cmake-mode for editing CMakeLists.txt files.

Any dep with a ? is optional.

Perhaps they configured it that way? Maybe these are optional dependencies that provide additional functionality.

I was able to build it on a Ubuntu VM after installing no more than gcc, g++ and make.

apt list --installed suggests this VM does have a surprising amount of stuff installed (more than I'd expected at any rate), but of the dependencies you list it appears to have zlib, curl, and nothing else.

Gentoo gives 'use' flags for configurable options, I stripped those out and posted the hard dependencies.

FreeBSD only lists 6:

> libcurl.so : ftp/curl
> libexpat.so : textproc/expat2
> libjsoncpp.so : devel/jsoncpp
> libuv.so : devel/libuv
> librhash.so : security/rhash
> libarchive.so.13 : archivers/libarchive


I also build CMake from source on Ubuntu (because they used to ship ridiculously ancient versions, and I have kept doing it since), and there is the possibility to use bundled versions of those dependencies. Also, for a development tool I see nothing wrong with those dependencies; I do not have to redistribute any of this.

And for those compiling everything from source, like Gentoo or FreeBSD ports users, I do not understand the complaint, because most of the time is spent building gcc or clang anyway, which is absolutely not optional.

What's wrong with dependencies?

Every dependency increases the fragility of your program. What if you update the dependency and it breaks your program? What if you have two programs dependent on the same library -- but different versions? This just scratches at the surface of the problem.

Sometimes the risk is worth it: you need some complex functionality not worth writing yourself. In that case it's a good thing. But understand that it's a tradeoff.

This is a very dangerous argument for anything but the most domain-specific or simple logic. Every time I roll something non-trivial myself rather than using the widely tested and "battle hardened" alternative, that increases the fragility of my program.

There are times when it makes sense, but those are the special cases, and carry a cost which should be considered.

> you need some complex functionality not worth writing yourself.

well, CMake supports downloading stuff from the internet - git repositories, etc. If you want to be able to download from https:// addresses I sure hope that you won't reimplement it yourself.

cmake actually has an issue where one of its dependencies depends on cmake; worlds of fun.

Dunno which system you are on; here is what it pulls in on mine:

curl libarchive shared-mime-info jsoncpp libuv rhash

and you can always download a prebuilt static binary from their website: https://cmake.org/files/v3.12/cmake-3.12.1-Linux-x86_64.tar....

What do you advise using instead?

There is the problem indeed.

In 2018, the age-old problem of the build system remains entirely unsolved.

CMake is terrible, but so are all the others.

I believe the CS community as a whole has gravely under-estimated how hard the build system problem actually is.

We're only starting to recognize this.

Ideally there would be something like premake but with a smaller and more focused DSL.

I could also recommend Craig Scott's recent CMake book: "Professional CMake": https://crascit.com/professional-cmake/

IMHO it's the best CMake book available right now. I especially liked the recommendations at the end of every chapter.

I think the best advice I can give to someone learning about cmake is to use meson instead. I once had dozens of codebases using cmake, and have since moved most of them to meson; I start most new projects with meson.

I dislike meson: it mixes configuration with generation, when configuration could live in a separate project (if one is even really needed, since sh is fine).


It mixes configuration (as in ./configure) with generation (as in build-file generation).

CMake is the Perl of build systems: useful and feature-fat, but with one of the worst DSL syntaxes I've had to grapple with, barely better than that of a Makefile.

The CMake language is much, much worse than the beautiful makefile syntax.

lol, calling the makefile syntax beautiful is quite a stretch, but maybe having been exposed to enough cmake code slowly rewires your brain to the point where that can happen :)

This is really not an introduction to CMake. It is more like a collection of the author's opinions about CMake, barely categorised into sections, and some of them quite contentious.

An introduction would be example driven, starting with a very short but complete example:

    cmake_minimum_required(VERSION 3.1)
    project(FooBar)
    add_executable(Foo foo.cpp)
    add_executable(Bar bar.cpp)
It would explain projects and targets (noting the different meaning of “project” to some IDEs including Visual Studio: project<->solution and target<->project). Then you would progressively build from there. Start by adding a dependency such as protobuf to show find_package() and target_link_libraries(), showing both the new protobuf::libprotobuf target dependency and the old-style ${Protobuf_INCLUDE_DIRS} and ${Protobuf_LIBRARIES} variables. Then make some shared code between your two executables into your own library, discussing shared vs static. I would not even mention variables until after this point (even though ${Protobuf_INCLUDE_DIRS} already is a variable).
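The progression described above might look roughly like this (a sketch; `protobuf::libprotobuf` and the `Protobuf_*` variables follow the conventions of CMake's bundled FindProtobuf module, and the imported target requires CMake 3.9+):

```cmake
cmake_minimum_required(VERSION 3.9)
project(FooBar)

find_package(Protobuf REQUIRED)

add_executable(Foo foo.cpp)

# New style: link against the imported target, which carries its
# include directories and compile options along with it.
target_link_libraries(Foo PRIVATE protobuf::libprotobuf)

add_executable(Bar bar.cpp)

# Old style: consume the variables the find module sets.
target_include_directories(Bar PRIVATE ${Protobuf_INCLUDE_DIRS})
target_link_libraries(Bar PRIVATE ${Protobuf_LIBRARIES})
```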

In other words, an introduction should be top-down and pedagogical. This is the exact opposite of that: picking through a CMakeLists.txt from the bottom up, one line at a time, before you even know what the point of it is.

Perhaps I misread the title: I read it as “introduction to CMake [but using modern techniques]”, but maybe it’s “introduction to the modern bits of CMake [assuming you already know the old stuff of CMake]”. But even that could be example driven, admittedly with more effort: start with an old crusty CMakeLists.txt with plenty of bad habits and make it better one step at a time. Or have lots of little CMakeLists.txt with one bad habit at a time, and fix each of those.

I am not convinced by some of the recommendations it makes either, although I think there will always be some disagreement about these things. I have already given my view on the “cmake_policy” atrocity (which is the very first thing in “Introduction to the basics”!) in another comment here. And in the examples there are workarounds for old versions of CMake that don’t have targets for e.g. find_package(Boost), by creating interface targets from the old _LIBRARIES and _INCLUDE_DIR variables. These are very neat but not very accessible to CMake beginners. It would be much simpler to put up with using the old variables, or to commit to increasing your minimum supported CMake version and forget them entirely.
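For reference, the workaround being described wraps the classic variables in an interface target so the rest of the project can stay target-based. A sketch, using FindBoost's conventions (its own Boost::boost target only appeared in CMake 3.5):

```cmake
find_package(Boost REQUIRED)

# Older FindBoost modules don't define Boost::boost, so synthesise
# it from the classic variables the module does set.
if(NOT TARGET Boost::boost)
  add_library(Boost::boost INTERFACE IMPORTED)
  set_target_properties(Boost::boost PROPERTIES
    INTERFACE_INCLUDE_DIRECTORIES "${Boost_INCLUDE_DIRS}")
endif()

add_executable(Foo foo.cpp)
target_link_libraries(Foo PRIVATE Boost::boost)
```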

I used CMake, and I liked that it can produce NMake files, which can be used for building with the Windows SDK (not bloated Visual Studio; the SDK contains just headers and a compiler) on Windows XP.

And generally it is much easier to use than manually writing Makefiles or using Autotools with its weird, unintuitive syntax. As I remember, they use the `dnl` keyword for comments (it comes from m4, where it stands for "delete to newline", not "download").

You like CMake in much the same way people like Visual Basic: it has lots and lots of features and therefore people find it useful.

But, just like VB isn't a good solution for an embedded language, it doesn't mean cmake is a good solution to the general problem of building large codebases.

No, I like CMake because it is better than writing Makefiles manually or using autotools.

One of my problems with "modern" CMake is the same as my problem with "modern" C++:

Their proponents argue for using certain new language features to get things done. But because of backwards-compatibility concerns, the system still supports the old, quasi-deprecated constructs for doing those same things.

And so you end up with an overly complicated language that seems to have multiple, reasonable ways to do the same thing. And codebases containing a mix of the two, even for newly written code.
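A concrete instance of that duplication (a sketch): the old directory-scoped commands and their modern target-scoped replacements both still work, and both styles turn up in real codebases, often side by side:

```cmake
# Old, directory-scoped style: affects every target defined in this
# directory and its subdirectories, invisibly.
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)
add_definitions(-DFOO_ENABLE_BAR)

add_executable(legacy_app legacy.cpp)

# Modern, target-scoped style: the requirements attach to one target
# and propagate (or not) according to PRIVATE/PUBLIC/INTERFACE.
add_executable(modern_app modern.cpp)
target_include_directories(modern_app PRIVATE
  ${CMAKE_CURRENT_SOURCE_DIR}/include)
target_compile_definitions(modern_app PRIVATE FOO_ENABLE_BAR)
```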

IMO it would be better for CMake to make a clean break, and have CMake 4 only support "modern" CMake.

(edit: And, as I've posted elsewhere, switch to a more robust scripting language.)

There are some good recommendations in here! Does anyone know of a way to check for, or enforce, these recommendations? The only CMake linters I could find seem to focus on whitespace and naming issues.

Can anyone with experience with Bazel, Buck, or Pants chime in?

They are quite different. Speaking of Bazel, it's designed to be an all-inclusive build system for any language at essentially infinite scale. Which means it:

- Has a nice Python-based DSL

- Strongly encourages 100% explicit dependency specification

- Has built-in caching (local and remote)

- Has built-in test-result caching

- Has a fully featured build-graph query language

- Has built-in distributed execution support (for any build step)

- All work scales with the part of the tree you are building, not its absolute size

- Support for fetching external source code and binary deps

- Has first-class support for executing and testing containers

- Has first-class support for code gen (i.e. you can build a compiler, use it to generate code, then build its output, and everything works in one build system with no hacks)

The downsides:

- Works best if all your code is built with Bazel, which makes external deps hard

- Support for Python is weak, Ruby non-existent, and Node beta quality

- Has essentially zero convention; everything must be explicitly configured (e.g. explicitly describing Go deps)

- Has a memory-hungry local Java daemon

- Has more overhead than Ninja (but is more accurate because of hashing and isolation)

One downside I found: it makes heavy use of symlinking, which doesn't always play well with other tools (in particular, Go tools). I found this particularly annoying when using Bazel both inside and outside Docker: the symlinks it leaves behind don't match what's on my host.

I've had to fight through Bazel somewhat when working with Tensorflow.

My impression was that it probably suits Google's internal needs very well, but it's really complicated compared to CMake. I'm not sure that for most developers the learning curve would be worthwhile.
