It's already bad enough that the developer needs to understand both target-based CMake and legacy CMake, now there need to be two more languages (slide 13) that must be understood? It's been 9 years since Modern CMake came out, and I'd estimate adoption at 50-50.
This deck does not address the drawbacks of doing this at all, and the only recognition of this problem is in "Stretch Goal: Research development of tool(s) to translate legacy CMake to new CMake language."
The current CMake language is fine. Nothing amazing, but good enough.
But every moment that I spend configuring CMake is a moment I don't spend doing something actually useful. I don't want to disparage the team's work--CMake is awesome software--but the way they see it is very different from the way typical developers see it.
> The current CMake language is fine. Nothing amazing, but good enough.
Lol CMake was a huge reason I got out of C++ development almost a decade ago. I was tired of scripting my own shitty build systems, especially ones which couldn't even manage packages. I never actually need the power CMake offers, but the stuff I do need (reproducible package management) CMake punted on. 99.9% of the time, all you need is a build tool that takes a list of dependencies and a lockfile (or similar).
It's really quite glorious how absurdly complicated the C++ development environment is. Want to build something? Well, you'll need a C compiler. Want multiple files in it? Well, you'll need a Makefile. Want a dependency? Well, now you have to decide whether you want cmake, autotools, bazel, scons, meson, ninja, or your own bash script. Why? Who knows! Oh, does that install the dependency for you? No, don't be silly; those tools simply determine what your current platform has.
Yeah, and the real tragedy is that these are all solved problems in other ecosystems. The C++ folks often don't know what they're missing out on. Which is too bad, because while C++-the-language gets a lot of flack, modern C++ is actually pretty cool (still a bit of a bear to figure out which features not to use). But the tooling is just insanely cumbersome (it's been a while since I've developed C++, but last I checked even getting decent editor support was absurdly hard and the various Vim/etc plugins that promised said support never worked out of the box).
> The C++ folks often don't know what they're missing out on.
This assumption implies that C++ developers are all one-track wonders that lock themselves in a basement and have no contact with any other programming language or tooling.
> But the tooling is just insanely cumbersome (...)
It really isn't. Node.js with npm or yarn is a far greater mess than anything that happened in C++.
I'd say that 9 out of 10 times people complain about the C++ ecosystem, they are actually talking from a place of ignorance and at best are only complaining about the mess they needlessly dug themselves into.
Case in point: who in their right mind feels the need to bolt a different scripting language onto cmake, a build system that only rarely needs a damn if statement in the project? These guys are clearly deeply invested in making their lives far harder and worse than they need to be. I mean, where's the need?
If anything, this wave of contrarian bullshit targeting C++ only shows the high volume of inexperienced and clueless developers who decided to skip learning the basics and instead thought it was a good idea to dive head-first into wheels reinvented very poorly, and from there proceed to depict their own mess as representative of everyone else's experience.
I don't have problems with dependencies in C++ using CMake. Maybe I'm doing it wrong xD.
It is harder to get 400 dependencies than with other ecosystems, I'll admit that. But I see this as a feature. I can't even start to imagine how people deal with security when the most basic app they write gets tens (hundreds?) of transitive dependencies. I guess they don't. Nobody cares about security.
That was not the point. It was “want to develop something?”
Not sure why a makefile is so bad.. it’s just a file generation tool which can check out-of-date files and specify dependencies by pointing out which input files are needed for an output.
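A hedged sketch of that idea (file and target names here are placeholders):

```make
# app is relinked only when an object file is newer; each .o is recompiled
# only when its .c source (or the shared header) changes.
app: main.o util.o
	$(CC) $(LDFLAGS) -o $@ $^

%.o: %.c util.h
	$(CC) $(CFLAGS) -c -o $@ $<
```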
Your answer is only good if you're writing an utterly trivial program. For any real application, you need some kind of build system, not just a g++ command line.
Makefiles don't work for extremely large and complex applications. That's the whole reason we have junk like CMake and Gradle.
The question you should be asking yourself is "why? why does my C++ project need 18 programming languages to build and java needs an xml file, javascript some json, or rust a toml file".
How many times did you have to build a Java or JavaScript project where you are counting kilobytes, or pre-generating data tables for maximum efficiency which you want configurable as a build option because you're going to need different tables for different hardware/targets? People with this sort of problem aren't even considering Java or JavaScript, so of course in C++ we end up with a hairier solution. A lot could be solved if there were good compile-time reflection, for sure, but still.
An example: the main software I work on, along with a few other programs I know, has to generate an SDK as part of its build so that people can build plug-ins for it. How often do you have to do this in Java or JS? Almost never. But this requirement implies having a build system powerful enough to do that.
Another example: another piece of software I work on must produce a Max/MSP package; this implies:
- working with multiple versions of the Max SDK which changed their structure on disk
- setting different build options such as file naming depending on the target architecture: .dll on win32, .dll64 on win64
- generating a relatively complex tree structure which combines a Windows and a macOS build together because people don't want the hassle of having two different download options
- generating some stub C file that will declare a function to be exported
- generating some .json file with various build information, build date, etc
- generating a symbols.txt file which contains all the symbols that are known to be explicitly exported to minimize binary size; the build system has to list the symbols (one per plug-in we build), write them to a symbols.txt file, and pass that file to Apple's dyld like this:
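The command at the end of that list is cut off in the comment, but the usual mechanism on macOS is ld64's `-exported_symbols_list` flag; a hedged CMake sketch (the target name and file location are placeholders):

```cmake
# Restrict exported symbols to the ones listed in the generated symbols.txt.
# The LINKER: prefix expands to -Wl,/-Xlinker as appropriate for the toolchain.
target_link_options(my_plugin PRIVATE
  "LINKER:-exported_symbols_list,${CMAKE_CURRENT_BINARY_DIR}/symbols.txt")
```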
> these are the requirements we have. can you solve these requirements with just Java's XML
Yes, actually. They don’t come up very often, but maven in particular has the ability to do everything you’ve mentioned. Most of them built right into the standard plugin set, like setting various compiler options, adding build information, or even platform specific build shenanigans (usually when dealing with javafx).
Maven also has a rarely used escape hatch plugin (ant run) which allows you to run custom scripts if you really need to.
can you show an example of how you'd parse, say, a .java.in file from Maven (say match some lines with regexes), and generate a .java file from it, that you'd add as part of the build? And could you really say that this would be cleaner than the same thing done with cmake?
> can you show an example of how you'd parse, say, a .java.in
The canonical way to do such a thing is through the java annotation processing api [1] and using a tool like java poet [2]. Before you did that, you'd probably decide if you wanted to instead use bytecode generation with a library like bytebuddy [3]
But, assuming for some reason, you wanted to torture yourself and actually consume a java.in file and apply a regex, then you'd probably pull out the "maven-replacer-plugin" [4] and configure that for the task at hand. (or use your favorite templating language plugin. There's a million of them).
Though, to be fair, this really isn't something that comes up in regular java programming due to the nature of the ecosystem. Anything you'd want to codegen likely already has a library and anything you didn't would receive (legitimate) push back.
> And could you really say that this would be cleaner than the same thing done with cmake?
Probably not. But not really the main point. cmake becomes a problem when you want to do anything more complex than string replacement. Imagine, for example, if you did want to use a code generation library within a project. The strength of maven is a new team dev only needs maven. Further, if they need to update that library they only need to change that maven file.
What happens with cmake if you wanted to update a minimum dependency?
I just don't understand in which universe having to look for X plug-ins which may or may not be maintained (the first I looked at had its last update in 2015) is better than cmake; cmakelang could literally be a Befunge variant of INTERCAL and it'd still be a better choice than this for me.
Part of the answer is that C++ wants to interrogate your system or build root to understand its capabilities. "Call this function if available" or "Access this variable if it exists" are more common in C++ projects, compared to the other languages you listed.
Rust tries to avoid that by hard-coding availability into the libraries themselves, but the result is less reliable. For example I cannot use `libc::mkostemp` on macOS because Rust has (incorrectly) hard-coded that it is not available on that platform.
CMake is able to detect it, but we pay a cost in build system complexity. I wish instead this detection could be done at runtime.
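The configure-time detection being described usually looks something like this in CMake (mylib is a placeholder target, and the header list may vary by platform):

```cmake
include(CheckSymbolExists)

# Probe the actual toolchain/sysroot instead of hard-coding availability.
check_symbol_exists(mkostemp "stdlib.h;unistd.h" HAVE_MKOSTEMP)

if(HAVE_MKOSTEMP)
  target_compile_definitions(mylib PRIVATE HAVE_MKOSTEMP=1)
endif()
```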
But that was too boring, so people turned to gradle and we're back at ad-hoc scripting in a bizarre Turing-complete language. Believe me, I've hated maven with a passion (and still do) but this episode taught me build systems are nothing but a zero-sum game and typical junior-grade pastime.
Update: the story with Node.js is a bit different, but package.json was soon augmented by package-lock.json and yarn at about the same time Webpack and similar became kitchen sinks requiring config-generator tools such as create-react-app. The moral is, the grass is always greener on the other side, I guess. Though tbh, cmake always was absolute garbage, making autotools look sane in comparison. Now I hear there's legacy/unmaintained and new cmake syntax; what were people thinking?
This is a straw man. The objection was never "XML is too boring", the objection was "the XML models an actual programming language, so we're doing imperative programming but without the ergonomics and syntax of a programming language". "It's just XML/YAML/HCL! Way easier than programming in an imperative language!" is such a farce (not only among build systems but especially among infrastructure-as-code solutions).
Just because it accepts XML doesn't mean your build system is meaningfully declarative.
Oh, I fully agree with your argument as it applies to the old ant build tool, which is in fact a programming language using XML serialization - the author even apologized for it when he realized syntax wasn't the hard part. But a maven pom.xml is actually a component model serialization rather than a programming language in disguise - unless you're referring to embedded ant scripts in maven files which is a thing, but not common enough to warrant criticism IMO. Scripted deployment tasks in a Java context were more commonly seen in Jenkins, and only recently spill into gradle build scripts (I've always thought gradle was following Jenkins' Groovy syntax for eventual integration but that hasn't happened).
I have worked with Android (ant, maven, gradle), iOS (CocoaPods, Carthage, SwiftPM), C++ (CMake, autotools, meson) and some Python.
Never needed 18 programming languages to build. I actually don't think CMake is harder than Gradle or SwiftPM. Most people I see complaining about CMake actually write bad CMakeLists, so the way I see it is that they don't want to learn the basics. And that's maybe why e.g. gradle works better: for many projects, the IDE is probably able to automatically handle an imperfect and not-so-readable gradle configuration, so people don't need to learn gradle.
But the result is the same: they don't really master their build tool.
> The question you should be asking yourself is "why? why does my C++ project need 18 programming languages to build and java needs an xml file, javascript some json, or rust a toml file".
But your C++ project never needed 18 languages.
If you feel it does, you need to take a step back and take a serious look at the mess you're making because you're making at least 17 mistakes.
Tell me, what exactly does your build need that is not met with the following:
In fairness, many of the CMake criticisms also apply to Gradle in the Java world, but yeah, Go just needs its go.mod file, Rust just needs its Cargo.toml file.
I would argue that Gradle is fine, just like CMake is :-).
But there is a philosophical question here: I think that developers should master their basic tools to the point where they don't make a mess writing a CMakeLists.txt or a build.gradle.
It feels like many devs think that a tool is too complex if you need to learn it.
99% of the time, but Rust also has build.rs as an escape hatch which lets you run your own code at build time (e.g. a code generator like gRPC). The sane part is that you do all this in the same language.
Agreed, but “escape hatch” is the salient distinction. It’s not the default, contra CMake. Also, Rust pays dearly for build.rs support in terms of difficulty optimizing compile times, so supporting that extra 0.1% ain’t free.
I mean, my remark was a bit of an oversimplification. Modern build systems do a bit more than take a list of dependencies--they'll also allow for bundling data files into the compiled artifact, platform-conditional compilation units, and a few other things. But they don't expose these things as a fully imperative scripting language / meta build system--a few bits of declarative configuration can cover 99.9% of use cases.
> CMake was a huge reason I got out of C++ development almost a decade ago
That's funny. After thirty-odd years of C++ development, it's only in the last year or so that I've started to take CMake seriously, since a critical mass of other people seem to have (inexplicably) decided to treat it as something akin to an official C++ build system.
> It's already bad enough that the developer needs to understand both target-based CMake and legacy CMake (...)
No. "Legacy cmake" has been made obsolete with the release of cmake 3.0, almost a decade ago. There is absolutely no excuse to use anything other than the declarative, target-based "modern cmake" style.
I'd go as far as claiming that 95% of all use cases boil down to simple straight-forward one-liners with modern cmake, with the remaining 5% corresponding to putting together custom cmake find modules, packaging, and deploying runtime dependencies.
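For what it's worth, the "simple one-liners" style being claimed here looks roughly like this (fmt is just an example dependency):

```cmake
cmake_minimum_required(VERSION 3.14)
project(myapp CXX)

find_package(fmt REQUIRED)            # one line per dependency

add_executable(myapp main.cpp)
target_compile_features(myapp PRIVATE cxx_std_17)
target_link_libraries(myapp PRIVATE fmt::fmt)
```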
> (...) now there need to be two more languages (slide 13) (...)
I'd argue that if anyone feels the need to write scripts to do something in cmake, they are already doing something awfully wrong to start with.
> No. "Legacy cmake" has been made obsolete with the release of cmake 3.0, almost a decade ago. There is absolutely no excuse to use anything other than the declarative, target-based "modern cmake" style.
"made obsolete" != expunged from codebases. I'm sure there are plenty using cmake 2.x conventions that haven't been updated, and need maintaining. And in order to either maintain or upgrade, one needs to know both conventions.
> And in order to either maintain or upgrade, one needs to know both conventions.
No, not really. You only need to know the declarative, target-based modern cmake. Most of the ad-hoc nonsense done with legacy cmake is either features that since the 2.x days were added to cmake, or nonsense that should never have been written to start with.
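To illustrate the stylistic gap being claimed (Foo is a placeholder package):

```cmake
# Legacy style: directory-scoped commands that leak into every target.
include_directories(${FOO_INCLUDE_DIRS})
add_definitions(-DUSE_FOO)

# Modern style: usage requirements attach to a target and propagate
# automatically to anything that links against it.
add_library(mylib src/mylib.cpp)
target_link_libraries(mylib PUBLIC Foo::Foo)
```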
You're missing the point -- it was written and still exists, and still needs to either be maintained or re-written. You can't do either of those things if you don't understand it.
> You're missing the point -- it was written and still exists,
No you're completely missing the point. If it exists and needs to be managed then anyone in their right mind migrates their cmake project to modern cmake.
Why? Because odds are any weird thing happening in legacy cmake projects happened because either cmake didn't support things way back then or old timers needlessly made a mess for themselves.
Maintaining code does not require you to keep messes around. You're expected to pay legacy debt, learn from mistakes, and right whatever wrongs you made. With cmake projects, this means modern cmake. No excuse.
> and still needs to either be maintained or re-written.
It's not a "or". No one forces you to not fix mistakes. You only keep them around if you wish to, and if that's your own personal decision then the responsibility of creating that problem is on you, not on the tooling.
I'm talking from experience. I've maintained a dozen or so legacy C++ projects, some of them with autotools, qmake, and the deprecated imperative cmake style, put together over a decade ago. Porting stuff to modern cmake is trivial. There is no excuse.
What a fascinating planet you must live on where legacy doesn't exist, or if it does developers can see into the future and know what will be 'nonsense' in 10 years time, or make the whole thing moot because of course they rewrite their build framework for every release of the build system, having unlimited resources at hand. Must be nice there.
Apart from what sibling commenters say, the problem is that cmake's original syntax was introduced with the same glowing fanaticism (over plain old Makefiles) that's now on display with cmake 3.0 (or is it 2.0?) over "old" cmake, making this enthusiasm not quite justifiable.
If your old lib can't be built due to a deprecated build system, that's already the greatest possible disaster for a build system, as all your automatic tests etc. might not run anymore and you lack a basis for releasing anything at that point. You need to set up an entire new project for recovery.
It seems common for people to try to have CMake run Python scripts, install stuff on the system, and do all sorts of custom tasks. I'm sure other build systems also allow devs to make a mess if they want to. Maybe some are less powerful, so that people cannot make such a mess even if they want to. But "the tool is too powerful, I will shoot myself in the foot because I don't want to learn the basics" does not sound like a good argument to me.
> It's been 9 years since Modern CMake came out, and I'd estimate adoption at 50-50.
> The current CMake language is fine. Nothing amazing, but good enough.
I'd argue these two statements are in conflict.
I came to c/c++ after several other languages and ecosystems -- the c++ dependency/build ecosystem is by far the worst.
CMake does work. Its underlying capabilities are great, even. But "modern cmake" is sort of a joke syntactically. Generator expressions are indecipherable. Semicolon-delimited arrays, no distinguishing between input/output variables... I personally would love a different DSL frontend to CMake.
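For readers who haven't run into them, generator expressions look like this (mylib is a placeholder target):

```cmake
# Define ENABLE_LOGGING only in Debug builds; add /W4 only under MSVC.
target_compile_definitions(mylib PRIVATE $<$<CONFIG:Debug>:ENABLE_LOGGING>)
target_compile_options(mylib PRIVATE $<$<CXX_COMPILER_ID:MSVC>:/W4>)
```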
> I don't want to disparage the team's work--CMake is awesome software--but the way they see it is very different from the way typical developers see it.
I have to agree with this. The underlying core of cmake is actually pretty awesome. Modern cmake is an improvement in capability. The team deserves credit for that.
The absolutely atrocious current cmake language (where you can't even make a list of strings! The ONE THING a make replacement should fix!), in combination with the prevalence of legacy cmake everywhere (even in Find*.cmake files distributed by cmake!), are the two primary reasons why I'm using and rooting for Meson over CMake.
But its biggest issue, arguably, is the extreme proliferation of bad information. Everything on StackOverflow and similar sites is outdated and bad practice for the most part. You're right that introducing a new language which deprecates the old will make this an even bigger issue.
Are you a CMake expert, or are you rather not really good at it? Just to see if there is a correlation between "I don't know how to use it properly" and "I hate it" =).
I have spent more time screwing with cmake than learning multiple other (actually useful) languages that I am an expert at.
I have built cmake systems from scratch and replaced others with vanilla make. I am currently stuck dealing with a cmake project that would make all of Byzantium blush.
I agree that cmake is terrible. The more I learn to use it properly, the more I hate it.
In addition:
If you think cmake is solving any problems that vanilla make does not handle better, then either (1) you don't understand how to build and package software, or (2) haven't spent an hour or two reading the gnu make manual and "recursive make considered harmful".
I got to line 300, and counted > 2^26 possible build configurations, then got bored.
Presumably all of those are tested in CI on each PR?
The cmake you linked to is a classic example of the "not knowing how to build or package software" cmake anti-pattern.
As for QRCODE_INC, I'm guessing that it is just an include directory. Put qrcode.h in /usr/include (or /usr/local/include, or wherever the compiler looks by default), or set an environment variable to tell the build to look in nonstandard paths before looking in system directories.
Problem completely solved.
(Objection one: What if the installer for qrcode.h puts the header in a strange place? Solution: Fix libqrcode's package.)
(Objection two: I don't want to pollute standard paths with conflicting library versions, and can't be bothered to set environment variable overrides. Solution: Docker)
Heh, this comment reminded me of those "The Tao of ..." - in your case Make; "smack, and the novice was thus enlightened" :-D
Based on your PoV, I'm guessing pkg-config is similarly misguided, since those fools allow the end user to specify the prefix, includedir, libdir, and any cflags and ldflags for their setup. Pssh, idiots. If they'd just "sudo make install" everything, they'd be enlightened.
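The pkg-config workflow being alluded to, sketched in a Makefile (libqrcode is a hypothetical package name):

```make
# pkg-config answers "where is it, and with what flags?"
# so the Makefile doesn't hard-code paths.
CFLAGS += $(shell pkg-config --cflags libqrcode)
LDLIBS += $(shell pkg-config --libs libqrcode)

app: main.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)
```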
> Presumably all of those are tested in CI on each PR?
You don't have to be an expert to see how bad it is. CMake is as if someone took all the worst parts of C (which isn't particularly good to begin with), namely textual macros, and made a whole language out of it.
Because you can make a mess with CMake does not mean it's necessarily terrible. I'm pretty sure I can make a mess with any tool/language.
Now to use a tool correctly, you certainly need some basics. Saying that those who manage not to make a mess with a tool are probably doing it wrong because you don't know how to do it feels... weird.
I just don't get the hype around make/CMake/autotools. I usually implement the build tool in the language I am working on (build.py for Python, etc.). It seems that Zig[1] also follows this.
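A minimal sketch of the build.py idea: a make-style "rebuild only when a source is newer" check in plain Python (file names and the compiler command are placeholders):

```python
#!/usr/bin/env python3
"""Minimal build.py sketch: dependency tracking in the project's own language."""
import os
import subprocess


def needs_rebuild(output, sources):
    """True if the output is missing or older than any of its sources."""
    if not os.path.exists(output):
        return True
    out_mtime = os.path.getmtime(output)
    return any(os.path.getmtime(src) > out_mtime for src in sources)


def build(output, sources, cc="cc"):
    """Recompile only when needed, make-style."""
    if needs_rebuild(output, sources):
        subprocess.check_call([cc, "-o", output, *sources])


if __name__ == "__main__":
    sources = ["main.c", "util.c"]          # placeholder file names
    if all(os.path.exists(s) for s in sources):
        build("app", sources)
```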
This is the sanest idea of all, it seems. C++ could easily have a library of build and dependency-tracking classes with a sensible API. Then you could use anything you want in your build "script" that C++ has to offer and not have to jump through hoops to accommodate the most complicated use cases. (And no, you would not be needing a build script for your build tool.)
The way it works is that if you ship an executable, then you can do whatever you want as long as it ends up building it.
If you ship a library, then you need to provide a way to install it somewhere, and ideally a way to find it (e.g. pkg-config, at least on Linux).
If you need to use CMake because your dependencies use CMake, you are doing something wrong. I typically build dependencies with all sorts of build systems, I just don't mind. I just need them to be installed somewhere such that CMake can find them.
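A minimal pkg-config file for a hypothetical libqrcode, as one way of providing that "way to find it":

```
# qrcode.pc (hypothetical) -- installed to <libdir>/pkgconfig/
prefix=/usr/local
includedir=${prefix}/include
libdir=${prefix}/lib

Name: qrcode
Description: Hypothetical QR code library
Version: 1.0.0
Cflags: -I${includedir}
Libs: -L${libdir} -lqrcode
```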
The irony of using “hype” to describe make/CMake/autotools while referencing Zig as an alternative example.
Building a build tool written in C or C++ requires a build tool — hence the existence of make/CMake/autotools, which can bootstrap themselves from a minimal environment.
If I were referencing Zig as the __only__ alternative, I would agree. What I wanted to say is: use the same language/tool that your project uses. Maybe that's impossible for C/C++, as some comments pointed out.
C/C++ compilers don’t have a scripting language built in to construct your build system on top of?
I mostly use distutils to build Python modules, but I have a couple projects where there is code generation, building static libs, etc., and I find it easier to use cmake to just pop out a Python module.
Cmake can also be convinced to make distro packages, like rpms and debs, without too much trouble — not sure how to do that in any other build system.
I don't know about the other two, but the recent resurgence of "make" seems to be around using it as a standard interface for running tasks. For example, "make build" would run build.py in a python project, webpack in a javascript project, etc.
Similarly there's things that have been ad-hoc standards forever, like "make clean" to remove all build and test artifacts, or "make test" for running tests.
Many of my coworkers only use it in this way and are unaware make looks at files/timestamps.
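The task-runner usage being described, as a sketch (the target names are the ad-hoc conventions mentioned; the underlying commands are placeholders):

```make
.PHONY: build test clean    # these name tasks, not files

build:
	python build.py

test: build
	pytest

clean:
	rm -rf build/ dist/
```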
I spent several years creating and maintaining a large, proprietary build system for a Linux-based operating system at a previous gig. Much of userspace was C/C++, and thus used CMake. To further complicate things, most of userspace needed to be cross-compilable for a macOS-based "simulator" environment.
I ended up being fairly proficient with the "modern" CMake language after investing a ton of time into it, but as the size of the company grew, I ended up being a bottleneck. Every other team found themselves subject to CMake's many footguns, and I'd end up having to step in to help.
It was maddening, but I 100% get that other teams just wanted to focus on getting their work done, rather than fighting with a poorly-documented and frustrating build system.
I, for one, would be very happy to see a new CMake scripting language. Ideally, one that is a lot more friendly for beginners and occasional users.
> I 100% get that other teams just wanted to focus on getting their work done
Getting your project to build properly should be part of "getting the work done". Modern software is way too much about stacking up hacks to "get the work done" instead of writing proper code.
Sure, there could be a better way than CMake. But it's not that hard to learn the basics.
The basics are rarely the problem. The issue is that most any large project ends up wanting to do something slightly unusual, and things quickly get weird there.
Eg, you'd think downloading a file in cmake would be fairly straightforward, but it's not really.
We have a test harness, and some of the tests want test files. Committing a bunch of large binaries to git doesn't seem like the right way to go, and the files are only needed if the tests are going to be run.
Now I could build the download into the test itself, but that's extra stuff that's not part of the test proper.
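For reference, the download itself can be done with `file(DOWNLOAD)`; wiring it cleanly into only-run-when-testing targets is the awkward part. A sketch (URL and hash are placeholders):

```cmake
set(testdata ${CMAKE_BINARY_DIR}/testdata/sample.bin)
if(NOT EXISTS ${testdata})
  file(DOWNLOAD
    https://example.com/testdata/sample.bin   # placeholder URL
    ${testdata}
    # placeholder hash; verifies integrity of the downloaded file
    EXPECTED_HASH SHA256=0000000000000000000000000000000000000000000000000000000000000000
    STATUS dl_status)
  list(GET dl_status 0 dl_code)
  if(NOT dl_code EQUAL 0)
    message(FATAL_ERROR "download failed: ${dl_status}")
  endif()
endif()
```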
These initiatives are quite bluntly idiotic. CMake's scripting language is a colossal red herring. CMake's scripting language is good enough for all conceivable use cases, and if you feel the need to write scripts, let alone complex ones, then you're already doing something awfully wrong and you should just reach for the documentation to learn how to do things properly.
My recent gripe with CMake: the language toolchain is still a singleton. You can't easily use multiple C++ compilers in the same build. This doesn't feel like a language limitation.
I also never got why we need DSLs for build systems. It seems like Turing completeness is often inevitable, no matter if people don't like it. There is no reason that a build file couldn't be mostly declarative with a general-purpose programming language though.
> the language toolchain is still a singleton. You can't easily use multiple C++ compilers in the same build. This doesn't feel like a language limitation
If you're still struggling with this, a solution is to use ExternalProject to create a "superbuild" top level project that has various subprojects being built with different toolchains.
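The superbuild pattern being suggested, sketched with placeholder paths and toolchains:

```cmake
include(ExternalProject)

# Top-level "superbuild": each subproject configures with its own toolchain.
ExternalProject_Add(firmware
  SOURCE_DIR  ${CMAKE_SOURCE_DIR}/firmware
  CMAKE_ARGS  -DCMAKE_TOOLCHAIN_FILE=${CMAKE_SOURCE_DIR}/cmake/arm-none-eabi.cmake
  INSTALL_COMMAND "")

ExternalProject_Add(host_tools
  SOURCE_DIR  ${CMAKE_SOURCE_DIR}/tools
  CMAKE_ARGS  -DCMAKE_CXX_COMPILER=clang++
  INSTALL_COMMAND "")
```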
> I also never got why we need DSLs for build systems
The same reason they show up anywhere: to have the correct level of abstraction, or its cousin "signal/noise ratio." Trying to debug "gcc -o build build.cc && ./build" sounds like the 8th circle of hell to me. I can show a concrete example: https://github.com/facebook/sapling/blob/0.1.20221201-095354... So, when this raises OSError or FileNotFoundError due to some botched assumption, I guess one just attaches pdb to it and now your job is to be a python expert
All of those are in a general purpose programming language, and they require spelunking through thousands of lines of code to surface the actual commands and dependencies that are buried therein. And, if you become a domain expert in mach, where else are you going to apply that expertise?
> So, when this raises OSError or FileNotFoundError due to some botched assumption, I guess one just attaches pdb to it and now your job is to be a python expert
Well, debugging cmake files is not any better, in my experience. And I can't even attach a debugger. I can rely on printf debugging or pray that cmake with --trace or --trace-source has the relevant information I need (often it doesn't).
edit:
Well, your disgust is justified against ad-hoc imperative build systems that happen to use a general-purpose programming language. It doesn't have to be like this. I liked the idea of scons, for example, unfortunately it had other problems.
I hear you about debugging cmake (although in this specific case JetBrains has a CMake debugger: https://blog.jetbrains.com/clion/2022/10/clion-2022-3-eap-cm... ) but one can also concede that the magnitude of code one even needs to consider are not in the same universe as each other:
I am on-board with something like Starlark, which is allowed to make reasonable build decisions with the information available to it, but is prevented from mutating my system. I try to be very disciplined about reading every .gradle file carefully, because jokers love to do whatever they want when they have a full-blown programming language available to them
The slide deck seems like a pitch for more funding to create a CMake replacement, rather than a description of that replacement, right? It ends with "A $1M+ investment in a new CMake language(s) and translation tools could avoid a future massively expensive transition of ASC [Advanced Simulation and Computing, at Sandia National Labs] codes to another tool like Meson that could easily cost more than $10M across the ASC program and negatively impact developer productivity while the transition is taking place."
I'm curious: is there any way to estimate how much US taxpayer support went into creating CMake? Would CMake exist were it not for the DOE and/or NSF grants that Kitware got?
Getting that kind of support isn't a fault of CMake, but I think it's a meaningful property in considering and comparing different software development projects, and evaluating their long-term health.
Disclaimer: author of a build system that will compete against CMake when it's released.
I'm not impressed, for two reasons:
* Backwards compatibility for the CMake language is not desirable. It would be better to allow files to be translated to the new language completely. Sure, CMake could support having files in the old language and files in the new language, but the CMake language is so bad that a break from it is the best path forward.
* The CMake language is not the only drawback to CMake, unlike the claims in the presentation. CMake's model is hamstrung as well, in several ways.
Here are some ways in which the CMake model is hamstrung:
* Generating a build file for another build system. This means that CMake always gives up control of the build. It can only tell the other build system the dependency graph and then sit back and watch. It can't regulate the build itself.
* No dynamic dependencies. If you need dynamic dependencies, you need to call into CMake to generate a new and separate build file for the second build system and then call that. And because the two build files that CMake generated don't know anything about each other, it's hard to regulate the use of computing resources between them; you either have to complete the one and start the other, or you risk over-extending on your computing resources.
* No way to generate a target that doesn't call an outside process. More generally, the only way to generate a target (unless my CMake knowledge is old) is to have the second build system create a child process. This is strictly less powerful than having the same scripting language available during configure also available in targets. For example, LaTeX is only properly built with a loop that checks for a fixed point. If you can't loop in your target, you have to use an outside bash script or something of the sort to make up for it.
* No way to have targets that do not generate files and have other targets depend on those targets. If a target does not generate a file, CMake does not know how to make other targets depend on it (unless my CMake knowledge is old).
* The model of "configure in Turing complete language and then build" also means that configuring often has to happen more than once before a build. You see this if you use `ccmake`: you configure, and new options appear. You set the options and configure again. More options might appear. Using plain `cmake` hides this by using defaults, but that just means the user might end up with a build they didn't want because they didn't get a chance to set all of the options they might care about.
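That last point is easy to demonstrate with a small (hypothetical) example: an option whose sub-options only exist once it has been enabled, so `ccmake` has to run configure twice before the user can even see them. The option names here are illustrative, not from any real project:

```cmake
# Assumed example: a dependent option that forces a second configure pass.
option(ENABLE_PLUGINS "Build plugin support" OFF)

if(ENABLE_PLUGINS)
  # This cache entry only appears in ccmake AFTER the user turns on
  # ENABLE_PLUGINS and configures again -- the multi-pass problem.
  set(PLUGIN_DIR "plugins" CACHE PATH "Directory to search for plugins")
endif()
```

Plain `cmake` on the command line papers over this by silently taking the default for `PLUGIN_DIR`, which is exactly the "build they didn't want" scenario described above.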
> Here are some ways in which the CMake model is hamstrung... generating a build file for another build system
That's the critical selling point of CMake in my estimation. When I'm on Windows, I can generate a Visual Studio project and use its excellent debugger and code analysis tools. When I'm on Mac, I can generate an XCode project. Or a CLion project, or Makefiles, or...
Decoupling the build "recipe" from the build toolchain is critical for collaborative, cross-platform, cross-architecture projects, IMHO.
> No way to have targets that do not generate files and have other targets depend on those targets.
> That's the critical selling point of CMake in my estimation. When I'm on Windows, I can generate a Visual Studio project and use its excellent debugger and code analysis tools. When I'm on Mac, I can generate an XCode project. Or a CLion project, or Makefiles, or...
I've already addressed some of this in another comment ([1]), but you bring up more points, so I'll address it here.
CMake's model forces it to generate build files for other build systems. A better model does not force the build system to do so, but it also does not preclude it.
Separately from actually running a build (and that's important), my build system will be capable of generating files for VS, XCode, and other IDEs, but the files it generates will simply tell the IDEs how to integrate with my build system by telling them how to call it with the correct build profile. And yes, it will generate multiple possible build profiles, allowing VS users to select the solution they want to build from within VS.
But this means that my build system will not only integrate better with IDE's than CMake, it will still retain full control of actual builds.
This includes using debuggers and analysis tools, by the way. I wouldn't consider my work on IDE integration done until all of that is as easy as possible, and certainly as easy as it is with CMake.
> Isn't that exactly what an INTERFACE library is for?
Maybe? It might be enough. At first glance (and I fully acknowledge that I could be wrong here), it seems like CMake expects the "target" to be a library of some sort. That's not the sort of thing I would personally want.
I would want to be able to depend on any target for any target.
But again, I could be wrong. I've never been able to understand the CMake docs.
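For what it's worth, the workaround I've seen most often is `add_custom_target` plus `add_dependencies`, which gives you a target-level (not file-level) dependency on something that produces no file. A minimal sketch, with made-up target names:

```cmake
# A target that produces no output file at all.
add_custom_target(print_banner
  COMMAND ${CMAKE_COMMAND} -E echo "building my_app"
)

add_executable(my_app main.c)

# Other targets can still depend on the file-less target,
# but only at target granularity: print_banner always re-runs.
add_dependencies(my_app print_banner)
```

Whether that counts as real support or as a workaround is arguably the point of disagreement here, since the file-less target is always considered out of date.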
That’s just the worst of all worlds and makes for an equally terrible development experience for everyone.
CLion’s native support for CMake is the only thing that makes the IDE story palatable, but everything about CMake’s architecture resists treating CMake as a declarative format and performing that kind of intelligent integration.
First, why is it the worst of all worlds? I have a consistently good experience using CMake-generated Visual Studio projects. Second, what's the alternative? Having CMake manage the entire build, down to linkage? What benefits does that give over the current situation?
VS seems to be the most usable of CMake’s IDE output formats, but it still remains an awkward experience:
1) Two editable sources of truth, one of which is only sometimes overwritten when performing a target build, depending on whether the CMake inputs have changed.
2) Non-universal native IDE constructs can only be expressed if CMake supports them, and if you use IDE-specific CMake features.
VS is as good as it gets, and it isn’t that great. By comparison, Xcode project support is absolutely terrible.
CMake has a semi-declarative semantic representation of your project and its targets, but has to lossily translate that into a number of different, incompatible project formats, all of which are only capable of representing a different incompatible subset of CMake’s features. In several cases, CMake’s features behave differently (or not at all) depending on the generator being used.
I think GP already thoroughly elucidated most of the issues with this model; all of these issues fall out of the necessarily lossy, inconsistent conversion to incompatible output formats.
> Generating a build file for another build system. This means that CMake always gives up control of the build. It can only tell the other build system the dependency graph and then sit back and watch. It can't regulate the build itself.
That is a hard requirement to using Visual Studio and XCode, which I wager account for more than 90% of C/C++ projects.
Sure, but doing so doesn't mean that the generating build system had to have the same model as CMake.
Here's how my build system will do it: the programmer will list default build profiles. Then the build system will generate Visual Studio or XCode files that include all of those default build profiles, but those VS or XCode files will simply call my build system with the correct build profile.
Thus, when a build is run from VS or XCode, my build system is still in full control.
In essence, instead of generating build files for use by other build systems, my build system will generate files to tell VS and XCode how to call my build system. It will tell VS and XCode how to integrate it.
My experience with build systems that do that is that there's consistently IDE features which just don't work correctly. The most obvious one is "build current file", which is very important for large projects and can't work without the IDE knowing how to build things.
My build system will actually have a command-line option to reverse the dependency resolution. Instead of building a given target, updating its dependencies as necessary, it will build the default target(s), marking the files given on the command-line as "changed" and resolving dependents up to the default target(s).
This will mean that building the "current file" will be a simple command-line switch that can be part of the file generated for that IDE.
But yes, I'm going to put in a lot of work to make IDEs work as smoothly as possible.
Yes, I'm a solo dev and not looking for contributors. My style is too eccentric to accept outside contributions.
Funny you should mention Bazel and Buck; I'm coming for them too! I'm also coming for Nix; I want its advantages to be easily accessible for mere mortals.
I think I can do better alone than those teams have because they have all made assumptions that weren't great. I'm learning from their mistakes.
Also, UX matters, and I'm putting a lot of effort into that.
But the other reason is that if you do Nix right, the capabilities of Bazel and Buck just appear.
Every Bazel rule (e.g. cxx_library) has attributes and an internal implementation function in Starlark. The implementation can read the attributes and coordinates the work of actually calling programs (like Clang) and collecting the outputs. The collected outputs are exposed via a list of data-structures of type "Provider". When a target depends on other targets, it can see the providers of its direct dependencies. This allows targets to build on their dependencies. In C/C++, for example, the Providers contain the header files, the objects for linking etc.
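The mechanics described above can be sketched in Starlark roughly as follows. All names here are illustrative (this is not a real Bazel rule from any ruleset), but the shape — an implementation function reading attributes, registering actions, and returning Providers — is the standard pattern:

```starlark
# Sketch of a Bazel rule: compile each source to a single object file.
def _my_cc_object_impl(ctx):
    # Declare the output this rule promises to produce.
    out = ctx.actions.declare_file(ctx.label.name + ".o")

    # Coordinate the real work: the rule itself runs no compiler,
    # it only registers an action for Bazel to execute.
    ctx.actions.run_shell(
        outputs = [out],
        inputs = ctx.files.srcs,
        command = "clang -c -o {} {}".format(
            out.path,
            " ".join([f.path for f in ctx.files.srcs]),
        ),
    )

    # Expose the collected outputs to dependents via a Provider.
    return [DefaultInfo(files = depset([out]))]

my_cc_object = rule(
    implementation = _my_cc_object_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
    },
)
```

A target depending on `my_cc_object` would read the returned `DefaultInfo` (or a custom Provider carrying headers, link inputs, etc.) to build on top of it.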
I would definitely write (or at least read) a few Bazel rules before writing a new build system that improves on it!
> I would definitely write (or at least read) a few Bazel rules before writing a new build system that improves on it!
You have a point there; I didn't because everything I've seen suggests that people like the results of Bazel but hate writing rules, so Bazel rules seemed like something not to emulate.
But point taken.
And thank you for the explanation. It seems providers allow for multiple outputs on a target. My build system allows that, but bundling such outputs together is a good idea that I need to add.
Doesn't Bazel / Buck do a better job? The only downside might be IDE integration outside of Google / Facebook but I think there's effective polyfills that generate compile_commands.json.
I don’t understand why people are surprised to see similar complexity creep into the “reborn” CMake ecosystem.
There has never been a one-size-fits-all, all-purpose glue. It’s not surprising to me that it isn’t possible to make a one-size-fits-all “glue my software together” stack either.
That doesn't seem like a huge deal. Most projects would have some sort of internal discussion about whether to adopt this or not. It's not going to show up out of nowhere.
Well, it seems on topic to bring up xmake. It has no DSL, just Lua. I've been using it for a year (with C++; I haven't tried other languages with it yet) and I love it.
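For anyone who hasn't seen it, an `xmake.lua` is plain Lua — a minimal one looks roughly like this (target and source names are made up):

```lua
-- Minimal xmake.lua: the whole build description is ordinary Lua.
target("hello")
    set_kind("binary")
    add_files("src/*.cpp")
```

Because it's a real language, you can use normal Lua conditionals and loops instead of learning a separate configuration dialect.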
I recently had the need to propose a build tool. Naturally, make came to mind first; then I looked into something more modern. I didn't dive deep, but I was left with the impression that CMake and many others only support a certain set of programming languages?
PS: I know that make has many shortcuts for compiling a C/C++ codebase, but in essence its targets are just file-system timestamps.
This article is discussing the scripting language of CMake (the code that describes the build), not the languages of the programs it builds.
But to your question, the answer is kinda. There is support for a few other languages built in (off the top of my head, Ada and Fortran, I believe), and you can provide a custom backend for your own language support, but most people just use add_custom_command.
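The `add_custom_command` escape hatch looks roughly like this — here driving a hypothetical documentation step (pandoc and the file names are just assumptions for illustration):

```cmake
# Assumed example: build a non-C/C++ artifact via a custom command.
add_custom_command(
  OUTPUT manual.html
  COMMAND pandoc -o manual.html ${CMAKE_CURRENT_SOURCE_DIR}/manual.md
  DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/manual.md
  COMMENT "Generating manual.html from manual.md"
)

# Wrap the output in a target so the default build runs it.
add_custom_target(manual ALL DEPENDS manual.html)
```

CMake only re-runs the command when the `DEPENDS` files change, so you get make-style timestamp behavior for any language or tool.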
downvoted, i mean what did i expect on a discussion about the idea of adding a scripting language on top of a scripting language that is used to generate a script in yet another scripting language
It's 2022. Why would I learn even more horrible CMake instead of just going full Meson? Looks like it's the same cognitive cost, but no gains by staying on CMake.
For my curiosity, does Meson emit "native" build files, such as Visual Studio, Makefiles, Ninja, or the like?
That's been my favorite part of CMake (even before CLion used it as their defacto project format): if you can get a run of cmake to complete (which, granted ...), then you can open the project in a sane IDE and have library and build targets available