Trying out C++20's modules with Clang and Make (0x1.pt)
90 points by vmsp 11 months ago | 178 comments



I'm super disappointed with modules. It's 2023, we've pushed out a new C++ standard since modules were standardised, and they're not usable.

They're not supported in CMake (unless you set CMAKE_EXPERIMENTAL_CXX_MODULE_CMAKE_API to 2182bf5c-ef0d-489a-91da-49dbc3090d2a, and only if you're on the same version of CMake as I am).
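
A sketch of that opt-in for anyone who hasn't seen it (the UUID is tied to a specific CMake release and changes between versions, so this exact value may not match yours):

    set(CMAKE_EXPERIMENTAL_CXX_MODULE_CMAKE_API
        "2182bf5c-ef0d-489a-91da-49dbc3090d2a")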

My experience is that no IDEs support them properly, so either you need to ifdef things out for IntelliSense, or go back to no tooling support.

The format breaks every update, so you can't distribute precompiled modules.

Last but definitely not least, they're slower than headers.

So, we've got a feature that we started talking about 11 years ago, standardised 3 years ago, still not fully implemented, with no build system support, and a downgrade from the existing solution in every real-world use case I've seen. What a mess.


Module support will be stabilized in CMake 3.28, actually: https://www.reddit.com/r/cpp/comments/16y9qv2/cmake_c_module...
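
For anyone curious, the stabilized interface looks roughly like this (a sketch with invented target and file names, not taken from the article):

    cmake_minimum_required(VERSION 3.28)
    project(demo CXX)
    set(CMAKE_CXX_STANDARD 20)

    # Module interface units go in a CXX_MODULES file set
    add_library(mod1)
    target_sources(mod1 PUBLIC
      FILE_SET CXX_MODULES FILES mod1.cppm)

    add_executable(app main.cpp)
    target_link_libraries(app PRIVATE mod1)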

The other points (mainly integration points) are definitely valid though.


That's really great to hear. Having proper build system support will finally let us use this in anger and report actual issues upstream.


Please also report relevant issues to CMake itself. We can't fix what we don't hear about :) .

FD: CMake developer.


I absolutely will. I'm usually not cutting-edge enough to catch stuff; by the time I find it, it's usually already reported.


> I'm super disappointed with modules. It's 2023, we've pushed out a new c++ standard since modules were standardised, and they're not usable.

Important question: Who are you disappointed in?

Keep in mind that the ISO committee does not employ engineers and for international treaty reasons, it reasonably cannot.

Point being, who should be doing this work? Are you talking to them about your expectations? Are you providing actual support to implementations that are public goods (i.e., open source compilers)?


I have gripes with many parts of the process, predominantly the committee.

> Keep in mind that the ISO committee does not employ engineers and for international treaty reasons, it reasonably cannot.

Engineers not being employed by the committee doesn't mean I can't hold them to a standard. The standards committee have shown that they're more interested in freezing popular libraries and putting them in the standard library. It's clear that major changes are preferred as library changes, not language changes (ranges should be a language feature, as should span and optional).

> who should be doing this work?

I'm not sure what you're getting at here. The standards committee should have used some of the time between 2004 (when this was first proposed) and 2019 to get this done. The compiler vendors, 4 years later, should have made this usable. The build systems are at the mercy of the compiler vendors, but CMake has had 3-4(?) years to get this in.

> Are you talking to them about your expectations? Are you providing actual support to implementations that are public goods (i.e., open source compilers)?

It's not fair for you to suggest that it's my fault this is a mess, 15 years on.

To answer your question, I've tried multiple times over the last 3 years, hit show stopping bugs and issues, and have found them already reported in clang's issue tracker or the VS feedback forums. I've spoken with developers here and on Reddit about issues with MSVC, and cmake.


> It's not fair for you to suggest that it's my fault this is a mess, 15 years on.

All I did was ask specific questions. Specifically, who are you griping about?

It's also not fair to expect ISO, a documentation and consensus building body, to act like an engineering org under contract to us. That's not what it does, and it really couldn't do that if it wanted to.

Reporting, clarifying, and otherwise contributing issues is helpful of course. But bottom line is that someone needs to implement fixes.

I just want to make sure we all aren't just sitting in a big room yelling about why someone isn't just fixing things for us. Most of the time it seems like we're not far off from that.

We should be holding our vendors more accountable of course. But also our leadership (like CTO offices) that seem fine using all the open source tech for free without contributing back in any form.

But ISO? I think it is becoming too complicated for volunteer compiler contributors to keep up with, but I don't think putting more language features in C++ is going to help that at all.

Maybe we call C++ dysfunctional and move on to something else, but I don't see how the same problems don't just show up there in another ten years. It's hard to get around lack of funding. Syntax isn't going to fix funding issues. Consolidation could, but we seem to be going the other direction, at least in the short term.


> It's also not fair to expect ISO, a documentation and consensus building body, to act like an engineering org under contract to us. That's not what it does, and it really couldn't do that if it wanted to.

This is a bad faith strawman argument. You're the one who said they're not employees. Again, they can be held accountable for their decisions whether they're employees or not.

> But also our leadership (like CTO offices) that seem fine using all the open source tech for free without contributing back in any form.

Speak for yourself here. Many people (myself included) work at organisations that do contribute back either financially, or by reporting issues and submitting fixes (both of which I've done to our OSS dependencies in the last month). It's not fair of you in one breath to say "I'm just asking questions", "what exactly are _you_ doing to fix this", and immediately follow it with "nobody is contributing anything back". And if your response here is that you weren't talking about me specifically, then you need to decouple your interrogation from your soapbox.

> But ISO? I think it is becoming too complicated for volunteer compiler contributors to keep up with,

The volunteer compilers like MSVC (which is proprietary), GCC (which, based on the last time I looked, is primarily developed by employees of Red Hat and IBM), or Clang, which has been massively contributed to by Apple and Google up until very recently?

> It's hard to get around lack of funding. Syntax isn't going to fix funding issues.

I'm not sure what you're getting at here, at all.

> Consolidation could, but we seem to be going the other direction, at least in the short term.

Agreed here, unfortunately. We've had a decade plus of consolidation, and what we've ended up with is a camel (IMO)


> ranges should be a language feature, as should span and optional

That’s an opinion. As you notice, “It's clear that major changes are preferred as library changes not language changes”

The spirit of C/C++ is to put just enough in the language itself to allow programmers to implement such things themselves.


> That’s an opinion.

That's fair, actually.

It's an opinion I feel strongly about, though. Honestly, I think C++'s reliance on library features to patch over deficiencies in the language is a cop out. We knew about the unique_ptr ABI performance issue a decade ago, and decided that string_view should be implemented the same way.

Ranges are "the kitchen sink", and cause significant compile time issues whether you want to use them or not.

Spending time on libraries like fmt (which is an excellent library) is paying lip service to progress by locking in a 5 year old library (at this point).

> The spirit of C/C++ is to put just enough in the language itself to allow programmers to implement such things in them

To use your own words against you, that's an opinion. Mine is that the spirit of C++ (which is what we're talking about here, not C) is to not rock the boat too hard, or everyone will have an opinion.


The ISO committee should not be in the business of inventing language features out of whole cloth. That is not how “standardization” works. They should only standardize existing practice.


> The ISO committee should not be in the business of inventing language features out of whole cloth. That is not how “standardization” works. They should only standardize existing practice.

How do you expect to standardize a feature that does not exist yet but the whole community is demanding?

It sounds like the C++ standardization committee designed a feature following a process where all stakeholders had a say. Is this worse than being force-fed a vendor-specific solution?


> How do you expect to standardize a feature that does not exist yet but the whole community is demanding?

Implement it in a popular compiler or library.

I know we are talking about C++, but the ISO C committee, over the past several decades, has repeatedly ignored working, good features in GCC and invented crap in their place.


This is part of the modules mess in the first place. Modules were implemented in MSVC and Clang, and neither agreed with the other's implementation.

Also, if MSVC came along with an "extension" to C++, I can only imagine the uproar suggesting that this is now step 2 of EEE.


I do not expect standardization of features that do not exist.


> I do not expect standardization of features that do not exist.

But that makes absolutely no sense, does it? I mean, think about it for a moment. Specifying a standard behavior is not about picking a flawed winner from a list of ad-hoc implementations or settling for the least common denominator. Specifying a standard behavior is about providing the best possible solution that meets the design requirements following input from the whole community.


My expectation is that the ISO C and ISO C++ committees recognize that things would be best off if they dispersed.


The progress is frustratingly slow. My understanding is GCC and Clang still haven't finished implementing them fully. Last I read they are still making significant changes, for example moving to the strong ownership model for symbols. Once they're done, hopefully the build system and IDE support will follow quickly. MSVC seems to be in much better shape.


The progress is mostly unfunded. It's not that the work is especially slow (maybe it is a little?). Mostly it's that barely anyone is working on it.

This is especially frustrating for organizations ostensibly paying vendors for high quality compilers.


It's frustrating when you consider how many hours are lost to compile times across the whole industry.

I have no idea how many people are slopping C++ code. So let's just calculate it per 1000. Assume modules save an average of 10 minutes a day.

1000 × 250 days/year × 0.167 hours/day -> 41,666 hours a year per thousand coders. Or about 21 man-years (at roughly 2,000 working hours per year).

Yeah you'd think it'd be worth funding heavily.


When you use a good build system like Bazel modules do not save 10 minutes a day. The time saving is negligible. That's why large C++ shops do not care about modules and only volunteers are working on them.


Not everyone uses Bazel. I would wager that a vanishingly small number of people use Bazel. If some people already using something else were enough of a reason not to standardise things, we wouldn't have asio, ranges, or fmt. Precompiled headers also exist, so why would we standardise modules?


They are not slower than headers. I've been looking into it because a modular STL is such a big win. On my little toy project I have .cpp files compiling in 0.05 seconds while doing `import std;`.

The downside is that at the moment you can't mix the normal header STL with the module STL in the same project (MSVC), so it's for cleanroom small projects only. I expect the second you can reliably do that, almost everyone will switch overnight just from how big a speed boost it gives on the STL vs even precompiled headers.
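
For context, this is the whole pattern being described, as a minimal sketch (assumes a toolchain whose standard library actually ships a std module, e.g. recent MSVC with /std:c++latest):

    import std;  // replaces #include <vector>, <iostream>, ...

    int main() {
        std::vector<int> v{1, 2, 3};
        std::cout << v.size() << '\n';
    }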


The one way in which they are slower than headers is that they create longer dependency chains of translation units, whereas with headers you unlock more parallelism at the beginning of the build process, but much of it is duplicated work.


Every post or exploration of modules (including this one) has found that modules are slower to compile.

> I expect the second you can reliably use that almost everyone will switch overnight just from how fast of a speed boost it gives on the STL vs even precompiled headers.

I look forward to that day, but it feels like we're a while off it yet


I assume templates can only be partly preprocessed (parsed?) but not fully precompiled, since the final code depends on the template types?


Depends on the compiler. Clang is able to pre-instantiate templates and generate debug info as part of its PCH system (for instance, most likely you have some std::vector<int> which can be instantiated somewhere in a transitively included header).

In my projects enabling the relevant flags gave pretty nice speedups.
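
The parent doesn't name the flags, but one of them is Clang's -fpch-instantiate-templates; a rough sketch of how it's used (file names are placeholders):

    # Build the PCH, instantiating the templates it can see up front
    clang++ -std=c++20 -x c++-header common.h -fpch-instantiate-templates -o common.h.pch
    # Each TU then reuses those instantiations instead of redoing them
    clang++ -std=c++20 -include-pch common.h.pch -c foo.cpp -o foo.o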


Yes, but template code is all in headers, so it gets parsed every single time it's included in some compilation unit. With modules this only happens once, so it's a huge speed upgrade in pretty much all cases.


Whenever I’ve profiled compile times, parsing accounts for relatively little of the time, while the vast majority of the time is spent in the optimizer.

So at least for my projects it’s a modest (maybe 10-20%) speed up, not the order of magnitude speed up I was hoping for.

Thus C++ compile times will remain abysmal.


For some template heavy code bases I’ve been in, going to PCH has cut my compile times to less than half. I assume modules will have a similar benefit in those particular repositories, but obviously YMMV


That's the pinnacle of C++ development!


It's still brand new, in C++ feature terms. I'm not surprised by, and would totally expect, all of the problems you cited.


I think we've been talking about this and trying to implement it for long enough that "it's still new" stopped being an excuse 2 years ago.


> The format breaks every update, so you can't distribute precompiled modules.

When will they fix this? People keep using C to this day because these unstable binary interfaces make it so no one can reuse software. Can't write a library in almost any other language and expect it to just work. To this day, C is the lowest common denominator of software because of stuff like this.


Unlikely to be fixed. The module files are basically AST dumps and no compiler keeps that stable over time.

Even flag selections make incompatible BMIs. So if one TU uses C++20 and another uses C++23, they'll each need their own BMI of anything they import.

FD: CMake developer that implemented the modules support


If module files are pretty much memory dumps, that makes it all the more frustrating that it's taken this long, given that's what precompiled headers essentially are.


There are new rules about "reachability" and "module linkage". Not to mention things like methods defined within the class declaration are no longer as-if `inline` when within a module. It's not just "a faster preprocessor" mode; there are real rules associated with module boundaries.


C++ has stable binary interfaces just like C.


I've seen compiled C++ code turn out to be incompatible with code produced by different versions of the same compiler. There is no way this is stable.


Most compiler vendors promise forward compatible stability and have done for a while now. You're never going to have perfect compatibility, as I can always just compile with -DMY_FLAG=1 and change the behaviour of everything. But GCC and Clang haven't had a breaking change in a decade, and I believe MSVC has been compatible since 2015.


My feeling is that the desire for it has somewhat waned because so many people that care about the elegance of programming languages and use C++ have just moved to Rust. There are still plenty of people using C++ of course, but it certainly feels like more of a dead end than it did before Rust. Why bother putting a mountain of effort into maybe slightly improving it when in 10-20 years you won't be using it anyway?


The number of Rust developers is a drop in the ocean compared with C++ developers. There are more than 5 million C++ developers out there starting new C++ projects every single day. I am one of them.


Not that I disagree that there are a large number of C++ programmers out there, but where did you get that number?


Yeah obviously that's true now because C++ has been around for literally decades and had basically 0 competition for all that time.


And it will continue to be true for a very long time since the world runs on software written in C and C++ and that needs to be maintained and improved.


FD: I'm a member of SG15, author of P1689[1], and implemented much of the support for compiling them in CMake.

It is my opinion that hand-coded Makefiles are unlikely to handle modules well. They have the same required strategy as Fortran modules where the (documented!) approach by Intel was to run `make` in parallel until it works (basically using the `make` scheduler to get the TU compilation order eventually). Of course, there's no reliable way to know if you found a stale module or anything like that if there's a cycle or a module rename leaving old artifacts around.

There is the `makedepf90` tool that could do the "dynamic dependencies" with a fixed-level recursive Makefile to order the compilations in the right order (though I think you may be able to skip the recursive layer if you have no sources generated by tools built during the build).

Anyways, this strategy fails if you want implementation-only modules, have modules of the same name (that don't end up in the same process space; debug and release builds in a single tree are the more common example though), or use modules from other projects (as you need to add rules to compile their module interfaces for your own build).

[1] https://wg21.link/p1689


The Makefile in the article is actually incorrect and only "works" by accident: main.o must have a dependency on mod1.pcm, which is one of the outputs produced when compiling mod1.ccm and is needed whenever compiling any translation unit that imports mod1.
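
In Makefile terms, the missing edge looks roughly like this (a sketch; flags and file names assumed, following the article's Clang setup, with recipes tab-indented as usual):

    mod1.pcm: mod1.ccm
            clang++ -std=c++20 --precompile $< -o $@

    # main.o must also depend on the BMI of every module it imports
    main.o: main.cc mod1.pcm
            clang++ -std=c++20 -fmodule-file=mod1=mod1.pcm -c main.cc -o $@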


Note that module files (.ccm files in the example) cannot be parsed in parallel like that. The build system has to know what order to parse module files in. I guess if all .ccm files only did #includes and no imports, the provided Makefile example would work, but that seems like a niche use case.

Instead, some sort of dependency graph amongst the .ccm needs to be encoded in the Makefiles. There is an ISO paper describing how to query a compiler for this info (see https://wg21.link/p1689). Other papers sketch out how one might incorporate that into a dynamic Makefile graph, but so far nobody seems interested in going that far. Switching to a more actively maintained build system is probably the interesting alternative.
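
(For what it's worth, newer Clang releases can emit that P1689 dependency format directly via clang-scan-deps, something along these lines:)

    clang-scan-deps -format=p1689 -- clang++ -std=c++20 -c mod1.ccm -o mod1.o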

If you want to learn more about how build systems can support C++ modules, see this blog post by the CMake maintainers. The principles translate to most (all?) nontrivial build systems.

https://www.kitware.com/import-cmake-c20-modules


I'm thinking from Python here so somebody clarify if I'm missing something.

"As far as I know, header files came to be because when C was built disk space was at a prime and storing libraries with their full source code would be expensive. Header files included just the necessary to be able to call into the library while being considerably smaller."

The modules/packages that were built from C/C++, for example in Python and maybe other scripting languages, were all compiled with the inclusion of the .h files, for exactly the reason that the OP states: efficiency. Header files should really only include definitions, and when you compile, it's tight code using only what you included.

Wouldn't modules include everything? Isn't that like saying:

import *

as opposed to, say:

import module.package

I'm just confused about how this will help statically typed languages like C/C++ keep maintaining and compiling to very efficient machine code or executables.

Someone enlighten me.


Modules don’t import everything.

In Python, import is an act of bringing in everything into the namespace (under its own or otherwise).

In compiled languages, it is the act of making it available within this compilation unit. Things that are unused are stripped out, and only what you either use or re-export has an effect. That is of course for static binaries; for dynamic linkage none of it really matters.

In fact modules can be more efficient than headers, because the way C/C++ work is that everything within the header file is verbatim copied into the file referencing it. This isn’t true with modules.


IIRC the main point of header files was to make compilation 1 pass instead of 2. I.e., when the compiler is processing a function and sees `int a = foo(bar, baz);`, how does it know what code to generate here? How do you call foo (i.e. what calling convention does foo have)? Furthermore, how do you know if the code is valid? Do the passed arguments match the declared arguments? What if there is function overloading? How does it know which actual function to call? A lot of other compilers will do two passes, where they scan the code once to collect all of that information, and then go over it a second time to actually write the code out. The optimization for C/C++ was to "cache" the declarations separately in a header file, so that you don't need to parse/scan twice.
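
A toy illustration of that point (names invented): the declaration, normally pulled in from a header, is all the compiler needs to emit the call in one pass.

    int foo(int bar, int baz);      /* declaration, e.g. from foo.h */

    int call_it(int bar, int baz) {
        int a = foo(bar, baz);      /* signature already known, call emitted now */
        return a;
    }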


>"As far as I know, header files came to be because when C was built disk space was at a prime and storing libraries with their full source code would be expensive. Header files included just the necessary to be able to call into the library while being considerably smaller."

This claim doesn't make sense to me. You don't want to, and can't, include the implementation in multiple C compilation units because the symbols from them would conflict. I suppose if you include the existence of shared libraries as a means of saving disk space the assertion isn't wholly wrong, but I've never seen anyone make such a claim before.

Headers exist even within a C project so that the various parts of it can share the function prototypes and struct definitions while being independently compiled.


Unity builds, where you do include everything, exist. They have pros and cons.


I've never heard of unity builds including everything together. I'm not aware of a build system which even does that by default. If you enable unity in CMake, it groups batches of 5 files together by default, and the documentation discourages users from including everything in a single translation unit.


SQLite calls it an amalgamation build. Whole program building certainly exists. It doesn't explain where the quoted poster got the idea that headers were intended to save space rather than allow for independent compilation.


Header files were an obvious and easy hack for the compiler writers. We don't want to duplicate these declarations in every file. Just put them in a file, and use the magic "#include foo.h" to read the file in at compile time.


Is there any specific reason why we should prefer a module system as it exists in C++ over something like the SQLite's approach with Makeheaders[1]?

As far as I understand it, they have a tool that can parse all their (C) files, figure out what each file declares and what declarations it depends on, and then autogenerate a header for every file with just the declarations it requires.

It seems like a sensible approach, but nobody besides SQLite and Fossil (which comes from the same developers) seems to be using it, either in C or C++, which leads me to suspect that there are caveats I don't know about. Is there a reason why this is a bad idea?

[1] https://fossil-scm.org/home/doc/trunk/tools/makeheaders.html


Probably because things that don't get "used" can still affect things. There's a lot of detail in name lookup algorithms and SFINAE where computing what you end up using and giving each TU its own view is probably not much savings over using the headers as-is.


This works for C, but not C++. And for C, it’s not that hard to do by hand so no one else bothers.


I think the funniest part is that you have to add ‘module;’ at the start, as if the extension doesn’t make that clear, or that the file exports the module explicitly with ‘export module mod1;’

You guys OK in C++ land?


C++ file extensions aren’t standardized unlike other languages. Hence the need for keywords to enable functionality.

So you can’t really rely on extensions to be sufficient.

Many build systems and compilers do assume specific extensions imply a file type by default, but those can often be configured to arbitrary extensions as well.


Existing C++ file extensions aren't standardised. There's no reason they couldn't mandate file extensions for this.

And really even among the non-standard extensions it's like 95% .cpp, 4.9% .cc and 0.1% .c++. Ignoring the weirdos it's de facto .cpp or .cc. They could standardise that too if they really wanted and there was a benefit.

This sounds like one of those things that often happens where people worry about technically possible but totally insignificant risks, and insist on handling them because they feel proud of having thought of them.

Not enough YAGNI.


You’re missing cxx and mm, which are more common in codebases I deal with than most of your examples. Which is to say you can’t make sweeping statements.

Anyway, I’d try and contextualize your argument more by adding that feature teams are restricted in the scope of the changes they can do to the features they’re working on. They can certainly suggest that the language specify specific extensions but that goes against decades of convention, and is a hill that would likely have taken more of their time and energy for little specific gain for the feature itself.

I’m not arguing that there shouldn’t be standard extensions, just that I can understand why it didn’t happen. It’s like swimming upstream


Ah yes, `.cxx` is rare but used. I haven't seen `.mm` for decades. IIRC gtkmm uses that? No, maybe not, they use `.cc`. What on earth uses `.mm` now?

Anyway none of that really changes the point - they could easily standardise extensions. The only reason they haven't is because there has been no real benefit until now.


cxx is what MSVC prefers, hence why they picked .ixx for their module interface files.

mm is the convention for ObjC++ files, which is a required dialect if you need to use most of the platform libs on Apple platforms.

Being the standard extensions for the two most major platforms, I wouldn’t call them rare at all.

And which further reinforces my point. They could have mandated an extension BUT it would have been going against convention which makes it decidedly not easy. There’s not one singular team in charge and there’s lots of friction in such a seemingly minor change.

I’d argue too that even with modules there’s very minimal benefit to a standard extension. Any such benefit would have had even greater benefit for the rest of the language.


You forgot .cxx


What about .ii for preprocessed files? And .ipp and .tpp for template implementation files?


The implementations have no agreement here. Clang "likes" `.cppm` while MSVC "likes" `.ixx`. I don't think there's any hope for agreement on extensions.


Part of the justification is the lack of optimism that file extensions could be mandated (I'm less pessimistic).

But it's also an important hint to the preprocessor to make pre-build dependency scanning faster. You don't want to have to compile all possible source files and then throw a bunch of that work away once you find there are module dependencies in the source files. Just having a magic file extension is not enough to clarify that.

A C++ packaging standard could help there, but C++ started with modules, probably because they seemed simpler.


There should not be a C++ standard for packages. Maybe an ISO standard, but it needs to work for all languages, otherwise we will all roll our own. If the package system can't support all languages, it doesn't support the complex world C++ is used in.


There is some work in that direction but no specific ISO proposals yet.


It is a very hard problem. I'd be concerned if there was a proposal.


That is only if you have a global module fragment. Without that, you can start right off with `export module M;`.


We had this working: Turbo Pascal Units....


I'm guessing that adding modules to a simple, clean language like Pascal is a lot easier than making almost any change to C++, the epitome of language complexity.


If you think your language has modules, but you are still building with make, there is something wrong.

I programmed in Modula 2; that definitely didn't need make.

The module definitions need to be parsed and semantically analyzed, so the compiler has to be the build tool.


Has the C++ standardization committee given a rationale why a module definition does not automatically also create a namespace with the same name? That would seem such a useful feature.

Can modules be nested? (I'm thinking java.util.StringTokenizer.) If so, has a module hierarchy (as for Java) been defined for all STL standard functionality?

Are there any compilers that implement C++20 in full in 2023?


Modules adoption would be severely stunted if everyone had to adjust all calling code in order to convert a library to use modules. Ideally one could find/replace includes with analogous imports, a bit at a time.

I guess we could instead mandate that module names have to match the namespaces inside (i.e., mandate what modules are named up front), but it's also common to have C++ namespaces be reopened for various reasons (like adding to overload sets in supported ways).

Anyway, bottom line is that modules and namespaces need to have M:N relationships, though in practice, it should be Bad Practice to get too creative there. Nobody has added namespace/module matching rules to CppCoreGuidelines or clang-tidy yet, but right now is a good time for some ideas in that space.


> Modules adoption would be severely stunted if everyone had to adjust all calling code in order to convert a library to use modules.

"We'd like this new feature to be good, but we have to cripple it for adoption/back-compatibility reasons". This is the entire story of C++, sadly :(

Modules being orthogonal to namespaces means:

1. Bad Surprise for people first using modules, leading to incomprehension as this is different from every other language

2. Occasional Bad Surprises for people using modules even after the first time, as people will export stuff outside of namespaces/in the wrong namespace from time to time

3. Some code using the convention that the namespaces in modules follow the module "hierarchy" (which is also entirely a conventional concept because `.` is not treated as a special character) and some code not following that convention, either because it was hastily ported to modules or because the authors elected not to care for this particular convention. Now enjoy not knowing that when importing a module from a third-party library.

Unfortunately the orthogonality to namespaces, in the name of "facilitating migration", cripples the language forever and actually reduces the motivation for migrating to modules (because it doesn't remove footguns). Doesn't help that the whole module feature is designed in this spirit. I'm lucky enough to have migrated to greener languages, but I don't think I would find it worth it to migrate to modules if I were still in the position of deciding what C++ features to introduce in my team.

> Ideally one could find/replace includes with analogous imports, a bit at a time.

Make a tool like `cargo fix`, or something. Ah no, we can't, because of headers (and more generally the preprocessor), ironically :(


Tools like `cargo fix` do exist, such as clang-tidy. They have to be more opinionated to be viable for things like this, but if you're that passionate about it, try starting a project like that.


This is how every other language handles it, it's not new territory

    #import foo
    foo::bar();

vs

    #import foo::bar
    bar();

Now, I'm sure there's more nuance to actually standardise this, but given you already need to replace #include with #import, this isn't going to catch anyone off guard.


> Has the C++ standardization committee given a rationale why a module definition does not automatically also create a namespace with the same name? That would seem such a useful feature.

I'm not sure what the official rationale is, but I feel that would cause great namespace pollution. Some companies use one namespace per "logical" unit of code (e.g. a subproject), not for each small piece of code (e.g. a small module implementing a couple of classes).


I'm speculating, but I think that if one enforced "module is a namespace", projects like Qt, standard libraries, and other libraries would be forced to ignore them for far longer than anyone wants due to ABI stability requirements. If `#include <vector>` got you a different `std::vector` than `import std;`, I don't know how you would get everything that passes a vector over an API boundary ported without doing it all-at-once.

Not to mention that if it were the case, things like a "Boost incubator" library would be in a separate top-level namespace as it could not "inject" itself into `import boost;`.


According to cppreference (https://en.cppreference.com/w/cpp/compiler_support/20), MSVC has everything implemented. GCC is close but its modules support remains lacking.


Clang 18 supports `import std;` now, but you need to enable some build settings [1]. Also has `deducing this` support, which is nice - I am fully in favour of C++ continuing to poach the good parts of Rust.

[1] https://libcxx.llvm.org/Modules.html


Deducing this is not a feature analogous to anything in Rust, the language with almost no type deduction at all. It bears a vague similarity to constrained generic traits, but you really have to squint to see it.


Of course you're correct that Rust has no support for making the `Self` type itself generic, which is the core of the "deducing this" feature.

However, "deducing this" has the side-effect of allowing to explicitly specify the "this" parameter in the argument list. The syntax looks the same as Rust, and the convention of calling this parameter "self" is taken from "other languages" (Python, Rust, ...)[0].

Similarly, it could be argued that the ability to pass the object by value in a method is lifted from Rust. That's how I understand the GP comment anyway.

[0]: https://devblogs.microsoft.com/cppblog/cpp23-deducing-this/ "You don’t have to use the names Self and self, but I think they’re the clearest options, and this follows what several other programming languages do."


The good part of Rust isn't anything they added, it's everything that's gone.


What’s the problem with header files? Also, header files exist not just because of resource constraints.


One good thing that comes from it is that templated code can now go into a module and be compiled only once.

Before, you had to put template classes into headers, so you effectively had to recompile, for example, the class std::vector<T>, for each file you included the header <vector> in, because a new compiler process compiles each .cpp file from scratch, knowing nothing. If two separate files instantiated std::vector<int>, then it would be instantiated and defined separately at compile time, and at link time, one of the definitions would be discarded. The only way to speed it up before modules was to use pre-compiled headers, which are compiler-specific.

Modules let the compiler share state across files, so it doesn't have to recompile template instantiations unnecessarily.

I think this guy's blog explains it well: https://blog.ecosta.dev/en/tech/explaining-cpp20-modules
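
A rough sketch of what that looks like in a module interface unit (module and class names invented for illustration):

    module;                 // global module fragment for old-style includes
    #include <cstddef>
    #include <vector>

    export module containers;

    export template <typename T>
    class Bag {
    public:
        void push(const T& value) { items_.push_back(value); }
        std::size_t size() const { return items_.size(); }
    private:
        std::vector<T> items_;
    };

Importers just write `import containers;`, and the template definition is parsed once, in the module, instead of once per including translation unit.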


Extern templates have existed since C++11 for that.

> The only way to speed it up before modules was to use pre-compiled headers, which are compiler-specific.

Modules are compiler specific too, unfortunately.


>Extern templates have existed since c++11 for that.

True, but at least you don't need to write extern template class SomeTemplateClass<SomeType> in the header and then explicitly instantiate it in the source. You can just write "export template <typename T> class SomeTemplateClass { ... };" (i.e., "export" followed by the definition) in the module file.

> Modules are compiler specific too, unfortunately.

Ah, I didn't realize, my bad.


We have a handful of expensive, widely used templates in my work project.

We put the extern definition in a precompiled header and we have an externals.cpp file. It works a treat.
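
For anyone who hasn't seen the pattern, a sketch of what that looks like (names are placeholders):

    // pch.h (the precompiled header)
    #include <vector>
    #include "widget.h"                          // defines struct Widget
    extern template class std::vector<Widget>;   // don't instantiate in every TU

    // externals.cpp -- the one TU that pays for the instantiation
    #include "pch.h"
    template class std::vector<Widget>;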

> Ah, I didn't realize, my bad.

Yeah, a huge beat was missed here. I don't envy the compiler writers, but surely we could have spent 6 months out of the last 19 years defining a common format, even if it defined <implementation defined blob> for the meat of the work.


Unfortunate direction. Header files are great and I wish modern languages would adopt them.


I'm glad other languages solved that in another way. With C(++) headers, you pick the declarations at compile time, and the implementation at link time. That's a disaster waiting to happen, and unpleasant to unravel.


It's almost impossible to get wrong though, because the same header will also be included in its own implementation. If there's a mismatch between the declaration and implementation you'll get at least compiler warnings.


It’s been a while, but IIRC there is no problem when you eg. change a struct, as long as the name is the same.


Indeed. Mismatching headers and libraries is not all that rare. I've seen headers found from Homebrew, libraries from the macOS SDK, and some related tool from `/usr`. Things…rarely work out well, but you only find out in your test suite (or linker if you're really lucky).


The only real-world problem with headers I've seen so far is when there are multiple versions of the same header in the same project (a bad idea anyway), and one isn't extremely careful about search paths.

The module equivalent would be different versions of the same module under the same import name. Can the C++ module system actually deal with that?


Modules don't have search paths unless you toss everything into one directory. The robust way is to instead do `-fmodule-path=modname=path/to/bmi` where ambiguity is non-existent. It is IFNDR to have two modules of the same name in a program (basically just a restatement of the ODR on the module initialization function). Due to the compatibilities, there may be more than one BMI for a single module in a build, but that module can only exist at link or run time once.


I spent a long time implementing precompiled C++ headers. The main problem was converting the pointers in the complex data structures to/from file offsets. I never wanted to go through that again.

D has a simple and effective module system. It even treats unmodified .c and .cpp files as modules (though the C and C++ files are unaware of that!). D's builtin C compiler can even import other C files as modules.

Both C and C++ should have just copied it.


https://wikipedia.org/wiki/Poe's_law

Modern languages don't adopt them because they are terrible. A header include just dumps all the declarations into the global scope, so you have no idea where function or type usages come from.


I consider them much worse than that. Header files are explicitly a copy-paste of the included text file into your source. There’s no rule that they have to only contain declarations, or that they have to appear at the top of source files, or that they have to be .h files. All of those are usage conventions and not required by the language itself. The full capabilities of headers to muck with your code are arbitrarily large.


It's the same for pretty much any language. There is nothing special about header files...

> There’s no rule that they have to only contain declarations

Same thing in Python, you can write statements at the top level of a module, and they will be executed whenever someone first imports that module...

> or that they have to appear at the top of source files

Same thing in Python, you can `import` anywhere, and the context/order matters and overwrites existing scope declarations.

> or that they have to be .h files

Same thing in Python, the extension doesn't matter if you use importlib; only if you use the `import` keyword does the name/path matter.


> It's the same for pretty much any language.

No, it's not. Just off the top of my head, I can say that D, Go, Nim, Rust and Zig don't do this.

Also, this isn't really an argument. Instead of actually disagreeing with the original premise "headers are terrible" (which you can't), you've started a new argument, which is "other languages use headers", which no one was contesting.


That's the best (weirdest?) part! I used to have a job porting LabVIEW and MATLAB to C/C++; since headers don't even have to contain full statements, we'd share our test data and coefficients like so:

    static const float FIR_FILTER[] = {
    #include "fir_coefs.csv"
    };


FWIW, the `#embed`/`std::embed` proposal is the preferable solution here. There's a lot of time spent in the compiler parsing those ASCII floats into bytes (and I've seen `clang-format` take ages to format a block of `0xXX` bytes in an array).
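
For comparison, a sketch of what the C23 #embed form of the pattern above looks like (it expands to the file's raw bytes, so the element type is unsigned char, and the file name here is a placeholder):

    static const unsigned char fir_coefs_raw[] = {
    #embed "fir_coefs.bin"
    };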


You're downvoted for some mysterious reason, but you're completely right. In languages with module support I really miss the compact API overview that is provided by C headers. This is good because the interface is important, and the implementation is not.

In modules, the implementation completely dominates, and it's impossible to figure out what the module API looks like from looking at the source. You need an IDE to generate an outline for you, which frankly sucks.

For C at least, headers are pretty much perfect because they are supposed to only contain public API declarations. It's just C++ that messed everything up with the inline and template keywords, which required implementation code to move into header files.


I agree that the compact API overview is a good thing to have, but I loathe that C forces the user to write it by hand, while modern languages just generate it from the documentation comments of the source.

See: `cargo doc` that generates nice HTML documentation. See also language servers that provide completion using the same documentation comments.

Meanwhile headers, on top of the forced duplication of signatures between header and implementation, and the associated cost, are also implemented in terms of the preprocessor and textual inclusion. This compilation model makes code analysis much harder than necessary, because it becomes hard to know which piece of the preprocessed compilation unit comes from which header.

This makes it hard to implement e.g. "remove unnecessary import" or "import symbol automatically" functionalities that are the bread and butter of module-based languages. Also impacts automatic refactoring negatively


What is great about header files?


> What is great about header files?

One great thing - it's nice to have an interface[1] which already has tooling to let other languages use your private, hidden and compiled implementation.

If those C++ libraries that almost all of Python scientific work is built on had used modules[2] (or something similar) instead of headers, interfacing Python to those libraries would have been an immense effort.

Even with modules existing now, if you're writing something core and foundational in C++, it's not getting adopted like Numpy, BLAS, etc unless you also provide header files.

And that's just Python. Think of all the other languages that rely on performance critical code written in C++, and used with a C-only interface.

[1] Of course, your interface can't be a C++ interface either, so there's that.

[2] If modules were around at the time.


Headers are a really bad interface IMHO. Yes there’s tooling, but they require your headers be very very specific. Any deviance can break that tooling, so you’re either lucky in what you can bind or are writing to a very strict spec.

Modules on the other hand are more strict and would likely have made the job of writing language bindings easier and more predictable.

I’m not sure I agree with your logic about NumPy etc. You seem to be conflating technology limitations with policy ones. They could most certainly use modules too, since they exist in the compiled space anyway. They won’t because they make certain compiler compatibility guarantees, but that’s a conflict of interests, not of technical quality.

In general your argument feels a bit “older is better due to legacy support”. That would preclude almost any advancement in the space as well, because any new language feature (even just a keyword) would break all your examples above equally too.


> Modules on the other hand are more strict and would likely have made the job of writing language bindings easier and more predictable.

There's two ways for tooling like swig to work with modules:

1. Read the compiled module and generate the stubs to call into it.

2. Read the source code for the module and generate the stubs for it.

Which of those, do you think, is easier than reading C (not C++) header files?


Reading the module information would be significantly easier (which is not something swig can do today, since swig hasn't been updated in a while).

Parsing code to define an API is horribly fragile. A single macro or unknown keyword can break it. Hence why you had to specify C not C++ headers.

Having a statically defined metadata like C++ modules do would have greatly simplified binding generation


That's interesting! Why would modules make it harder to interface with C++ from Python? Is it because the header files are a direct interface to the functions? How hard would it be to do the same from modules, would it just require tooling to support modules rather than directly provided header files?

(I just realized that I don't completely understand how exactly the new modules work, so I have some further reading to do! )


> Why would modules make it harder to interface with C++ from Python? Is it because the header files are a direct interface to the functions?

Well, they're simpler, they're text, and the tooling already exists (I use swig all the time to automatically make C functions and symbols available to Java and Python. Some Lisps as well).

The reason the tooling exists, is because they are simple, and they are text. Looking at the C++ module definition, it's unclear to me[1] that similar tooling for C++ modules will ever be possible.

As long as the most popular languages (Python, Java, etc) rely on a C header file for inter-call interoperability, header files aren't going away. No one wants to manually write the interface into a C++ library by hand.

[1] I'm no C++ expert though - the problems I see with modules includes things like lack of a standard to grab the interface from the compiled module, complexity of C++ to determine the interface from the actual source code, etc.


> Well, they're simpler, they're text, and the tooling already exists (I use swig all the time to automatically make C functions and symbols available to Java and Python. Some Lisps as well).

> The reason the tooling exists, is because they are simple, and they are text. Looking at the C++ module definition, it's unclear to me[1] that similar tooling for C++ modules will ever be possible.

Perhaps there's something specific about C++'s module design that makes this difficult, but Rust has modules and has similar tooling to enable you to automatically generate bindings to other languages (including Python and Java).


Really there's nothing stopping tooling like swig (or a C++ compiler for that matter) from processing module interfaces as text, just like headers. The language spec for modules doesn't require (or even really talk about) compiling them to any kind of binary or intermediate format.

Rather, the language spec aspect of modules is about symbol visibility and collisions. In terms that interop cares about, moving a declaration into a module just adjusts its name mangling, without changing much else about e.g. its presence in the object file.


> Really there's nothing stopping tooling like swig (or a C++ compiler for that matter) from processing module interfaces as text, just like headers.

Well the one hurdle is that C++ is a lot more complex to process than C.

You'll note in my original comment upthread, I pointed out that the C++ libraries provide a C interface in their headers, and that is what tools like swig work with.

They don't (and can't, because of symbol mangling) read C++-only headers. Since they can't do it for headers (because the resulting symbols differ from compiler to compiler), why would they be able to do it for modules?

I'm of the understanding that C++ modules are unable to provide a C interface (`extern "C"` or similar), so it's kinda pointless reading them and generating a list of symbols, because then the generated stubs only work when the library is compiled with that specific compiler.

I think that it would be nice if the compiler can be instructed to output header files for the modules it reads. You'll still have the problem that the header now has mangled symbols in it (because modules cannot have `extern "C"`), but at least tools can generate stubs for that specific instance of the compiled library.

(PS. I'm happy to be corrected about the `extern "C"` thing - if I am wrong about that then, sure, tooling that scans C++ modules for `extern "C"` can in fact be made quite easily).


You can write `extern "C"` declarations in modules, actually. And either way the complexity of C++ and its name mangling is independent of headers vs modules, no?

Further, swig does seem to support quite a bit of C++. IIUC it even avoids dealing with name mangling by generating wrappers that provide a C interface.


> You can write `extern "C"` declarations in modules, actually.

See, I did not know that :-)

> And either way the complexity of C++ and its name mangling is independent of headers vs modules, no?

You got me self-doubting here: C++ implementations are necessarily more complex than their interfaces.

The full complexity of C++ (not restricted to only name-mangling) is not usually found in headers intended for consumption by a C-interface compatible consumer.

The intention of headers is to serve as an interface alone, but absolutely nothing enforces that - nothing stops the developer from dumping the entire implementation into the header (and this is a popular route for some libraries that are header-only).

Modules are not intended to serve as an interface alone, so it is more likely that devs (myself included) will simply throw the entire implementation into a module, because it seems like a better idea to do so with modules.

IOW, common practice for headers is to only have the interface necessary to use the implementation, but I think that common practice for modules will be to have the implementation and interface included in a single module.

> Further, swig does seem to support quite a bit of C++. IIUC it even avoids dealing with name mangling by generating wrappers that provide a C interface.

Swig supports much of C++ in a smart way, by default. For things like demangling swig generates `extern "C"` wrappers (for functions, definitely - not so sure about class typenames and enum typenames).

But, to generate those wrappers from modules requires the original source code for those modules, and to use generated C++ wrappers, it needs to be compiled with the same compiler: Not a large hurdle, but it is definitely an additional concern compared to using the headers.

It will be interesting to see how this plays out: how long it takes until swig support for modules, or swig-like tooling for modules, comes along. It's still early days for modules, after all.


> Modules are not intended to serve as an interface alone, so it is more likely that devs (myself included) will simply throw the entire implementation into a module, because it seems like a better idea to do so with modules.

> IOW, common practice for headers is to only have the interface necessary to use the implementation, but I think that common practice for modules will be to have the implementation and interface included in a single module.

While I'm sure some people will do this (just like they do with header-only libraries, and are forced to do with C++ features like templates) modules do support the same interface/implementation split as headers.

I suspect that a lot of code will stick with that split for a couple of reasons. First, they already have it with headers, so migrating over to modules will be less work if they just leave things that way. And second, downstream translation units that import the module can't (with today's compilers) even start building until their dependencies' interfaces have finished building, so keeping the split may also turn out to improve build times.

Really though, I think the ideal here is not for library authors to have to maintain C-ish interface headers as a sort of conflation with that interface/implementation split. That should be generated by a tool- either something like swig (which knows more about the other side of the binding) or at least a C++ tool like you mentioned upthread. This is also how, for example, Carbon plans to do its C++ interop.

> But, to generate those wrappers from modules requires the original source code for those modules

This part I don't think is actually going to be an issue, because pre-compiled module interfaces are not really any more usable in C++-land than pre-compiled libraries. They also have to be compiled by the same compiler, with the same configuration, so people will have to provide the module interface sources just like they provide headers.


Separation of interface and implementation. In languages without them you must rely on documentation generators or an IDE to explore a library/package's public interface. Header files provide this separation without the need for fancy tools.


Why would using a "fancy tool" be a bad thing? Headers have to be maintained manually, which is lost productivity and invites errors. The "fancy tools" can provide specialized search, formatting of the documentation, internal links, formatted and tested code examples, and of course the interface for free. In modern languages they're even part of the included tools.

I'm having a hard time seeing the downside of the "fancy tools".


Python added pyi files after the fact. Adoption of those is much slower than I would like.


You immediately see the public interface of a library without the implementation getting in the way (for C headers at least, typical C++ headers are an unreadable mess because they mix interface declarations with inline/template implementation code).


They're straightforward and easy to reason about.

Textual inclusion also allows a number of things modules do not.

The main disadvantage that they have, that they're stateful, is also true of module systems in many languages (e.g. python).


I greatly disagree with “easier to reason about”

Maybe in a clean, smaller codebase with minimal dependencies.

But the textual inclusion means that including one header can suddenly break previously compiling code. Unless that header is completely inclusive, that can mean a while of chasing down transitive headers till you find the problematic one.


The semantics are literally "it's the same as copy-pasting it where it's used". It doesn't get simpler than this.

There is much more going on in the module system that you'd have to know about if you care about what's actually going on.


The module system doesn’t have side effects on inclusion, whereas headers do.

So again, maybe if you have a smaller, cleaner or dependency free codebase, then it’s not an issue.

But in any codebase of major scale or legacy, headers can introduce several issues that aren’t possible with the module interface.


The side-effects are merely the rules of C++, that you already know by virtue of writing C++.


I feel like you’re talking past any of the caveats I’ve mentioned twice. I have no interest in belabouring the point with someone who’s clearly ignoring what’s being written. Have a good day.


You're really burying the lede here.

Headers are literal copy paste, and are entirely different. Anyone who has ever had to define WIN32_LEAN_AND_MEAN knows that there's _way_ more going on here.


This exactly, thank you. Sometimes you can’t untangle the defines in another header and they infect your own.

Modules are a nice barrier to guard against side effects.


Yes, the number of times I've had a `LoadLibrary` function name conflict with `windows.h`'s `A`/`W` suffix-switching or had to deal with X11's `#define Status int` "typedef" mucking with completely unrelated code is…sad.
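
A self-contained sketch of the LoadLibrary case; the #define below just mimics what <windows.h> installs when UNICODE is not defined, and PluginHost is a made-up type:

    // mimic windows.h, which maps LoadLibrary to LoadLibraryA or LoadLibraryW
    #define LoadLibrary LoadLibraryA

    struct PluginHost {                        // hypothetical type
        void LoadLibrary(const char *path);    // silently renamed to LoadLibraryA
    };

    // A translation unit that sees this struct *without* the macro declares a
    // different member, so declarations, definitions, and call sites quietly
    // stop lining up and you get confusing compile or link errors.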


Just don't include those in headers.


That's not possible. Windows.h is a requirement for anyone working on Windows. Various standard library implementations make use of macros to implement debug-only assertions, or bounds checks in containers.


Don't include it in headers = only include it in a .cpp file.
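
A hedged sketch of that approach (all names hypothetical): the header exposes no Windows types, so windows.h and its macros stay confined to one .cpp:

    // process.h -- does not include <windows.h>, so includers never see its macros
    #pragma once
    #include <memory>
    #include <string>

    class Process {
    public:
        explicit Process(const std::string &exe);
        ~Process();                     // defined in the .cpp, where Impl is complete
        bool running() const;
    private:
        struct Impl;                    // holds the HANDLE, defined in the .cpp
        std::unique_ptr<Impl> impl_;
    };

    // process.cpp (sketch):
    //   #include "process.h"
    //   #include <windows.h>           // confined to this translation unit
    //   struct Process::Impl { HANDLE handle; };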


Because `void*` to pass `HANDLE` around is better how? Maybe if you're able to hermetically seal off platform-specific bits, but usually you'll have two parts that want to talk about Windows API types across their call boundaries…


It's called encapsulation.


Sure, but I also prefer talking directly in the terms you're actually dealing with. If you have some Windows implementations that need to talk to each other, I'd rather they use HANDLE and HMODULE than "void* winkwinknudgenudge" style things. Hopefully users don't need to include these headers directly (but sometimes they do), but it's still something you need to consider in your APIs, because even if the header says LoadLibrary, the caller is going to need to include windows.h and deal with the #define anyway.


Any interaction with the Win32 API should be hidden within your library.

Surely you want your code to be portable to begin with?


It can be portable and still want to expose native information. SDL2 is probably something we can all agree is portable, but this still exists: https://wiki.libsdl.org/SDL2/SDL_SysWMmsg


Downvoting because you disagree with someone isn't what we strive for here, is it?

Edit: Ok I get it, disagreement is an encouraged reason for downvotes on HN. If a reason for an opinion is given, a refuting response is preferred, but in this case, no reason was given.


This is not Reddit. It is considered acceptable, even preferable, behavior here, and has been basically forever.

https://news.ycombinator.com/item?id=16131314


Thanks for the link. I did not know. Also thanks for your work on rust.

And to be fair, there was no reason given for the dislike of headers, so all it contributes to the discussion is that there is an alternative view.

I like modules; Rust has been my daily driver at work for a couple of years now, with some CUDA for things that are too slow, and I usually prototype algorithms in MATLAB.

I can see the viewpoint though, that headers made for an easy way to convey an API before C++ added all the inline stuff.


It's all good. You're welcome. I can see it both ways: it sucks to get downvoted and not know why. But also, here we are talking about something that's not C++ modules in a thread about C++ modules.

The surest way to get massively downvoted here is to complain about downvotes, though. People seem to relish in it.


And this is why HN has degenerated into a homogeneous echo chamber for many topics.


I don't remember ever seeing another reason I had a comment downvoted.


I used to sometimes get downvoted heavily for disagreeing with longtime HN favourite Thomas Ptacek, but that eased off a lot after one time I'm like "LOL, the transcripts for your show are so wrong" and Thomas misread that as me claiming the experts on the show are wrong rather than the transcripts so he was indignant.

Initially I got the most downvotes I think I'd ever seen†, and then Thomas realised, oops, he wasn't saying what I thought he was saying, and it shot back up over the course of an hour or so. IIRC the auto-transcripts for example transcribe PKIX (an RFC explaining how we shall use the X509 certificate system for the Internet, and the regime resulting from that RFC) as "Peacocks". Hilarious. But, not a criticism of the person on the show (Ryan Sleevi I believe, who I rather like although we've never met in person).

I have written things I don't feel good about, especially trolling groups who believe something I don't, and deserved to be downvoted even if they were strictly true. But those mostly get -1 or -2 or something. Thomas' fans just keep hitting that down arrow apparently.

Other than disagreeing with Thomas, the main way I got sizeable negative voting was being wrong and then going to sleep without correcting it. Say 2+2 = 3 on a Thursday evening, but correct it to "1+2 = 3" a quarter hour later? Probably get one or two downvotes. Leave it that way overnight? Wake up to 10 downvotes at least. People will kick you if you don't react.

I remember picking an erroneous example in one of the endless Rust v C++ threads, and then going to bed, and yup, very clear I was wrong when I woke up.


I always find it weird when people even pay attention to what the name of the person is.

A comment should be evaluated on its own, not based on who it was written by.


That's a nice sentiment, but I'm not sure it's always a practical one to take simply because no one has the expertise/time/knowledge to evaluate all of the comments they read for accuracy/completeness/etc, especially if commenters are disagreeing on a subject you are not an expert in.

For example, if a comment claims an article is wrong about the development and use of the Nagle algorithm, I am very much not well-equipped to evaluate the comment on its own since that is not an area I know much about. If I see that comment was written by Animats, though, I can be reasonably confident I should trust the comment over the article, since he invented the algorithm.


The correct approach is to assume that people bear no ill will -- I believe that is even part of the posting guidelines.

If someone makes the statement that something in an article is incorrect, and fails to take into account some minutiae, it is probably because the person is knowledgeable about the topic and believes it to be the case.

There is no need to research who the author is, or to mod the person down because he appears to be a contrarian that doesn't follow the hivemind. If anything, that's just an interesting subject to look into.


> The correct approach is to assume that people bear no ill will -- I believe that is even part of the posting guidelines.

It is, but intent is distinct from correctness. I can believe that everyone is commenting in good faith and stating their honest beliefs, but that is little help when commenters take contrary positions and I would like to determine who is correct despite not being knowledgeable myself.

Under those circumstances, authorship is a useful signal. It doesn't need to be the sole determining factor, of course, but it's better than nothing.


I don't personally believe in using people's fame as a proxy for assessing correctness.

I think it best to make your own opinion based on the arguments presented to you. Sure, someone may convincingly say wrong things, and you might end up believing them, but then it's up to you to develop your own ability to see through bullshit (of which there is a lot in the tech space, especially when you get close to VCs).


> I don't personally believe in using people's fame as a proxy for assessing correctness.

I'd hope most people would agree with you, since fame is not the same as expertise. A well-known author who happens to talk about a subject they are knowledgeable about can be worth paying attention to, but the same author talking about a completely different field may (should?) warrant more skepticism. Authorship can be a useful signal, but that doesn't mean it is always a useful signal.

> but then it's up to you to develop your own ability to see through bullshit

I mean, yes, that is ideal, but unfortunately not all BS is equally detectable. Sometimes it really does take intimate knowledge to know someone is incorrect, and achieving that for everything just is not realistic for most, if not all, people.


It's even weirder when you notice the names. I remember a few instances where the person who replied to my post turned out to be the creator of a major programming language.

I try to avoid looking at the names until after the discussion is over. When I remember the fact I'm posting among giants, I find it difficult to post at all. HN is a place where the person who created an algorithm shows up to explain it to you. The fact such people post here is the number one reason to come here.


I believe you'll do them a favour by not paying attention to who they are.


You're just consistently a jerk in all of your comments. I'm not sure what else you expect.


> make

No, please. If you're using C++20 modules, you also owe it to yourself to use a modern build system...

Edit: it seemed like I fired right into a Makefile-shaped hornet's nest.


There isn't one for C++, unfortunately.


Don't know what qualifies as a Scotsman to you, but CMake is more modern than Makefiles.


Yep, CMake. It has a weird string-typed syntax, and an even weirder function/function parameter syntax, but those are the most egregious issues.

Target-based CMake is extremely straightforward to get started with, and CMake + Ninja is also frequently significantly faster than the alternatives (autoconf tools). It also easily hooks into package managers for C++ (Conan, vcpkg) and today, users can write C++ like they do JS or Rust: have a `vcpkg.json` file, top-level `CMakeLists.txt` file, and that's it.

I've worked enough with Autotools to understand how ridiculously painful they are. Look at this monstrosity[1].

CMake may be hard to get started with, but it's easy to append, maintain, and refactor code that builds with CMake.

[1]: https://en.wikipedia.org/wiki/GNU_Autotools#/media/File:Auto...


Don't use make! Use a makefile generator!

(hope the sarcasm is obvious)

CMake is pretty good, so is Meson, so is Conan. All of them have some unfortunate shortcomings that make the C++ ecosystem hard to work in, but the biggest problem is diversity of build systems for dependencies.


XMake is by far the best option. There is nothing even close in feature completeness, nor in user friendliness.


Modules are another of those features that were added to C++ due to peer pressure from other programming languages.

The thing is that they're heavily implementation-specific. Two of the biggest compiler vendors pushed the model that worked best for their implementation. The Microsoft model won the committee politics, which is why they're the only ones with an implementation. Too bad most C++ developers don't really use Microsoft software to begin with.

They should have just stuck with a simple PCH-but-you-don't-have-to-include-it-first model. That would have been straightforward and immediately useful to everybody.


MSVC seems to have the best funding model of the three major compilers. I suspect that funding has more to do with its adoption velocity than any head start on the implementation does, especially given how long modules have been in progress.


As I said, the module system in C++20 is literally the Microsoft model.

Clang had another model that was rejected, and most of the years in committee were actually about finding a compromise between Microsoft's and Clang's approach. In the end, Clang wasn't particularly happy with the result either.


Clang modules were basically "let's try to automate precompiled headers". They lack the isolation offered by proper modules. Even if they're not faster, the isolation is something I'm looking forward to.
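
A small sketch of the isolation named modules give you (the module name and function are made up): macros and includes in the global module fragment stay private to the module, which a plain #include could never guarantee:

    // config.cppm -- hypothetical module interface unit
    module;                      // global module fragment: includes and macros live here
    #include <string>
    #define INTERNAL_FLAG 1
    export module config;

    export std::string flag_name();

    // user.cpp
    import config;
    // flag_name() is usable here, but INTERNAL_FLAG is not visible: macros in a
    // named module do not leak to importers (a #include would have leaked the define).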


If it were literally the MSVC implementation, they would have been done three years ago.


They were. Gaby has been giving presentations about it at CppCon since 2015. By 2019 it was mostly about teaching people how to use it.


I'm positive there has been nontrivial work to support C++ modules going into MSVC, msbuild, VS, MS STL, etc., since 2019. And there is still plenty of work to do, for instance helping build systems know when they need to produce more than one IFC for a given module (right now build systems assume modular C++ code is ODR bug free).


That sounds like it's mostly integration and tooling.


Also bug fixes. And there are still major headaches around when names are provided both through textual inclusion and import statements.

But integration and tooling is required for any major amount of adoption. IntelliSense has to work. Existing LSPs have to work.


I personally do not use IDEs, completion or even off-the-shelf build systems. I believe that to be a common situation.


MSVC is also the least expansive of the three compilers. It has a much smaller set of languages it supports (no C, ObjC, etc., for example), and a much lower focus on performance [1]

That's not a criticism necessarily, but just to add that I think they likely have a more agile codebase as a result.

[1] https://reddit.com/r/cpp/s/BOj34UtKvu


Plus its stdlib is developed by STL, the most fittingly-named guy to be developing a C++ standard library.



