
C++ Modules Might Be Dead-On-Arrival - pplonski86
https://vector-of-bool.github.io/2019/01/27/modules-doa.html
======
WalterBright
D made the decision at the beginning to have filename==modulename specifically
to avoid this problem. So:

    
    
        import std.stdio;
    

means look for:

    
    
        std/stdio.d

~~~
slavik81
Have there been any downsides or annoyances with that approach?

~~~
WalterBright
1\. The filename characters have to be valid D identifier characters. This
annoys some people.

2\. Because Windows has case-insensitive filenames, and Linux, etc., have case
sensitive filenames, we recommend that path/filenames be in lower case for
portability. This annoys some people.

3\. There are command line switches to map from module names to filenames for
special purposes. They're very rarely needed, but invaluable when they are.

Overall, it's been very successful.

~~~
coldtea
> _The filename characters have to be valid D identifier characters. This
> annoys some people._

I'd say "valid X language identifier characters" should always be ASCII.

I never understood the BS fad for unicode identifiers.

Wanna allow some math symbols? Maybe. The full unicode gamut, so that you can
have a variable named shit emoji? Yeah, no.

~~~
ZiiS
This stops most of the world from naming things in their native language.

~~~
ckastner
As a non-native English speaker, I understand the desire to name things in my
native language (German), but for all but linguae francae, naming things in a
native language presents an obstacle to sharing those things with others.

Compare итератор and 迭代器 (produced by Google Translate), which are complete
mysteries to me. If my intention were to reach as many people as possible,
I'd use "iterator" (which, coincidentally, works for English _and_ my native
German).

~~~
johannes1234321
There are technical terms and there is domain terminology.

I once worked in finance, and there is a difference between GAAP accounting
and German accounting rules. If my algorithms had used English terminology to
be consistent with the technical terms, it would have been confusing in each
review. Using German terms (even combined with English "get" or "set", like
"getBetriebsertrag") was beneficial there, even though it always confused new
members of the team.
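
A hypothetical sketch of that mixed-vocabulary convention; the class and
member names here are invented for illustration:

    
    
        // English get/set prefixes combined with German domain terms.
        class Jahresabschluss {
        public:
            double getBetriebsertrag() const { return betriebsertrag_; }
            void setBetriebsertrag(double v) { betriebsertrag_ = v; }
        private:
            double betriebsertrag_ = 0.0; // "Betriebsertrag": operating income
        };
    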

------
dkarl
As someone who programmed professionally in C++ for eight years but hasn't
touched it since 2011, all I can think of is that it would be easier to learn
Rust than to catch up to current C++.

Not that my memories of C++ are bad or that I'd avoid using it again! It's
just that it would be like trying to reconnect with someone I haven't seen
since college. I'm curious, but I don't know if it would be worth the
awkwardness.

~~~
cheez
As someone who programmed professionally in C++ for almost 20 years, you're
wrong. Just read "A Tour of C++" by Stroustrup and you are 80% of the way
there. It's a very simple book.

~~~
apta
And the remaining 20% will take 200% of the time? :P

~~~
mempko
Isn't this true with Rust too?

------
ziotom78
The idea of forcing modules to be defined in files with a deterministic way to
pair module names and file names seems pretty reasonable. Two examples come to
my mind:

\- Go doesn't place requirements on the names of the files defining a package
[1]. However, it doesn't have a preprocessor either, so the problems described
in this article (specifying the module name within an #ifdef) are impossible.

\- FreePascal has a preprocessor [2], but it defines a deterministic algorithm
[3] to find the files containing a unit (the FPC equivalent of a module).
Moreover, the compiler creates two files for every unit: a .o object file, and
a "unit description file" [4], much like the C++ proposal.

It seems that FPC's case is the most similar. I think the author is right; the
C++ committee should adopt a deterministic way to find the names of the files
defining a module.

[1]
[https://golang.org/ref/spec#Packages](https://golang.org/ref/spec#Packages)

[2] [https://www.freepascal.org/docs-
html/current/prog/progse4.ht...](https://www.freepascal.org/docs-
html/current/prog/progse4.html#x135-1360002.1)

[3] [https://www.freepascal.org/docs-
html-3.0.0/user/usersu7.html](https://www.freepascal.org/docs-
html-3.0.0/user/usersu7.html)

[4] [https://www.freepascal.org/docs-
html/current/prog/progse13.h...](https://www.freepascal.org/docs-
html/current/prog/progse13.html#x151-1520004.1)
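
A minimal sketch of the kind of deterministic lookup being advocated, mapping
a dotted module name to a relative path (the ".cppm" extension is an
assumption on my part; none is standardized):

    
    
        #include <iostream>
        #include <string>
    
        // "std.stdio" -> "std/stdio.cppm"
        std::string module_to_path(std::string name) {
            for (char& c : name)
                if (c == '.') c = '/';
            return name + ".cppm";
        }
    
        int main() { std::cout << module_to_path("std.stdio") << "\n"; }
    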

~~~
ithkuil
> \- Go doesn't place requirements to the name of files defining a package
> [1].

In Go, the import path and package name are two distinct things: the import
path locates and identifies the package, while the package name acts as the
default name for scoping the qualified names exported from that package.

Furthermore, in Go the file names are not part of the import path: the name of
the directory containing the files that together define a single package is
part of the import path.

An imperfect analogy with C++ would be:

\- Go import paths <-> path to the included file (header)

\- Go package names <-> a namespace inside that included file.

\- Go package filenames <-> sections within the included file
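
Rendered in C++ terms, the analogy is roughly (illustrative only):

    
    
        // ~ Go import path: locates the unit being pulled in
        #include <vector>
    
        int main() {
            // std:: ~ Go package name: the scope for exported names
            std::vector<int> v{1, 2, 3};
            return static_cast<int>(v.size());
        }
    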

------
gmueckl
What really scares me is that the use of the preprocessor is still allowed
inside modules. This might have been a chance to define a clean break at least
with #include and the shortcomings thereof. A syntactically and semantically
saner replacement for #define amd #ifdef could have laid the groundwork for a
much improved tool support. But if the preprocessor is dragged into modules as
a whole (sans interaction between modules), the only gain is in language
complexity.

I'm generally disliking the need to maintain separate header and
implementation files. Maintaining both is time consuming and putting
everything in headers is no panacea, either. Now modules seem to add another
type of interface definition to the mix that would need to be maintained after
a project adopts modules.

~~~
stochastic_monk
How would you write different code for different architectures or based on
compilation flags without preprocessor directives? Rust has directives for
specifying which version to compile, but C++ currently doesn’t. However, I
find #if __AVX512BW__ / #elif __AVX2__ / #elif __SSE2__ / #else / #endif to be easy and
flexible, allowing only a subset of code to vary by architecture. I also find
that macros allow me to write much more concise, maintainable code.

It’s archaic and low level, but it’s also powerful and expressive. Replacing
the CPP would probably just require a new language.
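
A minimal, self-contained version of that dispatch pattern; the branch bodies
here are illustrative:

    
    
        // The compiler predefines these macros when the corresponding
        // x86 instruction sets are enabled (e.g. -mavx2, -mavx512bw).
        #include <cstdio>
    
        #if defined(__AVX512BW__)
        const char* isa() { return "AVX-512BW"; }
        #elif defined(__AVX2__)
        const char* isa() { return "AVX2"; }
        #elif defined(__SSE2__)
        const char* isa() { return "SSE2"; }
        #else
        const char* isa() { return "scalar"; }
        #endif
    
        int main() { std::printf("compiled for %s\n", isa()); }
    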

~~~
tambre
What about using `if constexpr`? Sure, there currently are no equivalent
constant variables to use, but adding those should be very easy.

~~~
stochastic_monk
if constexpr only works for procedural code, not data members (e.g., use
__m128i v[4] for SSE2, __m256i v[2] for AVX2, etc.). It could be templated,
as long as there were a compile-time method besides the CPP to get
architectural information, so that would be a way (if much more verbose)
forward.
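
A sketch of that templated way forward, with a made-up compile-time constant
standing in for the architecture query that today only the CPP can supply
(x86 intrinsics assumed):

    
    
        #include <immintrin.h>
    
        enum class Isa { sse2, avx2 };
        constexpr Isa isa = Isa::avx2; // hypothetical: supplied by the build
    
        // Storage is selected by specialization; if constexpr cannot
        // change a class's data members.
        template <Isa> struct Vec;
        template <> struct Vec<Isa::sse2> { __m128i v[4]; };
        template <> struct Vec<Isa::avx2> { __m256i v[2]; };
    
        using vec_t = Vec<isa>;
        int main() { vec_t v; (void)v; }
    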

That being said, if constexpr is great; iterating over either sparse or dense
arrays in Blaze without wrapping the code in two functions was mind-blowing
when I discovered I could do it so easily.

~~~
gmueckl
That is why static if and version in D work very differently. Andrei
Alexandrescu criticized if constexpr in C++ for being a poor copy of static if
that misses the point, but that criticism was dismissed. Even so, it is
powerful enough to handle almost all cases of conditional compilation.

------
richard_todd
The article actually says c++ would be best to follow Python’s import model,
which makes no sense to me. It doesn’t work to try to get the compiler to go
compile other modules when you import them, because it has no way to know how
you want them compiled (build switches, #defines, pre-compilation steps, etc).
The eventual answer has to leave the dependency-relationship of modules to the
build system; I don’t understand how that can even be up for debate in a c++
context. If the proposals include pulling a standard build system into the
compiler, then they have very little chance of gaining traction.

~~~
carlmr
Say modulename == filename. Then you can create a build system that builds a
DAG and waits to compile a module until the BMI files it imports are there.

If you have the other module as a build step, it will compile. If not, you get
a linker error.

I don't see the issue here?

Especially if you force module imports to be at the top of the file, you could
scan just the start of each file to decide whether you can continue, pausing
until the needed BMIs appear or until all modules have finished and you know
yours isn't coming.
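
A toy sketch of that wait-until-ready scheduling, with a hypothetical
dependency map already scanned from the sources (assumes the graph is
acyclic):

    
    
        #include <iostream>
        #include <map>
        #include <set>
        #include <string>
    
        int main() {
            // module -> modules it imports
            std::map<std::string, std::set<std::string>> deps = {
                {"app", {"net", "util"}}, {"net", {"util"}}, {"util", {}}};
            std::set<std::string> built; // modules whose BMIs exist
            while (built.size() < deps.size()) {
                for (const auto& [mod, needs] : deps) {
                    if (built.count(mod)) continue;
                    bool ready = true;
                    for (const auto& m : needs)
                        ready = ready && built.count(m) > 0;
                    if (ready) { // all imported BMIs are available
                        std::cout << "compile " << mod << "\n";
                        built.insert(mod); // emits this module's BMI
                    }
                }
            }
        }
    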

~~~
quietbritishjim
I might've misunderstood your comment, but I think you're referring to a
different meaning of "Python’s import model" to the parent comment. Here
"Python's import model" refers to the fact that when Python comes across an
import statement it will potentially pause compilation of the current
translation unit to go and compile something else. It does not refer to the
fact that Python maps import statements to directory names and file names.
Here is the relevant quote from the article:

 _When a new import statement is encountered, Python will first find a source
file corresponding to that module, and then look for a pre-compiled version in
a deterministic fashion. If the pre-compiled version already exists and is up-
to-date, it will be used. If no pre-compiled version exists, the source file
will be compiled and the resulting bytecode will be written to disk. The
bytecode is then loaded._

The article suggests using this idea in C++ and the parent comment objects,
but then it sounds like you're saying it wouldn't be needed anyway (so you're
disagreeing with the article too?).

------
reissbaker
Not deeply familiar with C++ modules, but I've built and maintained fairly
large build systems for other languages (and written my fair share of C++),
and from this article I'm not quite sure where the intractable problems lie.
It seems like the .bmi files are effectively an optimization that allows for
fast incremental compilation, but a compiler doesn't actually _need_ them to
run compilation from scratch: it knows how to generate them, so if they're
missing it can fall back to the old, slower #include-style compile-every-file
behavior, generating the .bmi files as it goes. It doesn't seem like they add
new slow paths that you can't already construct today with macros and
#include, so it's hard to see why they'd be DOA: first time compilation should
be no slower, but incremental compilation should be much faster thanks to
interface stability.

Maybe I'm missing something?

It's not like C++ modules were designed by random nobodies, though; this has
been worked on by build infra engineers at major companies with enormous C++
codebases like Facebook, and compiler maintainers e.g. the Clang maintainers.
It's possible they completely forgot to think about parallel builds, but that
seems at least a little unlikely.

~~~
jsnell
But you can't just compile-every-file. Each file can depend on the outputs of
compiling some unknown set of other files. The compiler needs to become a
build system, or the build system needs to become a compiler.

The clang modules proposal had the concept of mapping files, mapping module
names to file names.

Companies like Facebook will presumably use proper build systems that already
encode the dependency information in the build files rather than try to
autodetect it. In that kind of an environment this proposal probably isn't
particularly painful.

~~~
coliveira
The compiler will not become a build system because this is out of scope for
C++. With or without modules, C++ will continue to rely on an external
dependency management tool, such as a Makefile. The introduction of modules
will not change anything in this respect.

~~~
jsnell
Indeed. You've now taken one solution off the table. The other one is for the
build system to become a compiler, which is equally unacceptable. That leaves
you with manually encoding all dependency information in the build files.
Which most people aren't doing (the exception being Bazel-like build systems
which enforce that).

That seems to leave us with just one conclusion: the article is right, and
most of the ecosystem will never migrate to modules, leaving us with the worst
of both worlds.

~~~
ginko
The build system doesn't need to become a compiler. It just needs to parse the
source files for import statements in order to create a DAG.

~~~
jcelerier
> The build system doesn't need to become a compiler. It just needs to parse

parsing C++ is mostly equivalent to becoming a C++ compiler.

~~~
geezerjay
> parsing C++ is mostly equivalent to becoming a C++ compiler.

It really isn't. Parsing a language just means validating its correctness with
respect to a grammar and extracting some information in the process. Parsing
is just the first of the many stages required to map C++ source code to valid
binaries.

~~~
tcbrindle
The presence of template specialisations and constexpr functions means that
the GP is right here; you cannot decide whether an arbitrary piece of C++ is
syntactically valid without instantiating templates and/or interpreting
constexpr functions. Consider

    
    
        template <int>
        struct foo {
            template <int>
            static int bar(int);
        };
    
        template <>
        struct foo<8> {
            static const int bar = 99;
        };
    
        constexpr int some_function() { return (int) sizeof(void*); };
    

Now given the snippet

    
    
        foo<some_function()>::bar<0>(1);
    

then if some_function() returns something other than 8, we use the primary
template and foo<N>::bar<0>(1) is a call to a static template member function.

But if some_function() does return 8, we use the specialisation and
foo<8>::bar is an int with value 99; the tokens then parse as (99 < 0) > (1):
99 < 0 is false (promoted to the int 0), which is then compared against 1.

That is, there are two _entirely different but valid parses_ depending on
whether we are compiling on a 32- or 64-bit system.

Parsing C++ is hard.

EDIT: Godbolt example:
[https://godbolt.org/z/yR3YHW](https://godbolt.org/z/yR3YHW)

~~~
ginko
You only need to parse the "module <module name>" and "import <module name>"
statements. No need to parse all of C++ for that. You could probably even do
that with a regex.
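
A toy scanner in that spirit, assuming declarations sit one per line near the
top of the file (and ignoring the preprocessor caveat raised in the reply
below):

    
    
        #include <fstream>
        #include <iostream>
        #include <regex>
        #include <string>
    
        int main(int argc, char** argv) {
            if (argc < 2) return 1;
            std::ifstream in(argv[1]);
            // matches: [export] module foo.bar;  /  import foo.bar;
            std::regex decl(R"(^\s*(?:export\s+)?(module|import)\s+([A-Za-z_][\w.]*)\s*;)");
            std::string line;
            for (std::smatch m; std::getline(in, line);)
                if (std::regex_search(line, m, decl))
                    std::cout << m[1] << ' ' << m[2] << '\n';
        }
    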

~~~
galangalalgol
It also has to do all the preprocessing to see which import statements get
hit. I don't think templates could control at compile time which module to
import, at least I hope not.

------
tokyodude
> In this respect, C++ would be best to mimic Python’s import implementation:
> When a new import statement is encountered, Python will first find a source
> file corresponding to that module, and then look for a pre-compiled version
> in a deterministic fashion. If the pre-compiled version already exists and
> is up-to-date, it will be used. If no pre-compiled version exists, the
> source file will be compiled and the resulting bytecode will be written to
> disk.

What??? How would that happen? Are modules always compiled with zero flags?
Because in non-module C++, how the dependent module gets compiled is defined
in the build system, so in order for the compiler to build a missing .bmi it
would have to ask the build system how to build it.

That seems to answer the question. What happens if foo.bmi does not exist?
Answer: you get a compilation error (missing foo.bmi, or foo.bmi out of date).
You then need to go fix the dependencies in your build system to make sure
foo.cpp gets compiled before bar.cpp.

Right?

I get that might suck, but it's not unprecedented. Lots of builds have
dependent steps. Maybe in order to implement C++ modules, build systems will
need an easier way to declare lots of dependencies, whereas now dependencies
are an exception?

~~~
martincmartin
_What??? How would that happen? Are modules always compiled with zero flags?
Because in non-module C++, how the dependent module gets compiled is defined
in the build system, so in order for the compiler to build a missing .bmi it
would have to ask the build system how to build it._

The build system just figures out how to invoke the compiler. The compiler
does the actual building. When the compiler runs, it has all the flags.

Remember, headers in C / C++ are basically a file level construct. They happen
before you even split the file into tokens. #include just means "do the
equivalent of opening that file in a text editor and copy and paste it in
place of this #include line."

The compiler is already compiling header files, as part of compiling cpp
files.

~~~
tokyodude
But we're not talking about header files, we're talking about modules. Modules
are a new concept, so how they work is up for definition. To say that if
bar.cpp uses module foo, then foo.bmi must already exist, is not an
unreasonable rule.

Modules work with the (new) import statement, not the #include statement. They
are not the same as #include at all.

In fact this is spelled out in the article in the first goal

> The “importer” of a module cannot affect the content of the module being
> imported. The state of the compiler (preprocessor) in the importing source
> has no bearing on the processing of the imported code.

In other words, the flags passed in when compiling bar.cpp have no effect on
foo.bmi. foo.bmi is the result of the flags passed in when foo.cpp was
compiled and those flags can only be gotten from the build system if foo.bmi
does not exist.

------
est31
Just for comparison, in Rust this is solved in a very easy way: If you are in
a module foo and have a mod bar; statement, then the compiler will go search
for bar.rs and for bar/mod.rs. If neither are found, it reports an error.
There is only one path where the compiler starts the search from: the foo/
directory (note that the foo module itself can be declared in the foo
directory or outside as foo.rs).

Sometimes C++ can use its age as an excuse for being super complicated, but
here the modules implementation of C++ is younger than Rust's.

~~~
sanxiyn
Unlike C++ modules, Rust modules are not separately compiled. Better
comparison is with Rust crates.

~~~
est31
In Rust's crate compilation model, there's certainly some unexploited
parallelism. Often as much as half of the compilation is spent in the LLVM
phases. By then, all the MIR is already around and only sitting there, waiting
on LLVM to finish. Downstream crates could already start their compilation
with the MIR data only. Only the LLVM phases of the downstream crates need the
LLVM data of the upstream crates. Assuming that half of the time is spent in
MIR, half in LLVM IR, you would be able to double your parallelism, or halve
the length of the critical path through the compilation graph.

~~~
zozbot123
Isn't this pipelining as opposed to parallelism? I assume that multiple
downstream crates are already compiled in parallel whenever possible.

~~~
est31
Yes, often things are compiled in parallel, but there are also tight spots in
the compilation graph where only one crate is compiling because all later
crates rely on it.

------
haberman
I don't get it. What about modules inherently forces them to be imported via
this newfangled module namespace (eg. import std.iostream) instead of being
imported by their source filenames (eg. import "iostream.h")?

If I'm understanding the post correctly, the entire problem they are facing is
that you have to scan all source files to build this module->filename mapping.

None of the "essential goals" listed at the top of the blog post requires that
modules be imported by namespace instead of filename, as far as I can see. So
why was this design chosen when it causes these problems?

~~~
Chabs
C++, as a language, has never cared about the notion of "files". The entire
standard is defined as a function of a "Translation Unit", which is an
abstract notion that we tend to associate with "a single .cpp file" by nothing
but convention.

Since modules operate at the language level, they need to operate on this
notion, which precludes importing by file.

~~~
brianberns
But a header is a file, no? And it is referenced explicitly by its file name.

~~~
gmueckl
No, the language of the standard carefully omits talking about files. This is
because there are still ancient mainframe operating systems around that do not
have typical hierarchical filesystems, but it is still technically possible to
provide a C++ implementation for these. Prescribing a module name to file name
mapping woukd not work in these environments either. This is also why #pragma
once was rejected and the replacement #once [unique ID] was invented instead:
just defining what is and isn't the same file for #pragma once turned out too
difficult to define.

~~~
ahaferburg
What I don't get, though, is why these ancient mainframes need the latest
version of the standard. I can't imagine the compiler writers for these OSs
being too eager to implement any change at all. You said "technically
possible"; are you implying that nobody actually does? What are these OSs?

To me this seems like a weird take on accessibility. In order to accommodate
that one OS that has some serious disabilities, everyone else has to suffer
the consequences. Why not build a ramp for that one OS, and build stairs for
everyone else?

~~~
gpderetta
IBM has multiple people on the standards committee, and they care a lot about
both backward compatibility and new standards. They alone were strongly
opposed to removing trigraphs from the standard.

Still, trigraphs were removed in the end; if there is enough support, the
committee is willing to break backward compatibility.

------
choeger
So you need to organize your project in a sane way regarding module
dependencies. Bummer.

OCaml and Haskell do just fine with proper modules and parallel builds. I
presume the same holds for Go, Rust, et al.

~~~
pjmlp
So far, from what I have understood, the anti-module movement seems to be all
about wanting to use modules as if they were header files, while magically
winning the compilation speedup of modules.

~~~
alexeiz
What movement? You got infected by politics to think everything is a
"movement".

~~~
pjmlp
Any language standardisation process is full of politics.

------
xenadu02
The C++ modules folks seem determined to learn nothing from Clang modules.

Clang’s module maps solve a lot of these problems. There is a known location
to find the module map and all headers from the module must be reachable from
the map.

~~~
pjmlp
Clang’s module maps aren't without issues, which is why they have been mostly
used by Google and no other C++ compiler vendor bothered implementing Clang's
design.

Even at Apple, which originally designed them, they just focused on making it
good enough for C and Objective-C system headers.

------
Ace17
Interesting talk with a critical flavour about C++ modules by John Lakos
(CppCon 2018):

[https://www.youtube.com/watch?v=K_fTl_hIEGY&feature=youtu.be...](https://www.youtube.com/watch?v=K_fTl_hIEGY&feature=youtu.be&t=2040)

------
sanxiyn
> The compilation ordering between TUs will kill module adoption dead in its
> tracks.

I agree, but that's because C++ TUs are too small. Rust works like this "dead-
on-arrival" way, but survives because TUs are larger.

------
muststopmyths
Granted I haven't looked too deeply into this in a while, but this article
seems to imply usage of modules in a way that is fundamentally incompatible
with the actual goals of modules.

For example, according to the clang documentation for modules [1], they are
meant to obviate the necessity of #including headers of libraries you are
linking to.

When you link with libraries, you expect them to be compiled already. If the
library is a dependency of your current project (in Visual Studio solution
parlance), then it will be recompiled if needed before your project is
compiled, if your build system is set up correctly. You will also take care to
point your build system to the correct version of the library you want to link
to, including versions that change depending on CPP flags etc.

I don't see how modules are any different.

Please enlighten me if I'm not fully grasping the point of the article.

[1]
[https://clang.llvm.org/docs/Modules.html](https://clang.llvm.org/docs/Modules.html)

~~~
comex
The "modules" feature documented on that page is Clang-specific functionality.
The article is talking about the newer Modules TS, which is currently in the
standardization process, and works completely differently. However, as to your
specific question, both Clang modules and the Modules TS aim to support
replacing #include entirely with module imports, including within a single
project.

~~~
muststopmyths
interesting. Will have to read up on it. thanks.

------
im3w1l
I haven't read much at all about this, but why couldn't modules work like
this?

\- Header and source files, like before. Allow putting all implementations in
the source file (including templates).

\- Allow private member functions to be defined in the source file even though
they were not declared in the class. Every function that is not declared in
the header gets no external linkage (including these ad-hoc private members).

\- Defines don't leak into a header file from the files including it unless
explicitly passed in (using some new syntax).

\- Defines can leak _out_ of header files. If the define already exists, it's
an error (the point of this is to make sure that order doesn't matter), unless
the origin of the define is the same (so if x includes y and z, and y includes
z, then a define in z would go into x directly and also through y, which would
not be an error).

This seems like it should be a strict improvement.

~~~
creato
> Header, source like before. Allow putting all implementations in the source
> (including templates).

Do you realize how hard this is to implement? This actually was part of the
original C++ standard (the export keyword), but no compiler successfully
implemented it in a way that was compatible with other compilers, and it was
eventually removed from the language.
This (and C++ modules) requires a standard intermediate representation of the
language that all compilers share, otherwise compiler A can't use a module
generated by compiler B.

This is why I've never really expected C++ modules to ever exist, or if they
do, it will be in a form that is much more limited than most people want or
expect. Either they'll only allow a subset of the language to exist in a
module, or the feature won't be much different from the "pre-compiled header"
feature offered by most C++ compilers, or modules won't be portable across
compilers (and maybe even versions of the same compiler).

~~~
bigcheesegs
This has always been a problem in talking about modules in C++. Everyone has a
different idea of what a module is.

Currently every compiler except MSVC plans to make module files version
locked. Different versions of the same compiler will use different module
files.

Personally I find little value in portable module files, as you need to be
able to rebuild modules anyway to handle pretty much any change to compile
flags.

------
jeffdavis
I have a lot of respect for C++ because it's still holding together (thriving
even) after changing and adding so much.

But I have to wonder, is adding one more big feature wise? Will modules be the
feature where it becomes impossible to create a compiler that's both useful
and conforming?

~~~
m12k
From my experience working on a commercial game engine, my guess is that
almost anyone working on a huge C++ code base (e.g. a game engine, browser, OS
or the like) would rank compilation speed as the #1 issue they would like to
see addressed. Modules seem like the best candidate to improve this, primarily
by limiting the scope of how much needs to be recompiled whenever a change is
made. So I think it's a feature that is worth quite a lot of compiler-writer
pain to get shipped, seeing how many man-years it could save. That said, I
don't really see why this would be all that tricky for compilers to implement
compared to other recent additions in modern C++ - do you have any particular
reason to think it would be? (Other than apparently some communication issues
within the body making the spec, as per the linked article.)

~~~
jeffdavis
My worry comes mostly from the article. I'm not informed enough on this topic
to say how feasible it is to resolve these concerns.

------
chris_wot
Don't know who this Thomas Rodgers is, but telling someone to STFU is hardly
going to be helpful.

------
eej71
I'm hopeful that modules can be salvaged. But I must admit - I find it hard to
follow along with what the current proposal even is. I've assumed that this is
the best and most complete summary of it. [http://www.open-
std.org/jtc1/sc22/wg21/docs/papers/2018/p110...](http://www.open-
std.org/jtc1/sc22/wg21/docs/papers/2018/p1103r2.pdf)

------
coliveira
I think the author is trying to map the way Python works onto C++, which
doesn't make sense. The types of problems that he mentions with the
preprocessor are ALREADY present in the compilation model used by C++. The
solution is that each project needs to manage dependencies using a Makefile or
a similar tool. The process is not automatic as with Python, but instead is
managed according to the needs of each individual project.

------
pierrebai
I'll admit right away to not having read the module design documents. I'm
basing my comments on the description given...

It seems to me that the module-interface unit (MIU) would be pretty much the
equivalent of present-day header files. So for two modules foo and bar, there
is no dependency on the order in which foo.cpp and bar.cpp are compiled,
because only the MIU needs to be compiled for a given module, and the design
ensures two MIUs are isolated. If they are mutually dependent, it's a bad
design, but the solution is the same as in Java: you need to build the MIUs in
the same compiler invocation. (In fact, that would probably happen
automatically, using the equivalent of today's -I include-directory directive
to find MIUs.)

Yes, that means you need to split your module into a clean MIU and the actual
implementation file, just like now you split them into a header file and a cpp
file.

Yes, you need MIUs to be "available" to be translated to BMIs globally, just
like you need header files to be available globally when compiling.
------
75dvtwin
WRT > Module interface unit locations must be deterministic.

My immediate thought was: create an SQLite database per some collection of
modules (a crate of modules?).

Each row in that table will have a unique 'business id' and a unique row_id.
The business id is a composite key on

module_name + exported_function_names_with_attribute_and_result_signature

Every business id, as defined above, can point to a source location, compiled
object location, last compilation time, and compilation state.

Moving crates to other locations will not change the business id. Moving the
compiled object location will not change the business id.

The SQLite database can have a network interface for multi-machine
compilation. It can be backed up/restored into a new environment and remain
unchanged, as long as the compilation flags for all the modules stayed the
same and the compiler versions are the same.

------
fooker
Except, several large companies already use clang's implementation of modules
internally.

------
petters
Python (mentioned in the article) usually works nicely without a build system.

C++ does not. The build system will also take care of this problem, just as we
currently, for example, define libraries in CMake.

------
norswap
> Ahead-of-time BMI compilation is a non-starter.

I didn't get why - one could imagine parallelizing BMI generation, then
parallelizing "normal" compilation.

The only issue I see is that you wouldn't know which BMIs you'd need, so you
would need to generate all of them (or regenerate those that are out of date),
or specifically list those that need to be generated in a build tool. Given
how the rest of the compilation pipeline works, is that undesirable?

------
purplezooey
_" As far as I can tell, the paper was not discussed nor reviewed by any
relevant eyes."_

I'll keep my irrelevant eyes to myself then, thanks.

------
IshKebab
Surely the binary module interface is just a binary version of the header
file? Are you _sure_ it has to be generated from `foo.cpp` and not just
`foo.h`?

It is a binary module _interface_ not binary module implementation.

------
phkahler
From TFA:

> _As this depth increases, modules grow slower and slower, while headers
> remain fairly constant for even “extreme” depths approaching 300_

Well before reaching 300 I'd have to ask: WTF are you doing? I mean really,
seriously, if your dependency chain is 1/10 that deep, I'd look for something
wrong.

I've often thought that including headers from inside headers in C or C++ is a
mistake. And that thinking is probably wrong. It makes sense when using a
library that may itself have a lot of components - I just include the top-
level header for the library. But even that is different from having a really
deep dependency chain.

Maybe - just maybe - people have shifted the spaghetti out of their code and
into the file structure.

~~~
richardwhiuk
If foo.hpp is:

    
    
      class Foo {
    
      };
    

Then if I must write bar.hpp as:

    
    
      #include "foo.hpp"
    
      class Bar {
         Foo f;
      };
    
    

I cannot forward declare Foo because in order to size Bar, I must know the
size of Foo. I must therefore include `foo.hpp` in `bar.hpp`, thus I must
include headers in headers, unless my headers are not allowed to contain class
definitions.

~~~
phkahler
Which is fine, but if the chain is getting too deep you probably have
excessive granularity and/or complexity. Foo could be defined in the same
header as Bar if they are always used together. I still can't see getting
anywhere near 300 levels deep in this stuff. You can also forward declare Foo
in the header if it's just referenced via pointers in Bar.

This is the type of complexity that a good software "architect" should be
trying to reduce rather than manage.
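
For the pointer case, a minimal sketch (names invented):

    
    
        // bar.hpp -- no #include of foo.hpp needed
        class Foo; // forward declaration suffices...
    
        class Bar {
            Foo* f; // ...a pointer's size doesn't depend on Foo's definition
        };
    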

~~~
richardwhiuk
Sure, but once you stray from blanket rules, it's harder to state
categorically that 300 is an imperative to fix, and harder to prevent it
occurring.

e.g. a blanket rule that a header file isn't allowed to include another header
file is trivial to enforce; one which says it can't be more than n deep is
subject to boundary pushing.

------
Jumziey
Ah, crap!

------
4bpp
Half of what came out of the C++ standard bodies in the past five years makes
me only half-jokingly wonder if the committees haven't been infiltrated by
deep-cover Rustaceans trying to run the language into the ground in a
plausibly deniable manner...

------
rafaelvasco
It appears to me that C++ will slowly fade away. Languages like Rust are
gaining traction fast, and most importantly, they're building a huge,
passionate, dedicated community, especially Rust itself. C++ people are slowly
getting curious: what's up with this Rust thing? Maybe I could give it a
try... That's how it happens, I guess.

~~~
smolsky
Wow, that is the most often made generic comment.

Are there other languages _like Rust_? Have you heard of large, production
code bases maintained at large successful companies that are written in Rust?

~~~
steveklabnik
> Have you heard of large, production code bases maintained at large
> successful companies that are written in Rust?

Google, Amazon, Facebook, Microsoft, Dropbox...

