
“Modern” C++ Lamentations - tpush
http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
======
ilovecaching
It's really crazy how much C++ has already evolved since the start of my
career (with C++98). C++ has been the Millennium Falcon of languages: creaky
and old, not pretty to look at, but still good where it counts.

That said, I completely switched over to Rust many months ago and rarely look
back. Rust is a tribute to C++, an incorporation of lessons learned that sheds
all of the baggage C++ will never be rid of. Rust also has less of a learning
curve than C++ (although both are quite high), simply because C++ has so many
odds and ends to keep track of these days.

Rust has also made me realize how bad class-based OO is as an abstraction. OO
glues operator overloading, methods, and inheritance together into one
package. Rust breaks those concepts into component parts that provide much
better abstractions, with fewer keywords and less jargon to worry about.

~~~
auxym
I'm not sure why anyone today would start a greenfield project in C++ instead
of D, rust or Nim. Maybe in niches where C++ is deeply rooted, like games.

~~~
FartyMcFarter
Because C++ is a stable, reliable and well supported (read: production-ready)
language that works on many platforms.

Last I heard, D was definitely not all of those (it doesn't even have a
straight story on garbage collection, as its creator said himself! [1]), and
I'm not sure about Rust or Nim.

[1] [https://www.quora.com/Which-language-has-the-brightest-future-in-replacement-of-C-between-D-Go-and-Rust-And-Why/answer/Andrei-Alexandrescu](https://www.quora.com/Which-language-has-the-brightest-future-in-replacement-of-C-between-D-Go-and-Rust-And-Why/answer/Andrei-Alexandrescu)

~~~
steveklabnik
Rust is used in production by hundreds of companies, from ones as large as
Facebook, Amazon, and Microsoft, down to tiny new startups.

~~~
int_19h
The visual tooling is still very meh though. Just compare the kind of code
completion you get for C++ (even something as gnarly as Boost) in modern IDEs,
to what RLS can do in VSCode.

~~~
steveklabnik
IntelliJ is heavily investing, and it’s already showing. We are too:
[https://ferrous-systems.com/blog/rust-analyzer-2019/](https://ferrous-systems.com/blog/rust-analyzer-2019/)

But yeah, if that matters to you, it’s less mature. We’ll get there!

------
tpush
The statement that I found most powerful:

> Compile time of this really simple example takes 2.85 seconds longer than
> the “simple C++” version.

> Lest you think that “under 3 seconds” is a short time – it’s absolutely not.
> In 3 seconds, a modern CPU can do a gajillion operations. For example, the
> time it takes for clang to compile a full actual database engine (SQLite) in
> Debug build, with all 220 thousand lines of code, is 0.9 seconds on my
> machine. In which world is it okay to compile a trivial 5-line example three
> times slower than a full database engine?!

~~~
userbinator
The size comparisons are also very useful, if only to show that even the
simple "non-Modern" C++ program already has quite a bit of bloat.

 _Compilation takes 0.064 seconds, produces 8480 byte executable_

I am experienced with Asm so I can come up with an instinctive order-of-
magnitude estimate, but even those who aren't should try to roughly estimate
how many machine instructions are required to implement that code, and come to
the conclusion that _over 8000 bytes_ for three nested loops with not much in
them, a comparison, some arithmetic, and a handful of library function calls
seems rather excessive. Even 800 would be on the high side --- somewhere
around 100-200 _bytes_ is my estimate.

Also, watch this 4k demo (everything you see and hear is generated in realtime
by a 4096-byte executable):

[https://www.youtube.com/watch?v=jB0vBmiTr6o](https://www.youtube.com/watch?v=jB0vBmiTr6o)

~~~
jcelerier
I don't think that this is a very meaningful comparison at these sizes,
because it entirely depends on your target. Just compiling a "foo.c" file with
"int main() { }" for sole content through "gcc -Os" produces a 16kb executable
on my linux machine, but I know very well that the exact same compiler will
produce something much smaller if I compile for, say, a bare metal AVR target
because the ELF file format has a lot of fluff.

~~~
gumby
> because the ELF file format has a lot of fluff.

I wouldn't call it fluff: even assuming no debugging info (which you won't
have in your AVR object file anyway), it has info needed at runtime (program
start time), such as where to load what into memory, how much initial
uninitialised memory to allocate when starting up, etc. It all gets used.

------
4bpp
I think that at this point, C++ basically has to be treated the same way as the
English language. Sure, you could write sentences that span three pages,
employ Shakespearean vocabulary or words that hardly anyone has known since
the abolition of Classics from university admissions tests, or encode another
layer of meaning in the bit pattern of when you do and when you don't use
Oxford commas. What makes something good English communication is not whether
you used those features correctly, employed the most in vogue ones or found
the "most elegant abstraction" ("there's this one word attested in a handful
of 14th century sources that captures exactly what I want to say, so I'll use
it without explanation"), but whether you have struck the right tradeoff of
minimising the combined effort put in by you and your audience and conveying
your message faithfully.

Of course, for English, this is exactly the sort of problem that is addressed
by style guides (and teachers/peers reacting to you, and natural selection). I
think what we need is more and better opinionated C++ style guides that don't
just talk about what line to place your { on, but have the confidence to say
things on the level of "std::move may add a marginal amount of efficiency, but
greatly detracts from the legibility of code. Avoid using it altogether."

~~~
4bpp
Personal opinions on the "right subset" of C++:

* Yes to classes.

* Yes to inheritance and virtual member functions, because interfaces are a very useful and easy to understand abstraction. Moreover, especially for games, idioms such as "class Monster : public DamageableObject" are really the most natural thing. However, avoid situations in which you need dynamic_cast like the plague. No to virtual inheritance.

- I really wish class memory layout were standardised more, but at least some
non-standard assumptions are fairly safe in my experience: e.g. class A : B
{...} ... B* x; assert((long)static_cast<A*>(x) == (long)x);

* Before you start using "friend class", consider whether you could get around it somehow. If a class is not public-facing, set everything public and enforce encapsulation by self-discipline rather than language features.

* Yes to templates in their C++ 101 application of making a $type-specialised version of classes and functions. No to using them to perform any real compile-time computation, or anything of the type Boost pulls. No to Boost.

* "Keep the parts of your code that only do C stuff in C".

- printf over iostream.

- (de)allocate classes with new/delete, but POD with malloc/free.

- exception: std::string over char*, because it is really that much more
convenient.

* Yes to lambdas, but only pass them as function arguments if there is no good alternative, because of legibility. Avoid stuff like the for_each(..., []{ ... }) in the Ranges example.

* Yes to STL containers and algorithms, because everyone knows how they work and they are usually good enough.

* Range-based for over for(std::container::iterator i=...); see the sketch after this list. Using the Ranges TS when a plain for loop would do strikes me as novelty-chasing and violates "do C stuff in C".

* Cautious yes to auto. It does somewhat detract from comprehension in some use cases. Don't just auto everything because you are too lazy to figure out the type.

* std::shared_ptr only when you actually think you can't answer the question of what the appropriate point to deallocate a piece of memory is, or at least not without significantly changing the structure of your code.

* Avoid std::tuple, std::variant and other awkward STL attempts to replicate functional language idiom without the syntactic support. It's terrible to read. Unfortunately, STL containers sometimes force you to use pair.

* No to exceptions; their interaction with other language features is awkward, and the eternal difficulties with platform support make me feel no confidence in the reliability of the feature. (I'm unfamiliar with the bowels of the implementation, but it seems like unwinding C++ stacks is fundamentally a hard problem.)

* No to novelty features such as user-defined literals.

* goto only to break out of multiple levels of nested loops.

* Coroutines strike me as something that's unambiguously cool but so far removed from the standard idiom of C++ programming that I'm inclined to say no. Maybe there's an alternative style guide you could write that says yes to coroutines and no to some of the things above. Multi-paradigm should not mean "use all the paradigms".

(I'm very happy to be persuaded otherwise on any of these points.)
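
To illustrate the range-based for point above, a minimal sketch of the
contrast I mean (printf used per the rule above):

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        // Old style: iterator bookkeeping obscures the intent.
        for (std::vector<int>::iterator i = v.begin(); i != v.end(); ++i)
            printf("%d\n", *i);

        // Range-based for: same loop, intent up front.
        for (int x : v)
            printf("%d\n", x);
    }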

~~~
int_19h
> However, avoid situations in which you need dynamic_cast like the plague.

You need dynamic_cast for dynamic interface queries ("does this object support
X?"), even in the absence of virtual inheritance.

Some would say that there's no downside to always using it when you can. If
what you're doing is an upcast, it's a no-op. If it's a downcast to the wrong
type, you at least get a null pointer (or a std::bad_cast exception, for
references) instead of an invalid pointer that may or may not blow up.
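
A minimal sketch of the kind of interface query I mean (all the names here
are illustrative, not from the article):

    #include <cstdio>

    struct Damageable {                      // the "does this object support X?" interface
        virtual void takeDamage(int hp) = 0;
        virtual ~Damageable() {}
    };

    struct GameObject {
        virtual ~GameObject() {}             // polymorphic base, required for dynamic_cast
    };

    struct Monster : GameObject, Damageable {
        void takeDamage(int hp) override { printf("ouch: %d\n", hp); }
    };

    void hit(GameObject* obj) {
        // Pointer form: yields nullptr when the query fails, never a bogus pointer.
        if (Damageable* d = dynamic_cast<Damageable*>(obj))
            d->takeDamage(10);
    }

    int main() {
        Monster m;
        hit(&m);
    }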

> Before you start using "friend class", consider whether you could get around
> it somehow. If a class is not public-facing, set everything public and
> enforce encapsulation by self-discipline rather than language features.

Why? It's a language feature that's there to help you enforce said discipline.
What's the downside?

> I really wish class memory layout were standardised more, but at least some
> non-standard assumptions are fairly safe in my experience: e.g. class A : B
> {...} ... B* x; assert((long)static_cast<A*>(x) == (long)x);

That the object is allocated at the same address as its base subobject is
guaranteed by the standard, if I remember correctly. Your particular assert is
not, though, because long is not guaranteed to fit a pointer (and doesn't, on
Win64).
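
A portable spelling of that assert, as a sketch, would use std::uintptr_t,
which (where provided) is guaranteed to be able to hold a pointer value:

    #include <cassert>
    #include <cstdint>

    struct B { int b; };
    struct A : B { int a; };

    int main() {
        A obj;
        B* x = &obj;
        // Checks the same (non-standard) layout assumption without
        // truncating the pointer on Win64.
        assert(reinterpret_cast<std::uintptr_t>(static_cast<A*>(x)) ==
               reinterpret_cast<std::uintptr_t>(x));
    }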

> printf over iostream.

Not only do you lose type safety, it's also extremely easy to write code that
kinda sorta works but actually doesn't. The most common case is passing a
struct with a single field to printf - this is actually implementation-
defined, and some implementations detect it and treat it as invalid, while
others just put the contents of the struct on the stack. So it happens
sometimes that people e.g. pass a std::string that way, expecting it to work
with %s - _and it does_... on that one implementation, and for that
particular string. Then it breaks elsewhere.
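
The trap in miniature (whether the broken line "works" is pure
implementation luck):

    #include <cstdio>
    #include <string>

    int main() {
        std::string name = "world";
        // printf("hello %s\n", name);      // passes a non-trivial class through "...":
                                            // conditionally-supported, breaks across compilers
        printf("hello %s\n", name.c_str()); // correct: pass the actual C string
    }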

iostream is still bad though, both design-wise and perf-wise. The answer is
actually Boost (specifically Boost.Format), which has proper typesafe
formatting.

> (de)allocate classes with new/delete, but POD with malloc/free

There's no benefit to using malloc/free whatsoever, and it forces you to cast.

> Avoid std::tuple, std::variant and other awkward STL attempts to replicate
> functional language idiom without the syntactic support

Tuple has syntactic support as of C++17:

[https://en.cppreference.com/w/cpp/language/structured_binding](https://en.cppreference.com/w/cpp/language/structured_binding)
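
For instance, a quick C++17 sketch:

    #include <iostream>
    #include <string>
    #include <tuple>

    std::tuple<int, std::string> lookup() { return {42, "answer"}; }

    int main() {
        auto [id, label] = lookup();  // unpacks the tuple, no std::get<> needed
        std::cout << id << ": " << label << "\n";
    }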

> No to exceptions; their interaction with other language features is awkward,
> and the eternal difficulties with platform support make me feel no
> confidence in the reliability of the feature.

If you ditch exceptions, you might as well ditch constructors too, since
there's no other way to report errors from them. You're also forced to use
new(nothrow) then. Basically, the language is designed around exceptions, and
excluding them means fighting it.
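
A sketch of both points (the names are illustrative):

    #include <cstdio>
    #include <new>
    #include <stdexcept>

    struct Config {
        explicit Config(bool ok) {
            // A constructor has no return value, so failure can only
            // escape as an exception.
            if (!ok) throw std::runtime_error("bad config");
        }
    };

    int main() {
        try {
            Config c(false);
        } catch (const std::exception& e) {
            printf("construction failed: %s\n", e.what());
        }

        // Without exceptions, allocation failure must be checked by hand:
        int* p = new (std::nothrow) int[16];
        if (!p) printf("allocation failed\n");
        delete[] p;
    }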

But I don't see the point. C++ exceptions have been supported and stable on
all platforms for a long time now. On sane platforms (i.e. not x86, but e.g.
amd64), they're also zero-cost at runtime on the non-exceptional path. It's
not the 90s anymore.

~~~
TheAccount
> But I don't see the point. C++ exceptions have been supported and stable on
> all platforms for a long time now.

How come we don't see exceptions being used in serious C++ projects then (e.g.
Chromium and Windows)?

~~~
int_19h
Either legacy codebases that have rules dating back to the times when avoiding
exceptions actually made sense, or else a cargo cult.

------
buserror
Excellent article! I stopped using C++ completely after living out the meme
that is posted near the end: I wrote gazillions of lines of C++ over 20
years, used all the gizmos, and then gradually started to cut down on them,
eventually settling on stuff like -fno-rtti and -fno-exceptions.

It's mostly working on the Linux kernel and QEMU that showed me I _could_
write extremely nicely structured code in plain C. Very easy to debug, very
readable, very quick to compile and run.

And then one day I looked at one of my C++ modules and told myself I actually
didn't /need/ all that jazz of classes, namespaces, constructors etc. It
could be done as a small .h/.c pair with two functions...

And that day I started to convert many, many lines of C++ to plain C. Because
it's /indestructible/: it'll continue compiling and working on /anything/
forever, without having to figure out why the new C++ compiler is throwing a
10-line error message at you just because that codebase is 10 years old and
isn't up to scratch with the 'new' way of doing things.

I _DO_ miss bits of C++, I miss some of the encapsulation it provides, I miss
stack-based objects, and funnily enough, I miss exceptions. But I haven't
looked back.

The other point he makes, about 'junior hires', is also very valid: with C++
you could /easily/ have a new guy write completely bonkers code and add
subtle bugs that take days to find, much much later. I had my own horror
stories about that, and it's another factor that pushed me over the edge to
drop the language.

~~~
thestoicattack
> the new C++ compiler is throwing a 10-line error message at you just because
> that codebase is 10 years old and isn't up to scratch with the 'new' way of
> doing things

Does this happen? I thought the committee was so into backwards compatibility
that C++ would also keep working on everything forever.

~~~
buserror
Sure. I remember one. For about a million years, you could declare a pure
virtual method with int blah() = NULL; -- perhaps it wasn't /supposed/ to be
used like that according to the standard... but it worked, and it was used a
lot, as it made perfect sense.

Then one day, you recompile it and no, it /needs/ to be zero and not NULL,
sorry.
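
For reference, the two spellings (the second only ever compiled where the
NULL macro happened to expand to a plain 0):

    struct Shape {
        virtual double area() = 0;       // standard pure-specifier: the literal token 0
        // virtual double area() = NULL; // the old accident: breaks when NULL isn't plain 0
        virtual ~Shape() {}
    };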

But that's just one simple-to-fix example. In plenty of cases, especially as
templates (and especially template instantiations) evolved, the whole thing
would come crashing down on you. For a while, trying to compile templated
code on MSVC, CodeWarrior and GCC was pretty much impossible without
deploying ruses that made C preprocessor macros look tame in comparison.

~~~
int_19h
As you yourself admitted, it was not the standard way of doing things. Ever
since the first ISO C++ standard, the syntax was =0. And it was never
guaranteed that NULL (which is a macro) expands to just plain 0. So even back
when it "just worked" for you, chances were good that it only worked on that
one implementation that you had, and would've broken if you tried to use a
different compiler.

Judging by your mention of CodeWarrior, this all sounds like war stories from
pre-standardization days (and of course it still took a while after ISO C++98
was published for implementations to actually adopt it).

~~~
kps
> And it was never guaranteed that NULL (which is a macro) expands to just
> plain 0.

True, but it _was_ guaranteed that NULL expands to “an implementation-defined
C++ null pointer constant” (C++98 §18.1), where a ‘null pointer constant’ is
“an integral constant expression (5.19) rvalue of integer type that evaluates
to zero” (C++98 §4.10). So while it didn't have to be just the plain literal
0, it _did_ have to be a compile-time constant integer 0, which is otherwise
just as good unless you're doing macro magic.

(This differed from C (ANSI era), in which a null pointer constant can
alternatively have type (void*), and often does.)

The fact that a pure declaration requires the literal token ‘0’ (rather than
any constant expression that evaluates to 0) is just one sad example of
C++'s ad-hoc, irregular overloading of things to mean different things.

~~~
int_19h
It may be ad hoc (although I would argue that "0" in this case is really just
a token that's part of the syntax - basically a numerical keyword - so
expecting it to accept any constant expression is too much). But, in any
case, the main point is that it never changed - it was always wrong to use
=NULL to denote pure virtual methods, and there were always implementations
that broke under it. So it's a strange example to cite as evidence of
instability in C++.

------
mschuetz
I was really looking forward to C++ ranges. However, that blog post by Eric
Niebler made it feel like C++ ranges are trying to get everything right from
a CS point of view while completely ignoring usability and legibility in the
process, just like the earlier STL algorithms that suffer from the
requirement to specify begin and end iterators. Some of the code samples are
an atrocity. The C# example looks so much better to read and easier to
understand.

~~~
munk-a
I put a lot of value on the maintainability of what I'm writing, and I think
legible syntax is a language quality that's often minimized but provides a
lot of value in terms of labour savings over the lifetime of a project. I've
been working in PHP a bunch lately, and the expressiveness there is amazingly
strong; every time I go back to writing C++ it feels much more cargo-culty.

(Also, PHP isn't perfect, but it's really good at allowing clear readable
code, while also allowing terrible code, also the performance is terrible...
Basically, don't waste your breath on why PHP is terrible - it isn't, it's a
decent language - you're just using it wrong)

~~~
zackmorris
I concur. My biggest gripe with C++ today is that it's conflating two
concepts: imperative programming (IP) and functional programming (FP) via
templates.

Imperative programming has no future IMHO. Rust is going to the ends of the
earth to make it "safe" but my gut feeling is that it will never be fully
statically analyzable. Today I only see needless complexity that distracts
from the underlying logic. I lost interest in this style of programming around
a decade ago.

Functional programming via templates is an admirable endeavor, and I went all
the way to the ends of the earth to make something of it. I was left with code
that wasn't reusable (wasn't salable) so in the end, just wasted years of my
life on needless complexity.

And both of the above only make sense from a performance perspective - an
argument that becomes weaker with each passing year as processing speed
improves.

I think the future will be something more like reactive functional
programming, written in a high level language like Javascript and only
dropping down to bare metal optimizations with something like Rust when
necessary. It will work like the UNIX shell (which is the only proven model
that's endured). So something like the Actor model where a series of black
boxes are wired together with pipes and as much FP as possible, and any IP
sections are treated like monads and flagged as having unknown side effects.
ClojureScript does something like this, running its FP code to completion and
then yielding until more input/output happens on the Javascript side. Another
way of thinking about this is a spreadsheet with some imperative code within
the cells where needed.

PHP is my favorite language currently because it has some notion of constness
and fairly decent higher order functions (but who are we kidding, map and
reduce give us everything else). PHP is a conceptually simple language that
has some longstanding flaws with naming conventions that make it not as easy
as it could be. It's largely succeeded in moving away from object-oriented
programming, favoring a fairly simple interface implementation instead. I've
never run into the implicit context problems that plague languages like Ruby
(where convention over derivation is written into its DNA).

And PHP is fairly fast, on the order of 200 times slower than C++ but probably
within the same order of magnitude as the newer C++ methods mentioned in the
article. My gut feeling is that proper string handling, with every contingency
accounted for, runs approximately the same speed in any language. I'm not
willing to spend time and effort trading safety for performance anymore.

~~~
blattimwind
> Another way of thinking about this is a spreadsheet with some imperative
> code within the cells where needed.

Uhm, how are cell _functions_ in spreadsheets an example of imperative
programming?

~~~
zackmorris
Oh I was thinking of Visual Basic Excel macros, since VB is imperative.

I'd like to see a spreadsheet that uses something like Elixir or Clojure,
though, because then any cells that don't have monads could be fully
statically analyzed and would form a purely declarative solution.

In other words, the cells could be decomposed further into cells containing
individual functions instead of composed statements. That way the spreadsheet
could be transpiled into lisp or a syntax tree. I'm using all these terms
loosely, but they are equivalent.

------
ptero
> Cognitive load is important

This, to me, is a very powerful statement that points to the major change
between old C++ and the current model (which IMO is a monster). It may have
gained a lot in the process, but at a huge cost to the conciseness and
clarity of written code. And that may be fatal for a general purpose language.

Maybe as other languages with a focus on cleanliness and clarity (the
Pythons, Kotlins and Haskells of this world) become fast and powerful enough
for large projects, the use case for simple and clear C++ code decreases.
Then C++ can still provide benefit for, say, intermediate representations.
But this complexity increase may be one of the last nails in its coffin as a
general purpose language. My 2c.

~~~
jstimpfle
> clarity [...] Haskell

no. Just no. Haskell is a lot like C++.

~~~
SomeHacker44
For the case in point, Haskell is much clearer. You can do Pythagorean Triples
in one line. Google it if you don't believe me.

However, in general (in my opinion, of course), the cognitive load for
Haskell is low as a novice, but gets higher and higher as you go down the
rabbit hole toward learning, understanding and applying all of its power -
which is constantly evolving, not unlike C++.

~~~
jandrese
Just because you can do it in one line doesn't mean that it has low cognitive
load. In fact it probably increases the load since you have to think about all
of the abstractions that are built into the language instead of reading them
off of the screen. The three for loops version lays everything out right in
front of your eyes.
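
Something like the article's plain C++ version (reconstructed from memory as
a sketch, not quoted verbatim):

    #include <cstdio>

    int main() {
        // First 100 Pythagorean triples; every step of the control flow is visible.
        int count = 0;
        for (int z = 1; ; ++z)
            for (int x = 1; x <= z; ++x)
                for (int y = x; y <= z; ++y)
                    if (x * x + y * y == z * z) {
                        printf("(%d,%d,%d)\n", x, y, z);
                        if (++count == 100) goto done;
                    }
    done:
        return 0;
    }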

~~~
SomeHacker44
Perhaps you're right. On the other hand, list comprehensions are a core
feature of Haskell, which does not have anything akin to a "for loop," so
it's something a novice would become familiar with as part of learning the
core of the language, just as a C++ novice would learn about "for loops."

So, I would argue the cognitive load is similar, and both lay everything out
right in front of your eyes. Although, Haskell will do it with less syntax
(alternatively, read that as "with more syntactic sugar").

------
rmcclellan
I work using C++17 for high performance applications, and I can relate to a
lot of these gripes. I think it's a fair point that C++ is unreasonably
complex as a language, and it's been a serious problem in the community for a
long time.

One part that really struck me as odd is the focus on non-optimized
performance. To me, this is an important consideration, but not nearly as
important as optimized performance. Using techniques like ranges can
definitely slow down debug performance, but much of the time it _dramatically
increases_ optimized performance vs. naive techniques.

How do ranges speed up optimized builds? One of the best techniques for very
high performance code is separating the specification of the algorithm from
the scheduling of the computation. What I mean by this is techniques like
[Eigen](http://eigen.tuxfamily.org/index.php?title=Main_Page) and
[Halide](http://halide-lang.org), where you can control _what_ gets done and
_how_ it gets done separately. Being able to modify execution order like
this is critical for ensuring that you're using your single-core parallelism
and cache space efficiently. This sort of control is exactly what you get out
of range view transformers.
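
To illustrate just the what/how separation with C++20 range views - a
sketch, not a claim that this matches Halide's scheduling power:

    #include <iostream>
    #include <ranges>

    int main() {
        // "What": squares of the first five even numbers. Views are lazy;
        // nothing is computed when the pipeline is built.
        auto pipeline = std::views::iota(1, 100)
                      | std::views::filter([](int n) { return n % 2 == 0; })
                      | std::views::transform([](int n) { return n * n; })
                      | std::views::take(5);

        // "How"/"when": work happens only as the range is traversed.
        for (int v : pipeline)
            std::cout << v << ' ';  // 4 16 36 64 100
    }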

~~~
modeless
> I work using C++17 for high performance applications

> One part that really struck me as odd is the focus on non-optimized
> performance

I'm guessing your high performance applications aren't interactive? When your
application has to respond to user input in real time, a binary that is 100x
slower than real time is completely useless. You can't play a game at 0.3
frames per second.

I would be interested in seeing an example of how Halide-like techniques can
be used with C++ ranges. I am skeptical that you could get the kind of
performance improvements that Halide can achieve. And of course you won't get
the GPU or DSP support that is really useful for that kind of computation.

~~~
rmcclellan
Well, you aren’t rebuilding your binary every frame, are you? I might be
missing something.

Also, I think build time is super important in most contexts - what I think is
less important is runtime speed when you’ve disabled all optimizations.

~~~
modeless
Are you confusing build time with performance of the resulting binary? I'm
talking about the latter. Both are important and both are lacking with modern
C++ in debug mode.

Edit: I see, I carelessly used the word "build" to mean a compiled binary,
which was ambiguous. I've changed it.

~~~
rmcclellan
Thanks for the clarification. I guess like all trade-offs, it's context
dependent. I see the advantages of having a realtime usable non-optimized
build for debugging. Since I use modern libraries like Eigen, that option has
not been available to me for some time.

With "modern" techniques, the performance ceiling is a bit higher - whether
that benefit is worth it depends on a lot of factors.

------
amluto
I have a large C++ code base (mostly C++11), and I agree with this article. My
code base is hilariously slow to compile, and the biggest single culprit is
the variant header. I sometimes wonder whether I should just remove all the
users to save time.

To add insult to injury, std::variant is a rather bad approximation of a sum
type. variant<int,int> is only partially functional, and you can’t
disambiguate the two ints by name the way you could in, say, Rust.

I think that a lot of these problems stem from two systematic problems

1. The desire to avoid modifying the language. A good modern language should
have sum types. Heck, the ranges paper explains how ranges fundamentally have
the wrong semantics with respect to const, and that this can't be fixed
without changing the language. So _fix the language_, please!

2. The fact that even new libraries are specified as header files. Surely
there could be a variant _module_, even if modules aren't yet ready for full
deployment.

~~~
jcelerier
> My code base is hilariously slow to compile, and the biggest single culprit
> is the variant header.

yeah, <variant> is so slow that I wrote my own (domain-specific, but it could
be repurposed) variant-like code generator. It keeps the same API but makes
everything sooooooooooooo much faster to compile - especially visitation with
multiple arguments.

(ugly fugly, full of dirty hacks, but it only had to run once or twice:
[https://github.com/OSSIA/libossia/blob/master/tools/gen_variant/gen_variant.cpp](https://github.com/OSSIA/libossia/blob/master/tools/gen_variant/gen_variant.cpp))

------
makecheck
C++ compilation feels like what you would experience if you could only
understand a sentence by looking up each of its words, and each word was
defined in a separate book, and each book was in its own shelf in an enormous
room full of shelves, and one of the shelves was in a library on the other
side of town, and one was in Alexandria.

------
HippoBaro
I'm working with C++17 right now, and I personally relate to the author.
Reading the discussions around C++20 has been an eye-opening experience for
me. C++ was already a difficult language, but it has become a behemoth of
complexity. That said, once you know "Modern" C++ it's a very productive
language.

Somehow, it _feels_ good to use, and after a while you find yourself reveling
in the complexity. I'm almost ashamed to say it, really.

~~~
munchbunny
My personal favorite metaphor for the complexity of C++ (and other languages,
like Scala) is a hamster wheel. You enjoy writing it, but from the outside it
feels a bit ridiculous to see the amount of energy dedicated to problems of
your own making.

R-value references, for example. This is entirely a problem that C++ has
created for itself with its copy semantics. From inside the C++ ecosystem,
the language feature makes sense. From outside it, though: there are now two
types of references, and two types of copy semantics triggered by the
implicit interaction of those reference types, and you sometimes need idioms
like declaring undefined private copy constructors to lock down unintended
implicit copying... yup, hamster wheel.
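
The idiom in question, old spelling next to the modern one (a sketch):

    #include <utility>

    class OldNonCopyable {
        OldNonCopyable(const OldNonCopyable&);            // declared, never defined (C++98 idiom)
        OldNonCopyable& operator=(const OldNonCopyable&);
    public:
        OldNonCopyable() {}
    };

    class NonCopyable {
    public:
        NonCopyable() = default;
        NonCopyable(const NonCopyable&) = delete;            // modern: explicit and diagnosable
        NonCopyable& operator=(const NonCopyable&) = delete;
        NonCopyable(NonCopyable&&) = default;                 // moves stay allowed
        NonCopyable& operator=(NonCopyable&&) = default;
    };

    int main() {
        NonCopyable a;
        NonCopyable b = std::move(a);  // OK: move
        // NonCopyable c = b;          // error: copy constructor is deleted
    }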

C++ isn't the only language where I get this feeling; I think Scala is pretty
guilty of this too. It's not that I couldn't grok it if I wanted to (I
survived my share of proof-based, pure-theory math classes back in college);
it's that I think it's entirely too much energy to be spending on mundane
code.

~~~
jstimpfle
> two types of references

include normal pointers and it's three.

~~~
ryanianian
add const-ness and you get *2. Then ponder what `const &&` even means.
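
All four flavors in one overload set, for pondering (a sketch; the const &&
case is legal but rarely useful):

    #include <cstdio>

    struct T {};

    void f(T&)        { puts("mutable lvalue"); }
    void f(const T&)  { puts("const lvalue"); }
    void f(T&&)       { puts("mutable rvalue"); }
    void f(const T&&) { puts("const rvalue"); }

    const T make() { return T{}; }  // const-qualified return type

    int main() {
        T t;
        const T ct{};
        f(t);       // mutable lvalue
        f(ct);      // const lvalue
        f(T{});     // mutable rvalue
        f(make());  // const rvalue
    }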

------
gavanwoolery
Things developed by committees often tend to lack the market-driven features
of privately-owned products. On the other hand, privately-owned products are
just that, and suffer from the usual consequences of closed systems. It would
be nice if there were an intersection of the two somehow (C# sort of comes to
mind, as it was privately developed but has open-source implementations).

The one I always think back to is DirectX vs OpenGL. DirectX has had
historically better driver support, documentation, and ready-to-compile
example code (OpenGL has caught up a lot admittedly over the years, due to
greater proliferation of OpenGL-driven devices).

~~~
gbear605
Java has achieved a lot of this.

------
utopcell
I echo the sentiment of the author. It also seems to me that in all three
articles, what comes through is not that ranges will be cool or useful, but
that coroutines are awesome. With that, I couldn't agree more. I sometimes
wonder whether, had we not had discussions about these tertiary[1] TSes, we'd
have coroutines in C++17 already.

[1] subjective opinion

------
stabbles
It's amazing how much of a stir the C++20 ranges have caused. Very
entertaining to follow.

------
modeless
I haven't seen anyone mention [https://zapcc.com](https://zapcc.com) yet. I've
been testing it and it works as advertised, cutting compile times by a factor
of 5 or more in real world projects with zero code changes. It's both easier
to use _and_ faster than C++ modules could ever be, even in theory. It could
go a long way to addressing one of Aras's complaints.

In order to improve C++ compilers, we are going to need to throw away some old
assumptions about how they should work. Zapcc challenges the notion that the
compiler should exit after compiling each file and start over with the next
one. Perhaps some of the other issues Aras points out could be addressed by
rethinking some assumptions. For example, tighter integration between the
compiler and debugger could potentially make it feasible to debug optimized
builds.

~~~
jcelerier
> Zapcc challenges the notion that the compiler should exit after compiling
> each file and start over with the next one.

this idea has been floating around for a _long_ time:

ftp://gcc.gnu.org/pub/gcc/summit/2003/GCC%20Compile%20Server.pdf

and

[https://gcc.gnu.org/wiki/IncrementalCompiler](https://gcc.gnu.org/wiki/IncrementalCompiler)

sadly, it seems that modules have put these approaches on hold, even though I
am pretty sure that modules won't solve the problem.

------
jimmychangas
> the time it takes for clang to compile a full actual database engine
> (SQLite) in Debug build, with all 220 thousand lines of code, is 0.9 seconds

I immediately thought about some Java (Gradle) projects at work that take 10
minutes to build, and other transpiled javascript projects that take an hour
or so. What am I doing with my life?

~~~
hendzen
An hour? For JavaScript? Wtf?

~~~
jimmychangas
Babel + a million lines of JavaScript, and a few other build steps. During
development only the required modules are built, so it takes just a few
minutes to get up and running, but a full build takes ~1 hour and produces
~100 MB in bundles.

------
guillaumec
Good article. Compilation time is indeed an important issue, and the main
reason why I use C instead of C++ as much as I can.

------
SomeHacker44
As a long-time lover of functional programming who has not used C++
professionally since the 90s, I find this whole article sort of damning about
the future of the language from a functional perspective.

The calculation of Pythagorean Triples is literally one line in Haskell (a
language I have used for almost three decades), as a simple list
comprehension. It also can then be consumed or used in any fashion, and is
calculated lazily.

While I am not particularly adept at other "functional" languages, I would
imagine similar brevity in F#, OCaml and possibly even Python (which I believe
has some sort of list comprehension syntax).

------
muterad_murilax
How about going back to and forking C++98 (or whatever version preceded the
first syntactical horrors such as reinterpret_cast<>) and starting over with
more C-like additions?

~~~
makapuf
Yeah, but what about implementing the real compiler after the spec? And while
you're at it, why not make it a different language?

------
petermcneeley
printf code should not appear in the timed scope.

~~~
tom_
Why not? It clearly doesn't add anything meaningful to the run time, and helps
avoid the question of whether the loop got optimised out.

~~~
petermcneeley
I timed his code on Windows, and it certainly adds to the runtime. To prevent
the loop from being optimized out, I created a pre-sized vector to hold the
ith output, and printed it outside the timed scope.

I have worked in the game industry and have had many bad experiences with
printf/logging consuming so much performance that it made games unplayable.

------
oh_teh_meows
I wonder if the state C++ finds itself in now is the result of sub-optimal
compromises between different interests on the standards committee.
Years-long arguments over what ought to be included, or how it ought to be
done, may cause folks to lose sight of what's important, you see.

I still like C++ though. At least a subset of it anyway.

------
whatifitoldyou
A pretty good summary of the state of C++. I actually enjoy the language and
most of the new features. The 'modern' part is not at fault here; this has
always been a problem. The cult of generic metaprogramming was always a pain
to deal with. In part, the new language features can help reduce over-
reliance on templates. As a community we must push for features that can get
us out of this situation, like a sane modules system.

I think Bjarne Stroustrup is partly to blame here. He has succeeded in
creating a versatile and popular language but has given little priority to
problems like build time, error messages and debug performance. These sound
like petty, practical problems that the tooling guys should eventually figure
out. Except they aren't. They are hugely dependent on the language design.

Another thing we are missing is a high-profile figure who will show the way
on correct patterns and who will publicly shame the authors of overly "meta",
template-heavy libraries. You are not smart if you are writing these
monstrosities. It should not feel good. Making it simple requires much more
intelligence.

~~~
erik_seaberg
> publicly shame the authors of overly "meta", template-heavy libraries

Wouldn't they say we should instead shame the people who can't read them? What
makes you right and them wrong?

~~~
red75prime
It's hard to change people, it's easier to change code.

------
gpderetta
I think people have been taking Eric's example too seriously. I can't speak
for him, but I doubt that Eric was suggesting such code would be routinely
written with the range library; he was just showcasing as much of the range
library as possible in as short an example as possible. In a way, it was
designed to maximize complexity.

These excesses do have a positive side: the lengths people will go to in
order to get a specific piece of functionality (in this case, lazy ranges)
as a library are a signal that the feature should be part of the language.

This happened with lambdas: boost.lambda, boost.phoenix and, to a lesser
extent, boost.bind showed that the functionality could be implemented as a
library; people started playing with it and demanded that the functionality
be baked into the language.

A similar thing is happening now: there is a push to extend the proposed
await syntax into a more general (and more efficient) continuation syntax
that, among other things, should lead to built-in support for lazy ranges.

~~~
alacombe
I can say that I have been guilty of writing such code, and so have people
I've worked with. When you are given a tool, you want to use it, especially
if it comes with a lot of bells and whistles.

------
userbinator
The desire to overcomplicate and overabstract things does seem to have taken
hold in the C++ community within the past few years. It's not anything new,
however; it's just that C++ getting the "Modern" treatment is relatively
recent. (Look at the Java and C# communities, for example. What they lack in
syntactic complexity, they make up for with an abundance of classes.)

------
Mauricio_
The problem is not modern C++; it's only when C++ tries to be a functional
language that it becomes unreadable.

As I see it, there are multiple "sub-languages" in C++:

- Macros: use as little as possible.

- Classic C: use only when interacting with C libraries, or in the few cases where it's more convenient (printf over cout).

- Classic C++ (classes, etc.): use it all the time.

- Template metaprogramming: use only for simple things, e.g. containers.

- Functional C++: try to avoid.

------
jlebar
I want to say one word to you. Just one word. [...] Are you listening?

Modules.

