
Why I don't spend time with Modern C++ anymore - nkurz
https://www.linkedin.com/pulse/why-i-dont-spend-time-modern-c-anymore-henrique-bucher-phd
======
jupp0r
In my experience, the opposite of what the author claims is true: modern C++
leads to code that's easier to understand, performs better and is easier to
maintain.

As an example, replacing boost::bind with lambdas allowed the compiler to
inline functor calls and avoided virtual function calls in a large code base
I've been working with, improving performance.

Move semantics also boosted performance. Designing APIs with lambdas in mind
allowed us to get rid of tons of callback interfaces, reducing boilerplate and
code duplication.

I also found compilation times to be unaffected by using modern C++ features.
The main problem is the preprocessor including hundreds of thousands of lines
for a single compilation unit. This has been a problem in C and C++ forever
and will only be resolved with C++ modules in C++2x (hopefully).

I encourage the author to try pasting some of his code into
[https://gcc.godbolt.org/](https://gcc.godbolt.org/) and to look at the
generated assembly. Following the C++ core guidelines
([http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines](http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines))
is also a good way to avoid shooting yourself in the foot (which is
surprisingly easy with C++, unfortunately).

~~~
namelezz
> In my experience, the opposite of what the author claims is true: modern C++
> leads to code that's easier to understand, performs better and is easier to
> maintain.

Can you comment on the memory safety of modern C++? I am wondering if I should
learn Rust or modern C++.

~~~
PeCaN
With std::unique_ptr, move constructors, etc., you're pretty safe, but it's
still not in the same league as Rust.

In my opinion you should probably learn Rust unless you want to get a job
writing C++ (e.g. game development).

~~~
pcwalton
I never quite understood why unique_ptr and move semantics are supposed to
improve memory safety over new and delete. They reduce leaks, sure, since the
compiler inserts the delete for you at a hopefully-useful place. But you still
effectively have to decide when to free, and there is no protection against
dangling iterators, references, or pointers. From a security point of view,
use-after-free is far worse than leaking, since UAF can lead to remote code
execution.

In fact, move semantics create a _new_ type of hazard that wasn't present in
earlier versions of C++: dereferencing a unique_ptr after moving it.

~~~
AnthonyMouse

      class fail
      {
          foo * p;
          void init()
          {
             p->some_init();
             if(p->some_error())
             {
                 delete p;              // p now dangles
                 throw some_exception();
             }
          }
      public:
          fail()
          {
              p = new foo;
              init();
          }
          ~fail()
          {
              delete p;                 // double free if init() already deleted p
          }
          void reinit()
          {
              init();                   // may delete p; any later use is UAF
          }
      };
    

vs.

    
    
      class ok
      {
           unique_ptr<foo> p;           // freed exactly once, automatically
           void init()
           {
               p->some_init();
               if(p->some_error())
               {
                   throw some_exception();
               }
           }
      public:
           ok()
           {
                p = make_unique<foo>();
                init();
           }
           void reinit()
           {
                init();
           }
      };

~~~
pcwalton
What fraction of security-sensitive UAF bugs in the real world have had to do
with exception safety?

~~~
AnthonyMouse
It isn't limited to exception safety. The problem previously had been that
there was no clear answer to the question, when should "delete" (and "~foo()")
be called for a pointer? If you do it before some other code expected you to
then you have UAF. If you do it in more than one place then you may have
double free (and two calls to ~foo()).

Now the answer is that unique_ptr will do it when the pointer itself goes out
of scope.

~~~
pcwalton
> Now the answer is that unique_ptr will do it when the pointer itself goes
> out of scope.

And my claim is that this doesn't effectively reduce UAF. It doesn't eliminate
dangling references, etc.

~~~
AnthonyMouse
It seems like you're making the perfect the enemy of the good.

Look at my example above. The problem isn't just exception safety, the first
class is five kinds of catastrophe waiting to happen.

If the user calls reinit() then init() may delete the pointer (because it was
written expecting to be called during construction). So now the object exists
but the pointer is invalid and any further use of the object will be UAF. Even
if the caller understands the exception to mean that the object should not be
used anymore, the ~fail() destructor is going to call the ~foo() destructor
again and double free anyway.

On top of that, the default copy and move constructors and assignment
operators for a naked pointer just copy the pointer, which produces UAF as
soon as the first copy to be destroyed frees the pointer the others still
hold.

None of that happens with the unique_ptr version. No explicit call to delete
is required so we lose that opportunity to accidentally call it before the
surrounding object ceases to exist. The default move constructor actually
works and the default copy and assignment operators are deleted, so attempting
to copy without an explicit deep copy implementation becomes a compile error
instead of runtime UAF. And it's less code too.

It can't actually stop you from having dangling references because if it did
then it wouldn't be able to compile existing code anymore. But how is it not
an improvement over the status quo?

------
justsaysmthng
HFT is a pretty limited and extreme application case.

From what I understand, for HFT _nothing_ is ever fast enough - network cards,
kernel drivers, cables, etc.

You have milliseconds (edit: nanoseconds !) to receive, process and push your
orders before someone else does it and gets the prize.

It's an arms race between technologists for the purpose of making a small
number of people rich.

I doubt that these requirements apply to other application fields where C++ is
used - and it's used almost everywhere, with great success I might add.

In my view C++ is actually a couple of languages mixed into one.

The hard part is knowing which part of the language to use for which part of
the problem.

The "modern" C++ solves a lot of the nuisances of the "old" C++, but you can
do without these features just fine. I apply them carefully to my code and so
far it's been a pleasant experience. Even if I don't use all of the new
features, it's nice to know that I can (and I will some day!).

So I don't really buy this rant...

~~~
hendzen
Not milliseconds. Nanoseconds. Competitive tick-to-trade times are on the
order of 1000ns or less.

~~~
bogomipz
How do you tune the machine for that type of latency? User-space drivers?
Could you elaborate? Not having to go beyond L1 cache doesn't mean much if it
takes a few milliseconds to get that trade out of the network card and on to
the wire, right?

~~~
hendzen
FPGAs

------
pjc50
There are two separate rants here that aren't delineated well.

1) C++ is too complicated, and therefore hard to reason about and slow to
compile.

We're going to argue about this forever, but you'll have to agree that the
spec is very large and warty compared to other languages, and that C++ tends
to take far longer to compile (this was already a problem a decade ago, it's
not specific to "modern" C++).

2) The future of software development will include more of what I'm going to
call "non-isotropic" software; rather than assuming a flat memory model and
a single in-order execution unit, and exerting great effort to pretend that
that's still the case, programmers will have to develop effectively on GPUs
and reconfigurable hardware. Presumably this speculation is based on the
Intel-Altera acquisition.

You can sort of do hardware programming in C (SystemC) but C++ is really not a
good fit for hardware. Personally I'd like to see a Cambrian explosion of
HDLs, but the time is not yet right for that.

It sounds like the author favours the "C with classes" programming style,
maybe including smart pointers, and is probably not keen on lambdaization of
everything.

~~~
gpderetta
Can't really argue about 1.

About 2: in-order single-instruction execution hasn't been an assumption for a
very long time; C and C++ optimizers (and programmers) have been able to take
advantage of these CPU features for a while. There are language extensions
(Cilk++, OpenMP) to take advantage of extra cores for fine-grained
parallelism.

Regarding GPUs, arguably C and C++ have the most mature and transparent
offloading support all around (OpenACC, again OpenMP, whatever MS offloading
extensions are called) and the most popular GPU programming language (CUDA) is
a C++ dialect.

Regarding the flat memory model, for large-scale programming the only sane
model is a flat, cache-coherent one; those architectures that don't provide
that either evolve to provide it or die (cf. Cell), supplanted by those that
do (yes, that doesn't mean that all memory is the same, but that is true of
your standard CPU anyway).

I don't have an opinion on FPGAs. I expect that, if they ever go mainstream,
initially people will just assemble predefined blocks via high-level
languages, but who knows what the future holds.

~~~
pjc50
Parallelism: yes, although OpenMP isn't nearly as accessible as e.g. Swift
async closures. C++ only got proper language-native threading in C++11.

 _for large scale programming the only sane model is a flat, cache coherent
one_

Do Google view their datacenters as a single flat cache-coherent memory space?
No, they built MapReduce instead. That's the point of view I'm coming from:
distributed systems engineering working downwards. Rather than a single large
program operating on a single memory space, a set of fragments whose
programmers are aware that there is latency when communicating between nodes.
DRAM is just another "node" that you have to send messages to and wait for a
response.

~~~
gpderetta
"Parallelism: yes, although OpenMP isn't nearly as accessible as e.g. Swift
async closures."

I'm not familiar with them; do you have a pointer? Cilk does have powerful
semantics and a very lightweight syntax.

"C++ only got proper language-native threading in C++11."

Sure, but OpenMP and Cilk are significantly older.

"Do Google view their datacenters as a single flat cache-coherent memory
space?"

No, but I'm pretty sure they wish they could. Many HPC clusters do present a
single memory image across thousands of machines.

"they built MapReduce instead."

MapReduce (and its extensions) is not a general programming model though.

~~~
vmarsy
> I'm pretty sure they wish they could [view their datacenters as a single
> flat cache-coherent memory space]. Many HPC clusters do present a single
> memory image across thousands of machines.

No, not really: at some point, when you're dealing with petabytes of RAM and
millions of cores, the laws of physics kick in; your RAM is spread across a
large physical area no matter how clever you are. If you want a flat memory
space you have to guarantee access to _any_ memory address in less than X
cycles, otherwise you have a NUMA architecture[1].

While it is true that HPC clusters present a single memory image _per cluster
node_ (where one node = 8-32 processors, maybe 64), the other nodes' memory
has to be accessed with message passing or other mechanisms.

You need a different programming model; MapReduce is too specific, which is
why Google is trying things like their "DataFlow" platform.

[1] [https://en.wikipedia.org/wiki/Non-uniform_memory_access#NUMA_vs._cluster_computing](https://en.wikipedia.org/wiki/Non-uniform_memory_access#NUMA_vs._cluster_computing)

~~~
gpderetta
"If you want a flat memory space you have to guarantee an access to any memory
address in less than X cycles otherwise you have a NUMA architecture[1]"

There is nothing wrong with NUMA (well, ccNUMA, but today that's a given).
Even a simple modern two-socket server is a NUMA machine.

Anyways, as I've commented elsewhere, I'm not arguing that shared memory is
practical today on a large HPC cluster.

~~~
lomnakkus
> Anyways, as I've commented elsewhere, I'm not arguing that shared memory is
> practical today on a large HPC cluster.

I think the point that was being made was that it'll _never_ be practical
purely for physical reasons. Any physical separation means that light takes a
certain amount of time to travel and no known law of physics will let you
circumvent that... A distance of a foot will _always_ incur a latency of ~1ns
(at best), so our models _must_ account for latency. (At _some_ point -- it's
not obvious that we've reached the end of how compact a computer can be, but
there _is_ a limit where you just end up with a tiny black hole instead of a
computer.)

~~~
gpderetta
I don't get it; our models have been accounting for latency for the last 30
years at least. We routinely use three levels of cache and highly out-of-order
memory accesses to try to make latency manageable.

Now it is possible that our best coherency protocols simply aren't effective
at high latencies, but that doesn't mean we can't come up with something
workable in the future. Is there any no-go theorem in the field?

------
hellofunk
This article is not very general. Much of what it tries to convince us of is
not going to matter for most developers, and it has the cost of suggesting
that modern features are not good for _any_ developers. For example:

> It is not rare to see Modern C++ applications taking 10 minutes to compile.
> With traditional C++, this number is counted in low seconds for a simple
> change.

This is simply a bogus statement with respect to what at least 90% of C++
developers do on a daily basis.

I have benchmarked unique_ptr, auto, brace initialization, lambdas, range-
based for and other modern idioms and found them all to be at least as fast
as, and often faster than, their older counterparts. Now, if I were to instead
go off and write template-heavy code using new features, that would be
different. But in reality, the vast majority of C++ developers -- I'd wager at
least 95% -- are _not_ writing variadic templates on a daily basis (nor should
they be).

The memory safety and many other benefits of unique_ptr [0] make it one of
many modern tools that are a no-brainer to use in nearly all contexts. No, not
nearly all contexts; allow me to rephrase: _all_ contexts. If you compare its
use to manual new/delete code, the benefits are solid and the resulting code
is faster.

The author further claims that modern C++ is less maintainable and more
complex. The absolute opposite is true in nearly all cases. Using unique_ptr
again as an example: it leads to less code, less complex code, clearer code,
and better maintainability and readability. Uniform brace initialization is
another example that prevents many common older problems in the language.

FYI, the author keeps talking about high frequency trading as an example of
why modern C++ is a bad choice. Well, I worked at an HFT firm for a long time
until last year; the firm places millions of trades per day and is among the
most successful in the markets it trades. And what did we use? Only modern
features. Lambdas, auto, unique_ptr, range-fors, even std::async -- everywhere
in our code. This author is either naive or political.

I think the title of this article is highly misleading, and the contents are
not relevant. Overall, this article is just bad advice for most of us.

[0]
[https://news.ycombinator.com/item?id=11699954](https://news.ycombinator.com/item?id=11699954)

~~~
nmeofthestate
(re 10 minute compile times)

> This is simply a bogus statement

I work on a C++ project and, believe me, 10-minute compile times would be
great :)

~~~
matthewaveryusa
Have you tried ccache? The 5x to 10x improvement isn't a bogus statement in my
experience.

~~~
plorkyeran
ccache eliminates spurious rebuilds, but doesn't make compilation of things
which actually need to be recompiled any faster. It's basically just a
workaround for the fact that Make uses file mtimes rather than file contents
to decide what needs to be built.

~~~
matthewaveryusa
In theory, agreed. In practice I see a lot of re-compilation in large
projects.

------
n00b101
As has always been the case, effective use of modern C++ requires knowing
which subset of the language to use and which to avoid.

I agree with the author's criticisms of many C++ features. At the same time, I
think that a proper simple, modern subset of C++ exists that is much more
productive and safer than C, without sacrificing performance. You can also
optimize progressively, for example start with using std::string and
std::vector and then replace the stock implementations if they aren't
performant on your target architecture. I would not, however, recommend using
C++ for GPU kernel code - a mix of C++ for CPU code and C for GPU kernel code
works best. It is not ideal, but it's the best toolset available for serious
industrial development.

FPGAs are exciting, but they've also been the "next big thing" in general
purpose computing forever. Obviously it makes sense to use FPGAs for certain
HFT and embedded applications, but that's not the same as general purpose
computing, which is what C/C++ is for. Not to mention, FPGA compile times can
take hours or even days, which makes most C++ template overhead pale in
comparison. I would also say that for IoT, I'm not sure why it is obvious that
"$10 FPGAs" should dominate. Why not a $0.50 microcontroller? Or the $5
Raspberry Pi Zero board? Both are eminently programmable in C and even C++.
Embedded devices have been around since long before "IoT" became a buzzword,
and we can see that microcontrollers, FPGAs, SOCs, and custom ASICs all have a
role to play depending on the application.

~~~
nickpsecurity
"Not to mention, FPGA compile times can take hours or even days, which makes
most C++ template overhead pale in comparison."

It's really funny to see him bitching about iteration times with templates and
then suggesting an FPGA and synthesis tools. I'm glad it amused someone else,
too. :)

------
typon
If he is complaining about C++ being bad and suggesting Verilog on FPGAs as an
alternative, boy do I have some bad news for him.

HDLs (yes, including SystemVerilog) have 10x worse design than the worst
software languages. This is why there are entire companies out there that make
high-level synthesis tools or high-level HDL specification languages (like
Bluespec).

And I haven't even said anything about the quality of FPGA tool chains.

~~~
deng
This, a thousand times. If you want minimum latency, FPGAs are of course a
possible solution, but selling them as a "better C++" is just laughable. When
you have an IP core which fits your problem, then by all means use it (and
hope it works as advertised), but otherwise: the less you need to do in an
FPGA, the better.

------
kangar00
> If you cannot figure out in one minute what a C++ file is doing, assume the
> code is incorrect.

This statement at first resonated with me, and then I thought about it: this
doesn't reduce the complexity of the overall application or service, it just
means that one file is simple. You could have 10,000 files instead of one much
shorter one; is that any simpler?

~~~
lmm
Yes. If each file makes sense in isolation then the whole will as well. Just
splitting code into lots of files won't necessarily produce files that you can
figure out in 1 minute though (you have to define the boundaries between files
such that they make sense).

~~~
susam
I disagree. A complicated function may be made of a bunch of statements where
each statement makes sense easily. The entire function may still be
complicated. The same argument can be extended for files and projects. Even if
each file is simple, if the code in those files interact with each other in a
complicated manner, the project becomes complicated. This can happen despite
having neat boundaries between files. Nothing stops a new programmer from
writing new simple files that interact with the existing files in a
complicated manner. Simplicity of source code in individual files or functions
is just one of the factors behind a simple project. Simplicity of design has
to go hand in hand with it.

On the other hand, a couple of files may be very complicated but the entire
project could still be simple if those complicated files hide the complexity
behind neatly exposed functions, and the remainder of the project does not
make use of those functions in a complicated manner.

~~~
lmm
> A complicated function may be made of a bunch of statements where each
> statement makes sense easily. The entire function may still be complicated.

 _Statements_ , yes, but I avoid them where possible - the complexity comes from
their interactions because their interactions are unmanaged, implicit and
arbitrary. If you make each function an expression made up of expressions and
functions, then I think it becomes true that if each expression makes sense
easily then the whole will also make sense easily.

~~~
mannykannot
As far as the complexity of programs is concerned, there is a similarity
between statements at one level of abstraction and functions at a higher
level. I have seen many cases where small functions have been assembled into
complicated programs. These programs often have a proliferation of 'helper'
classes and functions, where you have to trace through long series of calls to
get to where the work is done. They often seem to come from a poor design that
has been repeatedly patched instead of fixed, or from programmers who write
functions because they think they will be part of the solution, but don't
back them out and replace them when they find a complication they had not
anticipated.

Using small functions is a necessary, but not sufficient, condition for making
understandable code.

~~~
lmm
I think what you're describing is a case where you can't understand what those
helpers do, and therefore can't understand what the function that calls them
does. I maintain that if each individual function makes sense then the whole
will too.

~~~
pshc
This holds if the small functions are built around a coherent top-down design,
respecting each other's invariants. Once the project is too large to fit in
one's head, it is no longer sufficient for each function to be 'correct' in a
local sense.

------
jonathankoren
I know why I don't like C++ anymore: it's just no fun. It's slow to compile,
the errors are like six lines long, full of template and class hierarchy noise
that makes it hard to understand what exactly happened, and then of course
there's the common coding shortcut of declaring everything auto. (What type is
this list? I don't know, it's auto all the way down.) Then there's the whole
thing about making constructors but leaving the bodies empty because
everything should be in initializer lists now, and now there's wrapped
pointers for some reason.

I hated writing modern C++. It was just so depressing and frustrating.

~~~
Ygg2
I had to write C++ yesterday and hated every second of it, from the clunky
header files, to the errors that make no sense, to the fact that you can't
have a cyclic dependency between classes by value, i.e.

    
    
          class A { B b; };
          class B { A a; };
    

Long story short: I had to create a wrapper around a Poco::Runnable, so you
can use the wrapper as a Poco::Runnable (don't ask why, it's TEH LAW) but
without extending it.

~~~
xg15
C++ noob here, but: given that the members in your example are not pointers
but actual subobjects within your data structure, wouldn't that result in an
infinite data structure? Therefore it seems quite logical to me that it's not
allowed.

Disallowing cyclical dependencies via pointers would make no sense, though.

~~~
humanrebar
Right. C++ objects are values, not references to values like in almost all
other languages, so C++ needs to know the size of everything in order to
construct it. If you tried to write out the mathematical series describing the
size you would need to allocate for A or B, you would end up with a series
that diverges to infinity.

~~~
TheCoelacanth
This is probably my favorite thing about C++. Values are so much easier to
reason about than references.

------
halayli
This article is coming from a frustrated developer and lacks any scientific
evidence. The frustration (understandably) is coming from the overwhelming
number of complex new features and patterns that even compilers can barely
understand.

C++11 onward revamped the language to make up for the lack of progress in the
preceding 10 years. The majority of C++ developers aren't keeping up with the
new features because they are busy with their daily jobs; they feel that they
are falling behind and that the language they thought they knew has changed
underneath them.

C++03 already had a steep learning curve, but with C++11 onward that learning
curve is orders of magnitude steeper.

On the upside, you can use C++11 without understanding most of the details and
it will do the right thing _most_ of the time. And I think that's the bet that
the language is making.

------
messel
Ok. Try a different language :)?

The idea that a single language can solve all problems is a fallacy.

I don't see FPGA programming ousting C++, but I expect higher-level languages
with strong parallel semantics to gain "market share". You can always call a
dedicated process written in optimized C for the hottest components, and
compose the rest in Go, Elixir, or any high-level language (Lisp).

Architectures will naturally gravitate to higher-level languages that support
cleaner composition. The tools and interfaces will push towards higher
abstraction without impacting build or run time. Maybe this process is related
to Kevin Kelly's _The Inevitable_. I'm an optimist here.

~~~
coldtea
> _A single language needed to solve all problems is a fallacy._

Has that been proven, and do we have pointers to any peer-reviewed papers on
that?

Because otherwise it's just an old wives' tale.

I don't see any logical impossibility in one language solving all problems
(meaning, working well in at least 4 domains: OS and drivers a la C,
apps/games a la C++, network programming a la Go or Java, and scripting a la
Python).

It's just cultural, monetary, design and community issues with most current
languages.

And even if we want to have different profiles for each domain, ideally I'd do
it with a 2-layered language implementation like this:

1) Base library: network, threads/fibers, UI, database etc of Python/Java SDK
proportions

2) A "close-to-the-metal" layer without a (big or any) runtime (can use RAII,
ARC, etc).

3) A "scripting" layer that is a GC'ed, easier-to-use superset of (2)'s
syntax. Ideally all (2) code should also be valid (3) code.

Both use the same base library (written in the "metal" layer or C). (3) can be
embedded into (2) as a scripting engine, and (3) can call all (2) APIs
trivially (e.g. no need for declarations like for using C from Go, Python etc
-- just import and call).

Programmers can use (2) or (3), mix them, share code between the two. And
what's best, the core APIs are all the same -- which is most difficult part to
master in a language.

(Think Java and Groovy, but with the parent language closer to the metal and
the latter closer to C.)

~~~
kibwen

      > I don't see any logical impossibilities for one 
      > language to solve all problems
    

There are no logical impossibilities in constructing a vehicle that can serve
as a passenger vehicle, dump truck, submarine, and airplane, but tensions in
design will very likely result in a compromise that is more complicated, more
expensive, and less capable than a dedicated solution. Not only that, but your
vehicle will be just as inadequate as every other once the landscape changes
and someone now needs a space shuttle.

All-in-one compromise solutions only excel when a market is both small (so
that niche solutions that serve only a percentage of the market don't pass the
absolute mindshare threshold for viability) and uncompetitive (so that there's
no competitive advantage to ditching generality in favor of efficiency in a
specific space). As long as the software market keeps growing and remains
competitive, specialization and fragmentation will only increase (in the long
run, anyway; we'll still be subject to the same bust-and-boom cycles, so it
will still be possible for fragmentation to decrease in the short-term).

~~~
NotRustAgain
> a passenger vehicle, dump truck, submarine, and airplane

The problem with using metaphors to make your argument is we generally have to
argue about whether the metaphor is even appropriate enough that the
conclusions apply to the original topic... It's simpler just to argue the
topic.

Within a huge class of problems, I don't need to get a new computer to solve
each new thing that comes up. That's a very general tool. Why do I need
different programming languages?

> All-in-one compromise solutions only excel [...]

Who asked for a compromise? I could make a short list of all the features I
want in a language, and while there isn't one single language that currently
has all those features, I doubt you could prove that creating such a language
is impossible or would involve some horrible trade-off. Your list might be
different than mine, but that's not the point.

~~~
kibwen

      > The problem with using metaphors
    

Then ignore the metaphor and focus on the longer paragraph that succeeds it.
:P

    
    
      > I don't need to get a new computer to solve each new 
      > thing that comes up
    

Except that, in practice, you do. I have a smartphone in my pocket, a laptop
in my bag, a desktop in my office, and two personal servers in the cloud. Just
because two computers are both effectively Turing machines does not
automatically invalidate the importance of form factors, power draw,
integrated peripherals, physical location, and other practical differences.
This also ignores the existence of domains that _actually_ demand dedicated
hardware, like supercomputing. We are never going to live in a world where
microcontrollers are just as capable at running weather simulations as the
TOP500, because the economics don't pan out. So no, you're right, it is not
logically impossible to construct a language that is capable of performing all
imaginable tasks, though that's not something that I've ever disputed. Rather
than being logically impossible, it's merely economically infeasible. :P

~~~
NotRustAgain
> Then ignore the metaphor and focus on the longer paragraph that succeeds it.
> :P

Your second paragraph had a bunch of economic pseudo-theory about what
sells... Maybe that explains why we don't have a good general purpose
programming language, but it said little about whether there could be one.

~~~
kibwen
Economics is a social science, pseudo-theory is the bulk of it. :P However, I
welcome you to prove me wrong by creating the language to end all others.

~~~
NotRustAgain
> :P However, I welcome you to prove me wrong by creating the language to end
> all others.

Yes yes, I'll be sure to let you know. In the meantime, I hope you'll keep up
the great work maintaining the status quo and contributing to a language which
avoids problems that experienced programmers don't really have.

:P

------
aninteger
I've come to the conclusion that one should "use C++ when you absolutely have
to, and C when you can." There just aren't many areas where C++ is absolutely
required and plain old simple C won't do. (Not to mention using higher-level
languages where possible.)

------
shanwang
Such rants appear once every few months on HN, and this one is one of the
least convincing. Many of the problems he mentions are not "Modern C++"
problems but problems C++ has had from the beginning, and some of them already
have reasonable solutions, for example ccache + distcc for speeding up
compilation.

The real problem with C++ is the standards committee: the design-by-committee
approach is failing for such a complex language. If C++ were taken over by a
company, it would be a much better language.

------
DrBazza
“There are only two kinds of languages: the ones people complain about and the
ones nobody uses.” - Stroustrup.

C++30 might end up being what D is today.

~~~
dkhenry
Or Rust, or Go.

C++ is like Java: it pulls in features from other languages years after they
have been proven valuable and useful, and there is nothing wrong with that.
There is something to be said for a slow-moving target that you can rely on to
work well. If anything, I think C++ went off the rails when it started
innovating in the language space and introduced all sorts of novel features
that other languages hadn't prototyped. Think of all the template machinery
they added, and how the general consensus now is to avoid the advanced
template features.

------
fsloth
This sounds like it's written from the point of view of implementing something
inhouse. I fail to see how FPGA programming will be relevant if one wants to
distribute software for consumers (or am I technologically clueless...).

~~~
ishi
According to the writer's LinkedIn profile, he comes from the world of High
Frequency Trading. That's an area where performance is so important, that it
might make sense to design your own hardware. And you certainly don't want to
distribute it to anyone...

I do agree with the writer that with each release, C++ has become more and
more complicated, seriously hurting the maintainability of C++ code.

~~~
entelechy
I also agree that C++ is getting more and more complex over time. However, if
you build things from scratch and cherry-pick your language features, C++ can
be quite pleasant.

~~~
TheOtherHobbes
That's the problem - C++ isn't one language, it's at least three generations
of languages living together in the same compiler, like an extended family
crammed into a tiny apartment.

No one has moved out since 1983, and new additions just keep on coming.

~~~
gpderetta
We c++ programmers are very welcoming. We have yet to meet a paradigm that we
didn't like.

On a more serious note, any old system acquires cruft with time and you don't
always have the luxury of throwing away pieces or even restarting from
scratch.

------
cpwright
I find the beginning and end of the article quite contradictory: basically,
C++ is too complicated, and oh, by the way, we should start programming FPGAs,
which are much harder to get right.

I like modern C++, because I think it simplifies a lot of things (RAII for the
win here). Templates let you engage in duck typing, but with (if you are
careful) very performant results.

------
Philipp__
While some pretty good points were made in this post, I cannot help but feel
the OP is a bit biased. Too narrow, so to speak.

I feel the exact opposite about modern C++. I guess how, where, and when you
use it will define your opinion and experience.

------
dahart
> Today the "Modern Technologist" has to rely on a new set of languages:
> Verilog, VHDL

That was a complete surprise ending! :)

I like surprise endings, and he makes a lot of good points, whether or not I
agree with them. But, I totally wasn't expecting "I'm done with C++ because:
hardware." I was expecting because web or because awesome new high performance
functional scripting language <X>.

A lot of what he's talking about there will still run compiled software
though... FPGA programming and C++ aren't exactly mutually exclusive, right?

~~~
pjc50
_FPGA programming and C++ aren't exactly mutually exclusive, right?_

I would say yes: you can't really run C++ on an FPGA. There are all sorts of
tools which promise this (SystemC etc), but it requires you to stick to a
careful subset of language constructs. You don't have a heap, for example.

------
aspiringuser
20-year C++ programmer here. I work on multithreaded server code. I stopped
using modern C++ features 5 years ago. I'd compare my use of C++ to roughly
the use of C++ in the NodeJS project or the V8 project. I'm not a user of
Boost.

I have to agree with the author of the article. It takes longer to train
developers to write idiomatic modern C++ code, and compilation times explode.
Compiler support for bleeding-edge C++ features is spotty at best. It's harder
to reason about the correctness of modern C++ code.

~~~
Const-me
15 years here.

I'm mostly in MS ecosystem, so I don't have issues with C++ compiler support.
But I totally agree with the rest of your comment.

------
stormbrew
One of the biggest users (some would say abusers) of template metaprogramming
I know works on HFT software. He trades extremely long compile times for
performance at runtime and finds that C++ allows him to do this and maintain a
decent architecture (through what amounts to compile-time polymorphism as well
as RAII).

For him, it's actually the older features of C++ that have no use. He doesn't
use deep class inheritance and never touches virtual functions, for example.

------
Const-me
I never programmed HFT software, but I agree with the criticism of modern C++.

It's unfortunate that the author hasn't defined what exactly "modern" means. I
saw some comments comparing Boost with C++14. I think Boost is also modern.
Even Alexandrescu's Loki is modern, even though the book was published in
2001.

I think that modern stuff was introduced in C++ because in the late '90s and
early 2000s there was an expectation that C++ would remain dominant for some
time. There was a desire to bring higher-level features to the language, to
make it easier to learn and safer to use, even at the cost of performance.

People didn't expect C++ would lose its market that fast: very few people now
use C++ for web apps or rich GUIs. However, due to inertia and backward
compatibility, the features remain in the language.

Personally, I’m happy with C++.

C++ is excellent for systems programming, and for anything CPU bound. For
those tasks you barely need the modern features, and fortunately they're
completely optional: if you don't like them, don't use them.

But if you do need a higher-level language for the less performance-critical
parts of a project, I find it better to use another, higher-level language and
integrate it with the C++ parts. Depending on the platform, such a higher-
level language could be C#, Lua, Python, or anything else that works for you.

------
thinkpad20
> After 1970 with the introduction of Martin-Löf's Intuitionistic Type Theory,
> an inbreed of abstract math and computer science, a period of intense
> research on new type languages as Agda and Epigram started. This ended up
> forming the basic support layer for functional programming paradigms. All
> these theories are taught at college level and hailed as "the next new
> thing", with vast resources dedicated to them.

This seems pretty dubious. Dependently typed languages and other projects
embracing advanced type theory are still the realm of niche enthusiasts. While
some of the more academic colleges might teach them in one or two courses, the
vast majority of education a CS college student receives will be taught in
traditional imperative languages. If "vast resources" have been devoted to
Agda and Epigram, then I'm not sure what kind of language should be used to
describe the resources devoted to C, C++, Java, etc. Also as the author
mentions, Intuitionistic Type Theory has been around since the 70's, in fact
the same year that C was introduced. Certainly it hasn't been taking over the
CS world by storm since its inception, as he seems to claim.

Beyond that, the author's argument seems a bit incoherent. He critiques the
readability of Modern C++, but C++ was notoriously hard to understand even
(or especially) prior to C++11. It's never going to be an easy language to
read except for seasoned developers. If anything, modern C++11 provides
abstractions that increase readability and safety. He critiques the
performance of modern C++, but then ends up recommending that people ditch C++
entirely and learn VHDL/Verilog instead. If not even vanilla C++ is fast
enough for him, why criticize modern C++ on the grounds of performance?

------
cm3
I recently had to switch a project to -std=c++11 because a header I include
now pulls in C++11 headers. This change alone made compilation at least twice
if not three times as slow. The new safety and convenience features are nice,
but compile times seem to be out of focus and are getting slower every year. I
don't know how I feel about g++ 6.1 defaulting to -std=gnu++14.

~~~
netheril96
How do you know that it is c++11 switch that dramatically increased your
compile time, instead of the header file and the headers it includes? Two
conditions are changed, and you feel confident that _one_ of them is
responsible for the outcome. Why?

~~~
cm3
Because it's a C++11 header that doesn't exist in C++98. And I cannot compile
that version without switching to C++11 mode.

Edit: there appears to be some confusion here. I don't include the C++11
header myself. I include a public API header of another project which now
happens to include a C++11 header in their public include file. Unless I want
to stay on the old version of that API and accept the risks, I don't have a
choice whether I compile my consumer module in C++11 mode or not. And it's a
major fault of the C++ language that there are no modules and therefore this
include file overhead and mess. They appear to be working towards a module
system, thankfully.

~~~
netheril96
> Because it's a C++11 header that doesn't exist in C++98.

That is supposed to be a fault of C++11?

~~~
cm3
I don't understand your argument. An upstream project I use put a C++11 header
in their public API's header. Now, I also have to build in C++11 language
mode. If you're saying it's not the language standard's fault that CXX is
slower, sure, but how or why should I differentiate between ISO C++ and
g++/clang++ with libstd++/libc++?

~~~
vmarsy
I think what the GP was saying is: Ignore the new feature you're using from
that upstream project for a minute. Did you try compiling _your_ project with
-std=c++11 before doing any other change? That is the only way you'd clearly
see the difference of switching to C++11 in your project. Maybe it won't add
any compilation time, it could even be faster...

And then you add the new feature and your compile time goes crazy, that's
unfortunate indeed, but that's the price of not being penalized at run time
with the [cool new C++11 feature the upstream project is using], you have to
endure a longer compile time. But you can't blame C++11 for it, without it
that feature wouldn't even exist!

~~~
cm3
This makes more sense, but why is it not C++11's fault if compiler writers
have a hard time keeping the compile-time overhead within reasonable bounds?
Compiler writers already leave some optimization passes disabled, or cap
passes at a certain search depth, because they know the algorithmic complexity
would be unacceptable for most users even though the performance benefit is
clear.

The planned C++ module system will most likely solve a large set of the pain
points.

------
ausjke
Just started to relearn C++ and Qt for cross-platform GUI programs. C++ is not
easy, but its performance is still unbeatable, and in certain use cases, e.g.
games, performance-critical video apps, or GPU/OpenCL work, C++ still seems to
be the sole candidate.

------
jcbeard
I have a few problems with this article:

> structure leads to complex code that eventually brings down the most desired
> characteristic of a source code: easiness of understanding.

If done well, the structure of things like variadic templates make libraries
easier to use, and make coding faster (granted, code bloat can be an issue
with N different function signatures).

> C++ today is like Fortran: it reached its limits

Not quite. Fortran declined because object-oriented programming came along and
lots of people liked it. And C was always more popular regardless, so C-like
C++ was the obvious next choice. There is a lot of cruft in any new library,
so some things aren't as performant as if you wrote them in, say, assembly,
which is what the author seems to suggest. Yes, if I built bare-metal
iostream-like functionality it would be more performant (ha, used the word
:) ). People know iostream isn't that performant. Could it be better? Perhaps.
Is it safe? Yes! If you want performance, use the C interface directly. Is
that safe to use? Probably not for the careless user.

> To handle the type of speed that is being delivered in droves by the
> technology companies, C++ cannot be used anymore because it is inherently
> serial, even in massively multithreaded systems like GPUs.

Well, yes, but so is just about every language. People are trained to write
sequentially (left to right, top to bottom), with many exceptions, but
nonetheless sequentially. There are very few languages that do multithreading
natively. There are lots of additions and libraries for C++ that enable very
nice ways to express parallelism, both within the standard (std::thread) and
outside it: RaftLib (raftlib.io), HPX ([https://github.com/STEllAR-
GROUP/hpx](https://github.com/STEllAR-GROUP/hpx)), Kokkos
([https://github.com/kokkos](https://github.com/kokkos)), etc. Some are quite
easy to use. C++ is inherently serial, but there is no better way to write. It
is fairly easy to pull out "parallel" pieces of code to execute, and even
easier if the programmer gets quick feedback (like the icc loop profiler,
etc.) on things like ambiguous references and loop bounds that can be fixed
quickly.

Interesting read, but don't agree at all.

------
bitL
I agree with the author; I still long for the not-overly-complicated C++ of
the '00s, in which I could write a super-fast 3D rendering engine without much
bloat. I find it appalling that C++ went from being a poster child of
imperative programming to implementing monads in its libraries (mind you,
monads are used to "simulate" imperative programming in functional
programming). Something went wrong there...

------
syngrog66
I was once a C++ programmer but migrated first to Java, when I thought it was
better designed and more convenient, and then to Python when I wanted less
verbosity while having greater freedom to choose between a procedural style or
OO.

C++ may still be an ideal choice in some problem spaces but I think the number
and size of them has shrunk as more and better alternate choices have appeared
and ate away at the C++ share.

------
hackerweb
How are Verilog and VHDL a "new set of languages"? That set has been around
for 30 years, almost as long as C with Classes.

------
progman
The problem with modern C++ is that it wants to be everything. Now this
behemoth is collapsing under its own weight.

People who are not forced to use C++ should consider other languages which are
way cleaner and even more performant. Code written in Ada and Nim for instance
is much easier to maintain.

------
afsafafaf
Wonder if they tried IncrediBuild to reduce their compile time? They are right
that C++ - while faster than ever before - takes much longer to compile than
many other languages.

~~~
daemin
Things like IncrediBuild can only help so much. You still need a decent
project structure, and enough computers to effectively cut down compile time.

Though (as mentioned in another comment) if your project is a big ball of
interdependencies, then the link time will dominate.

I've seen compile times with IncrediBuild of less than a minute, while the
link takes 5-10 minutes. Or, better yet, crashes due to the PDB size.

------
koyote
Am I the only one being redirected to a linkedin sign up screen?

~~~
nas
I'm redirected too. I'm not signing up with Linked-In, agreeing to their
policies, letting them spam me, etc just so I can read this article.

------
sitkack
> "that is where the unicorns are born: by people that can see on both sides
> of the fence"

------
Nano2rad
Functional language programs have to run as interpreted. If compiled they will
be too bloated.

------
blux
Anybody got an idea to which video series of Chandler Carruth he is referring
to?

~~~
humanrebar
Here's a few:

\- "Tuning C++: Benchmarks, and CPUs, and Compilers! Oh My!"
[https://www.youtube.com/watch?v=nXaxk27zwlk](https://www.youtube.com/watch?v=nXaxk27zwlk)

\- "Understanding Compiler Optimization"
[https://www.youtube.com/watch?v=FnGCDLhaxKU](https://www.youtube.com/watch?v=FnGCDLhaxKU)

\- "Optimizing the Emergent Structures of C++"
[https://www.youtube.com/watch?v=eR34r7HOU14](https://www.youtube.com/watch?v=eR34r7HOU14)

\- "Efficiency with Algorithms"
[https://www.youtube.com/watch?v=fHNmRkzxHWs](https://www.youtube.com/watch?v=fHNmRkzxHWs)

The first one is really good if you've never benchmarked anything in a Linux
environment. Or if you otherwise want to learn how to investigate how C++ gets
turned into machine code.

------
je42
Actually, the author wants Go.

------
known
Kernel is my new home;

------
known
Me too :)

------
sickbeard
His argument about simplicity resonates with me. Sure, you can learn variadic
templates and all that fancy stuff, but in practice, when you are working on
production software in any company where more than one person uses the code
base, it pays in heaps to write the simplest, easiest-to-understand code;
meaning all that nice fancy stuff is almost never used.

------
frozenport
Being an expert FPGA programmer is easy, the problem is that small things take
a really, really long time.

------
ensiferum
It just sounds like someone who couldn't handle C++ whining and making a bunch
of blanket statements without any proper understanding.

I agree that some features, such as lambdas, can lead to hard-to-track bugs
(lifetime issues) and difficult-to-follow code when abused. When used nicely,
though, they can lead to simple, elegant, and straightforward code (anyone who
tried to use the STL algorithms before lambdas knows what a pita it was most
of the time).

Bottom line, if your code base is a mess don't blame the tool. Blame the
programmers.

~~~
userbinator
_Bottom line, if your code base is a mess don't blame the tool. Blame the
programmers._

The problem is the attitude towards these new features that naturally leads to
programmers abusing them, building overabstracted complex bloated behemoths to
accomplish the simplest of tasks.

I'd say the vast majority of programmers, for some reason, seem to have an
appetite for complexity --- they tend to feel that overly complicating things
somehow makes their code better, especially if they can use some new shiny
fancy features in the process. I don't think like that so I don't know exactly
why that is the case, but perhaps it has to do with the feeling of
accomplishment from having written something "big", solving simple problems
with complex solutions. Instead, I'm the opposite --- I like solving complex
problems with simple solutions, which means code that is usually very
straightforward and only occasionally makes use of some more advanced features
of the language, when it helps simplify the solution.

~~~
to3m
A common assumption among C++ programmers seems to be that if an ounce of
prevention is worth a pound of cure, the value of a ton of prevention must
scale up similarly. But a ton of anything is too much weight for most projects
to handle.

