
Edward C++Hands - T-zex
http://bartoszmilewski.com/2013/09/19/edward-chands/
======
copx
The H.264 encoder I use is written in a mixture of C and ASM. It is damn fast
and maxes out all cores.

I dare you to write a full-featured, functional H.264 encoder in Haskell with
equal or superior performance. Once such a thing exists I will start believing
the hype.

Until then I reserve the right to consider the current Haskell/FP craze the
biggest amount of bullshit since the Java/OOP craze.

Sorry people, I already know that magic language/paradigm which solves all
problems of software engineering. Which allows me to write 4 times less code,
said code being robust and maintainable practically by default, yet as fast as
C/C++ if not faster, all that thanks to this great new model of software
engineering! It is the (only) future!

Yes, the Java/OOP/design patterns/UML crowd made exactly the same bullshit
claims back then. Nobody talks like that about Java/OOP anymore because the
bullshit eventually hit the fan known as reality.

Oh and before Sun's viral marketing campaign there were the Lisp weenies.
Again, same bullshit. We already have a "30x less code" language which comes
bundled with an insufferable sense of intellectual superiority not backed by
practical results.

Oh, and I almost forgot the Python/Ruby "dynamic" and "human" craze.
"Programming for human beings" - it's the (only) future! Back then the snake
oil addicts paraded around dynamic typing and code which read like plain
English as the salvation of the programmer. Nowadays they claim dynamic typing
is the software engineering equivalent of the Holocaust and that good code
must look like math!

I am so tired of it...

~~~
runT1ME
The main micro blogging site I use is a mixture of scala, java, with ruby/iOS
for the front end.

I dare you to write a fully featured, async web stack that handles many-core
on the scale of hundreds of millions of users in C and ASM within equal or
lesser man-hours. Once such a thing exists I'll start believing the hype.

~~~
harrytuttle
BBC did this 15 years ago on tiny computers compared to now. They uploaded
static HTML onto Sun boxes and served them.

If you did a submission somewhere, it'd do offline processing and upload new
static pages.

Most of this was C and perl.

No Scala, no Java, no Ruby.

iOS does HTML too you know.

I agree with the OP - most of this new technology doesn't really solve any
problems. All it does is create an ecosystem you can feel superior being a
member of.

~~~
danielweber
Even better, it's a new ecosystem that interviewers will use as a screen to
keep out people who don't know the new ecosystem, and so will require people
to learn it, and then in 4 years it will be thrown away because no one does
that any more. See, they've moved onto the even newer ecosystem.

------
chetanahuja
Every time I read a "scathing" attack on C/C++ for being the worst language
the author ever heard of, I hear Jack Sparrow's reply in my head "Ahh... but
you _have_ heard of me".

Somehow despite the litany of crippling deficiencies, C/C++ have managed to be
the foundation of every piece of technology a consumer or a computer
professional will ever touch in their daily life. But yeah, other than that,
we should all be using.. oh, I don't know, scala or haskell or something like
that.

~~~
outside1234
C++ is a legacy language. It was indeed good, about 20 years ago, and we can't
shake it because it was good, about 20 years ago.

~~~
vinkelhake
We can't shake C++ because for a number of things, there's still nothing
better.

------
CCs
Not an easy read for C++ fans like me, especially since it is coming from
Bartosz Milewski.

I did try out several other languages and I keep coming back to C++11 for
anything that requires scalability and raw performance, like APIs. The same
basic server that gets 400 req/sec when implemented in C# ASP.NET achieves
7900 req/sec using C++.

So far I could not find a programming language that does not have similar (or
worse) sharp edges. It's more of a "pick your poison" type of choice.

After I learned Scala, my C++ code started to look like functional
programming. According to Bartosz that's a good thing, and I did not have to
dive into Haskell (yet). :)

~~~
MichaelGG
With C#, did you try using manual memory management? I wrote a rather high-
performance (50K requests/sec, 550K ops/sec) daemon in F#. The big trick was
to use my own mem management when it makes sense. I had a ~1GB managed heap,
and 12+GB unmanaged.

For instance, you can stack-allocate many objects in C# (strings and arrays,
for instance), if you're willing to give up the safety (and if C++ is an
option, then you are willing). You can manually heap-alloc managed objects,
too, although it gets tricky if they are nested objects. After all, the JIT is
just taking pointers to objects and doing stuff with them - it doesn't care
where the memory came from (just remember the GC won't scan your manually
allocated stuff).

The CLR (unlike the JVM) has native capabilities built right into it. People
should take more advantage of such things instead of only trying fully safe C#
and then deciding to dump it all for no safety.

~~~
barrkel
I'm in violent agreement.

To avoid the costs of serialization and deserialization on a per-request
basis, I built a little object system that stored all its instance data in a
byte array. All internal references were offsets from the start of the array,
but what made it fast was that I could read and write in it using raw
pointers.

I then recycled the byte arrays to avoid GC pressure, as many were just over
the edge of the large object heap, only collectable with gen2 scans.

I had a version working with manually allocated buffers, but it wasn't any
faster - GC overhead was only 2% or so.

~~~
CCs
Great! Can you prove it?

There's a benchmark:
[http://www.techempower.com/benchmarks/#section=data-r6&hw=wi...](http://www.techempower.com/benchmarks/#section=data-r6&hw=winec2&test=db)

Go-lang is roughly 10 times faster than ASP.NET (implemented as recommended by
Microsoft). Skipping the ORM doubles the speed, but then you lose most of the
components, so I'm not sure that's a realistic optimization.

Can you do better?

~~~
MalcolmEvershed
In case anyone is curious what is going on with those ASP.NET TechEmpower
benchmarks:

The TechEmpower benchmarks are all about how much overhead you can eliminate.
I profiled the CPU-bound ASP.NET TechEmpower benchmarks and most of the time
is spent in IIS<->ASP.NET overhead. After I removed IIS and ASP.NET by just
using the .NET HttpListener class [0], the result (on my machine) gives Go a
run for its money. Hopefully these results will show up in the next Round that
TechEmpower runs.

I profiled the .NET TechEmpower tests that access PostgreSQL and MySQL and
found that the database drivers had a lot of unnecessary overhead, for example
the Oracle .NET MySQL provider does a synchronous ping to the MySQL Server for
every connection, even if it's pooled. Plus, it sends an extra 'use database;'
command. [1] The PostgreSQL provider also sends extra synchronous commands for
every connection. [2]

[0]
[https://github.com/TechEmpower/FrameworkBenchmarks/tree/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/HttpListener)
[1]
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/31...](https://github.com/TechEmpower/FrameworkBenchmarks/issues/315)
[2]
[https://github.com/TechEmpower/FrameworkBenchmarks/pull/325#...](https://github.com/TechEmpower/FrameworkBenchmarks/pull/325#commitcomment-3373154)

~~~
CCs
So C++ is bad because certain functions should be avoided and has no garbage
collector. (With C++11 STL normally there's no need for manual memory
management or naked pointers.)

C# is good, it's just that you should avoid the garbage collector and 90% of
the standard library. Roll your own web server, use raw SQL commands, do
manual memory management, use undocumented calls, emit byte code, and there
you have it: almost 40% of the C++ performance.

Am I the only one who thinks this does not make any sense?

~~~
MalcolmEvershed
No, I'm not implying any of that. I'm just saying that an important step to
getting good performance is profiling to determine the cause of poor
performance, then deciding whether it makes sense to try to improve
performance in the found areas, then executing your decision.

Imagine the alternative if I didn't profile: I'd just declare that
C++/whatever is way faster than ASP.NET, and I'd switch to C++ and open myself
to a whole world of debugging memory corruption issues and whatnot. Instead,
because I profiled, I found the areas that could be improved, I wrote the
HttpListener code and I can stay on the .NET platform for all of its other
benefits. By following this process I have more options than just "C# blows, I
gotta throw it out the window for C++".

In reality, I have probably written more useful, shipped, production C/C++
than C#, so I'm also a C++ advocate, but hey, when it makes sense, and for the
right reasons, you know?

------
wwweston
"Ask any C++ guru and they will tell you: avoid mutation, avoid side effects,
don’t use loops, avoid class hierarchies and inheritance. But you will need
strict discipline and total control over your collaborators to pull that off
because C++ is so permissive.

"Haskell is not permissive, it won’t let you — or your coworkers — write
unsafe code. Yes, initially you’ll be scratching your head trying to implement
something in Haskell that you could hack in C++ in 10 minutes. If you’re
lucky, and you work for Sean Parent or other exceptional programmer, he will
code review your hacks and show you how not to program in C++."

For those who aren't familiar with Parent, watch this talk:

[https://www.youtube.com/watch?v=4moyKUHApq4](https://www.youtube.com/watch?v=4moyKUHApq4)

which introduced me to Stepanov's "Elements of Programming" and the idea that
I could be working with better/more reliable/provable abstractions in C++.

That said... after a few weeks of shallowly digesting the material, I started
thinking pretty much like Milewski says above: it's great if I can learn to
write better C++, but if what I really want is to escape from its specific
dangers and general common programming pitfalls, something like Haskell seems
like a better bet.

------
hvs
Every time I look to use C++ for something, I come away with the knowledge
that I will never know the _right_ way to do it in C++. At least with C I know
where I stand.

~~~
simgidacav
I think this is why, for instance, the Linux kernel is written in C and not in
C++. And this leads to discussions on the Internet which are definitely funny!
:D

[http://stackoverflow.com/questions/520068/why-is-the-linux-k...](http://stackoverflow.com/questions/520068/why-is-the-linux-kernel-not-implemented-in-c)

~~~
sp332
That URL looks even funnier!

------
seanalltogether
John Carmack wrote a very interesting post about this same topic.
[http://www.altdevblogaday.com/2012/04/26/functional-programm...](http://www.altdevblogaday.com/2012/04/26/functional-programming-in-c/)
To sum it up, he's still writing code in C++, but it has become less and less
like C++. His code has become less OOP and more functional.

------
lelandbatey
Big side note, but C++ for me is just... too much. I come to C++ from Python,
and they are just so far from each other it hurts. One of the fundamental
things about Python that I loved was:

        "There should be one-- and preferably only one --obvious way to do it."

Meanwhile, the sage wisdom I heard learning C++ is:

        "C++ is wonderful because there are so many ways to do anything."

These two methods of thought fly right in the face of each other, and it's
very hard to reconcile. With Python, I felt like I truly was learning a
language, a language of action where doing any one thing was consistently
defined. With C++ I feel like I'm in an ocean of choice and ambiguity, having
to carry around this huge load of tribal knowledge to get anything done.

Not saying it's a bad language, just sharing my feelings on it.

~~~
16s
Reversing a string in Python is neither obvious nor consistent:

        str[::-1]

While it is in C++:

        std::reverse(str.begin(), str.end());

And Ruby is even more obvious and consistent:

        str.reverse()

Most people who rag on C++ have never seriously used it. It's not that bad.
Really, it isn't.

~~~
rictic
That's one point against python, and half a point for C++ [1]. By my
accounting that leaves the score still hugely in python's favor when it comes
to obviousness and consistency.

Python vs Ruby is another subject entirely. Largely I feel like it's mostly a
narcissism of small differences situation, and have only been doing more
Python than Ruby lately because more people around me are Pythonistas than
Rubyists at the moment.

[1] Ideally you wouldn't have to tell the reverse function the starting and
ending points of the string unless you only wanted the reverse of a substring.

~~~
nly
begin() and end() return iterators. Python has iterators too, so this concept
shouldn't be alien to Python programmers. reverse() is a generic algorithm
that operates on any pair of bidirectional iterators.

Actually, in C++, you ideally wouldn't reverse the string at all... you'd just
call str.rbegin() and str.rend() to grab the _reverse_ iterators, and pass
them straight to your next algorithm.

The two approaches are just as consistent as one another, Python just uses a
terser syntax (which, in my opinion, isn't as obvious).

In C++14, if all things go according to plan with Concepts Lite, you'll be
able to write a short 3-5 line, reusable, version of reverse() that takes your
string (or container) as one argument, deduces at compile time, during
overload resolution, that begin() and end() return bidirectional iterators,
and then does the right thing. Failing that, C++14 may introduce Ranges. So
C++ is only getting terser.

~~~
com2kid
> Failing that, C++14 may introduce Ranges.

It would have been nice to have some sort of simple range syntax added to C or
C++ long ago. In the true spirit of C, don't even make it safe, just make it
bloody well work.

        FirstArray[0..3] = SecondArray[4..7];

        case 2..10:

        for(int i : 0..ArrayLength)

        for(int i : ArrayLength..0)

(hah that'd prove horrible if ArrayLength ended up being a negative number,
obviously some proper syntax would need to be determined :) )

I am always annoyed that such simple things are ignored in the language. Sure
they don't enable any "cool new abilities", but they make using the language a
lot more friendly (especially in comparison to having a ton of case fall
through statements!)

~~~
nly
The problem is your proposed [] syntax implies random access. Many algorithms
and data structures don't require or support random access.

------
Gupie
> C++ “solved” the problem of redundant casting and error-prone size
> calculations by replacing malloc and free with new and delete. The
> corrected C++ version of the code above would be:
>
>     struct Pod {
>         int count;
>         int * counters;
>     };
>
>     int n = 10;
>     Pod * pod = new Pod;
>     pod->count = n;
>     pod->counters = new int [n];
>     ...
>     delete [] pod->counters;
>     delete pod;

The 'corrected C++ version of the code' is actually:

        std::vector<int> pod(10);
        ...

I didn't bother reading any further.

------
austinz
Regarding his assertion about reference counting: Apple's implementation of
ARC for Objective-C doesn't handle cycles (it requires the engineer to reason
about weak and strong references in order to prevent memory leaks), but it is
certainly a "serious" implementation. The fact that it doesn't have facilities
for detecting and releasing cyclic references makes it definitely not the
"other side" of the GC coin.

That being said, ARC-style reference counting is more limited in terms of what
developers can do with their code, and it may not be a general enough solution
for a language like C++ (which strives to be all things to all people). Weak
pointers in Objective-C also require runtime support, which may be
inappropriate for high-performance applications in many cases.

------
millstone
> The power of multicore processors, vector units, and GPUs is being
> squandered by the industry because of an obsolete programming paradigm.

It’s worth addressing these in turn. In reverse order:

I dispute that the C paradigm squanders GPUs. OpenCL and CUDA are the two most
prominent languages written with GPUs in mind, and both have a lot more in
common with C than with Haskell. In particular they eschew functional
mainstays like recursion, linked lists, and garbage collection. So for GPUs,
it seems like the ideal paradigm is closer to C than Haskell.

For vector units, there’s some recent excitement about auto-vectorization in
Haskell. But I’m skeptical about the potential of auto-vectorization in
general, since it only seems to apply to simple loops, and can’t take
advantage of all instructions. Most effectively vectorized code uses either
compiler intrinsics or outright assembly, and I don’t see that changing any
time soon (I’d love to be proven wrong on this though).

Lastly, multicore processors. C++11 more or less standardized the creaky POSIX
threading model, with some additions like thread locals - hardly state of the
art. I wonder if the author is familiar with other approaches, like
libdispatch, which provide a much improved model.

One last observation. Parallelism is not a goal! Improved performance is the
goal, and parallelism is one mechanism to improved performance. If your
Haskell program runs at half the speed of a C++ program, but scales to twice
as many cores so as to make the overall time equal, that’s not a wash: it’s a
significant win for C++, which did the same thing with half the resources
(think battery drain, heat, other processes, etc.) Horse in front of cart.

~~~
lmm
>I dispute that the C paradigm squanders GPUs. OpenCL and CUDA are the two
most prominent languages written with GPUs in mind, and both have a lot more
in common with C than with Haskell. In particular they eschew functional
mainstays like recursion, linked lists, and garbage collection. So for GPUs,
it seems like the ideal paradigm is closer to C than Haskell.

I think the more obvious explanation is that it's much easier to write a
compiler for C than for Haskell. Thus the first tooling available for any new
system is almost always C. It doesn't mean C's the best tool.

>For vector units, there’s some recent excitement about auto-vectorization in
Haskell. But I’m skeptical about the potential of auto-vectorization in
general, since it only seems to apply to simple loops, and can’t take
advantage of all instructions. Most effectively vectorized code uses either
compiler intrinsics or outright assembly, and I don’t see that changing any
time soon (I’d love to be proven wrong on this though).

We'll have to wait and see on this.

>Lastly, multicore processors. C++11 more or less standardized the creaky
POSIX threading model, with some additions like thread locals - hardly state
of the art. I wonder if the author is familiar with other approaches, like
libdispatch, which provide a much improved model.

My experience is that alternative approaches to something as fundamental as
parallelism never take off, because they can't gather an ecosystem around them
(I'm looking in particular at the chaos of approaches available in perl and
especially python). I think a modern language _needs_ to have a single,
obvious, preferred way of achieving parallelism/concurrency that libraries can
build on top of (and that's why javascript has been so successful - while it's
a terrible language in many ways, there is one and only one way you handle
parallel-type problems, and it's (slightly) better than the standard way in
other languages)

------
rob05c
C++: an octopus made by nailing extra legs onto a dog. — Steve Taylor

[http://harmful.cat-v.org/software/c++/](http://harmful.cat-v.org/software/c++/)

------
codex
The author now works for a Haskell company; perhaps this helps explain his
changing attitudes towards C++. Or it could be a result of them.

~~~
limmeau
Do you mean Corensic? Their homepage is now "under construction", according to
Network Solutions.

~~~
BartoszMilewski
I did work for Corensic, which was bought by F5. It was a company that made a
very ingenious data race detector. I learned a lot about data races and how
they prevent companies from adopting concurrent programming.

I also used to work for a Haskell company, FP Complete, but I quit, so I'm no
longer representing them. I learned a lot about building actual applications
using Haskell at FP Complete.

------
njharman
I looked at the banner and could think of nothing but "Hello. My name is Inigo
Montoya. You killed my father. Prepare to die."

I'm sorry and deserve to be downvoted.

~~~
fetbaffe
Me too!

------
outside1234
I think Linus sums up C++ pretty well:

[http://thread.gmane.org/gmane.comp.version-control.git/57643...](http://thread.gmane.org/gmane.comp.version-control.git/57643/focus=57918)

~~~
detrino
Do you think anything in that link is relevant to this article or did you just
want to get in on the C++ bashing and had nothing to contribute of your own?

~~~
outside1234
I think it's relevant, honestly. C++ is a pile of hacks, layer upon layer,
that only a small, highly talented set of programmers can manage to master,
and the rest shoot themselves in the foot with. Linus says that in his own
way, but that's the truth of the situation: C++ is a mess.

My takeaway from experience from all of that is to use C (as Linus argues).

------
dsego
C++, the language programmers love to hate :) If you'd like to read some more
C++ bashing: [http://yosefk.com/c++fqa/](http://yosefk.com/c++fqa/)

------
JanneVee
> Yes, initially you’ll be scratching your head trying to implement something
> in Haskell that you could hack in C++ in 10 minutes. If you’re lucky, and
> you work for Sean Parent or other exceptional programmer, he will code
> review your hacks and show you how not to program in C++.

Don't underestimate "instant gratification" even when it comes to programming
languages. Yes you might code yourself into a mess further down the line, but
most of us are in the business of shipping software not worrying about its
correctness.

~~~
chrisdone
I'd argue that Haskell (which is what I use professionally, can't speak for
Scala) lets you write your less well designed code for shipping, and then
refactor it later into something clean. There's a very low cost to changing
code, so you worry less about getting it right the first time.

~~~
JanneVee
But in some cases like mine you have legacy code. You still have to ship new
features and bug fixes. If I get to green field something really new, then
Haskell or Scala might be on the table but currently they are not options. Not
even embedding them.

------
MichaelMoser123
Haskell creates a new value for every parameter and function return value.
That means that even for trivial tasks it churns through non-trivial amounts
of RAM; the footprint of a Haskell program is therefore rather large. I guess
that makes it rather impractical for real systems.

But of course, this one is an interesting observation: "If nature were as
serious about backward compatibility as C++ is, humans would still have tails,
gills, flippers, antennae, and external skeletons on top of internal ones —
they all made sense at some point in the evolution."

I guess C++ is like Perl: there is more than one way to do most things. Don't
like exceptions? Disable them at compile time and code as in C with error
codes. That's what makes the whole thing malleable; the right kind of
flexibility is a winning feature.

------
eonil
"It’s a common but false belief that reference counting (using shared pointers
in particular) is better than garbage collection. There is actual research
showing that the two approaches are just two sides of the same coin."

Yeah. But RC is at least the _deterministic and controllable_ side. Also, RC
is just one of the options in C++, while you can implement GC yourself in C++
without any overhead, unlike any memory management method on a GC runtime.

~~~
pcwalton
Without compiler support for precise stack maps and barriers (and I don't know
if it's actually possible to implement them, given how malleable pointers are
in C++) the best you can do is conservative stop-the-world GC. That sacrifices
a lot of performance over the incremental or concurrent GC algorithms you see
in runtimes like the HotSpot JVM.

------
detrino
His entire rant about resource management is misplaced. It's not that
new/delete/pointers are "bad" or retained only for backwards compatibility;
they are primitives from which other abstractions are built. I don't think
it's too much to ask that professionals understand simple concepts such as
RAII, which renders his entire example trivial.

~~~
BartoszMilewski
I wouldn't have mentioned resource management at all if it weren't for
Bjarne's keynote: [http://channel9.msdn.com/Events/GoingNative/2013/Opening-Key...](http://channel9.msdn.com/Events/GoingNative/2013/Opening-Keynote-Bjarne-Stroustrup).
Apparently this is still not common knowledge.

~~~
CCs
Bartosz, Haskell was created 23 years ago, before Java. Why has it not caught
on yet? Will that change?

I'm not aware of any even mildly successful system that uses pure functional
languages. Most of the implementations I know about are either C/C++/ObjC or
JVM. Some Python (YouTube, Dropbox), one .NET (Stack Overflow), and pretty
much that's it. What am I missing?

[Edit] I'm looking for production code, not "in-house" or "internal tools". I
did read through
[http://www.haskell.org/haskellwiki/Haskell_in_industry](http://www.haskell.org/haskellwiki/Haskell_in_industry)

~~~
steamboiler
FWIW Twitter and Coursera make heavy use of Scala (Twitter also uses Clojure,
if you consider Storm etc.)

~~~
CCs
Scala is running on top of JVM (right now).

It is used in production by Twitter, Netflix, Foursquare, LinkedIn, Simple,
HealthExpense, Klout, 47 Degrees, Box and a long list of other companies [1].

[1] [http://www.quora.com/Startups/What-startups-or-tech-companie...](http://www.quora.com/Startups/What-startups-or-tech-companies-are-using-Scala)

------
xerophtye
C++ Novice Question: Why are for loops bad? (as mentioned in the article).
What's the alternative?

~~~
BartoszMilewski
You should use standard algorithms whenever possible. Sean Parent talked about
it at Going Native. He was able to transform a horrible set of for loops he
found in Google's code into a few lines of modern C++ using standard
algorithms. But he _is_ Sean Parent!

------
mtdewcmu
"...avoid class hierarchies and inheritance..."

I love hearing this from OOP insiders. Virtual functions are particularly
awful to track down, and are probably worse than GOTO ever was. At least GOTOs
only went to one place.

------
zvrba
> But you will need strict discipline and total control over your
> collaborators to pull that off because C++ is so permissive.

Except it's not. The key is proper encapsulation into classes and generous use
of private instance variables.

The real problem is social: people want to get something done quickly, so
instead of talking to the original author[1], they put a new method in the
class (or declare a friend) and happily continue hacking, without thinking of
far-reaching consequences of whether _unrestricted_ (that's what public is)
use of the newly introduced method violates class invariants in some way.

[1] I'm aware that the original author might not be there anymore. Which makes
a good case for documenting design and intent of the code. A QA matter.

How does Haskell solve the social problem (directly hacking something into a
module you don't own or fully understand)?

> What you don’t realize is that it will take you 10 years, if you’re lucky,
> to discover the “right way” of programming in C++

IMO, only novices and beginners look for "THE right way". Advanced people
realize that the "right" way depends on the broad context and look for (the)
best compromise solution among a spectrum of solutions.

I believe this holds for other computing-related things (designing a LAN,
database tables, etc.), as well as real life (sports, martial arts, even
everyday things like sitting and walking as any person who has had a lower
back problem will tell you).

The problem is again social: novices and beginners who refuse to "grow up" and
who know just enough to be dangerous: they can produce working but messy code,
and refuse to learn other ways of doing the same thing in an appropriate
context. (E.g. when to use while vs for, or why copy-pasting large chunks of
code between different functions is generally "bad".)

Somehow I'm not convinced that Haskell is the magic bullet which solves the
underlying social/human issues.

> Haskell is not permissive, it won’t let you — or your coworkers — write
> unsafe code.

He never defines what "unsafe" is. Mutation is not unsafe per se.

> Don’t be fooled: accessing atomic variables is expensive.

I've seen recent slides where the measured cost of an uncontended locked
cmpxchg on Haswell was ~20 cycles vs ~5 cycles unlocked. This is NOT
expensive, unless you unnecessarily replace all memory accesses with their
atomic equivalents.

> Most importantly though, threads are not a good abstraction for parallel
> programming

No, but they are an essential building block.

> Haskell is way ahead of the curve with respect to parallelism

First, a computation can be abstracted into a data dependency graph.

Now, the reason that most of today's applications don't benefit a lot from the
vast number of processors available is not that the underlying programming is
somehow unsuited for parallelism. It is because that there IS NO parallelism
available in these applications: data dependency graph is mostly serial, and
if parallelism is available, it is on such a small scale that superscalar CPUs
already make use of the large part of it. Also, with overly fine-grained
parallelism you WILL end up in a situation where the overhead of atomic
operations becomes non-negligible, even if it is only ~20 cycles.

I believe that if people more often thought about their designs in message-
passing terms, the dependency graph would also emerge, and they would see that
there often is very little parallelism to extract through explicit
parallelization. (Note that writing to a shared variable is just an extremely
simple and efficient method of sending a message).

This was also remarked on by Knuth: “During the past 50 years, I’ve written
well over a thousand programs, many of which have substantial size. I can’t
think of even five of those programs that would have been enhanced noticeably
by parallelism or multithreading.”
[[http://www.informit.com/articles/article.aspx?p=1193856](http://www.informit.com/articles/article.aspx?p=1193856)]

So, IMO, easier coding for shared-memory parallelism is one of the worst
reasons to switch programming languages.

That's why I personally prefer Erlang-style concurrency (actors).
Communication patterns and parallelism granularity are explicit, and
communication _cannot_ be decoupled from synchronization. [With shared
variables, these two are decoupled.]

------
general_failure
Brilliant, just brilliant. I found myself laughing and nodding all the way.

The sad situation is we have no worthy competitor. Go?

~~~
qznc
D [0], according to the author and me. Maybe Rust at some point.

Go is not flexible enough: Required garbage collection, no type-safe generic
programming (yet), opinionated concurrency model.

[0] [http://dlang.org/](http://dlang.org/)

------
roel_v
I'd consider myself a 'C++ programmer' - I've used it for years, it works
very well for me for what I do with it, etc. However, what I find most
frustrating is that _nothing_ is intuitive. And there is always a reason for
_why_ it is like that, I know. My favorite example is the erase/remove idiom.

Anyway, here is a question about C++. Last week I tried to implement a parser
for a subset of CSV - one-byte-per-character, fields separated by comma, no
quoting, fixed line ending. The only requirement was speed. So I started with
a simple C++-style implementation, read one line at a time with std::getline,
split with boost tokenizer, copy into vector of vector of string. But it was
too slow, so I reimplemented it C-style - boom, first try, 30 times faster.
Through some micro-optimisations I got it 30% faster still - copying less,
adding some const left and right, caching a pointer. So if anyone is so
inclined, how would I make the following more C++-ish and still get the same
speed? Using fopen and raw char* to iterate over the memory buffer are what
I'd consider the non-C++ aspects of it, but feel free to point out other idiom
violations...

    
    
        bool CsvParser::OpenCStyle(boost::filesystem::path path)
        {
          FILE* fh = NULL;
          ::fopen_s(&fh, path.string().c_str(), "rb");
          if (fh == NULL) {
            return false;
          }
    
          const int BUFSIZE = 1024 * 256;
          char buf[BUFSIZE];
    
          m_Records.reserve(m_EstimatedRecordCount);
          bool at_end_of_file = false;
          std::string carryover_field_data;
    
          std::vector<std::string>* current_line = NULL;
          bool at_new_line = true;
    
          const char separator = m_Separator;
    
          while (!feof(fh)) {
            size_t size_read = ::fread(buf, sizeof(char), BUFSIZE, fh);
            char* start_of_field = buf;
            size_t i = 0;
            for (i = 0 ; i < BUFSIZE && i < size_read ; i++) {
              if (at_new_line) {
                m_Records.push_back(std::vector<std::string>());
                current_line = &m_Records.back();
                at_new_line = false;
              }
          if (buf[i] == separator || (at_new_line = (buf[i] == '\n'))) {
            std::string field(start_of_field, &buf[i] - start_of_field);
            if (!carryover_field_data.empty()) {
              // the field was split across two fread() buffers
              current_line->push_back(carryover_field_data + field);
              carryover_field_data.clear();
            } else {
              current_line->push_back(field);
            }
            start_of_field = &buf[i + 1];
          }
            }
            carryover_field_data = std::string(start_of_field, &buf[i] - start_of_field);
          }
    
          fclose(fh);
    
          return true;
        }
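
One possible direction, just a sketch (assumes C++17, assumes the whole file has been read into one buffer first; `ParseCsv` is a hypothetical free function, not my member function): slice the buffer with std::string_view instead of copying every field, which keeps the single C-style pass but stays idiomatic.

```cpp
#include <cstddef>
#include <string_view>
#include <vector>

// Parse separator-delimited records out of an in-memory buffer.
// Fields are views into `data`, so nothing is copied.
std::vector<std::vector<std::string_view>>
ParseCsv(std::string_view data, char separator = ',') {
    std::vector<std::vector<std::string_view>> records;
    std::size_t pos = 0;
    while (pos < data.size()) {
        std::size_t eol = data.find('\n', pos);
        if (eol == std::string_view::npos) eol = data.size();
        std::string_view line = data.substr(pos, eol - pos);
        records.emplace_back();
        std::size_t start = 0;
        for (;;) {
            std::size_t comma = line.find(separator, start);
            if (comma == std::string_view::npos) {
                records.back().push_back(line.substr(start));
                break;
            }
            records.back().push_back(line.substr(start, comma - start));
            start = comma + 1;
        }
        pos = eol + 1;
    }
    return records;
}
```

The caveat being that the views point into the caller's buffer, so the buffer must outlive the records.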

~~~
JanneVee
As for your getline being slow: it is a known problem.

[http://stackoverflow.com/questions/9025093/stdcin-really-slow](http://stackoverflow.com/questions/9025093/stdcin-really-slow)
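
For reference, the mitigation discussed in that thread boils down to two calls, sketched here (`FastIostreams` and `CountLines` are illustrative names, not a real API):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// The two standard iostream speedups from the linked answer.
void FastIostreams() {
    std::ios_base::sync_with_stdio(false);  // drop C-stdio synchronization
    std::cin.tie(nullptr);                  // don't flush cout before each read
}

// A getline loop to exercise; works on any istream.
long long CountLines(std::istream& in) {
    std::string line;
    long long n = 0;
    while (std::getline(in, line)) ++n;
    return n;
}
```

Note this helps iostream overhead in general, but it doesn't make the copy-per-line approach competitive with a single-pass buffer scan.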

------
sanskritabelt
Whenever I see people comparing C to assembly I automatically assume they
don't know anything about (a) C or (b) any kind of assembly.

~~~
nkurz
I think your 'or' might be misleading, with the (a) incidental and (b)
required. I deal with a number of smart C (and even C++) programmers who view
it as assembly with macros. But I've yet to meet any assembly programmers who
view C as just a more succinct way to write assembly. Maybe it can be phrased
as "Anyone who conflates C and assembly has probably never compared the output
of their compiler with the code they think they wrote".

------
cLeEOGPw
If he replaced "int * _counters;" with "vector<int> _counters;" he would have
lost nothing and gained a lot of control. As for the pointer to the Snd object
itself, in many cases it is sufficient to create an object, not a pointer, and
pass a reference to it; then, in case of an exception, the memory is always
freed no matter how many ints he has push_back'ed and where the exception
occurred. And if he really needs a pointer, then he should create it not in a
naked function that could return at any time, but make it a member of a class
and delete the pointer in the destructor. Then create that class as an
automatic object, whose destructor will always run, therefore always freeing
the memory.
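
A sketch of what this looks like (the `Snd` name comes from the article under discussion; the rest of the code is illustrative):

```cpp
#include <vector>

// Members own their memory, so an exception anywhere leaks nothing.
struct Snd {
    std::vector<int> counters;  // instead of int* _counters
};

int SumCounters() {
    Snd snd;                        // automatic object: destructor always runs
    for (int i = 0; i < 10; ++i)
        snd.counters.push_back(i);  // even if this throws, counters is freed
    int total = 0;
    for (int c : snd.counters) total += c;
    return total;                   // 45
}
```

No delete, no try/finally equivalent; unwinding does the cleanup.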

------
static_typed
The problem is not that C++ is hard, it's that Haskell doesn't solve the same
kind of problems.

We would all love programming to be easier, for the compiler or interpreter to
just know what we really meant rather than what we actually typed, but
sometimes the tools are sharp and a bit more manual precisely because they
offer more control.

The constant selling of Haskell or Go as the panacea for all programming
problems is like someone telling us all to adopt socks made of rubber - a few
may like that, and may even blog and evangelise about it, but the rest of us
find them uncomfortable and annoying.

------
AsymetricCom
I'd rather Edward than Samuel Boxingglovehands

~~~
rob05c
You would, but the person shaking Edward's hand would not.

To stretch the analogy, scissors are dangerous, but they're also very useful.

I prefer a language that lets me put away the scissors when I don't need them,
so I don't cut someone's hand off. For example, Python with C bindings. The
problem with C++, IMO, is that it doesn't let you put the scissors away.

~~~
AsymetricCom
Yes, much better to punch your users into submission so they don't even think
of using their hands.

------
fleitz
Fucking backwards compatibility, let shitty code keep using shitty compilers
while the rest of us move forward.

There comes a time when no one uses a feature anymore so it might as well be
dropped. How many of you still have floppies on your computer?

~~~
simgidacav
The fact is that there's a huge amount of legacy code which provides
irreplaceable functionality. What would you do if you had a complex
data-crunching library in C to be integrated into your program?

Rewriting stuff from scratch is sometimes good, but usually you prefer to have
some well-tested routines, and be sure they will not fuck up.

Edit: you may also argue that projects like LLVM make it possible to compile
stuff to IR and still keep the old code... but there may be situations in
which choosing a different compiler is not an option.

~~~
CCs
Both Clang and GCC have a C++11 switch (-std=c++11), so whether or not they
accept the new standard is just a matter of a compiler flag.

I would introduce a "backward compatibility" switch that has to be turned ON
per source file to accept old constructs. All new code from now on would be
nice and shiny, and there would be an (easy) way to still keep old code
around.

Would that work?

~~~
BruceIV
Only if you like writing all your libraries yourself.

To answer a little less flippantly, you'd need to add some sort of "unsafe"
construct to the language, because most of the nice, high level abstractions
have nasty low-level stuff buried in them, and because of the way C++
templates are compiled, that's all code that will be included in any program
you write. In short, all "new" code is built on top of an awful lot of "old"
code, and if you were going to make a huge breaking change like this, you may
as well write a new language and libraries ground up (like Go or Rust).

~~~
CCs
It's not that black and white.

Scala has the same debate - there are a couple constructs only library writers
use (normally). So they compile the library with "enable all". Everybody else
gets a warning and they refactor the code or turn on the switch to allow.

~~~
BruceIV
The difference is that you can separately compile your Scala libraries from
the code that calls them - if you do that in C++, you give up templates. All
those #includes at the top of your C++ code are basically just dumping code
into your source files, and if you're using templated code, it macro-expands
together with the types you supply from your own code. There are some
separately compiled C++ libraries, but most of the standard library and Boost
code you use is recompiled every time you compile your program. You could
probably define a new pragma telling the compiler that all the code from a
given file, class, or function is unsafe, but it would take source-level
modifications to every library you use to properly implement the flag you
suggest.

~~~
CCs
Yes, the point was "from now on", as a clean break from the past without
giving up legacy code.

~~~
BruceIV
And my point was that, due to the way C++ is designed (specifically, not
having a proper module system and having this massively powerful macro-
replacement templating engine), there is no way for the compiler to
distinguish between "legacy" code and "new" code.

Consider the following case study: you have a library including a template
class that uses raw pointers as an iterator type. You accidentally write `iter
+ 9` instead of `*iter + 9` when you've included it (this will likely compile,
though there should be a warning). Now, even assuming there's some language
extension added to mark that library as "legacy" (maybe #include-legacy
<mylibrary>), can you tell me whether the "new" code you just wrote using raw
pointers counts as "legacy" or not?
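
The mistake in that case study, made concrete (illustrative functions; the point is that both versions compile cleanly, with entirely different meanings):

```cpp
// With raw pointers as iterators, `iter + 9` (pointer arithmetic) and
// `*iter + 9` (a value) are both well-formed, so the compiler cannot
// tell "legacy" raw-pointer code from a fresh typo.
int meant(const int* iter) { return *iter + 9; }        // value plus 9
const int* wrote(const int* iter) { return iter + 9; }  // pointer plus 9
```

A strongly typed iterator class would reject `wrote` at compile time; a raw pointer cannot.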

I think this is basically Bartosz' point as well - that C++'s legacy design
features are too baked in to make positive changes easily, or even tractably
(even small changes like your --no-legacy compiler flag).

~~~
CCs
Yes, there's a way. Precompiled headers already do a pretty powerful analysis.

Thinking a bit more, I realized I just reinvented the wheel: this is how
Microsoft phased out sprintf, strcpy and so on.

So you could choose between three compile modes: --new-cpp,
--advanced-library and --allow-legacy. You could turn one on with a pragma
before including an .h file and turn it off after, if you have to.

By default "--new-cpp" is on: it throws errors on naked pointers, new,
delete, pointer arithmetic and so on. Basically C++ would become a safe
language, almost like CLR-style managed code. It could even have "const by
default" variable declarations, like Scala.

The "--advanced-library" mode would allow a lot more (e.g. manual memory
management), and "--allow-legacy" would be for full backward compatibility.
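
A sketch of the kind of code the hypothetical --new-cpp mode would accept (all names illustrative): allocation goes through owning types, with no naked new/delete and no pointer arithmetic to get wrong.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Instead of `new int[n]` plus a manual delete[], hand out an owning type.
std::unique_ptr<std::vector<int>> MakeCounters(std::size_t n) {
    return std::make_unique<std::vector<int>>(n, 0);  // zero-initialized
}
```

Everything here already compiles under C++14; the proposal is only about forbidding the unsafe alternatives.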

Is it worth it? I think it is: currently, with metaprogramming, lambdas,
named(!) closures, deterministic destructors etc., C++11 is one of the best
programming languages for anybody who cares about performance.

