
Less is Exponentially More (2012) - korethr
https://commandcenter.blogspot.com/2012/06/less-is-exponentially-more.html
======
emtel
Rob Pike's confusion about why C++ programmers didn't want to move to Go is
partially explained by his focus on a particular problem domain at the time Go
was being developed. Google in those days was primarily concerned with writing
highly concurrent request-processing software, and at that time the vast
majority of that software was written in C++. Writing highly concurrent C++ in
the days before the language had things like lambdas, std::function,
std::future, etc. was definitely gross. In practice, a lot of the concurrent
C++ code from that period took on the callback-hell flavor of Node.js code,
but without the concise nature of JS.

Go is _obviously_ a superior language to C++ for writing concurrent programs
in every way, except possibly when you need to achieve maximum possible
throughput (as in Google's search clusters) or minimum possible latency (are
HFT shops switching to Go? I honestly don't know). However, outside of a few
companies that have huge capacity problems, people weren't writing concurrent
request-serving programs in C++. They were using Python, Java, etc., and
later, Node.js. So it's no surprise that people flocked to Go from those
languages and not from C++, because there wasn't a huge population of C++
programmers in this problem domain looking for a better language in the first
place.

The other C++ programmers were down the hall, working on Chrome and Android
(which has a ton of C/C++, despite Java being the primary app-development
language). The reason they didn't, can't, and won't switch is that managed
languages like Go are not well suited to highly resource-constrained
environments like phones, or browsers that are supposed to run on Windows XP
machines. (To say nothing of games.)

The advantage of C++ is not, as Pike pompously suggests, that C++ programmers
have too much fun playing around with templates and other navel-gazing
pursuits. The advantage is that it is a high-level language that lets you do
what you need to do with very, very close to the minimum possible resource
usage. And there are programming environments where that really matters, and
those environments are never going away.

~~~
geokon
Being a C++ dev, I think you're on-point about the reason people still choose
C++, but my suspicion is that it's part truth, part group-think, and it won't
really hold out.

I could be totally wrong, but have you ever had to optimize a tight loop in
C/C++? It sucks, and a lot of stuff is missing: SIMD; likely/unlikely branch
hints; the compiler has a lot of trouble knowing when lines of code are
independent and can be done in parallel (b/c const =/= immutable); inlining
can't be forced even when you _know_ it gives better performance. (If you use
GCC extensions you can alleviate a lot of this... but that ties you to a
compiler.) Yeah, C++ is generally faster than Go, but there is typically a lot
of performance left on the table. The language kinda starts to work against
you when you start to dig down. So you slap some shit together and the
compiler does a decent job, but most C++ devs aren't even looking at the
compiler output.

But the elephant in the room is that more and more of our available FLOPS are
on the GPU, and C++ isn't helping you there at all. Not only that, but the GPU
gives you way more operations per watt (and that's what a lot of those people
care about). And finally, when you throw stuff onto the GPU you are also
leaving the CPU available to do other things. So there are a lot of "wins"
there. As you illustrate, the area of C++'s relevance is shrinking, and
shrinking into exactly the area that is very GPU-friendly.

So the way I see it, C++ folks will start to write more OpenCL kernels for the
performance-critical pieces, and the rest won't matter (Go or Clojure or
whatever). GLSL is kinda lame and too C99-ish... so maybe someone will write a
better lang that compiles to SPIR-V. It's not exactly write-once-
run-everywhere, but it _could_ be much better than writing optimized C++, and
it can run everywhere. It's more like the cross-platform assembly C/C++ wants
to be.

~~~
yuushi
> SIMD

Intrinsics are directly callable from C++.

> likely/unlikely branches

Most compilers have extensions that will allow you to do this
(__builtin_expect and so on).

> in-lining can't be forced when you know it gives better performance

Again, most compilers have this, not just GCC, e.g. __forceinline.

> the compiler has a lot of trouble knowing when lines of code are independent
> and can be done in parallel (b/c const =/= immutable)

This is true, as aliasing is a real issue. The hardware itself has some say
over this anyway, dependent on its instruction scheduling and OOE
capabilities.

What you don't mention, however, is the fact that almost no other languages
offer any of these, let alone all of them. Rust may be the exception here,
although some of this is still in the works (SIMD; I'm not sure about the
status of likely/unlikely intrinsics).

For GPU programming, if you're using CUDA, you're almost certainly using C or
C++, or calling something that wraps C/C++ code. Not everything is suited to
GPU processing anyway, there's still a lot of code that's not moving off the
CPU any time soon that needs to be performant.

~~~
geokon
Right, so these are things that are not part of the language, not
cross-platform, and not cross-compiler. That's called fighting the language in
my book :)

I'm not saying you can't get C++ to output the assembly you want - it just
sucks trying to coerce it into doing things that are honestly not that
complicated. And even when you do get what you want, you find you can't use
the code anywhere else. To me that feels like a language failure...

> is the fact that almost no other languages offer any of these

I guess you missed my point. It seems to me that we're at a point where you no
longer need these features as part of your core application language. The idea
is that with OpenCL/SPIR-V we'll be able to

1- be more explicit and not fight the language (so even if you're 100% on the
CPU it makes sense)

2- target every platform (you can finally write code for your GPU)

3- be called from any parent language

You're right that not all performance-critical problems boil down to tight
shared-memory loops that can be thrown onto an OpenCL kernel - but my
experience so far tells me that that's the vast majority of performance
problems. So C++'s usefulness will shrink. But maybe my experience is biased
and I'm off base. I haven't done much OpenCL myself - but I'm definitely
planning to use it more in the future.

~~~
cma
> right, so things that are not part of the language, not crossplatform and
> not crosscompiler

You just have a header with different #defines for the different platforms you
are going to ship on, or use a premade open source one.

If you want to ship on everything, you won't get full optimization stuff
everywhere. It would be better if some of these features were in the standard,
but in practice it isn't such a big issue for those two in particular.

------
peterevans
Huge caveat: This article was written in _2012_. So do yourself and your
fellow readers a favor and don't argue over content that is now six years old.

Obviously, Go has not replaced C++ usage. And, these days, I would see Rust as
the more likely step from C++.

(Although, I still feel that it remains to be seen whether Rust will make a
huge dent there; is memory-safety the killer-app feature that makes people
want to use Rust? Do enough people feel that they Need To Use Rust to make it
stick? I'm interested to see how that plays out.)

What Go has done, I think, is replace interpreted-language use (PHP, Python,
Ruby) in backend code. Which makes sense to me--those are already GC
languages, so you're pretty familiar with the lay of that land. The lack of
generics may not be a huge deal for you, because there were no generics to use
in those other languages anyway. And Go is quite a bit faster than any of the
aforementioned interpreted languages.

~~~
nordsieck
> What Go has done, I think, is replaced interpreted language use (PHP,
> Python, Ruby) in backend code. Which makes sense, to me--those are already
> GC languages, so you're pretty familiar with the lay of that land.

One of the less talked about reasons Go is successful at replacing these
languages is the devops story - essentially no runtime dependencies.

~~~
candiodari
This can be easily done in C and C++, by static linking. In java, by building
custom jars. This is in fact common practice in large companies, for exactly
the reason you mention.

~~~
zbentley
It cannot be done as easily with interpreted languages (Python, PHP, Ruby,
Node.js), though--and those are precisely the languages from which Go has been
stealing users.

~~~
vinceguidry
Vendoring in your dependencies is definitely not hard in a dynamic language,
just for some reason most projects never bother. I doubt it's a significant
reason why they've been moving to Go.

I think it's more prosaic reasons, like that's the language their job uses.
People who use dynamic languages tend not to think of themselves as
specialists in their stack.

~~~
zbentley
> Vendoring in your dependencies is definitely not hard in a dynamic language,
> just for some reason most projects never bother.

Not to take a side in static-vs-dynamic linking or the language argument, but
that is absolutely incorrect. A single static binary is a very significant
reason lots of people move to Go: that, plus a good cross-compilation story,
makes a lot of problems go away.

Vendoring dependencies is hard in general. It's _much_ harder in dynamic
languages.

Some evidence/examples:

\- Look at how many different ways of packaging things that Python has.

\- Ruby, which most people consider to be one of the scripting languages that
got vendoring right out of the gate, still struggles with system libraries
used to bootstrap the Bundler process on deployment targets.

\- Node.js, another one which is considered to have gotten vendoring right out
of the gate, has massive problems with its implementation: package assets in
node_modules take forever to fetch and inflate, bloat deployment times and
artifact sizes, and put strain on filesystems. People argue that the
difference between "my node_modules directories have so many files I ran out
of inodes" and "my golang binary is really big" is just a difference of
degree, but it's a _big_ difference regardless.

\- Vendoring/deploying compiled/native dependencies is a massive hassle in
dynamic languages as well: better make sure that you compiled those deps in a
way compatible with your target system (a big hassle if you are, say, building
an old Perl C/XS extension on OSX and targeting Linux for deployment), make
sure they all link correctly once there, and, if they link, hope they link
with system libraries that don't have behavior differences from wherever you
tested the code. And a _lot_ of popular libraries have a native component.

\- There's also the problem of dependency resolution. Several dynamic
languages have hard-coded system library paths, which means that if your
vendoring misses a spot, you might be loading an unexpected version of
something, or failing to start. The "just put everything in the system lib
path" approach ignores the reality of multitenant/multi-use systems, and is a
whole 'nother piece of expertise.

\- The popularity of Docker/containers is largely driven by the fact that they
let you "statically link" your whole stack. That demand indicates that some
folks, at least, found the vendoring story for dynamic languages difficult.

> People who use dynamic languages tend not to think of themselves as
> specialists in their stack.

This sounds suspiciously like "if you use $language you're an idiot/inferior".
Spare me your arrogance and language elitism, please. There are specialists,
generalists, experts, and idiots on every platform ever invented--in very,
very similar proportions.

------
dahart
> C++ programmers don't come to Go because they have fought hard to gain
> exquisite control of their programming domain, and don't want to surrender
> any of it. To them, software isn't just about getting the job done, it's
> about doing it a certain way.

For the first time, I think I'm motivated to learn Go. My big beef with C++ is
there's no simple mode. Every single decision ever made in the syntax and
libraries is geared for only the most complicated, high performance needed
situations.

My problem is that 90% of my C++ code doesn't need performance at all, and
could be Python for all I care.

To be fair, some of the modern changes are bringing some simplicity back. But,
it still feels like wading through mud to write what should be easy things
like file i/o, compared to Python or JavaScript. Using hashes and mixing
hashes with other containers is so verbose and nit-picky in C++ compared to
today's scripting languages that I just long to mix languages most of the time
I'm writing general C++.

~~~
mynegation
There is easy mode. It is called C. I am not trying to be flippant. A lot of
people started in C++, writing essentially C code and then going to classes,
inheritance, templates, wider usage of standard library and boost, little by
little.

~~~
bitwize
But you're not supposed to program that way in C++ at all. Ask a veteran C++
programmer, and they'll tell you that buck n00bs should start out using the
STL, smart pointers, and generic algorithms, make sparing use of inheritance,
and never touch bare pointers.

~~~
TheOtherHobbes
This former buck noob took about three months to get to that exact place,
because C++ is literally a messy archaeology of random CS ideas, and the
guiding philosophy seems to “shovel these funky features in, maybe someone
will use them.”

Beginner books meanwhile will be five, ten, or fifteen years out of date, and
reliably give you a correspondingly wrong view of current best practices.

At this point C++ is almost an oral tradition. You _need_ someone who knows
what they’re doing to tell you which books to read and which parts of the
language to ignore.

It’s horrendously difficult to learn if you do the usual thing and try to
self-teach - and needlessly so, because the core of the language isn’t that
complex.

------
RcouF1uZ4gsC
A counterpoint is that more can be simpler when it allows better abstraction.

For example, calculus is more complicated than arithmetic and algebra.
However, calculus can unify and show relations between different algebraic
representations. For example, when I learned physics in middle school, I had
no trouble remembering

delta position = velocity * time

but struggled with

delta position = initial velocity * time + 1/2 (acceleration * time^2)

When I learned about derivatives and integrals, the formulas all made sense.

In my opinion Go suffers from a limited ability to abstract, which is most
evident in its lack of generics. Instead of being able to represent operations
in a generic way, you are forced to cut and paste, and cannot express the
relationship accurately, much as I considered the relationship between
acceleration and position opaque before I understood calculus.
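The unification being described, written out: with constant acceleration both formulas fall out of the same integral, and the first is just the special case of zero acceleration.

```latex
v(t) = v_0 + \int_0^t a \, dt' = v_0 + a t,
\qquad
\Delta x = \int_0^t v(t') \, dt' = v_0 t + \tfrac{1}{2} a t^2
```

Setting a = 0 collapses the second formula back to delta position = velocity * time.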

~~~
barrkel
That formula makes sense when you draw a velocity vs time graph and realise
the area under the curve is the distance travelled. With linear acceleration,
the area is a rectangle with a triangle on top, and everything starts becoming
clear, even without calculus.

~~~
kbwt
You can also visualize the meaning of d/dx x^n = n x^(n-1) geometrically:
There are n faces being extruded which contribute a volume of x^(n-1) dx each.

------
xg15
I'm one of the people who never understood the absence of generics in Go, and,
to be honest, I don't find his reasons here very convincing.

> _Early in the rollout of Go I was told by someone that he could not imagine
> working in a language without generic types. As I have reported elsewhere, I
> found that an odd remark. [...] What it says is that he finds writing
> containers like lists of ints and maps of strings an unbearable burden. I
> find that an odd claim. I spend very little of my programming time
> struggling with those issues, even in languages without generic types._

Does he suggest we write containers from scratch every time we need them?
There is a lot of nontrivial logic in them, which I'd prefer not to have to
get right again every time. (Not to mention that it would feel like very much
the opposite of "programming in the large.")

> _But more important, what it says is that types are the way to lift that
> burden. Types. Not polymorphic functions or language primitives or helpers
> of other kinds, but types. [... long rant why types are for mediocre people
> and interfaces are awesome... ]_

I very much agree that interfaces are awesome and composition is way more
useful than inheritance, but what does that have to do with generics?

If you have a function that takes a list of things and returns another list of
the same things in sorted order, wouldn't you still want to have a way to keep
track that the returned list contains the same things as the list you passed
to it? That seems independent of whether the things are specified through
types or interfaces.

Language primitives for containers are a good idea, but the way Go eventually
implemented them (hardwiring the few most-used containers into the language
and making it really cumbersome to use custom containers) seems very
unsatisfactory.

------
vinkelhake
> C++ programmers don't come to Go because they have fought hard to gain
> exquisite control of their programming domain, and don't want to surrender
> any of it. To them, software isn't just about getting the job done, it's
> about doing it a certain way.

Yeah, because it cannot possibly be that many C++ programmers saw Go as a step
_down_.

Pike thought that C++ programmers would flock to Go in large numbers. That
didn't happen, which means that Pike didn't really understand what motivated
these C++ programmers. I don't think this summary gets that much closer to the
truth.

~~~
AceJohnny2
> _Yeah, because it cannot possibly be that many C++ programmers saw Go as
> step down._

You just paraphrased his point.

~~~
vinkelhake
I realize that it might sound like that. But I would paraphrase his point like
this:

"C++ programmers spent a ton of time learning C++ so they're not going to
switch to a language that isn't C++"

Which is basically saying that none of the design decisions that went into Go
matter to C++ programmers, because at the end of the day, Go isn't C++. Which
in turn is pretty damn uncharitable towards C++ programmers. A lot of
technical criticism was offered, but it's easier to ignore that and focus on
something you cannot change.

~~~
RcouF1uZ4gsC
What is interesting is that Rust has seen a lot of interest from C++
programmers.

Many C++ programmers like the fact that it takes their informal rules of safe
memory/thread management (use RAII, don't use raw pointers, don't use mutable
shared state, be careful of iterator invalidation in containers) and
formalizes them in the type system so the compiler can check them.

In addition, Rust has some nice features like pattern matching, algebraic data
types, and Traits that are not yet available in C++.

Go on the other hand, was not as compelling to C++ programmers.

------
dang
Linearly more discussions at
[https://news.ycombinator.com/item?id=4158865](https://news.ycombinator.com/item?id=4158865)
(2012) and
[https://news.ycombinator.com/item?id=6417319](https://news.ycombinator.com/item?id=6417319)
(2013).

------
alexandercrohde
I guess I don't get it. I agree with the abstract idea that a simpler solution
is usually preferable. But I don't see how this translates to a language like
C++ or Go.

I understand the backlash against "magic," but that is more a concern about
"spooky action at a distance" than about overhead.

Could somebody who understands the sentiment help articulate what type of
solutions can be more easily expressed in Go than in, say, Scala, short of
performance-centric cases? (Or is this solely about performance-centric
cases?)

~~~
Veedrac
The claim isn't that it helps specific things, but that the more detail work
you have to do, the less attention you can pay in the large. This is why, for
example, Go has the proverb "a little copying is better than a little
dependency"; you make a down payment in the small to make your architecture
cleaner.

This is something C++ makes exceedingly difficult (Linus Torvalds's famous
rant is half on this topic). This is also something Scala doesn't do well at,
opting to provide a lot of features that make bits of code "prettier" at the
cost of being able to see what is being built.

------
shalabhc
"Programming in the large" is mentioned a few times in the article but we're
still thinking programming one OS process at a time, fiddling with low level
data structures - is this really 'in the large'?

~~~
chewxy
Programming in the large in the sense of many people being involved in a large
program.

------
WesternStar
So what are the best examples of large programs written in go?

~~~
btym
etcd, kubernetes, docker, rkt, grafana, cockroachdb, elastic beats

------
mark-r
Is it a coincidence that this is up on the front page at the same time as "C++
Core Guidelines" (rules for modern C++), and that I started reading this just
after glancing at that? The churn and complexity introduced into C++ in the
name of simplicity is simply mind-boggling.

