
On comparing languages, C++ and Go - abelsson
http://blog.grok.se/2013/10/on-comparing-languages-c-and-go/
======
tptacek
I've been following this whole business card raytracer story and wonder if
people might be missing the forest for the trees.

It would be a little nutty to suggest that Golang 1.1 is going to give
optimized C code a run for its money. Nobody could seriously be suggesting
that.

What is surprising is that naive expressions of an "interesting" compute-
bound program in the two languages are as close as they are.

Most C/C++ code --- the overwhelming majority, in fact --- is not especially
performance sensitive. It often happens to have performance and memory
footprint demands that exceed the capabilities of naive Python, but that fit
squarely into the capabilities of naive C.

The expectation of many C programmers, myself included, is that there'd still
be a marked difference between Go and C for this kind of code. But it appears
that there may not be.

This doesn't suggest that I'd want to try to fit Golang into a kernel module
and write a driver with it, but it does further suggest that maybe I'd be a
little silly to write my next "must be faster than Python" program in C.

~~~
npalli
That's interesting. I had the opposite reaction regarding Golang's
capabilities while following this saga. I'm not sure how 'idiomatic' the
Golang code is; at first glance it just seems less expressive (more lines of
code) than either C++ or Java! I didn't think that was possible.

So whenever people talk about the expressiveness of Golang, it just seems like
a design gone bad. The designers wanted a programming language with the
expressiveness of Python and the speed of C; they ended up with a language
with the expressiveness of C and the speed of Python.

~~~
pbsd
I agree, I honestly don't see where the expressiveness claims of Go come from.
I've always put it in the Java-like bin of languages (which is not necessarily
a bad thing).

I suppose the one thing that Go does well (compared to C++ or Java) is built-in
concurrency and communication across tasks.

~~~
tptacek
My first impression of it was that it was like a cross between Java and
Python. It is unmistakably similar to Java conceptually and syntactically;
they are sibling languages, both designed to streamline, simplify, or
modernize C.

I'm a Java-literate C/C++ programmer. I would avoid writing straight Java code
at all costs; I find it immiserating. Here are some reasons off the top of my
head that Golang is more pleasant to work in:

* The syntax is deliberately streamlined, including implicit declarations, semicolon insertion, lightweight expression syntax, the capital-letter-exports-a-symbol thing

* It has fully functional idiomatic closures

* Interfaces get rid of adapter class BS

* The table type (maps, in Golang) is baked into the language, like Python, not a library, like C++ and Java

* Clearer, more direct control over memory layout; data structures in Golang feel like C

I don't know if Golang's standard library is that much better than Java's, but
it was obviously designed carefully by and for systems programmers, so I find
it remarkably easy to work with.

It also feels like a much smaller system than Java. Almost every time I write
a significant amount of Golang code, I find myself answering questions by just
reading the standard library source code. It's easy to get your head around.
I've written compiler/runtime-level code for the JVM and I still don't have a
great grip on all of Java.

~~~
georgemcbay
I agree with all of the above and I write Java code for a living currently
(Android/Dalvik though, not for the JVM).

Another cool aspect of the last point (Go being small and lightweight) is that
if you've got gcc and mercurial on a supported platform, building the latest
Go from source is as easy as:

    hg clone http://code.google.com/p/go
    cd go/src
    ./make.bash

Got to build a local copy of the JVM and/or JDK for some reason? Good luck
with that (even ignoring all the licensing, OpenJDK vs. closed, etc.).

------
frozenport
Leave Britney alone!

I don't see why people feel that C++ needs to be replaced. When I write C++ I
have many levels of scope, and while that is dangerous, it is not impossible,
and the empowerment makes me feel like a god.

Programming is not incremental. If we spend all day writing a Python back-end
and it doesn't give the performance numbers, that day was a complete waste.
When I think about C++, I know that code written in C++ will take me 100% of
the way, even if it takes longer to write.

~~~
betterunix
"I don't see why people feel that C++ needs to be replaced"

Here are some of my reasons:

1\. It is impossible to write high-level code without dealing with (and often
getting bogged-down by) low-level issues in C++. Why should I be _forced_ to
choose between different "smart" pointer types? Why should I be _forced_ to
decide how variables should be captured by a lexical closure? Sure, such
decisions might make sense when you want to squeeze out a constant-factor
improvement in performance, but they do nothing to help you _get things done
in the first place_.

2\. Error handling and recovery is needlessly and pointlessly complicated. You
can throw exceptions, except for the places where you cannot, and once caught,
there is not much you can do to fix the problem. It is so bad that the C++
standard library actually requires certain errors not to be reported at all.

3\. Extending the language is impractical. Look at what it took just to add a
simple feature, lexical closures, to the language: modifications to the
compiler. At best C++ gives you operator overloading, but you do not even have
the ability to define new operators. Lisp, Scala, and numerous other high-
level languages give programmers the ability to add new syntax and new
features to the language without having to rewrite the compiler.

I am not familiar enough with Go to say that it addresses any of this, but I
know why I stopped using C++ and why I have not regretted that decision. All
the above make writing reliable code difficult. I actually switched away from
C++ when I needed my code to scale _better_ , because improving the
scalability required a high-level approach and I did not have time to debug
low-level problems. Even C++ gurus wind up having to deal with dangling
pointers, buffer overflows, and other needless problems with their code --
that takes time and mental effort away from important things in most cases.

"When I think about C++ I know that a code written in C++ will take me 100% of
the way - even if it takes longer to write."

The same is true of any programming language if the amount of time spent on
the program is irrelevant. I am not sure what sort of work you do, but for
what I have been working on, getting things done is considered higher-priority
than squeezing out a constant factor improvement. Nobody complains about
faster code, but everyone complains about late, buggy, and incomplete code.

~~~
nly
_-- Why should I be forced to choose between different "smart" pointer types_

If you don't want to decide then write all your types with value semantics and
pass by value. How types are going to behave when passed should be decided
before you write 'class{}'. It's a semantic decision. For types that you're
borrowing, and not writing yourself, pass a shared_ptr or refer to the
documentation.

_-- Why should I be forced to decide how variables should be captured by a
lexical closure?_

Same thing applies. Capture everything by value with [=]. If your type
doesn't have any (sane) value semantics, use a shared_ptr or a reference.

_-- You can throw exceptions, except for the places where you cannot, and
once caught, there is not much you can do to fix the problem._

You can throw an exception anywhere safely in correct code. The default
assumption in the language is "anything can throw, any time, anywhere", so if
your code doesn't at least provide the basic or weak exception guarantee
you're swimming against the tide. Doing so usually improves the encapsulation
and structure of code, imo.

_-- once caught, there is not much you can do to fix the problem._

Exceptions are more like hardware exceptions or page faults than typical error
states. You should only throw when you cannot reach a state expected by the
caller. Ultimately, it comes down to API design, not philosophy.

    
    
        // Clearly the only sane thing to do here if you
        // can't stat() the file is throw an exception.
        size_t get_file_size(string filename);
    
        // Some flexibility. Could probably avoid throwing.
        optional<size_t> get_file_size(string filename);
    
        // Better still, and easy to overload with the above.
        optional<size_t> get_file_size(string filename, error_code&);
    

... the better your API, the better you can avoid having to throw. This isn't
a new problem either; if you look at the C standard library, there are many
deprecated functions that provide no means of reporting an error at all except
to return undefined results.

_-- Extending the language is impractical._

Writing STL-like generic algorithms is trivial. Writing new data structures is
trivial. Existing operators can be overloaded to augment scalar types or, more
ambitiously, repurposed to create DSLs. You have user-defined literals,
initializer lists, and uniform initialization.

How would you _like_ to extend further without it being completely alien to
the existing language?

_-- I actually switched away from C++ when I needed my code to scale better,
because improving the scalability required a high-level approach and I did not
have time to debug low-level problems_

You should write more about this.

_-- C++ gurus wind up having to deal with dangling pointers, buffer
overflows, and other needless problems with their code_

Not really bad pointers and buffer overflows these days; it's more slogging
through pages and pages of compiler errors and hunting for library solutions
to problems that should really be solved in the standard library (for me
lately: ranges, matrices, more complex containers).

In any case, all languages have their share of friction. Look at that new
Bitcoin daemon written in Go that hit the front page a few hours ago. The
author had to debug 3 separate Go garbage collector issues.

~~~
fauigerzigerk
Combining different libraries with their respective memory management and
error handling ideas is one of my biggest issues with C++. You have to keep so
many things in mind to use every API according to its own peculiar rules. One
slip of the mind and you're in big trouble.

Also, getting all those libraries to compile with a particular compiler/stdlib
combination is a big hassle. Things break in non-obvious ways because of weird
implicit template instantiations that are basically untestable by the creator
of the library.

These types of integration issues are never going to go away and therefore C++
will occupy a stable niche forever but will never become mainstream for
application development again, regardless of any language improvements.

~~~
eliasmacpherson
Mainstream again? What's the majority of code being written in nowadays?

~~~
fauigerzigerk
Most new application development is done in Java and C# (and perhaps
JavaScript).

C++ is used for a long list of specialized tasks like embedded systems, games,
in-memory computing, database systems, high performance scientific stuff, some
low latency trading algos and a few core UI things like browsers. I don't
consider any of those mainstream application development.

------
icambron
> Personally, I’m hoping for Rust.

That was my thought while reading the article; Rust seems like the answer here.
I'm coming from the opposite direction than the OP: I'm unwilling to give up
the expressiveness of Ruby and friends in order to write micro-optimized C++
code, and I'm hoping Rust will give me the best of both worlds.

~~~
evincarofautumn
Yeah, everyone’s got their eye on Rust as far as low-level expressive
programming goes. Nimrod[1] also looks quite good:

> Nimrod is a statically typed, imperative programming language that tries to
> give the programmer ultimate power without compromises on runtime
> efficiency.

In my spare time, I’m working on a statically typed concatenative language
called Kitten[2] with similar goals.

[1]: [http://nimrod-code.org/](http://nimrod-code.org/)

[2]:
[http://github.com/evincarofautumn/kitten](http://github.com/evincarofautumn/kitten)

~~~
yoklov
Speaking as a C++ programmer, Rust appeals to me a great deal over Nimrod
because you can turn off (or just not use) Rust's GC and still have memory-
safe code.

I believe you can turn off Nimrod's GC as well, but you lose any guarantee of
memory safety when you do. Not to mention -- aren't all pointers in Nimrod
reference counted? That's going to take a fairly significant performance toll
due to cache effects alone.

~~~
rbehrends
Nimrod borrowed Modula-3's idea of having both traced and untraced pointers
(using _ref_ and _ptr_ as keywords, respectively). As long as you only use ptr
references, no overhead for reference counting (or other forms of garbage
collection) is generated.

Of course, in order to use untraced references, you also have to use manual
memory management (though you can use the macro/metaprogramming system to
reduce the pain somewhat).

For traced references, Nimrod uses deferred reference counting; i.e., reference
counts are only updated when a reference is stored on the heap or in a global
variable (similar to write barriers for generational/incremental garbage
collectors). If a reference count reaches zero, the reference is stored in a
zero-count table. At intervals, a separate pass checks whether any references
in the zero-count table are still referenced from the stack (and does cycle
detection where needed).

Deferred reference counting avoids the biggest problem with naive reference
counting, which is that just assigning a pointer to a local variable (to
inspect the contents of an object, say), can trigger up to two reference count
updates. Conversely, with deferred reference counting, a read-only traversal
of a data structure will not perform any reference count changes at all.
Similarly, purely temporary allocations will not require any reference count
changes, either.

------
pjmlp
"On comparing languages, gcc vs 6g"

Yet again the fallacy that comparing implementations equals comparing
languages.

In the left corner: the 6g compiler toolchain, which the authors admit has
yet to receive many optimizer improvements.

In the right corner: the battle-tested gcc optimizer, with circa 30 years of
investment, aided by language extensions that are not part of any ANSI/ISO
standard and are not language-specific.

Of course "C++" wins.

People, just spend time learning about compiler design instead of posting such
benchmarks. And before someone accuses me of being a Go fanboy: I think my
complaints about Go's design are well known to some here.

------
AYBABTME
I think all there is to get from these posts is this:

 _Presumption_ : Writing Go code is more fun than writing C++ code.

 _Demonstration_ : You can write performant Go code that's not too far from
C++ code.

 _Result_ : Cool, here's a language more fun than C++ that I can use as a
step down the complexity path when I need performance.

Or, like _tptacek_ said.

~~~
kybernetyk
Fun is really very subjective.

------
z3phyr
C++ can't be so easily replaced by Rust, even though Rust seems to be a better
language. The more likely case is that Rust's great ideas get ported into a
C++NewX standard. Even if Rust is to be our new systems language, we will have
to wait 12-15 years for it to actually happen.

------
eonil
I don't think a ray tracer is a good example for Go.

A ray tracer's workload can be fully split across hardware cores. Only the
input data needs to be shared, and even the input mostly doesn't create any
race conditions. The algorithm can even run on a GPU.

This is a handicap for Go. What Go wants to solve is safe and easy concurrency
without race conditions for complex logic. So (IMO) Go has to accept some
overhead (or sacrifice some performance features) for that goal. But in the
ray-tracer example, that ability is mostly not required.

------
tinco
Of course, if you want your site to be up when it has 25 upvotes on Hacker
News, it's best not to worry about whether your web application is written in
C++, Go, or Ruby (or, sadly but likely, PHP), but about whether it actually
has to spawn hundreds of processes and hundreds of connections and allocate
hundreds of megs of RAM to accommodate your visitors.

So, please just use nginx to host some static HTML files for your blog, and
fetch your discussion boards asynchronously.

~~~
corresation
_So, please just use nginx to host some static html files for your blog_

Wordpress + W3 Total Cache can handle a hundred HNs with ease. There is
absolutely no reason to go to the past, and poorly configured blogs don't
justify that Luddite argument.

------
Jayschwa
If you want to "poke at the processor" with Go, its toolchain makes it pretty
easy to use assembly in your package.

~~~
pbsd
Go uses an assembly syntax completely different from everyone else. That's
mighty annoying.

~~~
RamiK
Huh? Doesn't it work through 'import "C"', and then it's just regular C?

~~~
pbsd
I meant something like this, i.e. not going through C at all:
[http://golang.org/src/pkg/syscall/asm_freebsd_amd64.s](http://golang.org/src/pkg/syscall/asm_freebsd_amd64.s)

~~~
mseepgood
It looks like AT&T assembly without the %s. Not really that weird.

------
zxcdw
The article doesn't mention the reasoning for using -O2 instead of -O3 or
-Ofast. It seems that those give better performance in this case, as does
adding -funroll-all-loops (combined with -Ofast it gives almost a 10%
speedup). I am compiling with GCC 4.8.1, so perhaps the author's 4.6.3 behaves
differently.

